I’m sorry that I didn’t get my article on Artificial Intelligence out last week during the ‘AI Safety Summit’ at Bletchley Park, the historic Second World War decoding centre in England. I got distracted by some other stuff that was happening in the Middle East.

The other reason for the delay was that whenever I looked at the video of Rishi Sunak, prime minister of Little Britain, sitting awestruck at the feet of Elon Musk and saying things like “Given that you are known for being such a brilliant innovator and technologist…,” I would collapse into helpless giggles.

Some people claim that Sunak was pitching for a job with Musk once he loses next year’s election and is defenestrated by his own Conservative Party, but that’s unfair. Sunak doesn’t need a post-politics job; his father-in-law owns half of India. He’s just an awkward nerd who wishes that he too were a tech bro.

Anyway, the topic at Bletchley Park was AI. Between Joe Biden’s announcement of a US ‘AI Safety Institute’ and Sunak’s ‘AI Safety Summit’ (graced by Vice-President Kamala Harris, King Charles III and Elon Musk), a lot was said about artificial intelligence. Most of it was nonsense.

Demis Hassabis, CEO of Google DeepMind, declared that “I am not in the pessimistic camp about AI, obviously, otherwise I wouldn’t be working on it,” but last May he warned that the threat of human extinction from AI should be treated as a societal risk comparable to pandemics or nuclear weapons.

Kamala Harris went for profundity: “Just as AI has the potential to do profound good, it also has the potential to do profound harm.” That’s equally true of drugs, money and sharp knives. She’s still not ready for prime time.

King Charles thought that “The rapid rise of powerful artificial intelligence is no less significant than…the harnessing of fire.” At the risk of committing lèse-majesté, one must reply: No it isn’t, and besides it hasn’t even happened yet.

Musk, never at a loss for words, opined that AI is an “existential threat” because human beings for the first time are faced with something “that is going to be far more intelligent than us”. It was a jamboree of the trite and the portentous.

These deep thinkers were all banging on about existential risk, but that is a contingency that would only arise if the machines were endowed with something called ‘artificial general intelligence’, that is, cognitive abilities in software comparable or superior to human intelligence.

Such AGI systems would have intellectual capabilities as flexible and comprehensive as those of human beings, but they would be faster and better informed because they could access and process huge amounts of data at incredible speed. They would be a real potential threat, but they don’t exist.

There is not even any evidence that we are closer to creating such software than we were five or ten years ago. There has been great progress in narrow forms of artificial intelligence, like self-driving vehicles and automated legal systems, but the only threat they pose, if any, is to jobs.

That’s not a minor issue, but it is hardly existential. And the advent of chatbots that can write essays and fill out job applications for you is not AGI either.

The ‘Large Language Models’ the chatbots are trained on make them expert in choosing the most plausible next word. That may occasionally produce random sentences containing useful new data or ideas, but there is no intellectual activity involved in the process except in the human who recognises that it is useful.

There is plenty to worry about in how ‘smarter’ computer programmes will destroy jobs (now including highly skilled jobs), and also in how easy it has become to manipulate opinion with ‘deepfakes’ and the like. But none of that needed a high-profile conference at Bletchley Park.

So why did they all go there and wind up talking about existential threats? Well, one possibility is that the leaders of the tech giants wanted to make sure that they were in on the rule-making from the start, for there will surely be new rules made about AI over the next few years.

Most of those rules will be about mundane commercial matters, not about threats to human existence. You might feel that it would be inappropriate for the people who will be making money from these commercial activities to be the ones making the rules.

On the other hand, they should certainly be involved in decisions about any existential threats arising from their new technologies, so tactically it makes more sense for them to steer the discussion in that direction. They’re not stupid, you know.

Gwynne Dyer’s latest book is ‘The Shortest History of War’