
Talk of AI dangers has ‘run ahead of the technology’, says Nick Clegg


By PA News




Sir Nick Clegg has worked at Meta since 2018 (Stefan Rousseau/PA)

Talk of artificial intelligence (AI) models posing a threat to humanity has “run ahead of the technology”, according to Sir Nick Clegg.

The former Liberal Democrat leader and deputy prime minister said concerns around “open-source” models, which are made freely available and can be modified by the public, were exaggerated, and the technology could offer solutions to problems such as hate speech.

It comes after Facebook’s parent company Meta said on Tuesday that it was opening access to its new large language model, Llama 2, which will be free for research and commercial use.

Generative AI tools such as ChatGPT, a chatbot that can provide detailed prose responses and engage in human-like conversations, have become widely used in the public domain in the last year.


Speaking on BBC Radio 4’s Today programme on Wednesday, Sir Nick, president of global affairs at Meta, said: “My view is that the hype has somewhat run ahead of the technology.

“I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself.

“The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid.”

Sir Nick said a claim by Dame Wendy Hall, co-chair of the Government’s AI Review, that Meta’s model could not be regulated and was akin to “giving people a template to build a nuclear bomb” was “complete hyperbole”, adding: “It’s not as if we’re at a T-junction where firms can choose to open source or not. Models are being open-sourced all the time already.”

He said Meta had 350 people “stress-testing” its models over several months to check for potential issues, and that Llama 2 was safer than any other large language model currently available on the internet.

Meta has previously faced questions around security and trust, with the company fined 1.2 billion euros (£1 billion) in May over the transfer of data from European users to US servers.



