
Britain’s new AI czar has warned that malicious actors could use AI to attack the NHS, causing disruption on the scale of the Covid-19 pandemic, as he outlined the priorities for his £100mn task force this week.

Ian Hogarth, chairman of the UK government’s “Frontier AI” task force, said weaponizing technology to hinder the National Health Service or carry out “biological attacks” was among the biggest AI risks his team wanted to tackle.

He suggested that AI systems could be used to enhance cyberattacks on UK healthcare services, or engineer pathogens or toxins.

Hogarth emphasized the need for international cooperation, including with China, to address these issues.

“These are fundamentally global risks. Just as we have worked with China on biosecurity and cybersecurity, I think there is real value in international cooperation around risks on a larger scale,” he said.

“It’s like an epidemic. You can’t single-handedly try to contain these threats in a situation like this.”

Following the task force’s creation in June, Hogarth appointed artificial intelligence pioneer Yoshua Bengio and GCHQ director Anne Keast-Butler to its external advisory board, with additional appointments to be announced on Thursday.

The organization has received initial funding of £100mn from the government to conduct independent AI safety research into developing safe and reliable “frontier” AI models, the underlying technology behind AI systems such as ChatGPT. Hogarth said this was the largest amount committed by any nation state to frontier AI safety.

Hogarth compared the scale of the threat facing the NHS to that of the Covid-19 pandemic, which caused years of disruption to England’s public health service, and the WannaCry ransomware attack in 2017, which cost the NHS an estimated £92m and led to the cancellation of 19,000 patient appointments.

“The risk we are most concerned about is the heightened national security risk,” Hogarth, a former tech entrepreneur and venture capitalist, told the Financial Times in an interview.

He added: “There’s a whole bunch of techies out there trying to build AI systems that have a superhuman ability to write code,” lowering the barriers to cyberattacks or cybercrime.

Hogarth said the UK needed to develop “a national capacity to understand . . . and hopefully mitigate the risk, so we can understand how to put guardrails around this technology and get the most out of it.”

He was closely involved in the planning of the UK’s first Global AI Safety Summit at Bletchley Park in early November. The event aims to bring together national leaders with tech companies, academia and civil society to discuss artificial intelligence.

Hogarth’s team, modeled on the Covid-19 Vaccine Taskforce, has recently recruited several independent academics, including David Krueger of Cambridge University and Yarin Gal of Oxford University.

“If you want strong regulation — if you want the state to be an active partner and understand the risks at the frontier, rather than just letting AI companies mark their own homework — then what you have to do is quickly bring that expertise into government,” he said.
