AI poses existential risk of 'people harmed or killed': Ex-Google CEO

Ex-Google CEO warns artificial intelligence could be used to kill ‘many, many people’

  • Former CEO said humanity is unprepared for an AI that’s ‘misused by evil people’
  • AI could develop ‘new kinds of biology’ for bioterror or new exploits for hackers

A former Google CEO has warned that artificial intelligence could be used to kill people in the future.

Eric Schmidt, who spent two decades at the helm of the search giant, told a gathering of senior executives Wednesday that he believes AI presents an ‘existential risk’ for humanity, ‘defined as many, many, many, many people harmed or killed.’

Schmidt, who holds a PhD in computer science, said the technology – which Google is helping spearhead through its relatively primitive Bard chatbot system – could be ‘misused by evil people’ when it becomes more advanced.

Schmidt, who recently chaired the US National Security Commission on AI, is the latest in a slew of former Google staffers to come out publicly against the rapid development of the technology in recent weeks.


Across Silicon Valley, brilliant minds are split over the progress of AI systems, with some saying the technology will improve humanity and others fearing it will destroy it.

Geoffrey Hinton, credited as the ‘Godfather of Artificial Intelligence’, sensationally resigned from Google earlier this spring, citing his AI fears. Hinton said that a part of him now regrets helping to make the systems.

Schmidt focused specifically on AI’s burgeoning ability to identify software vulnerabilities for hackers, and on its potential to uncover new biological pathways that could lead to the creation of fearsome new bioweapons.

‘There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology,’ Schmidt told The Wall Street Journal’s CEO Council Summit in London.

So-called ‘zero-day exploits’ are security flaws in code — anywhere from personal computing to digital banking to infrastructure — that have only just been discovered and thus not yet been patched by cybersecurity teams. Zero-days are among the most prized tools in a hacker’s arsenal.

Schmidt did not go into detail on which ‘new kinds of biology’ dreamed up by a maliciously run AI worry him most.

‘Now, this is fiction today,’ Schmidt cautioned, ‘but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people.’

Schmidt’s comments, which are not his first warnings, join a raucous debate across Silicon Valley over the moral questions and mortal dangers posed by AI.

Elon Musk, Apple co-founder Steve Wozniak and the late Stephen Hawking are among the most famous critics of AI who believe it poses a ‘profound risk to society and humanity’ and could have ‘catastrophic effects’.

Earlier this spring, Geoffrey Hinton, the ‘Godfather of Artificial Intelligence’, sensationally resigned from Google, warning that AI technology could upend life as we know it.

Speaking to the New York Times about his resignation, Hinton warned that in the near future, AI would flood the internet with false photos, videos and text.

These would be of a standard, he added, where the average person would ‘not be able to know what is true anymore’.

But Bill Gates, Google CEO Sundar Pichai and futurist Ray Kurzweil are on the other side of the debate, hailing the technology as our time’s ‘most important’ innovation.


Schmidt co-chaired the US National Security Commission on AI from 2019 to 2021; its report warned that the US could lose its edge as an ‘AI superpower.’


But among these titans, only Schmidt helmed the creation of a mammoth 756-page report for the US government on the national security risks posed by AI.

‘America is not prepared to defend or compete in the AI era,’ wrote Schmidt and his vice chair on the US National Security Commission on AI in 2021. ‘This is the tough reality we must face.’ 

Schmidt, who spent three years chairing the fact-finding body alongside Bob Work, a former deputy US secretary of defense, argued that China was on track to outpace the US as the planet’s ‘AI superpower.’

‘We will not be able to defend against AI-enabled threats,’ Schmidt and Work wrote, ‘without ubiquitous AI capabilities and new warfighting paradigms.’

Their commission advised the Biden administration to commit to doubling US government AI research and development spending to $32 billion per year by 2026, and to free itself from dependence on overseas microchip manufacturing.

Schmidt and his commission also suggested that the US should renounce any calls for a global ban on AI-powered autonomous weapons, arguing that neither Russia nor China would uphold their end of any treaties banning these weapons.

In London this week, however, Schmidt told the gathering of CEOs that he did not have any clear ideas, personally, on how AI should be or even could be regulated, suggesting that it should be a ‘broader question for society.’ 

He did voice his belief that a new agency is unlikely to be created to regulate AI in the United States.
