Artificial Intelligence finally made it to the big leagues this year. First came notice from the U.S. intelligence community that AI is a looming, potentially dangerous global threat. And now comes word that Stephen Hawking, the world's best-known scientist, after years of warning about the threat, is going to do something about it.
Hawking announced on Wednesday that a new AI research center is opening at the University of Cambridge in England. The new center is named after its benefactor, the Leverhulme Trust, which provided a $12.3 million grant to Hawking and Cambridge with an eye toward ensuring that future AI benefits humanity rather than harming, or even ending, life as we know it.
"Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many," Hawking said in the announcement.
In February, James Clapper, President Obama's director of national intelligence, said in his unclassified worldwide threat assessment presented to Congress that AI is a concern in myriad ways, including cyber-attacks, and that it must be taken seriously.
"Although the United States leads AI research globally, foreign state research in AI is growing," Clapper said in his report to Congress. "The increased reliance on AI for autonomous decision-making is creating new vulnerabilities to cyber-attacks and influence operations."
The Leverhulme Centre for the Future of Intelligence (CFI) will research ways AI applications could potentially be used that, until now, have largely been the stuff of science fiction and Hollywood blockbusters.
Some applications, like "smart" mobile devices, already use AI to sort through massive amounts of data and surface useful information for consumers. Others, like robot surgeons that could obviate the need for human surgeons in some cases, or military drones that use AI to micro-target enemies, are only now emerging from research labs.
CFI intends to study them all, with a deliberate eye toward making sure that AI doesn't veer off into research directions that do more harm than good – especially applications that might allow AI to "learn" unchecked. If that happened, AI could become a super-human system that might decide carbon-based life forms (humans) are the ultimate threat to a viable planetary ecosystem.
Hawking's new AI research center is a collaboration among Cambridge, Oxford, Imperial College London, and the University of California, Berkeley. CFI is designed to bring together researchers from different disciplines as well as from the technology industries that are rapidly advancing AI applications.
AI has the potential to cause massive economic disruption as well if smart AI robots begin to take over tasks that are now the basis of millions of industrial jobs. Hawking's new AI research center will study this problem too. It is precisely these sorts of threats – scientific, economic, and military – that triggered the inclusion of AI in the intelligence community's worldwide threat assessment this year.
"Implications of broader AI deployment include increased vulnerability to cyber-attack, difficulty in ascertaining attribution, facilitation of advances in foreign weapon and intelligence systems, the risk of accidents and related liability issues, and unemployment," Clapper wrote in the threat assessment.
The U.S. intelligence community chief also said that AI-operated computer schemes have unintentionally caused harm in global stock trading. It may only be a matter of time before cyberterrorists deliberately use AI to disrupt the foundations of the global economy.
"As we have already seen, false data and unanticipated algorithm behaviors have caused significant fluctuations in the stock market because of the reliance on automated trading of financial instruments," Clapper wrote.
AI systems of the future might also become susceptible to cyberterrorism efforts that are designed to be either disruptive or deceptive. AI could be deployed to harm critical national security systems that largely operate via computer systems, he wrote.
"Efforts to mislead or compromise automated systems might create or enable further opportunities to disrupt or damage critical infrastructure or national security networks," Clapper said.
But neither Hawking nor the intelligence community has addressed in any detail what may ultimately be the day of reckoning for humans who create super AI systems: the point at which an AI system capable of human-level thought learns so rapidly that it exceeds our ability to direct or control it.
When that day arrives, if it arrives, we may not be able to control such an AI system. It might quite literally develop the capacity to out-think all of us, whether we like it or not.