Robot apocalypse unlikely, but researchers need to understand AI risks

Experts say it's time to talk about some possible negative impacts of AI and how to avoid them

Recent concerns from tech luminaries about a robot apocalypse may be overblown, but artificial intelligence researchers need to start thinking about security measures as they build ever more intelligent machines, according to a group of AI experts.

The fields of AI and robotics can bring huge potential benefits to the human race, but many AI researchers don't spend a lot of time thinking about the societal implications of super intelligent machines, Ronald Arkin, an associate dean in the Georgia Tech College of Computing, said during a debate on the future of AI.

"Not all our colleagues are concerned with safety," Arkin said during the debate, which was hosted by the Information Technology and Innovation Foundation (ITIF) in Washington, D.C. "You cannot leave this up to the AI researchers. You cannot leave this up to the roboticists. We are an arrogant crew, and we think we know what's best."

While human-like intelligence in machines is likely still a long way off, it's not too early to start thinking about policies and regulations to prepare for that future, Arkin and other AI researchers said.

Long-held fears of a robotic takeover of the world, voiced in science fiction stories for decades, have gained new traction in recent months, with tech thinkers including Bill Gates, Stephen Hawking and Elon Musk raising concerns about the dangers of AI.

Meanwhile, recent advances like Apple's Siri, Google's self-driving cars and DeepMind's deep Q-network AI, which has mastered dozens of Atari video games, have convinced some people that human-like machine intelligence is coming soon.

But it's hard to predict when human-like machine intelligence will happen, and it could still be decades away, said Nate Soares, executive director of the Machine Intelligence Research Institute. AI is now capable of "deep learning" involving specific tasks, but researchers need several more breakthroughs before they can design machines that can learn to accomplish a broad range of activities, like humans do, he said.
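To make the distinction Soares is drawing concrete: systems like the Atari player are built on reinforcement learning, in which an agent improves at one narrow task by trial and error against a reward signal. The sketch below is a minimal, hypothetical illustration of tabular Q-learning in Python; the corridor environment and all parameter values are invented for this example, and the Atari system replaces the lookup table with a deep neural network reading raw screen pixels, hence "deep" Q-learning.

```python
import random
from collections import defaultdict

# Toy "specific task": an agent in a 5-cell corridor learns to walk
# right to reach a goal. Tabular Q-learning; deep Q-networks scale
# the same update rule up with a neural network over screen pixels.

N_STATES = 5          # corridor cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def step(state, action):
    """Move the agent; reward 1.0 only on reaching the goal cell."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(300):
    state = 0
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # The core Q-learning update: nudge the estimate toward
        # observed reward plus the discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state

# The learned greedy policy walks right toward the goal
# (the entry for the goal cell itself is never used).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```

Note how narrow the learned competence is: this agent would have to be retrained from scratch for any other task, which is the gap Soares describes between today's deep learning and machines that learn a broad range of activities the way humans do.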

Superhuman intelligence from machines will happen "somewhere between five and 150 years, if I was going to be bold" about a prediction, Soares said.

Soares said he falls on "both sides" of the debate about the danger of super intelligent machines. "AI's going to bring lots and lots of benefits, and if we do it poorly, it's going to bring lots and lots of risks," he said.

It's important not to overstate the risks, countered Robert Atkinson, ITIF's president. Some policymakers and members of the media will latch onto visions of a robot apocalypse when AI experts express concerns about the downsides of intelligent machines, he said.

Those fears, in turn, could lead to limits on government AI funding and stunt the growth of the technology, Atkinson said. Musk's recent statement comparing AI development to "summoning the demon" demonizes the technology, he said.

Few other technologies generate the same level of fear, he said. "It's very different to say, 'Look, we are a community of responsible scientists who are building safety into this thing, and we're pretty sure it's going to work,'" Atkinson said.

The good news is that humans are still in control over how AI and robots will develop, but a more robust discussion about AI's future is needed, said Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley.

Atkinson suggested that the danger is limited because it's still impossible to design a robot with intentionality, but Russell countered that intentions aren't necessary for there to be a risk.

"If the system is better than you at taking into account more information and looking further ahead into the future, and it doesn't have exactly the same goals as you, then you have a problem," Russell said. "The difficulty is that we don't know what the human race's values are, so we don't know how to specify the right goals for a machine so that its behavior is something that we actually like."

In some cases, AI developers might think they're giving the right instructions to an intelligent machine, but the results aren't what they expected, like in the legend of King Midas, Russell said. "What happens when you don't like what they're doing?" he said. "You could say, 'Shut them down,' but a super intelligent system ... knows that one of the biggest risks to it is being shut down, so it's already outthought you."

With many AI researchers working on a small piece of the general-purpose intelligence puzzle, policymakers and scientists should talk about the potential negative implications instead of "keeping our fingers crossed that we'll run out of gas before we run off the cliff," Russell added.

Some people are more optimistic about super intelligent machines coexisting with humans, said Manuela Veloso, a computer science professor at Carnegie Mellon University. Service robots now escort visitors at Carnegie Mellon to Veloso's office and surf the Web to learn new information, she noted.

Robots are reaching a point where they will provide benefits to many people, she said. Through research on coexistence, intelligent machines will "not be taught to be outside of the scope of humankind but to be part of humankind," she said. "We will have humans, dogs, cats and robots."

Grant Gross covers technology and telecom policy in the U.S. government for the IDG News Service. Follow Grant on Twitter at @GrantGross.
