How Dangerous Is AI Really?

January 10, 2018

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.” That is what entrepreneur Elon Musk told a gathering of U.S. governors in July 2017.

This popular science-fiction theme is beginning to permeate real-life apprehensions about mankind’s future: the fear that one day robots will become so intelligent that they’ll rise up and overthrow humanity.

But are there reasons to believe that this fear might become a reality?

Both scientific and anecdotal evidence suggests it might. AI grows more capable every day, and perhaps we’re starting to lose sight of its true power.

You may be tempted to believe that AI has nothing to do with you, and therefore, why should you care whether it’s dangerous or not, right? Wrong. If you’ve chatted with Apple’s personal assistant Siri, shopped on Amazon, or watched Netflix, then you have had contact with a form of pseudo-AI, as software engineer R.L. Adams calls it.

Like it or not, AI is slowly but surely seeping into our lives, from voice-powered personal assistants like Siri and Alexa to more fundamental, underlying technologies such as behavioral algorithms and suggestive search. And each time you open your browser, there’s another article telling you how the technology has evolved since you last read about it, five days ago. It’s astonishing, yes, but it also creates grounds for concern.

Bots Can Now Communicate in a Language of Their Own

In the summer of 2017, Facebook researchers shut down an AI experiment after the system created its own language. The bots, Bob and Alice, abandoned English and began communicating in a language that initially seemed like nothing more than gibberish to the researchers. Bob began the conversation by saying, “I can i i everything else,” which prompted Alice to respond, “balls have zero to me to me to me…”. The conversation went on in that manner.

Apparently, the bots decided to switch from English because “there was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). “Agents will drift off understandable language and invent codewords for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create short hands.”
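Batra’s “the” example can be sketched as a toy encoding scheme. This is a hypothetical illustration of repetition-based shorthand, not FAIR’s actual negotiation code; the `encode`/`decode` names are invented for the sketch:

```python
# Toy sketch of the repetition "shorthand" Dhruv Batra describes:
# repeating a token n times encodes "n copies of that item".
# Purely illustrative; not Facebook's actual code.

def encode(item, count):
    """Encode a request for `count` copies of `item` by repetition."""
    return " ".join([item] * count)

def decode(message):
    """Recover the item and quantity from a repeated-token message."""
    tokens = message.split()
    return tokens[0], len(tokens)

msg = encode("ball", 3)        # "ball ball ball"
item, count = decode(msg)
print(item, count)             # ball 3
```

The point of the example is that such a code is perfectly systematic for the agents that invented it, yet looks like gibberish (“to me to me to me…”) to a human reader.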

Nevertheless, shortly after witnessing the bots communicate in a language of their own, the scientists decided to shut them down, arguing that the purpose was to eventually get the bots to communicate with humans. “Our interest was having bots who could talk to people,” says Mike Lewis, research scientist at FAIR. But since Bob and Alice found English a bit dull for their taste, it seems we won’t be chatting with bots anytime soon.

The other issue, as Facebook admits, is that we currently have no way of truly understanding any divergent computer language. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra. We don’t even fully understand how complex AIs think, because we can’t see inside their thought process, and adding AI-to-AI conversations to the mix only makes that opacity worse. If this doesn’t get you thinking about the potential danger that AI poses, then I don’t know what will.

Which brings us back to the original question: how dangerous is AI, really?

What Makes AI Dangerous?

Thanks to the inevitable rise of quantum computing, AI will become smarter, faster and more human-like. And there’s nothing we can do about it, apart from keeping a close eye on its evolution.

The question is, what will happen once the first quantum computer goes online? Software engineer and serial entrepreneur R.L. Adams argues the following:

“Considering that the world lacks any formidable quantum resistant cryptography (QRC), how will a country like the United States or Russia protect its assets from rogue nations or bad actors that are hellbent on using quantum computers to hack the world’s most secretive and lucrative information?”

We still have at least five good years until that happens. However, once the first quantum computer is built, Nigel Smart, founder of Dyadic Security and Vice President of the International Association of Cryptologic Research, believes that:

“The internet will not be secure, as we rely on algorithms which are broken by quantum computers to secure our connections to web sites, download emails and everything else. Even updates to phones, and downloading applications from App stores will be broken and unreliable. Banking transactions via chip-and-PIN could [also] be rendered insecure.”
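Smart’s point is that today’s secure connections rest on mathematical problems, such as factoring large integers, that a quantum computer running Shor’s algorithm could solve efficiently. A minimal sketch of the idea, using textbook RSA with deliberately tiny primes (real keys use moduli of 2048 bits or more; this is illustration, not a real implementation):

```python
# Textbook RSA with tiny primes, purely illustrative.
# Security rests on the difficulty of factoring n = p * q;
# Shor's algorithm would recover p and q efficiently on a quantum computer.

p, q = 61, 53                  # secret primes (real keys: ~1024 bits each)
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the private key
assert recovered == message

# An attacker who can factor n, as a quantum computer could via Shor's
# algorithm, rederives the private key and reads every message:
d_attacker = pow(e, -1, (p - 1) * (q - 1))
assert pow(ciphertext, d_attacker, n) == message
```

Quantum-resistant cryptography replaces the factoring assumption with problems not known to be efficiently solvable on quantum hardware, which is why its absence today is the concern Adams and Smart raise.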

What’s more, 116 experts in robotics and artificial intelligence, including SpaceX founder Elon Musk and Google DeepMind co-founder Mustafa Suleyman, recently penned an open letter to the UN asking it to ban the development and use of artificially intelligent bots in combat, and to add them to a list of “morally wrong” weapons alongside blinding lasers and chemical weapons.

So far, 19 out of 123 member states are asking for a total ban on lethal autonomous weapons. This marks the first time experts in AI and robotics have come together to take a stance on the issue.

Here are some excerpts from the letter:

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

The letter calls for immediate action, asking UN officials “to find a way to protect us all from these dangers.”

While it’s true that any technology can be used for both good and ill, AI is not only dangerous but hard to regulate. And because AI’s value comes from its decision-making capability, artificially intelligent bots could undermine basic commands and perhaps even initiate action on their own.

This dilemma presents yet one more challenge posed by the double-edged sword of modern technology in our everyday lives. Dealing with it will demand prescient and innovative solutions.