What happens when the robots take over?
AI programs are winning board games & turning into Nazis on Twitter. Should we be afraid?
By Tom Bailey
If you’re the kind of person who lies awake at night worried about the rise of the machines and their subjugation of the human race, here’s a comforting fact: that gooey blob of protein, salt and water in your skull is capable of issuing 100 trillion instructions per second. Compare that to what even the fastest supercomputers can do and, well, there is no comparison. Actually, that’s not quite true.
In 2013 a Japanese supercomputer did manage to simulate one second of brain activity – but it took 83,000 processors 40 minutes to do it. By that yardstick, you can rest easy; ‘glorified calculators’, as Jeremy Clarkson might say, probably aren’t plotting your downfall.
Well, not yet. “The scope of artificial intelligence is huge,” says Dr Kevin Curran of Ulster University. “But while we tend to imagine AI in its grandest form, as a Blade Runner-style humanoid robot, the truth is more mundane. AI software is running underneath all sorts of modern technological tasks, from autopilot to the gyroscope ability of Segways. Anywhere that ‘fast fuzzy type’ decisions need to be made, there is some AI involved. In fact, a simple Google search is putting AI to work.”
Skype can already translate language in real time and will soon be able to book your hotel just by listening to you chat about a holiday. CaptionBot uses image recognition AI to add witty captions to your Instagram photos. Last year, drivers took their hands off the steering wheel for the first time, thanks to Google’s driverless car.
BOTS: THE STORY
OK, so AI isn’t ‘cognitive’ yet – it’s not HAL, Wall-E or Ava – but it’s closer than ever to gaining an IQ. In March, Google’s AlphaGo AI platform shocked experts by trouncing South Korean Go champion Lee Sedol. If you haven’t heard of Go, it’s a 3,000-year-old board game that’s exponentially more complex than chess, with more possible board positions than there are atoms in the universe.
The difference between AlphaGo and IBM’s Deep Blue, the AI chess computer that beat Garry Kasparov back in 1997, is that AlphaGo didn’t just mimic the best human players – it discovered new strategies to beat them. Strategies that are unknown to human intelligence. Watch the video of the second game and you’ll see AlphaGo make a move so mind-blowing that Sedol has to leave the room to recover.
“Super-intelligent AI was long thought of as science fiction, centuries or more away,” says Max Tegmark, Massachusetts Institute of Technology professor and president of the Future Of Life Institute, a kind of AI watchdog. “But thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached.”
Provided we can keep AI under control, cognitive machines could one day fly our planes, fill our fridges with food, represent us at meetings, perform hazardous tasks, break up with our partners for us, eradicate war, predict global catastrophes, solve poverty – and take care of grandma.
They could also, of course, render many of us redundant, as tomorrow’s ‘strong AI’ becomes capable of highly skilled jobs. It’s a threat serious enough for the Labour party to consider a universal basic income to counter “any revolution in jobs and technology to come”, according to shadow chancellor John McDonnell.
At the centre of this good vs evil debate is the futurist Ray Kurzweil. He says that within 30 years we’ll have the hardware and software to create super-human intelligence but that, while machines “may keep us around, we won’t be able to keep up with them”.
Basically, shit will get real. Which is why more than 100 of the world’s greatest thinkers recently signed an open letter warning mankind of the dangers of AI. While speech recognition, image analysis and robot vacuum cleaners are improving our quality of life, an AI capable of passing its knowledge from one generation to the next could, for example, take control of autonomous weapons and wipe us off the face of the Earth.
Theoretical physicist Stephen Hawking said: “The development of full artificial intelligence could spell the end of the human race.” Steve Wozniak, the co-founder of Apple, warned us that “computers are going to take over from humans”. Elon Musk, the genius behind Tesla electric cars, said AI was “potentially more dangerous than nukes” and described the development of intelligent machines as akin to “summoning the demon”.
A tad melodramatic. But the recent – and very public – failure of Microsoft’s ‘teen AI’ chat-bot, Tay, proved that Musk had good reason to get his panties in a bunch.
Tay was given her own Twitter account, and provided with the capability to learn from millennial Twitter users. But after the cheery “Humans are super cool!”, Tay’s algorithms quickly developed a darker side. Feminism was ‘a cancer’, she said, before denying the Holocaust with the substantially less cheery tweet, “Hitler was right”.
Microsoft swiftly ended the brief life of the world’s first AI racist.
GOOD BOT, BAD BOT
In a straight developmental race between computers and humans, the glacial pace of human evolution certainly doesn’t help. In the 65 years since British mathematician Alan Turing asked, “Can machines think?”, AI has beaten us at the world’s hardest board game, diagnosed diseases by studying x-rays, replied to emails in a human-like tone, penned Shakespeare sonnets so convincing that experts can’t tell the difference, painted Picassos and won the US quiz show Jeopardy. And yet, say experts, AI still hasn’t had its ‘big bang’.
“Amplifying human intelligence with artificial intelligence has the potential of helping civilisation flourish like never before,” says Tegmark. “But it could also trigger an intelligence explosion leaving human intellect far behind.
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants.
“A key goal of AI safety research is to never place humanity in the position of those ants.”
“There are many who predict a time soon when AI will win over humans,” says Dr Curran. “It’s known as the technological singularity – a hypothetical event in which AI builds machines ever smarter and more powerful than itself, right up to the point of a ‘runaway effect’. Basically, AI would result in an intelligence surpassing all existing human intelligence.”
By 2050, some futurists predict, the average desktop computer will have the power of nine billion brains. So, having already taken a beating from a string of AI computers, where would that leave humanity?
Let’s give the last word to a man with one of the greatest blobs of protein, salt and water on the planet – Stephen Hawking. “Success in creating AI would be the biggest event in human history,” he said. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
As Tay might have tweeted, had she not turned genocidal, “*drops mic*”.
(Images: Kobal/PA)