When future generations catalogue humanity's naive handling of artificial intelligence, this experiment is going to look like we were asking for it.
Word comes from New Scientist that Google's artificial intelligence software has learnt how to play 49 old Atari games from nothing more than the pixels on screen and the running score, working each game out through trial and error.
The system, built by DeepMind, the search giant's London-based subsidiary, has even managed to set new record top scores on 23 of the retro titles, despite never being programmed with the rules or dynamics of each game. Built on the kind of "deep neural network" often used in image recognition, the software "watches" the raw screen output as it taps away at a game. By combining that network with a learning tool that rewards the system for racking up points, DeepMind turned it into a master of titles including Breakout. This video shows the system mastering the game after some 600 attempts.
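For the curious, the reward-driven half of that combination is reinforcement learning, and its core idea can be shown in a few lines. The sketch below is a minimal, hypothetical illustration of the classic Q-learning update on a toy one-dimensional environment; the environment, constants and table here are stand-ins for illustration, not DeepMind's code, which swaps the table for a deep neural network reading raw Atari pixels.

```python
import random

# Toy stand-in environment: an agent on a 5-square track earns a
# reward only by reaching the rightmost square. (Illustration only --
# DeepMind's agent sees Atari pixels and the game score instead.)
N_STATES = 5
ACTIONS = [-1, +1]    # move left or right
GOAL = N_STATES - 1

ALPHA = 0.1           # learning rate
GAMMA = 0.9           # discount factor for future rewards
EPSILON = 0.1         # chance of trying a random action (exploration)

# Q-table: estimated future reward for each (state, action) pair.
# A deep Q-network replaces this table with a neural network.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return the new state and the reward earned."""
    new_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if new_state == GOAL else 0.0
    return new_state, reward

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        new_state, reward = step(state, action)
        # Core update: nudge the estimate toward the reward just earned
        # plus the discounted value of the best follow-up action.
        best_next = max(q[(new_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - q[(state, action)])
        state = new_state

# After training, the learnt policy should be "move right" everywhere.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
```

Run over hundreds of episodes, the same nudge-towards-reward loop is what lets the Atari player go from random flailing to record scores without ever being told the rules.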
While teaching a computer to play video games might sound like the first step towards unleashing robot soldiers on battlefields, the DeepMind experiment is really about teaching an AI to make sense of complex streams of data. "We can't say anything publicly about this but the system is useful for any sequential decision making task," DeepMind co-founder Demis Hassabis told New Scientist. "You can imagine there are all sorts of things that fit into that description."
Rather than studying your special tactics in a Call of Duty deathmatch, then, Google could use the system to place better-targeted ads in front of your mouse as you browse the internet.
"What happens if we combine this with a Drone that delivers payloads to key targets" is not an idea we hope anyone in the Google office bothers playing around with.
[Via: New Scientist/Kotaku]