The short answer: sort of.
Google's AI specialists have trained their creation, AlphaGo, to beat a human at a game previously thought too difficult for any artificial intelligence to master.
The London-based DeepMind team has been building AI systems since 2010, creating "deep neural networks" that learn tasks from raw data rather than following hand-coded rules. In 2014, Google acquired the company, kicking its research up a gear.
They took on the challenge of building an AI that could play the ancient Chinese game Go - a tactical board game with more possible positions than there are atoms in the observable universe (it's vastly more complex than chess). Due to the subtleties of the game, humans have long had an edge over computers, with no AI capable of holding its own against a skilled human player.
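Just how big is that gap? A rough back-of-the-envelope comparison gives a feel for it, using the commonly cited estimates of about 35 legal moves per turn over roughly 80 turns for chess, versus about 250 moves per turn over roughly 150 turns for Go. The Python sketch below is illustrative only - the branching factors and game lengths are approximations, not exact counts.

```python
import math

# Approximate game-tree size as branching_factor ** game_length,
# returned as a base-10 exponent so the numbers stay printable.
def game_tree_exponent(branching_factor: int, game_length: int) -> float:
    return game_length * math.log10(branching_factor)

chess = game_tree_exponent(35, 80)   # ~35 legal moves per turn, ~80 turns
go = game_tree_exponent(250, 150)    # ~250 legal moves per turn, ~150 turns

print(f"Chess: roughly 10^{chess:.0f} possible games")
print(f"Go:    roughly 10^{go:.0f} possible games")
```

That works out at around 10^124 possible chess games against around 10^360 for Go - far too many to search by brute force, which is why a Go-playing AI has to learn judgement rather than simply calculate.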
Go has thus been the 'holy grail' of AI tests ever since computers started beating humans at chess. Facebook recently suggested it had built an AI that was close to beating a human player - but Google's DeepMind AI has somewhat spoilt the party.
In October 2015, DeepMind's AlphaGo AI played five games against reigning European Go champion Fan Hui, one of the greatest Go players of his generation.
AlphaGo won all five games.
So what? AI isn't going to take over the world via board games.
No, but DeepMind's hope is to use the techniques behind AlphaGo to develop a general-purpose learning algorithm that works across many similar environments.
If the methods that mastered Go can be applied to other complex real-life problems, AI - or robots controlled by AI - could soon start out-learning humans.
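What might a "general-purpose learning algorithm" look like? AlphaGo itself combines deep neural networks with tree search, but the flavour of "one algorithm, many environments" can be shown with a far simpler relative: tabular Q-learning. Everything below is an illustrative sketch - the reset/step/actions interface and the toy Corridor environment are assumptions made for this example, not anything from DeepMind's code.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=2000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn action values for any environment exposing reset()/step()/actions()."""
    q = defaultdict(float)  # maps (state, action) -> estimated long-term value
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.actions(state)
            if random.random() < epsilon:
                action = random.choice(actions)  # occasionally explore at random
            else:
                # Exploit the best-known action, breaking ties randomly.
                best = max(q[(state, a)] for a in actions)
                action = random.choice([a for a in actions if q[(state, a)] == best])
            next_state, reward, done = env.step(action)
            # Move the estimate toward the reward plus the discounted value of
            # the best follow-up action; a terminal state has no future value.
            target = reward if done else reward + gamma * max(
                q[(next_state, a)] for a in env.actions(next_state))
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q

class Corridor:
    """Toy stand-in environment: walk a 1-D corridor, rewarded at the far end."""
    def __init__(self, size=8):
        self.size = size
    def reset(self):
        self.pos = 0
        return self.pos
    def actions(self, state):
        return [-1, +1]  # step left or step right
    def step(self, action):
        self.pos = max(0, min(self.size - 1, self.pos + action))
        done = self.pos == self.size - 1
        return self.pos, (1.0 if done else 0.0), done

q = q_learning(Corridor())
print(max(q[(0, a)] for a in (-1, +1)))  # learned value of the start state
```

The point is that nothing in q_learning knows anything about corridors: swap in any environment with the same three methods and the identical loop learns it. Scaling that idea up to messy real-world problems is the hard part - and it's exactly what DeepMind is chasing.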
And we all know what that means...