In a recent post (http://trueeconomics.blogspot.com/2017/10/221017-robot-builders-future-its-all.html) I mused about the far-reaching implications of Google's AlphaZero (or AlphaGo, in its earliest incarnation) and its capacity to develop systems of logic independent of humans. And now we have another breakthrough in Google's AI saga.
According to a report in the Guardian (https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours):
"AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules to chess before beating the world champion chess program, Stockfish 8, in a 100-game match up."
"After winning 25 games of chess versus Stockfish 8 starting as white, with first-mover advantage, a further three starting with black and drawing a further 72 games, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game matchup. AlphaZero won 90 games, lost eight and drew 2."
In my view, this is important because, as I argued some years ago in a research paper, such 'leaps of faith' in logical systems are indicative of basic traits of humanity that distinguish us from other forms of conscious life. In other words, can machines be rationally irrational, like humans?..