
Wednesday, January 3, 2018

2/1/18: Limits to Knowledge or Infinity of Complexity?


Occasionally, mass media produces journalism worth reading not to extract a momentary piece of information (the news) of relevance to our world, but to remind ourselves of the questions, quests, phenomena and thoughts worth carrying with us through our conscious lives (assuming we still have these lives left). 

With that intro, a link to just such a piece of journalism: https://www.theatlantic.com/science/archive/2017/12/limits-of-science/547649/. This piece, published in The Atlantic, is worth reading. For at least two reasons:

Reason 1: it posits the key question of finiteness of human capacity to know; and
Reason 2: it posits a compelling explanation as to why truly complex, non-finite (or non-discrete) phenomena are ultimately not knowable in a perfect sense.

Non-discrete/non-finite phenomena belong to the human and social fields of inquiry (art, mathematics, philosophy, and, yes, economics, psychology, sociology etc.). They are defined by the absence of an end-of-the-game rule. Chess, Go, any and all games invented by us, humans, have a logical conclusion: a rule that defines the end of the game. They are discrete (in terms of our ability to identify steps that sequentially lead to the realisation of the end-rule) and they are finite (because, by definition of each game, they always result in either a draw or a win/loss; they are bounded by the end-of-game rule). 
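To make the "bounded by the end-of-game rule" point concrete, here is a toy sketch of my own (not from the article): a trivial game of Nim. The game is finite precisely because every move strictly shrinks the pile, so the terminal rule is always reached; nothing analogous exists for open-ended inquiry.

```python
# Toy illustration: a game is "finite" because its end-of-game rule is
# reachable by construction -- every move strictly shrinks the pile.

def play_nim(pile: int) -> int:
    """Two players alternately remove 1-3 stones; whoever takes the
    last stone wins. Returns the winning player (0 or 1)."""
    player = 0
    while True:
        take = min(3, pile)      # trivial greedy strategy for both sides
        pile -= take
        if pile == 0:            # end-of-game rule: pile exhausted
            return player        # this player took the last stone
        player = 1 - player

print(play_nim(10))
```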

Knowledge is, well, we do not know what it is. And hence, we do not know if the end-of-game rule even exists, let alone what it might be. 


Worth a read, folks.

Sunday, December 10, 2017

10/12/17: Rationally-Irrational AI, yet?..


In a recent post (http://trueeconomics.blogspot.com/2017/10/221017-robot-builders-future-its-all.html) I mused about the deep-reaching implications of the capability of Google's AlphaZero (or AlphaGo, in its earliest incarnation) to develop systems of logic independent of humans. And now we have another breakthrough in Google's AI saga.

According to the report in the Guardian (https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours):

"AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules to chess before beating the world champion chess program, Stockfish 8, in a 100-game match up."

Another quote worth considering:
"After winning 25 games of chess versus Stockfish 8 starting as white, with first-mover advantage, a further three starting with black and drawing a further 72 games, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game matchup. AlphaZero won 90 games, lost eight and drew 2."

Technically, this is impressive. But the real question worth asking at this stage is whether the AI's logic is capable of intuitive sensing, as opposed to relying on self-generated libraries of move permutations. The latter is a form of linear thinking, as opposed to the highly non-linear 'intuitive' logic that would be consistent with discrete 'jumping' from one logical move tree to another, based not on the history of past moves, but on the strategy those moves reveal to the opponent. I don't think we have an answer to that, yet.
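The contrast drawn above can be caricatured in a few lines of code. This is purely my own illustrative sketch (none of these names come from DeepMind's work): a 'library' player that linearly enumerates orderings of moves and scores each, versus a 'policy' player that jumps straight from a position to a move via a learned evaluation, loosely analogous to the policy network in AlphaZero-style systems.

```python
from itertools import permutations

def library_player(position, legal_moves, score):
    """Linear approach: enumerate move permutations, score each
    full sequence, and play the first move of the best one."""
    best = max(permutations(legal_moves),
               key=lambda seq: score(position, seq))
    return best[0]

def policy_player(position, legal_moves, policy):
    """'Intuitive' approach: a learned function maps the position
    directly to a preferred move, with no enumeration of sequences."""
    return max(legal_moves, key=lambda m: policy(position, m))

# Toy demonstration with made-up evaluation functions.
moves = [1, 2, 3]
score = lambda pos, seq: seq[0] * 2 - seq[-1]   # toy sequence score
policy = lambda pos, m: -abs(m - 2)             # toy preference for move 2
print(library_player(0, moves, score))
print(policy_player(0, moves, policy))
```

The library player's cost explodes factorially with the number of moves; the policy player's cost is linear in the number of legal moves, which is why the 'jump' matters at the scale of Go.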

In my view, that is important, because, as I argued some years ago in a research paper, such 'leaps of faith' in logical systems are indicative of the basic traits of humanity as distinct from other forms of conscious life. In other words, can machines be rationally irrational, like humans?..


Monday, October 23, 2017

22/10/17: Robot builders' future: It's all a game of Go...


This, perhaps, is the most important development in AI (Artificial Intelligence) to date: "DeepMind’s new self-taught Go-playing program is making moves that other players describe as “alien” and “from an alternate dimension”", as described in The Atlantic article "The AI That Has Nothing to Learn From Humans", published this week: https://www.theatlantic.com/technology/archive/2017/10/alphago-zero-the-ai-that-taught-itself-go/543450/?utm_source=atltw.

The importance of Google DeepMind's AlphaGo Zero AI program is not that it plays Go with a frightening level of sophistication. Its true importance lies in the self-sustaining nature of the program, which can learn independently of external information inputs, simply by playing against itself. In other words, Google has finally cracked the self-replicating algorithm.
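The self-play idea can be caricatured very simply. The sketch below is my own toy illustration (every name in it is assumed, not DeepMind's): a 'strategy' here is nothing but a single bias parameter in a coin-flip game, and the program improves it using only games against a slightly modified copy of itself, with no external data.

```python
import random

def self_play_training(rounds: int = 1000, seed: int = 0) -> float:
    """Toy hill-climbing via self-play: pit the current strategy
    against a perturbed copy of itself; keep the copy when it wins."""
    random.seed(seed)
    bias = 0.5                               # current "strategy"
    for _ in range(rounds):
        challenger = min(1.0, bias + 0.01)   # slightly modified copy
        if random.random() < challenger:     # the copy wins this game
            bias = challenger                # adopt the improved copy
    return bias

print(self_play_training())
```

Real self-play systems replace the single bias with millions of neural-network weights and the coin flip with full games, but the closed loop — the program as both teacher and pupil — is the same.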

Yes, there is a 'new thinking' dimension to this as well. Again, quoting from The Atlantic: "A Go enthusiast named Jonathan Hop ...calls the AlphaGo-versus-AlphaGo face-offs “Go from an alternate dimension.” From all accounts, one gets the sense that an alien civilization has dropped a cryptic guidebook in our midst: a manual that’s brilliant—or at least, the parts of it we can understand."

But the real power of AlphaGo Zero version is its autonomous nature.

From the socio-economic perspective, this implies machines that can directly learn complex (extremely complex), non-linear and creative (with shifts of nodes) tasks. This, in turn, opens the AI to the prospect of writing its own code, as well as executing tasks that, to date, have been thought impossible for machines (e.g. combining referential thinking with creative thinking). The idea that human coding skills can ever keep up with this progression has now been debunked. Your coding and software engineering degree is not yet obsolete, but your kid's will be, and very soon.

Welcome to the AlphaHuman Zero, folks. See yourself here?..