
Friday, December 25, 2015

25/12/15: WLASZE: Weekend Links on Arts, Sciences and Zero Economics


Merry Christmas to all! And in the spirit of the holiday, it is time to revive my WLASZE: Weekend Links on Arts, Sciences and Zero Economics postings, which wilted away under the snowstorm of work and minutiae but deserve to be reinstated in 2016.

[Fortunately for WLASZE and unfortunately for the die-hard economics readers of the blog, I suspect my work commitments in 2016 will be a little more balanced, allowing for this...]


Let's start with Artificial Intelligence - the folks at ArsTechnica are running an excellent essay debunking some of the AI myths. Read it here. The list is pretty much on the money:

  • Is AI about machines that can think (in the human intelligence sense)? Answer: predictably, No.
  • Is AI capable of outstripping human ethics? Answer: not necessarily.
  • Will AI be a threat to humanity? Answer: not any time soon.
  • Can an AI system undergo a sudden singularity? Answer: that is still too far away, and doubtful even then.
The topic is hugely important, extremely exciting and virtually open-ended. Perhaps of interest, I wrote back in 2005 about the non-linearity and discontinuity of our intelligence as a 'unique' identifier of humanity. The working paper on this (I have not revisited it since 2005) is still available here.

And to top off the topic, here is a link on advances in robotics over the grand year of 2015: http://qz.com/569285/2015-was-a-year-of-dumb-robots/. The title says it all... "dumb robots"... or does it?..

Update: another thought-provoking essay - via QZ - on the topic of AI and its perceived dangers. A quote summarising the story:
"Elon Musk and Stephen Hawking are right: AI is dangerous. But they are dangerously wrong about why. I see two fairly likely futures:

  • Future one: AI destroys itself, humanity, and most or all life on earth, probably a lot sooner than within 1000 years.
  • Future two: Humanity radically restructures its institutions to empower individuals, probably via trans-humanist modification that effectively merges us with AI. We go to the stars."
Personally, I am not sure which future will emerge, but I am sure that there is only one future in which we - humans - can have a stable, liberty-based society. And it is the second one. Hence my concerns - expressed in public speeches and blog posts - about the effects of technological innovation and the emergence of the Gig-Economy on the fabric of our socio-economic interactions.

At any rate... that is a cool dystopian pic from QZ


Dangers of AI or not, I do hope we sort out architecture before robots either consume or empower us...

On the lighter side, or maybe on a brighter side - for art cannot really be considered a lighter side - Saatchi Art is running its Best of 2015 online show here: http://www.saatchiart.com/shows/best-of-2015, and it is worth running through. It is loaded with younger and excitingly fresher works than those that make traditional art shows.

Like Jonas Fisch's vibrantly rough Gears of Power


All the way to the hyper-expressionist realism of Tom Pazderka; here is an example from his Elegies to Failed Revolutions, Right Wing Rock'n'Roll



And for that Christmas spirit in us, by Joseph Brodsky, translated by Derek Walcott (for a double-Nobel take):


The air—fierce frost and pine-boughs.
We’ll cram ourselves in thick clothes,
stumbling in drifts till we’re weary—
better a reindeer than a dromedary.

In the North if faith does not fail
God appears as the warden of a jail
where the kicks in our ribs were rough
but what you hear is “They didn’t get enough.”

In the South the white stuff’s a rare sight,
they love Christ who was also in flight,
desert-born, sand and straw his welcome,
he died, so they say, far from home.

So today, commemorate with wine and bread,
a life with just the sky’s roof overhead
because up there a man escapes
the arresting earth—plus there’s more space.


Merry Christmas to all!

Saturday, June 20, 2015

20/6/15: WLASze: Weekend Links of Arts, Sciences & zero economics


A couple of non-economics-related, but hugely important links worth looking into... an infrequent entry in my old series of WLASze: Weekend Links of Arts, Sciences and zero economics...

Firstly, via Stanford, we have a warning about the dire state of nature: http://news.stanford.edu/news/2015/june/mass-extinction-ehrlich-061915.html. A quote: "There is no longer any doubt: We are entering a mass extinction that threatens humanity's existence." If we think we can't even handle a man-made crisis of debt overhang in the likes of Greece, what hope do we have of handling an existential threat?

Am I overhyping things? Maybe. Or maybe not. As the population ages, our ability to sustain ourselves is increasingly dependent on better food, nutrition, quality of the environment, etc. Not solely because we want to eat/breathe/live better, but also because of brutal arithmetic: the economic activity that sustains our lives depends on productivity, and productivity declines precipitously as the population ages.
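To make that arithmetic concrete, here is a minimal sketch in Python. The numbers are purely illustrative assumptions, not taken from any data in this post: output per capita is productivity per worker times the working share of the population, so a shrinking worker share has to be offset by productivity growth just to keep living standards flat.

```python
# Illustrative only: how ageing (a falling worker share) interacts with productivity.

def output_per_capita(productivity_per_worker: float, worker_share: float) -> float:
    """Output per capita = output per worker * share of the population working."""
    return productivity_per_worker * worker_share

# Hypothetical numbers, chosen for illustration
today = output_per_capita(productivity_per_worker=100.0, worker_share=0.50)  # 50.0
aged = output_per_capita(productivity_per_worker=100.0, worker_share=0.40)   # 40.0 in an older society

# Productivity per worker needed to hold per-capita output steady as the worker share falls
required_productivity = 100.0 * (0.50 / 0.40)                                # 125.0, i.e. +25%

print(today, aged, required_productivity)
```

In other words, under these made-up numbers a drop in the working share from 50% to 40% demands a 25% rise in productivity per worker merely to stand still.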

So even if you think the extinction event is a rhetorical exaggeration by a bunch of scientists, the brutal (and even linear - forget complex) dynamics of our socio-economic models imply a serious and growing inter-connection between our man-made shocks and natural systems' capacity to withstand them.


Secondly, via Slate, we have a nagging suspicion that not everything technologically smart is... err... smart: "Meet the Bots: Artificial stupidity can be just as dangerous as artificial intelligence": http://www.slate.com/articles/technology/future_tense/2015/04/artificial_stupidity_can_be_just_as_dangerous_as_artificial_intelligence.html.

"Bots, like rats, have colonized an astounding range of environments. …perhaps the most fascinating element here is that [AI sceptics] warnings focus on hypothetical malicious automatons while ignoring real ones."

The article goes on to list examples of harmful bots currently populating the web. But it evades the key question asked in the heading: what if AI is not intelligent at all, but is superficially capable of faking intelligence to a degree? Imagine a world where we share space with bots that can replicate emotional, social, behavioural and mental intelligence to a high degree, but fail beyond a certain bound. What then? Will the average / median denominator of human interactions converge to that bound as well? Will we gradually witness the disappearance of the human capacity to bypass complex, but measurable or mappable, systems of logic, thus reducing the richness and complexity of our own world? If so, how soon will humanity become a slightly improved model of today's Twitter?


Thirdly, "What happens when we can’t test scientific theories?" via the Prospect Mag: http://www.prospectmagazine.co.uk/features/what-happens-when-we-cant-test-scientific-theories
"Scientific knowledge is supposed to be empirical: to be accepted as scientific, a theory must be falsifiable… This argument …is generally accepted by most scientists today as determining what is and is not a scientific theory. In recent years, however, many physicists have developed theories of great mathematical elegance, but which are beyond the reach of empirical falsification, even in principle. The uncomfortable question that arises is whether they can still be regarded as science."

The reason why this is important to us is that the question of the falsifiability of modern theories is non-trivial to the way we structure our inquiry into reality: the distinction between art, science and philosophy becomes blurred when one body of knowledge relies exclusively on the tools used in another. So much so that even the notion of knowledge, popularly associated with inquiry delivered via science, is usually not extended to art and philosophy. An example, in a quote: "Mathematical tools enable us to investigate reality, but the mathematical concepts themselves do not necessarily imply physical reality".

Now, personally, I don't give a damn whether something implies physical reality or not, as long as that something is not designed to support such an implication. Mathematics, therefore, is a form of knowledge, and we don't care whether it has physical reality implications or not. But the physical sciences purport to hold a specific, more qualitatively important corner of knowledge: that of being physically grounded in 'reality'. In other words, the very alleged supremacy of the physical sciences arises not from their superiority as fields of inquiry (the quality of insight is much higher in art, mathematics and philosophy than in, say, biosciences and experimental physics), but from their superiority in application (gravity has more tangible applications to our physical world than, say, topology).

So we have a crisis of sorts for the physical sciences: their superiority has now run out of road and has to yield to the superiority of the abstract fields of knowledge. Bad news for humanity: the deterministic nature of experimental knowledge is getting exhausted. With it, the determinism surrounding our concept of knowledge diminishes too. Good news for humanity: this does not change much. Whether or not string theory is provable is irrelevant to us. As soon as it becomes relevant, it will, by Popperian definition, be falsifiable. Until then, marvel at the infinite world of the abstract.

Tuesday, September 11, 2012

11/9/2012: Inherent limit to artificial intelligence?


In a rather common departure from economics (as defined by the rational expectations subset of the discipline) on this blog, here's some fascinating thinking about artificial intelligence and the bounds of model-induced systems.

It is especially close to me, as it explores what I thought about back in 2003-2004, when I wrote an essay on the role of leaps of faith (irrational and discontinuous jumps in human creativity and thinking) as a foundation for humanity and, thus, a foundation for the recognition of property rights over uncertainty.