Saturday, December 19, 2015

19/12/15: Another Un-glamour Moment for Economics


Much of the current fascination with behavioural economics is well deserved - the field is a tremendously important merger of psychology and economics, bringing economic research and analysis down to the granular level of human behaviour. However, much of it is also a fad: behavioural economics provides a convenient avenue for advertising companies, digital marketing agencies, digital platform providers and aggregators, as well as congestion-pricing and gig-economy firms, to milk revenue-raising strategies that are anchored in common sense. In other words, much of the use of behavioural economics in real business (and in Government) amounts to conveniently plucking out strategy-confirming results. It is marketing, not analysis.

A lot of this plucking relies on empirically-derived insights from behavioural economics, which, in turn, often rest on experimental evidence. Now, experimental evidence in economics is very often dodgy by design: you can't compel people to act, so you have to incentivise them; you can't quite select a representative group, so you assemble a 'proximate' group; and so on. Imagine you want to study intervention effects on a group of C-level executives. Good luck getting actual executives to participate in your study, and good luck sorting out the selection biases when analysing the results. Still, experimental economics continues to gain prominence as a backing for behavioural economics. And still, companies and governments spend millions funding such research.

Now, not all experiments are poorly structured, and not all evidence derived from them is dodgy. So, to address the nagging suspicion as to how much error is carried in experiments, a recent paper by Alwyn Young of the London School of Economics, titled “Channelling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results” (http://personal.lse.ac.uk/YoungA/ChannellingFisher.pdf), used “randomization statistical inference to test the null hypothesis of no treatment effect in a comprehensive sample of 2003 regressions in 53 experimental papers drawn from the journals of the American Economic Association.”
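To see concretely what that means, here is a minimal sketch of a Fisher-style randomization test - illustrative Python of my own, on made-up toy data, not code from Young's paper. The idea: under the sharp null of no treatment effect, the treatment labels are arbitrary, so you reshuffle them many times and ask how extreme the observed effect looks against the reshuffled distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def randomization_test(outcomes, treated, n_perms=10_000):
    """Two-sided Fisher randomization p-value for the sharp null of no
    treatment effect: permute the treatment labels and count how often
    the permuted difference in means is at least as large as observed."""
    observed = outcomes[treated].mean() - outcomes[~treated].mean()
    hits = 0
    for _ in range(n_perms):
        perm = rng.permutation(treated)  # re-randomize the labels
        diff = outcomes[perm].mean() - outcomes[~perm].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perms

# Toy data: 50 subjects, half treated, with a modest true effect of 0.3.
treated = np.array([True] * 25 + [False] * 25)
outcomes = rng.normal(0, 1, 50) + 0.3 * treated
print(randomization_test(outcomes, treated))
```

The contrast Young draws is with conventional regression-based inference, which can overstate significance; a randomization p-value like the one above leans only on the experiment's own assignment mechanism.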

The attempt is pretty darn good. The study uses robust methodology to test a statistically valid hypothesis: did the results derived in these studies arise, in a statistically significant way, from the experimental treatment or not? The paper tests a large sample of studies published (having gone through peer and editorial review) in perhaps the most reputable economics journals. This is the crème-de-la-crème of economics studies.

The findings, to put it scientifically: “Randomization tests reduce the number of regression specifications with statistically significant treatment effects by 30 to 40 percent. An omnibus randomization test of overall experimental significance that incorporates all of the regressions in each paper finds that only 25 to 50 percent of experimental papers, depending upon the significance level and test, are able to reject the null of no treatment effect whatsoever. Bootstrap methods support and confirm these results.”

In other words, in the majority of studies claiming to have achieved statistically significant results from experimental evidence, those results were not in fact statistically attributable to the experiments.
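The bootstrap check mentioned in the quote can be sketched in the same spirit - again an illustrative Python toy of my own, not the paper's procedure: resample each arm with replacement and see whether the interval for the effect straddles zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_effect_ci(outcomes, treated, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the treated-vs-control
    difference in means. If the interval straddles zero, the 'significant'
    effect does not survive resampling."""
    y1, y0 = outcomes[treated], outcomes[~treated]
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        # Resample each arm with replacement and re-estimate the effect.
        diffs[b] = (rng.choice(y1, y1.size).mean()
                    - rng.choice(y0, y0.size).mean())
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

# Same toy setup as in the randomization-test sketch above.
treated = np.array([True] * 25 + [False] * 25)
outcomes = rng.normal(0, 1, 50) + 0.3 * treated
print(bootstrap_effect_ci(outcomes, treated))
```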

Now, the author is cautious in his conclusions. “Notwithstanding its results, this paper confirms the value of randomized experiments. The methods used by authors of experimental papers are standard in the profession and present throughout its journals. Randomized statistical inference provides a solution to the problems and biases identified in this paper. While, to date, it rarely appears in experimental papers, which generally rely upon traditional econometric methods, it can easily be incorporated into their analysis. Thus, randomized experiments can solve both the problem of identification and the problem of accurate statistical inference, making them doubly reliable as an investigative tool.”

But this is hogwash. The results of the study effectively tell us that a large (huge) proportion of papers on experimental economics published in the most reputable journals claimed significant results attributable to experiments where no such significance was actually present. Worse, the methods that delivered these false significance results “are standard in the profession”.


Now, consider the even more obvious point: these are academic papers, written by authors highly skilled in econometrics, data collection and experiment design. Imagine what drivel passes for experimental analysis coming out of marketing and survey companies. Imagine what passes for policy analysis coming out of public sector outfits - without peer review and without cross-checks like those performed by Young.

Saturday, June 29, 2013

29/6/2013: WLASze Part 2: Weekend Links on Arts, Sciences and Zero Economics


This is the second part of my usual Weekend Links on Arts, Sciences and zero economics (WLASze). The first part is linked here.


An insightful piece on what philosophers as a group believe in:
http://www.openculture.com/2013/06/what_do_most_philosophers_believe_.html
Very interesting, and it can be followed by the very brief (and, as such, not very deep, but still interesting)
http://www.openculture.com/2010/11/do_physicists_believe_in_god_.html
and by the brilliantly extensive http://www.sixtysymbols.com/ .
The latter is literally a sort of merger of art (of symbol, or word, or meaning) and science.
And while on the above topics, here's John Lennox of Oxford on science and belief… http://johnlennox.org/


Back to art-meets-science, a major mapping/visualization geek alert:
http://www.wired.com/design/2013/06/infographic-this-detailed-map-shows-every-river-in-the-united-states/?cid=co9216134#slideid-152839
Love the images: laborious but beautiful mapping, sadly available in relatively low res only...


But blending the cheeky with the complex does not, in the end, make it either art or science - in my opinion, of course:
http://www.guardian.co.uk/science/alexs-adventures-in-numberland/2013/jun/26/mathematics
"And when you slice a scone in the shape of a cone, you get a sconic section – the latest craze in edible mathematics, a vibrant new culinary field" Err… not really.


A great photo by Pavel Prokopchik (http://www.prokopchik.com/, @pavelprokopchik) for the NY Times: http://www.nytimes.com/2013/06/21/world/europe/a-sea-of-bikes-swamps-amsterdam-a-city-fond-of-pedaling.html?_r=0


Sadly, again available only in low res...


A good review, via @farnamstreet, of a very interesting book on the occasionally mindless fascination we hold for the scientific explaining-away of reality (or is this fascination itself a behavioural bias?):
http://www.linkedin.com/today/post/article/20130607125052-5506908-what-if-capitalism-could-be-artistic?trk=mp-details-rr-rmpost
Which makes me wonder: are biases endogenous to biases? Liam, your suggestions?! And to MrsG, a gentle suggestion: my birthday is coming up...


Bad news:

"Art Southampton Presented by Art Miami for Art Collectors NYC and In-Crowd East Coast with Cars Italia and Galleries Kitsch USA" for the crowd of those who think a horse bronze with polished detail is worth a silver metal couch and all that shines…
You can almost see the parallel to the previous screenshot: animate duo 'racing' to the cocktails counter with an enlightened look about them of a floodlight set to highlight the Maserati... being vs object - all denoting the same fake-ness of the art world that fits a dressed-up-white hangar… in dressed-up Hamptons… Watch the preview slideshow… http://www.art-southampton.com/ it is frightening (and as such so anti-artistic as to become almost artful).