
Sunday, October 22, 2017

22/10/17: Framing Effects and S&P500 Performance


A great post highlighting the impact of framing on our perception of reality: https://fat-pitch.blogspot.com/2017/10/using-time-scaling-and-inflation-to.html.

Take two charts of stock market performance over 85-odd years:


The chart on the left shows the nominal index reading for the S&P 500. The one on the right shows the same series, adjusted for inflation and plotted on a log scale to account for the long duration of the time series. In other words, both charts effectively contain the same information, presented in a different format (frame).

Spot the vast difference in the way we react to these two charts...
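The two framings can be reproduced with a toy calculation. Below is a minimal sketch in Python using a synthetic constant-growth index (the growth and inflation rates are assumed for illustration, not actual S&P 500 figures), showing why the same series looks explosive on a nominal linear scale yet plots as a straight line on a real log scale:

```python
import math

# Synthetic constant-growth index over 85 years (rates are assumed
# for illustration; these are NOT actual S&P 500 figures).
YEARS = 85
NOMINAL_GROWTH = 0.07   # assumed average nominal annual return
INFLATION = 0.03        # assumed average annual inflation

nominal = [100 * (1 + NOMINAL_GROWTH) ** t for t in range(YEARS + 1)]
real = [v / (1 + INFLATION) ** t for t, v in enumerate(nominal)]

# Frame 1: nominal level on a linear scale. The curve looks explosively
# convex -- the gains of the last 25 years dwarf those of the first 25.
early_gain = nominal[25] - nominal[0]
late_gain = nominal[YEARS] - nominal[YEARS - 25]
print(late_gain / early_gain)  # dozens of times larger

# Frame 2: inflation-adjusted level on a log scale. The same process
# plots as a straight line: the log of a constant-growth series is
# linear in time, so every year contributes the same visual step.
log_real = [math.log(v) for v in real]
slopes = [log_real[t + 1] - log_real[t] for t in range(YEARS)]
print(max(slopes) - min(slopes))  # effectively zero
```

The information content is identical in both frames; only the visual impression changes, which is precisely the framing effect at work.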

Tuesday, October 3, 2017

3/10/17: Ambiguity Fun: Perceptions of Rationality?



Here is a very insightful, and worth studying, set of plots showing the range of probabilities people assign to verbal likelihood phrases. Source: https://github.com/zonination/perceptions




The charts above speak volumes about both our (human) behavioural biases in assessing probabilities of events and the nature of subjective distributions.

First, on the former. As our students (in all of my courses, from Introductory Statistics, to Business Economics, to the advanced courses in Behavioural Finance and Economics, Investment Analysis, and Risk & Resilience) will have learned (to varying degrees of insight and complexity), the world of rational expectations relies (amongst other assumptions) on the assumption that we, as decision-makers, are capable of perfectly assessing the true probabilities of uncertain outcomes. And as we have all learned in these classes, we are not capable of doing this, in part due to informational asymmetries, in part due to behavioural biases, and so on.

The charts above clearly show this. There is a general trend of people assigning increasingly lower probabilities to less likely events, and increasingly higher probabilities to more likely ones. So far, good news for rationality. The range (spread) of assignments also becomes narrower as we move to the tails (the lowest and highest assigned probabilities), so the degree of confidence in assessment increases there. Which is also good news for rationality.
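This rational-looking pattern can be illustrated with a toy summary of the kind of data behind such charts. The numbers below are hypothetical responses, not the actual zonination survey data; the sketch simply shows medians rising monotonically across phrases while the spread narrows at the tails:

```python
import statistics

# Hypothetical assigned probabilities (in %) per verbal phrase.
# Illustrative values only -- NOT the zonination survey data.
responses = {
    "Almost No Chance":  [1, 2, 2, 3, 5, 5],
    "Probably Not":      [20, 25, 25, 30, 35, 40],
    "About Even":        [45, 48, 50, 50, 52, 55],
    "Likely":            [60, 65, 70, 70, 75, 80],
    "Almost Certainly":  [93, 95, 95, 96, 97, 99],
}

for phrase, vals in responses.items():
    med = statistics.median(vals)
    spread = max(vals) - min(vals)   # dispersion of assignments
    print(f"{phrase:>18}: median={med:5.1f}%  range={spread:2d}pp")
```

In this illustrative data, the medians are strictly ordered with the phrases (consistent with rationality), while the range of assignments is narrowest for the tail phrases and widest in the middle, mirroring the confidence pattern described above.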

But there the evidence for rationality ends.

Firstly, note the S-shaped nature of the mapping from perceptions to assigned probabilities. Clearly, our perceptions of probability are non-linear, with the decline in assigned likelihoods being steeper in the middle of the range of perceptions than at the extremes. This is inconsistent with rationality, which implies a linear mapping.

Secondly, there is a notable kick-back in the assigned probability distribution for the Highly Unlikely and Chances Are Slight types of perceptions. This can be due to ambiguity in the wording of these perceptions (the order can be viewed differently, with Highly Unlikely preceding Almost No Chance, and Chances Are Slight preceding Highly Unlikely). Still, there is a lot of oscillation in other ordering pairs (e.g. Unlikely —> Probably Not —> Little Chance; and We Believe —> Probably). This is also consistent with ambiguity, which is a violation of rationality.

Thirdly, not a single distribution of assigned probabilities by perception follows a bell-shaped ‘normal’ curve. Not for a single category of perceptions. All distributions are skewed, almost all have extreme-value ‘bubbles’, and the majority have multiple local modes, etc. This is yet another piece of evidence against rational expectations.

There are severe outliers in all perception categories. Some (e.g. in the case of the ‘Probably Not’ category) appear to be largely due to errors induced by the ambiguous ranking of the category, or due to judgement errors. Others (e.g. in the case of the “We Doubt” category) appear to be systemic and influential. The dispersion of assignments seems to follow the ambiguity pattern, with higher-ambiguity (tail) categories inducing greater dispersion. But, interestingly, there also appears to be stronger ambiguity in the lower range of perceptions (from “We Doubt” to “Highly Unlikely”) than in the upper range. This can be ‘natural’ or ‘rational’ if we think that a signifier of a less likely event is more ambiguous. But the same should then hold for more likely events too (see the range from “We Believe” to “Likely” and “Highly Likely”).

There are many more points worth discussing in the context of this exercise. But on net, the data suggest that the rational expectations view of our ability to assess true probabilities of uncertain outcomes is faulty not only at the level of tail events that are patently identifiable as ‘unlikely’, but also in the range of tail events that should be ‘nearly certain’. In other words, ambiguity is tangible in our decision-making.



Note: it is also worth noting that the above evidence suggests that we treat certainty (the tails) and uncertainty (the centre of the range of perceptions and assignment choices) inversely to what can be expected under rational expectations:
In a rational setting, perceptions that carry indeterminate outturns should show greater dispersion in assigned probabilities: if something is "almost evenly" distributed, it should be harder for us to form a consistent judgement as to how probable such an outturn is, especially compared to something that is either "highly unlikely" (aka quite certain not to occur) or "highly likely" (aka quite certain to occur). The data above suggest the opposite.

Friday, January 13, 2017

12/1/17: Betrayal Aversion, Populism and Donald Trump Election


In their 2003 paper, Koehler and Gershoff provide a definition of a specific behavioural phenomenon known as betrayal aversion. Specifically, the authors state that “A form of betrayal occurs when agents of protection cause the very harm that they are entrusted to guard against. Examples include the military leader who commits treason and the exploding automobile air bag.” The duo showed, across five studies, that people respond differently “to criminal betrayals, safety product betrayals, and the risk of future betrayal by safety products” depending on who acts as an agent of betrayal. Specifically, the authors “found that people reacted more strongly (in terms of punishment assigned and negative emotions felt) to acts of betrayal than to identical bad acts that do not violate a duty or promise to protect. We also found that, when faced with a choice among pairs of safety devices (air bags, smoke alarms, and vaccines), most people preferred inferior options (in terms of risk exposure) to options that included a slim (0.01%) risk of betrayal. However, when the betrayal risk was replaced by an equivalent non-betrayal risk, the choice pattern was reversed. Apparently, people are willing to incur greater risks of the very harm they seek protection from to avoid the mere possibility of betrayal.”

Put into a different context: we opt for a suboptimal degree of protection against harm in order to avoid being betrayed.

Now, consider the case of political betrayal. Suppose voters vest their trust in a candidate for office on the basis of the candidate’s claims (call these the policy platform, for example) to deliver protection of the voters’ interests. One, the relationship between the voters and the candidate is emotionally framed (this is important). Two, the relationship of trust induces an acute feeling of betrayal if the candidate does not deliver on his/her promises. Three, past experience of betrayal, quite rationally, induces betrayal aversion: in the next round of voting, voters will prefer a candidate who offers less in terms of his/her platform’s feasibility (aka the candidate less equipped or qualified to run the office).

In other words, betrayal aversion will drive voters to prefer a poorer quality candidate.

Sounds plausible? Ok. Sounds like something we’ve seen recently? You bet. Let’s go over the above steps in the context of the recent U.S. presidential contest.


One: emotional basis for selection (vesting trust). The U.S. voters had eight years of ‘hope’ from President Obama. Hope based on emotional context of his campaigns, not on hard delivery of his policies. In fact, the entire U.S. electoral space has become nothing more than a battlefield of carefully orchestrated emotional contests.

Two: an acute feeling of betrayal is clearly afoot in the case of the U.S. electorate. Whether the voters today blame Mr. Obama for their feeling of betrayal, or blame the proverbial Washington ‘swamp’ that includes the entire lot of elected politicians (including Mrs. Clinton and others), is immaterial. What is material is that many voters do feel betrayed by the elites (both the ‘Bern’ effect and the Trump campaign were built on capturing this sentiment).

Three: the two candidates that did capture the minds of swing voters and marginalised voters (the types of voters who, in the end, matter to the election outturn) were both campaigning on razor-thin policy proposals and more on a general-sentiment basis. Whether you consider these platforms feasible or not, they were not articulated with the same degree of precision and competency as, say, Mrs Clinton’s highly elaborate platform.

Which means the election of Mr Trump fits (from pre-conditions through to outcome) the pattern of the betrayal aversion phenomenon: fleeing the chance of being betrayed by the agent they trust, American voters opted for a populist, less competent (in the traditional Washington sense) choice.

Now, enter two brainiacs from Harvard. Rafael Di Tella and Julio Rotemberg were quick on their feet in recognising the above emergence of betrayal avoidance, or aversion, in voting decisions. In their December 2016 NBER paper, linked below, the authors argue that voters’ preference for populism is a form of “rejection of ‘disloyal’ leaders.” To do this, the authors add an “assumption that people are worse off when they experience low income as a result of leader betrayal” than when such a loss of income “is the result of bad luck”. In other words, they explicitly assume betrayal aversion in their model of a simple voter choice. The end result is that their model “yields a [voter] preference for incompetent leaders. These deliver worse material outcomes in general, but they reduce the feelings of betrayal during bad times.”
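The flavour of that result can be sketched with stylised numbers (my own, hypothetical, not the paper’s actual calibration): a voter who feels an extra ‘betrayal’ disutility when a trusted, competent candidate fails can rationally prefer the candidate with worse odds of delivering.

```python
# Stylised sketch of the betrayal-aversion logic. All numbers below
# are hypothetical illustrations, not the Di Tella-Rotemberg model.

HIGH, LOW = 100.0, 20.0  # assumed incomes in good / bad times

def expected_utility(p_success, betrayal_cost):
    """Voter's expected utility from electing a candidate.

    p_success: probability the candidate delivers the good outcome.
    betrayal_cost: extra disutility felt when a *trusted* candidate
    fails; zero when failure reads as mere bad luck.
    """
    return p_success * HIGH + (1 - p_success) * (LOW - betrayal_cost)

def preferred(betrayal_cost):
    # Competent candidate: better odds, but failure feels like betrayal.
    eu_competent = expected_utility(0.7, betrayal_cost)
    # Incompetent candidate: worse odds, but failure is "just bad luck".
    eu_incompetent = expected_utility(0.5, 0.0)
    return "competent" if eu_competent > eu_incompetent else "incompetent"

print(preferred(0))    # -> competent
print(preferred(100))  # -> incompetent: betrayal aversion flips the choice
```

With no betrayal cost the competent candidate wins on expected income alone; once the betrayal disutility is large enough, the voter rationally accepts worse material odds to rule out the feeling of betrayal.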

More to the point, just as I narrated the logical empirical hypothesis (steps one through three) above, Di Tella and Rotemberg “find some evidence consistent with our model in a survey carried out on the eve of the recent U.S. presidential election. Priming survey participants with questions about the importance of competence in policymaking usually reduced their support for the candidate who was perceived as less competent; this effect was reversed for rural, and less educated white, survey participants.”

Here you have it: the classical behavioural bias of betrayal aversion explains why Mrs Clinton simply could not connect with the swing or marginalised voters. It wasn’t hope that they sought, but the avoidance of putting hope/trust in someone like her. Done. Not ‘deplorables’, but those betrayed in the past, swung the vote in favour of a populist: not because he emotionally won their trust, but because he was the less competent of the two standing candidates.



Jonathan J. Koehler and Andrew D. Gershoff, “Betrayal aversion: When agents of protection become agents of harm”, Organizational Behavior and Human Decision Processes 90 (2003) 244–261: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.11.1841&rep=rep1&type=pdf

Di Tella, Rafael and Rotemberg, Julio J., Populism and the Return of the 'Paranoid Style': Some Evidence and a Simple Model of Demand for Incompetence as Insurance Against Elite Betrayal (December 2016). NBER Working Paper No. w22975: https://ssrn.com/abstract=2890079

Friday, May 11, 2012

11/5/2012: Ignoring that which almost happened?

In recent years, I have found myself migrating more firmly toward the behaviouralist view of finance and economics. Not that this view, in my mind, contradicts the classes of models and logic I am accustomed to. Rather, it enriches them, adding toward completeness.

With this in mind - here's a fascinating new study.

“How Near-Miss Events Amplify or Attenuate Risky Decision Making”, written by Catherine Tinsley, Robin Dillon, and Matthew Cronin and published in the April 2012 issue of Management Science, studies the way people change their risk attitudes "in the aftermath of many natural and man-made disasters".

More specifically, "people often wonder why those affected were underprepared, especially when the disaster was the result of known or regularly occurring hazards (e.g., hurricanes). We study one contributing factor: prior near-miss experiences. Near misses are events that have some nontrivial expectation of ending in disaster but, by chance, do not."

The study shows that "when near misses are interpreted as disasters that did not occur, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions (e.g., choosing not to engage in mitigation activities for the potential hazard). On the other hand, if near misses can be recognized and interpreted as disasters that almost happened, this will counter the basic “near-miss” effect and encourage more mitigation. We illustrate the robustness of this pattern across populations with varying levels of real expertise with hazards and different hazard contexts (household evacuation for a hurricane, Caribbean cruises during hurricane season, and deep-water oil drilling). We conclude with ideas to help people manage and communicate about risk."
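A toy calculation (with assumed counts, not data from the study) shows how much the classification of near-misses moves a naive risk estimate:

```python
# Illustrative sketch (assumed counts, NOT data from the Tinsley et al.
# study): how the framing of near-misses changes an estimate of risk.
disasters, near_misses, uneventful = 1, 9, 90
trials = disasters + near_misses + uneventful

# Frame A: a near-miss is "a disaster that did not occur", i.e. just
# another safe trial -- only outright disasters count.
risk_optimist = disasters / trials

# Frame B: a near-miss is "a disaster that almost happened"; suppose
# (an assumption for illustration) each would have ended in disaster
# half the time but for luck.
risk_realist = (disasters + 0.5 * near_misses) / trials

print(risk_optimist)  # 0.01
print(risk_realist)   # 0.055
```

The same history yields a risk estimate more than five times higher under the "almost happened" frame, which is exactly the gap between complacency and mitigation that the study describes.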

An interesting potential corollary to the study is that analytical conclusions formed in the wake of near misses (or in the wake of significant increases in risk) shape future responses. More than that, the above suggests that preferring a 'glass half-full' type of analysis to a 'glass half-empty' position can tilt the interpretation toward an event that 'did not occur' rather than one that 'almost happened'.

Fooling yourself into safety by promoting 'optimism' in interpreting reality might be a costly venture...