Showing posts with label behavioral finance. Show all posts

Sunday, October 22, 2017

22/10/17: Framing Effects and S&P500 Performance


A great post highlighting the impact of framing on our perception of reality: https://fat-pitch.blogspot.com/2017/10/using-time-scaling-and-inflation-to.html.

Take two charts of stock market performance over 85-odd years:


The chart on the left shows the nominal index reading for the S&P 500. The one on the right shows the same series, adjusted for inflation and plotted on a log scale to account for the long duration of the time series. In other words, both charts effectively contain the same information, presented in a different format (frame).

Spot the vast difference in the way we react to these two charts...
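The framing gap between the two charts can be made concrete with a back-of-the-envelope calculation. This is a sketch with assumed, illustrative numbers (not the actual S&P 500 data): the same price path yields very different headline figures depending on whether we quote nominal levels, real levels, or annualized rates (which is what a log scale visually encodes).

```python
# Hypothetical illustration: one price path, framed three ways.
# Assume an index rising from 20 to 2500 over 85 years (nominal),
# with average inflation of 3% a year. All numbers are invented.
years = 85
start, end = 20.0, 2500.0

# Frame 1: nominal level -- growth looks explosive and back-loaded.
nominal_multiple = end / start                      # 125x

# Frame 2: inflation-adjusted (real) level.
deflator = 1.03 ** years                            # cumulative price level
real_multiple = nominal_multiple / deflator         # ~10.1x

# Frame 3: a log scale turns compound growth into a straight line,
# so the annualized rate is what the eye reads off the chart.
nominal_cagr = nominal_multiple ** (1 / years) - 1  # ~5.8% a year
real_cagr = real_multiple ** (1 / years) - 1        # ~2.8% a year

print(round(nominal_multiple, 1), round(real_multiple, 1))
print(round(nominal_cagr, 4), round(real_cagr, 4))
```

On the linear nominal chart the eye reads the 125x multiple; on the log-scale real chart it reads something closer to a steady ~2.8% a year, which feels far less dramatic.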

Tuesday, October 3, 2017

3/10/17: Ambiguity Fun: Perceptions of Rationality?



Here is a very insightful set of plots, well worth studying, showing the perceived range of probabilities under subjective measure scenarios. Source: https://github.com/zonination/perceptions




The charts above speak volumes about both our (human) behavioural biases in assessing the probabilities of events and the nature of subjective distributions.

First, on the former. As our students (in all of my courses, from Introductory Statistics, to Business Economics, to advanced courses in Behavioural Finance and Economics, Investment Analysis, and Risk & Resilience) will have learned (to varying degrees of insight and complexity), the world of rational expectations relies (amongst other assumptions) on the assumption that we, as decision-makers, are capable of perfectly assessing the true probabilities of uncertain outcomes. And as we have all learned in these classes, we are not capable of doing this, in part due to informational asymmetries, in part due to behavioural biases, and so on.

The charts above clearly show this. There is a general trend of people assigning increasingly lower probabilities to less likely events and increasingly higher probabilities to more likely ones. So far, good news for rationality. The range (spread) of assignments also becomes narrower as we move to the tails (the lowest and highest assigned probabilities), so the degree of confidence in assessment increases. This is also good news for rationality.

But there the evidence for rationality ends.

Firstly, note the S-shaped nature of the distributions running from higher assigned probabilities to lower ones. Clearly, our perceptions of probability are non-linear, with the decline in assigned likelihoods being steeper in the middle of the range of perceptions than at the extremes. This is inconsistent with rationality, which implies a linear trend.
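One standard formalization of this kind of non-linearity (my addition, not something estimated from these survey plots) is the probability weighting function from Tversky and Kahneman's cumulative prospect theory (1992), under which small probabilities are overweighted and large ones underweighted:

```python
# Sketch: Tversky-Kahneman (1992) probability weighting function.
# gamma ~ 0.61 is the commonly cited estimate for gains; this is a
# textbook illustration, not a fit to the zonination survey data.

def weight(p: float, gamma: float = 0.61) -> float:
    """Inverse-S weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Small probabilities are overweighted, large ones underweighted:
for p in (0.01, 0.25, 0.50, 0.75, 0.99):
    print(f"p = {p:.2f}  ->  w(p) = {weight(p):.3f}")
```

With γ ≈ 0.61, w(0.01) ≈ 0.055 and w(0.99) ≈ 0.91, i.e. the perceived scale is compressed toward the middle: a non-linear mapping between stated and perceived likelihoods, of the general kind the charts above display.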

Secondly, there is a notable kick-back in the Assigned Probability distributions for the Highly Unlikely and Chances Are Slight types of perceptions. This can be due to ambiguity in the wording of these perceptions (the ordering can be viewed differently, with Highly Unlikely preceding Almost No Chance, and Chances Are Slight preceding Highly Unlikely). Still, there is a lot of oscillation in other ordering pairs (e.g. Unlikely —> Probably Not —> Little Chance; and We Believe —> Probably). This is also consistent with ambiguity, which is a violation of rationality.

Thirdly, not a single distribution of assigned probabilities by perception follows a bell-shaped ‘normal’ curve. Not for a single category of perceptions. All distributions are skewed, almost all have extreme-value ‘bubbles’, and the majority have multiple local modes. This is yet another piece of evidence against rational expectations.

There are severe outliers in all perception categories. Some (e.g. in the case of the ‘Probably Not’ category) appear to be largely due to errors that can be induced by the ambiguous ranking of the category, or due to judgement errors. Others (e.g. in the case of the “We Doubt” category) appear to be systemic and influential. The dispersion of assignments seems to follow the ambiguity pattern, with higher-ambiguity (tail) categories inducing greater dispersion. But, interestingly, there also appears to be stronger ambiguity in the lower range of perceptions (from “We Doubt” to “Highly Unlikely”) than in the upper range. This could be ‘natural’ or ‘rational’ if we think that a less-likely-event signifier is more ambiguous. But the same holds for more likely events too (see the range from “We Believe” to “Likely” and “Highly Likely”).

There are many more points worth discussing in the context of this exercise. But on the net, the data suggests that the rational expectations view of our ability to assess true probabilities of uncertain outcomes is faulty not only at the level of the tail events that are patently identifiable as ‘unlikely’, but also in the range of tail events that should be ‘nearly certain’. In other words, ambiguity is tangible in our decision making. 



Note: it is also worth noting that the above evidence suggests that we treat certainty (the tails) and uncertainty (the centre of the range of perceptions and assignment choices) inversely to what would be expected under rational expectations:
In a rational setting, perceptions that carry indeterminate outruns should have a greater dispersion of values for assigned probabilities: if something is "almost evenly" distributed, it should be harder for us to form a consistent judgement as to how probable such an outrun can be, especially compared to something that is either "highly unlikely" (aka quite certain not to occur) or "highly likely" (aka quite certain to occur). The data above suggest the opposite.
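As a sketch of how this dispersion claim could be checked on data shaped like the zonination/perceptions survey (phrase category mapped to assigned probabilities in percent; the numbers below are invented for illustration, not the survey's actual responses):

```python
import statistics

# Invented responses, shaped like the survey data: phrase -> assigned
# probabilities (%) across respondents. Purely illustrative.
responses = {
    "Highly Likely":   [85, 90, 95, 80, 92],
    "We Believe":      [60, 75, 80, 55, 85],
    "About Even":      [48, 50, 52, 50, 49],
    "We Doubt":        [10, 30, 40, 15, 45],
    "Highly Unlikely": [2, 5, 10, 20, 8],
}

# Compare central tendency with dispersion per category; under the
# rational-setting prediction above, "About Even" should show the
# widest spread, yet in these (illustrative) data it shows the narrowest.
for label, vals in responses.items():
    spread = max(vals) - min(vals)   # crude range; IQR would also work
    print(f"{label:15s} median={statistics.median(vals):5.1f} range={spread}")
```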

Sunday, May 22, 2016

21/5/16: Manipulating Markets in Everything: Social Media, China, Europe


So, the Chinese Government swamps critical analysis with ‘positive’ social media posts, per this Bloomberg report: http://www.bloomberg.com/news/articles/2016-05-19/china-seen-faking-488-million-internet-posts-to-divert-criticism.

As the story notes: “stopping an argument is best done by distraction and changing the subject rather than more argument”.

So now, consider what the EU and European Governments (including Irish Government) have been doing since the start of the Global Financial Crisis.

They have hired scores of (mostly) mid-educated economists to write what effectively amounts to repetitive reports on the state of the economy, all endlessly cheering the state of ‘recovery’.

In several cases, we now have statistics agencies publishing data that was previously available in a single release across two separate releases, providing an opportunity to talk up the figures for the media. Example: the Irish CSO release of the Live Register stats. In another example, the same data previously available in 3 files - the Irish Exchequer results - is being reported and released through numerous channels and replicated across a number of official agencies.

The result: any critical opinion is now drowned out by scores of officially sanctioned presentations, statements, releases and claims, accompanied by puff pieces from complicit media and professional analysts (e.g. sell-side analysts and bond-placing desks).

Chinese manipulating social media, my eye… take a mirror and add lights: everyone’s holding the proverbial bag… 

Saturday, May 21, 2016

20/5/16: Business Owners: Not Great With Counterfactuals


A recent paper draws on a “survey of participants in a large-scale business plan competition experiment” in Nigeria, “in which winners received an average of US$50,000 each”, which “is used to elicit beliefs about what the outcomes would have been in the alternative treatment status.”

So what exactly was done? Business owners were basically asked what would have happened to their business had an alternative business investment process taken place, as opposed to the one that took place under the competition outcome. “Winners in the treatment group are asked subjective expectations questions about what would have happened to their business should they have lost, and non‐winners in the control group asked similar questions about what would have happened should they have won.”

“Ex ante one can think of several possibilities as to the likely accuracy of the counterfactuals”:

  1. “…business owners are not systematically wrong about the impact of the program, so that the average treatment impact estimated using the counterfactuals should be similar to the experimental treatment effect. One potential reason to think this is that in applying for the competition the business owners had spent four days learning how to develop a business plan… outlining how they would use the grant to develop their business. The control group [competition losers] have therefore all had to previously make projections and plans for business growth based on what would happen if they won, so that we are asking about a counterfactual they have spent time thinking about.”
  2. ”…behavioral factors lead to systematic biases in how individuals think of these counterfactuals. For example, the treatment group may wish to attribute their success to their own hard work and talent rather than to winning the program, in which case they would underestimate the program effect. Conversely they may fail to take account of the progress they would have made anyway, attributing all their growth to the program and overstating the effect. The control group might want to make themselves feel better about missing out on the program by understating its impact (...not winning does not matter that much). Conversely they may want to make themselves feel better about their current level of business success by overstating the impact of the program (saying to themselves I may be small today, but it is only because I did not win and if I had that grant I would be very successful).”


The actual results show that business owners “do not provide accurate counterfactuals”, even in this case where the competition awards (and thus the intervention, or shock) were very large.

  • The authors found that “both the control and treatment groups systematically overestimate how important winning the program would be for firm growth… 
  • “…the control group thinks they would grow more had they won than the treatment group actually grew”
  • “…the treatment group thinks they would grow less had they lost than the control group actually grew” 

Or in other words: losers overestimate benefits of winning, winners overestimate the adverse impact from losing... and no one is capable of correctly analysing own counterfactuals.
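The logic of the comparison can be sketched with made-up numbers (purely illustrative, not the paper's data): the experimental treatment effect is the difference in actual outcomes between winners and the control group, while each group's subjective effect substitutes its stated counterfactual for the arm it never experienced.

```python
# Hypothetical average outcomes (e.g. number of employees); all invented.
treated_actual = 10.0            # winners' actual outcome
control_actual = 6.0             # non-winners' actual outcome

control_belief_if_won = 12.0     # losers' stated counterfactual ("had we won")
treated_belief_if_lost = 3.0     # winners' stated counterfactual ("had we lost")

# Experimental treatment effect (both arms actually observed):
experimental_effect = treated_actual - control_actual          # 4.0

# Effects implied by subjective counterfactuals:
control_implied = control_belief_if_won - control_actual       # 6.0: losers overstate winning
treated_implied = treated_actual - treated_belief_if_lost      # 7.0: winners overstate losing

# In this illustration both groups overstate the program's impact
# relative to the experiment, mirroring the pattern the paper reports.
print(experimental_effect, control_implied, treated_implied)
```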


Full paper is available here: McKenzie, David J., “Can Business Owners Form Accurate Counterfactuals? Eliciting Treatment and Control Beliefs About Their Outcomes in the Alternative Treatment Status” (May 10, 2016). World Bank Policy Research Working Paper No. 7668: http://ssrn.com/abstract=2779364

Tuesday, November 27, 2012

27/11/2012: Neural Data and Investor Behavior


Fascinating stuff... really: a new study, titled "Testing Theories of Investor Behavior Using Neural Data" by Cary Frydman, Nicholas Barberis, Colin Camerer, Peter Bossaerts and Antonio Rangel (link) finds that "...measures of neural activity provided by functional magnetic resonance imaging (fMRI) can be used to test between theories of investor behavior that are difficult to distinguish using behavioral data alone."

How so? "Subjects traded stocks in an experimental market while we measured their brain activity. Behaviorally, we find that our average subject exhibits a strong disposition effect [the robust empirical fact that individual investors have a greater propensity to sell stocks trading at a gain relative to the purchase price, rather than stocks trading at a loss] in his trading, even though it is suboptimal."

More so: "We then use the neural data to test a specific theory of the disposition effect, the “realization utility” hypothesis, which argues that the effect arises because people derive utility directly from the act of realizing gains and losses. [Note to my Investment Theory (TCD) and Financial & Business Environments (UCD) students - we talked about direct utility derived from actual transactions, plus indirect utility effects of learning from same... remember?..] Consistent with this hypothesis, we find that

  • activity in an area of the brain known to encode the value of decisions correlates with the capital gains of potential trades, 
  • that the size of these neural signals correlates across subjects with the strength of the behavioral disposition effects, and that 
  • activity in an area of the brain known to encode experienced utility exhibits a sharp upward spike in activity at precisely the moment at which a subject issues a command to sell a stock at a gain."
Awesome! We might not be wired for living in the world of uncertainty, but we might be somewhat wired for deriving utility out of uncertain gambles?
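For context (my addition, not part of the paper's fMRI protocol): the behavioral disposition effect is conventionally measured following Odean (1998) by comparing the proportion of gains realized (PGR) with the proportion of losses realized (PLR); PGR > PLR indicates a propensity to sell winners and hold on to losers.

```python
# Odean-style disposition measure on a trading record.
# PGR = realized gains / (realized + paper gains) on days a sale occurs;
# PLR = realized losses / (realized + paper losses). Counts below are
# hypothetical, for illustration only.

def disposition(realized_gains, paper_gains, realized_losses, paper_losses):
    pgr = realized_gains / (realized_gains + paper_gains)
    plr = realized_losses / (realized_losses + paper_losses)
    return pgr, plr

pgr, plr = disposition(realized_gains=60, paper_gains=40,
                       realized_losses=25, paper_losses=75)
print(f"PGR = {pgr:.2f}, PLR = {plr:.2f}")  # PGR 0.60 vs PLR 0.25
```

A gap like this (selling 60% of available winners but only 25% of available losers) is the behavioral signature whose neural correlates the study examines.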

Now, that's what I call taking investment to the MRI and getting results... well, they might not be investable results, but...

Monday, November 5, 2012

5/11/2012: Academic research and market efficiency


Fascinating article on both the issue of markets efficiency (pricing-in of newsflows) and the impact of herding via learning (triggered by academic research) in finance: here.

A nice addition to our discussions both in TCD and UCD courses.

Friday, May 11, 2012

11/5/2012: Ignoring that which almost happened?

In recent years, I have found myself migrating more firmly toward behavioralist views on finance and economics. Not that this view, in my mind, contradicts the classes of models and logic I am accustomed to. Rather, it enriches them, adding toward completeness.

With this in mind - here's a fascinating new study.

"How Near-Miss Events Amplify or Attenuate Risky Decision Making", written by Catherine Tinsley, Robin Dillon and Matthew Cronin and published in the April 2012 issue of Management Science, studies the way people change their risk attitudes "in the aftermath of many natural and man-made disasters".

More specifically, "people often wonder why those affected were underprepared, especially when the disaster was the result of known or regularly occurring hazards (e.g., hurricanes). We study one contributing factor: prior near-miss experiences. Near misses are events that have some nontrivial expectation of ending in disaster but, by chance, do not."

The study shows that "when near misses are interpreted as disasters that did not occur, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions (e.g., choosing not to engage in mitigation activities for the potential hazard). On the other hand, if near misses can be recognized and interpreted as disasters that almost happened, this will counter the basic “near-miss” effect and encourage more mitigation. We illustrate the robustness of this pattern across populations with varying levels of real expertise with hazards and different hazard contexts (household evacuation for a hurricane, Caribbean cruises during hurricane season, and deep-water oil drilling). We conclude with ideas to help people manage and communicate about risk."

An interesting potential corollary to the study is that analytical conclusions formed in the wake of near misses (or of significant increases in risk) matter to future responses. Not only that: the above suggests that preferring 'glass half-full' analysis over a 'glass half-empty' position might lead to the conclusion that an event 'did not occur' rather than that it 'almost happened'.

Fooling yourself into safety by promoting 'optimism' in interpreting reality might be a costly venture...