
Thursday, April 16, 2020

16/4/20: The BRICS+ challenge to institutional unipolarity and U.S. hegemony: 2020 Lecture


My slides from the talk I gave yesterday to the MA in Non-Proliferation and Terrorism Studies students @MIIS on the COVID-updated topic of challenges to Pax Americana in the post-Bretton Woods institutional frameworks (IMF, World Bank, etc.).

Friday, January 11, 2019

11/1/19: A Behavioral Experiment: Irish License Plates and Household Demand for Cars


While a relatively well-known and understood fact in Ireland, this is an interesting snapshot of data for our students in the Behavioral Finance and Economics course at MIIS.


In 2013, Ireland introduced a new format of car license plates that created a de facto natural experiment in behavioural economics. Prior to 2013, Irish license plates contained, as their first two digits, the year of the car’s registration. Since 2013, prompted by the fear of the number ‘13’, the plates’ first three digits designate the year and the half-year of registration (e.g. ‘131’ for January-June 2013, ‘132’ for July-December 2013).


Prior to the 2013 change in plates, Irish car purchases were heavily concentrated in the first two months of each year - a ‘vanity effect’ of license plates: buying early in the year provided additional utility, since the vehicle carried the current-year identifier for a longer period of time. The post-2013 changes can therefore be expected to yield two effects:
1) The ‘vanity effect’ should be split between the first two months of the first half of the year and the first two months of the second half; and
2) Overall, the ‘vanity effect’ across the two segments of the year should be higher than that for the period before the 2013 change.


As the chart above illustrates, both of these effects are confirmed in the data. Irish buyers are now (post-2013) more concentrated in January, February, July and August than prior to 2013. In 2009-2012, the average share of annual sales falling in these four months stood at 44.8 percent; this rose to 55.75 percent for the period starting in 2014. The difference is statistically significant at the 5 percent level.

The share of annual sales falling in January-February remained statistically unchanged, nominally rising from a 31.77 percent average in 2009-2012 to 32.56 percent since 2014; this difference is not statistically significant even at the 10 percent level. However, the share of sales falling in the July-August period rose from 13.04 percent in 2009-2012 to 23.19 percent since the start of 2014. This increase is statistically significantly greater than zero at the 1 percent level.

Qualitatively and statistically similar results obtain using the 2002-2008 average. Moving out to the pre-2002 average, the only difference is that the increase in the concentration of sales in the January-February period becomes statistically significant.
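To make the comparison concrete, here is a minimal sketch of the kind of test reported above, using a standard two-sample t-test. The yearly share figures below are hypothetical placeholders, not the actual CSO data:

```python
# Sketch: test whether the four-month (Jan, Feb, Jul, Aug) share of annual
# car sales differs pre- vs post-2013. Yearly shares are hypothetical.
import numpy as np
from scipy import stats

pre_2013 = np.array([0.44, 0.45, 0.45, 0.45])    # 2009-2012, hypothetical
post_2013 = np.array([0.55, 0.56, 0.56, 0.56])   # 2014-2017, hypothetical

t_stat, p_value = stats.ttest_ind(post_2013, pre_2013)
print(f"means: {pre_2013.mean():.3f} -> {post_2013.mean():.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")    # p < 0.05: significant at 5%
```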

In simple terms, what is interesting about the Irish data is that the license plate format - in particular, the identification of the year of the car’s make - strongly induces a ‘vanity effect’ in purchaser behaviour, and that this effect is sensitive to the granularity of the signal contained in the license plate format. What would be interesting at this point is to look at the seasonal variation in pricing data, including for used vehicles, controlling for the hedonic characteristics of the cars sold and accounting for the variable promotions and discounts applied by dealers.

Thursday, June 7, 2018

6/6/2018: Monopsony Power in the US Labour Market


I have recently written about rising firm power in labour markets, driven by the monopsonisation of these markets thanks to the continued growth of the contingent workforce: http://trueeconomics.blogspot.com/2018/05/23518-contingent-workforce-online.html. In that post, I referenced a new paper, "Concentration in US labour markets: Evidence from online vacancy data" by Azar, J A, I Marinescu, M I Steinbaum and B Taska. The authors have just published a VOX blog post on their research, worth reading: https://voxeu.org/article/concentration-us-labour-markets.


Wednesday, May 23, 2018

23/5/18: Contingent Workforce, Online Labour Markets and Monopsony Power


The promise of the contingent workforce and the technological enablement of the ‘shared economy’ is that today’s contingent workers - workers using their own capital to supply services - are free agents, at liberty to set their own pay, work time, working conditions and employment terms in an open marketplace that creates no asymmetries between their employers and themselves. In economics terms, then, the future of the technologically-enabled contingent workforce is one of reduced monopsonisation.

A reminder: monopsony, as defined in labour economics, is the market power of the employer over employees. In the past, monopsonies were primarily associated with 'company towns' - highly concentrated labour markets dominated by a single employer. This notion seemed to fade as transportation links between towns improved. In this context, the increasing penetration of technological platforms into the contingent/shared economies (e.g. the creation of shared platforms like Uber and Lyft) should contribute to a reduction in monopsony power and an increase in employee power.

Two recent papers - Azar, J A, I Marinescu, M I Steinbaum and B Taska (2018), “Concentration in US labor markets: Evidence from online vacancy data”, NBER Working Paper w24395, and Dube, A, J Jacobs, S Naidu and S Suri (2018), “Monopsony in online labor markets”, NBER Working Paper w24416 - dispute this proposition, finding empirical evidence that monopsony power is actually increasing thanks to the technologically enabled contingent employment platforms.

Online labour markets are a natural testing ground for the proposition that technological transformation can reduce the monopsony power of employers: in theory, they offer nearly frictionless information and job flows between contractors and contractees, transparent information about pay and employment terms, and low costs of switching from one job to another.

The latter study attempts to "rigorously estimate the degree of requester market power in a widely used online labour market – Amazon Mechanical Turk, or MTurk... the most popular online micro-task platform, allowing requesters (employers) to post jobs which workers can complete".

The authors "provide evidence on labour market power by measuring how sensitive workers’ willingness to work is to the reward offered", using the labour supply elasticity facing a firm - a standard measure of wage-setting (monopsony) power. "For example, if lowering wages by 10% leads to a 1% reduction in the workforce, this represents an elasticity of 0.1." To make their findings more robust, the authors use two methodologies for estimating labour supply elasticities (a numerical sketch follows the list):
1) Observational approach, which involves "data from a near-universe of tasks scraped from MTurk" to establish "how the offered reward affected the time it took to fill a particular task", and
2) Randomised experiments approach, uses "experimental variation, and analyse data from five previous experiments that randomised the wages of MTurk subjects. This randomised reward-setting provides ‘gold-standard’ evidence on market power, as we can see how MTurk workers responded to different wages."
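To see what an elasticity of 0.1 looks like in estimation terms, here is a minimal numerical sketch: with simulated (hypothetical) reward and willing-workforce data, the elasticity is the slope of a log-log regression.

```python
# Sketch: recover a labour supply elasticity from (simulated) reward /
# workforce data. The elasticity is the slope of log(workforce) on log(wage).
import numpy as np

rng = np.random.default_rng(42)
true_elasticity = 0.1

wages = rng.uniform(5.0, 15.0, size=200)   # hypothetical offered rewards
workforce = 100 * wages**true_elasticity * np.exp(rng.normal(0, 0.05, 200))

slope, _ = np.polyfit(np.log(wages), np.log(workforce), 1)
print(f"estimated elasticity: {slope:.3f}")
# ~0.1: cutting the reward by 10% loses only ~1% of willing workers
```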

The authors "empirically estimate both a ‘recruitment’ elasticity (comparable to what is recovered from the observational data) where workers see a reward and associated task as part of their normal browsing for jobs, and a ‘retention’ elasticity where workers, having already accepted a task, are given an opportunity to perform additional work for a randomised bonus payment."

The findings from both approaches are strikingly similar. Both "provide a remarkably consistent estimate of the labour supply elasticity facing MTurk requesters. As shown in Figure 2, the precision-weighted average experimental requester’s labour supply elasticity is 0.13 – this means that if a requester paid a 10% lower reward, they’d only lose around 1% of workers willing to perform the task. This suggests a very high degree of market power. The experimental estimates are quite close to those produced using the machine-learning based approach using observational data, which also suggest around 1% reduction in the willing workforce from a 10% lower wage."


To put these findings into perspective, "if requesters are fully exploiting their market power, our evidence implies that they are paying workers less than 20% of the value added. This suggests that much of the surplus created by this online labour market platform is captured by employers... [the authors] find a highly robust and surprisingly high degree of market power even in this large and diverse spot labour market."

In evolutionary terms, "MTurk workers and their advocates have long noted the asymmetry in market structure among themselves. Both efficiency and equality concerns have led to the rise of competing, ‘worker-friendly’ platforms..., and mechanisms for sharing information about good and bad requesters... Scientific funders such as Russell Sage have instituted minimum wages for crowd-sourced work. Our results suggest that these sentiments and policies may have an economic justification. ...Moreover, the hope that information technology will necessarily reduce search frictions and monopsony power in the labour market may be misplaced."

My take: the evidence on monopsony power in web-based contingent workforce platforms dovetails naturally with the evidence on the monopolisation of modern economies. Technological progress held out the promise of freeing human capital from strict contractual limits on its returns, of greater scope for technology-aided entrepreneurship and innovation, and of a contingent workforce environment empowering greater returns to skills and labour. What the new technologies are delivering is the exact opposite: they appear to be aiding a greater transfer of power to technological, financial and even physical capital.

The 'free to work' nirvana ain't coming, folks.

Monday, October 9, 2017

9/10/17: Nature of our reaction to tail events: ‘odds’ framing


Here is an interesting article from Quartz on the Pentagon’s efforts to fund satellite surveillance of North Korea’s missile capabilities via Silicon Valley tech companies: https://qz.com/1042673/the-us-is-funding-silicon-valleys-space-industry-to-spot-north-korean-missiles-before-they-fly/. However, the most interesting (from my perspective) bit of the article relates neither to North Korea nor to the Pentagon, and not even to Silicon Valley’s role in U.S. efforts to stop nuclear proliferation. Instead, it relates to this passage from the article:



The key here is the link between our human (behavioral) propensity to take action and the dynamic nature of tail risks or, put more precisely, deeper uncertainty (as I put it in my paper on the de-democratization trend, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2993535, deeper uncertainty as contrasted with Knightian uncertainty).

Deeper uncertainty involves a dynamic view of the uncertain environment, in which potential tail events evolve before becoming quantifiable and forecastable risks. This environment differs from classical Knightian uncertainty in so far as the evolution of these events is not predictable and can be set against perceptions or expectations that the events can be prevented, while providing no historical or empirical basis for assessing their actual underlying probabilities.

In this setting, as opposed to the Knightian set-up with partially predictable and forecastable uncertainty, behavioral biases (e.g. confirmation bias, overconfidence, herding, framing, base rate neglect, etc.) apply. These biases alter our perception of the evolutionary dynamics of uncertain events and thus create a reference point of ‘odds’ of an event taking place. The ‘odds’ view evolves over time as new information arrives, but the ‘odds’ do not become probabilistically defined until very late in the game.

Deeper uncertainty, therefore, is not forecastable, and our empirical observations of its evolution are ex ante biased to downplay one, two, or all of the dimensions of its dynamics:
- Impact - the potential magnitude of uncertainty when it materializes into risk;
- Proximity - the distance between now and the potential materialization of risk;
- Speed - the speed with which both impact and proximity evolve; and
- Similarity - the extent to which our behavioral biases distort our assessment of the dynamics.

Knightian uncertainty, by contrast, is a simple, one-shot, non-dynamic tail risk. As such, the perceived degree of uncertainty (the ‘odds’) and the actual underlying uncertainty are similar.

Now, materially, the outturn of these dimensions of deeper uncertainty is that in a centralized decision-making setting, e.g. in the Pentagon or in the broader setting of Government agencies, we only take action after the transition from uncertainty into risk. The bureaucracy’s reliance on ‘expert opinions’ to assess the uncertain environment only reinforces some of the biases listed above. Experts generally do not deal with uncertainty; they are, instead, conditioned to deal with risks. Experts give zero weight to uncertainty until the moment the uncertain events become visible on the horizon, or when ‘the odds of an event change’, just as in the story told by Andrew Hunter in the Quartz article linked above - in other words, until risk assessment of the uncertainty becomes feasible.

The problem is that by that time, reacting to the risk can be infeasible or even irrelevant, because the speed and proximity of the shock have been growing along with its impact during the deeper uncertainty stage - and, more fundamentally, because the nature of the underlying uncertainty has changed as well.

Take North Korea: the current state of uncertainty around North Korea’s evolving path toward fully-developed nuclear and thermonuclear capabilities is about the extent to which North Korea will be willing to use its nukes. Yet the risk assessment framework - including across a range of expert viewpoints - is about the evolution of the nuclear capabilities themselves. The train of uncertainty has left the station, but the ticket holders to policy formation are still standing on the platform, debating how North Korea can be stopped from expanding its nuclear arsenal. Yes, the risks of a fully-armed North Korea are now fully visible; they are no longer in the realm of uncertainty, as the ‘odds’ of the nuclear arsenal have become fully exposed. But dealing with these risks is no longer material to the future, which is shaped by a new level of visible ‘odds’: how far North Korea will be willing to go in using its arsenal for geopolitical positioning. Worse, beyond this lies a deeper uncertainty not yet in the domain of visible ‘odds’ - the uncertainty over the future of the Korean Peninsula and the broader region, involving much more significant players: China and Russia vs Japan and the U.S.

The lesson here is that the centralized system of analysis and decision-making, e.g. the Deep State, to which we have devolved the power to create ‘true’ models of geopolitical realities, is failing. Not because it is populated with non-experts or is under-resourced, but because it is Knightian in nature - dominated by experts and centralized. A decentralized system of risk management is more likely to provide broader coverage of deeper uncertainty not because it can ‘see deeper’, but because, competing for targets or objectives, it can ‘see wider’ - covering more risk and uncertainty sources before the ‘odds’ become significant enough to allow for actual risk modelling.

Take the story told by Andrew Hunter, which relates to the Pentagon’s procurement of the Joint Light Tactical Vehicle (JLTV) as a replacement for the Humvee, exposed as inadequate by the events in Iraq and Afghanistan. The monopoly contracting nature of Pentagon procurement meant that until the Pentagon was publicly shown to be incapable of adequately protecting U.S. troops, no one in the market was monitoring the uncertainties surrounding the Humvee’s performance and adequacy in the light of rapidly evolving threats. Had the Pentagon’s procurement been more distributed and less centralized, alternative vehicles could have been designed and produced - and shown to be superior to the Humvee - under other supply contracts much earlier, in fact before the expert-procured Humvees cost thousands of American lives.

There is a basic, fundamental failure in our centralized public decision-making bodies - a failure that combines an inability to think beyond the confines of quantifiable risks with an inability to actively embrace the world of VUCA, a world that requires the active engagement of contrarians not only in risk assessment, but in decision making. That this failure is being exposed in the cases of North Korea, geopolitics and Pentagon procurement is only the tip of the iceberg. The real bulk of the challenges relating to this modus operandi of our decision-making bodies rests in much more prevalent and better distributed threats, e.g. cybersecurity and terrorism.

Tuesday, May 16, 2017

16/5/17: Insiders Trading: Concentration and Liquidity Risk Alpha, Anyone?


Disclosed insider trading has long been used by both passive and active managers as a common screen for value. With varying efficacy and time-unstable returns, the strategy is hardly a convincing factor for identifying specific investment targets, but it can serve as a signal validating or negating a previously established and tested strategy.

Much of this corresponds to my personal experience over the years and is hardly controversial. However, despite sufficient evidence to the contrary, insiders’ disclosures are still routinely used for simultaneous asset selection and strategy validation. Which, of course, sets an investor up to absorb the risks inherent in any and all biases present in the insiders’ activities.

In their March 2016 paper, “Trading Skill: Evidence from Trades of Corporate Insiders in Their Personal Portfolios” (NBER Working Paper No. w22115: http://ssrn.com/abstract=2755387), Ben-David, Itzhak, Birru, Justin, and Rossi, Andrea looked at “trading patterns of corporate insiders in their own personal portfolios” across a large dataset from a retail discount broker. The authors “…show that insiders overweight firms from their own industry. Furthermore, insiders earn substantial abnormal returns only on stocks from their industry, especially obscure stocks (small, low analyst coverage, high volatility).” In other words, insiders’ returns are not distinguishable from a liquidity risk premium, which makes the insiders-strategy alpha potentially as dumb as a blind ‘long the lowest-percentile returns’ strategy (which induces an extreme bias toward bankruptcy-prone names).

The authors also “… find no evidence that corporate insiders use private information and conclude that insiders have an informational advantage in trading stocks from their own industry over outsiders to the industry.”

Which means that using insiders’ disclosures requires: (1) correcting for the proximity of the insider’s own firm to the specific sub-sector and firm the insider is trading in; (2) using a diversified base of insiders to be tracked; and (3) systematically rebalancing the portfolio to avoid concentration bias in stocks with low liquidity and smaller caps (keeping in mind that this applies to both portfolio strategy and portfolio trading risks). A minimal sketch of point (3) follows.
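Here is a minimal sketch of the kind of liquidity-and-size screen point (3) implies, applied before insider-signal names enter a portfolio. The column names and thresholds below are hypothetical, not taken from the paper:

```python
# Sketch: filter insider-buy signals to limit concentration in illiquid,
# small-cap names. Columns and thresholds are hypothetical placeholders.
import pandas as pd

def screen_insider_signals(signals: pd.DataFrame,
                           min_market_cap: float = 2e9,      # e.g. $2bn floor
                           min_dollar_volume: float = 5e6,   # avg daily $ volume
                           max_weight: float = 0.05) -> pd.DataFrame:
    """Drop low-liquidity / small-cap names, then cap per-name weights."""
    liquid = signals[(signals["market_cap"] >= min_market_cap) &
                     (signals["avg_daily_dollar_volume"] >= min_dollar_volume)].copy()
    liquid["weight"] = 1.0 / len(liquid)                # equal-weight survivors
    liquid["weight"] = liquid["weight"].clip(upper=max_weight)
    liquid["weight"] /= liquid["weight"].sum()          # renormalise to 100%
    return liquid
```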


Thursday, November 10, 2016

9/11/16: Bitcoin vs Ether: MIIS Students' Case Study


Following last night's election results, Bitcoin rose sharply in value, in line with gold, while other digital currencies largely failed to provide a safe haven against the extreme spike in market volatility.

In a recent project, our students @MIIS looked at the relative valuation of Bitcoin and Ether (the cryptocurrency backing the Ethereum blockchain platform), highlighting:

  1. The fundamental supply and demand drivers for both currencies; and
  2. Both currencies' hedging and safe haven properties.
The conclusion of the case study was squarely in line with the Bitcoin and Ether behaviour observed today: Bitcoin outperforms Ether as both a hedge and a safe haven, and has stronger risk-adjusted return potential over the next 5 years.
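For reference, a standard way to operationalise the hedge vs safe haven distinction (in the spirit of Baur and Lucey's definitions) is to look at an asset's co-movement with the market on average and, separately, on extreme down days. A minimal sketch with simulated returns - the data are hypothetical, not the students' dataset:

```python
# Sketch: hedge vs safe haven check. An asset is a hedge if it is
# uncorrelated/negatively correlated with the market on average, and a
# safe haven if that holds on extreme stress days. Returns are simulated.
import numpy as np

rng = np.random.default_rng(7)
equity = rng.normal(0.0, 0.01, 1000)                 # market returns
asset = -0.1 * equity + rng.normal(0.0, 0.02, 1000)  # candidate safe haven

print("overall corr:", np.corrcoef(asset, equity)[0, 1])

stress = equity <= np.quantile(equity, 0.05)         # worst 5% of market days
print("stress-day corr:", np.corrcoef(asset[stress], equity[stress])[0, 1])
# Zero or negative in both cases -> hedge and safe haven properties
```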



Monday, May 23, 2016

22/5/16: House Prices & Household Consumption: From One Bust to the Other


In their often-cited 2013 paper, “Household Balance Sheets, Consumption, and the Economic Slump” (The Quarterly Journal of Economics, 128, 1687–1726, 2013), Mian, Rao, and Sufi used geographic variation in house price changes over the period 2006-2009 and household balance sheets in 2006 to estimate the elasticity of consumption expenditure to changes in the housing share of household net worth. In other words, the authors tried to determine how responsive consumption is to changes in house prices and housing wealth. The study estimated that a 1 percent drop in the housing share of household net worth was associated with a 0.6-0.8 percent decline in total consumer expenditure, including durable and non-durable consumption.

The problem with the Mian, Rao and Sufi (2013) estimates is that they were derived from proprietary data, and the analysis used proxy data for total expenditure.

Still, the paper is extremely influential because it documents a significant channel for shock transmission from property prices to household consumption, and thus to aggregate demand. And the estimated elasticities are shockingly large. This correlates strongly with the actual U.S. experience during the Great Recession, when the drop in household consumption expenditure was much sharper, significantly broader and much more persistent than in other recessions. As referenced in the Kaplan, Mitman and Violante (2016) paper (full reference below), “… unlike in past recessions, virtually all components of consumption expenditures, not just durables, dropped substantially. The leading explanation for these atypical aggregate consumption dynamics is the simultaneous extraordinary destruction of housing net worth: most aggregate house price indexes show a decline of around 30 percent over this period, and only a partial recovery towards trend since.”

Against this background, Kaplan, Mitman and Violante (2016) retest the Mian, Rao and Sufi (2013) results, this time using publicly available data sources. Specifically, they ask: “To what extent is the plunge in housing wealth responsible for the decline in the consumption expenditures of US households during the Great Recession?”

To answer it, they first “verify the robustness of the Mian, Rao and Sufi (2013) findings using different data on both expenditures and housing net worth. For non-durable expenditures, [they] use store-level sales from the Kilts-Nielsen Retail Scanner Dataset (KNRS), a panel dataset of total sales (quantities and prices) at the UPC (barcode) level for around 40,000 geographically dispersed stores in the US. …To construct [a] measure of local housing net worth, [Kaplan, Mitman and Violante (2016)] use house price data from Zillow…”

The Kaplan, Mitman and Violante (2016) findings are very reassuring: “When we replicate MRS using our own data sources, we obtain an OLS estimate of 0.24 and an IV estimate of 0.36 for the elasticity of non-durable expenditures to housing net worth shocks. Based on Mastercard data on non-durables alone, MRS report OLS estimates of 0.34-0.38. Using the KNRS expenditure data together with a measure of the change in the housing share of net worth provided by MRS, we obtain an OLS estimate of 0.34 and an IV estimate of 0.37 – essentially the same elasticities that MRS find. …Overall, we find it encouraging that two very different measures of household spending yield such similar elasticity estimates.” The remaining numerical differences between the two studies are probably due to the different sources of house price data, and so are not material.
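As a rough illustration of the estimation being replicated - a cross-sectional regression of non-durable expenditure growth on the housing net worth shock - here is a minimal OLS sketch. The data are simulated, the variable names are hypothetical, and the papers' instrumental-variables step is omitted:

```python
# Sketch: OLS elasticity of non-durable expenditure growth with respect to
# the housing net-worth shock, across (simulated) local areas.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
housing_nw_shock = rng.normal(-0.10, 0.05, n)  # change in housing share of net worth
spend_growth = 0.35 * housing_nw_shock + rng.normal(0.0, 0.03, n)

ols = sm.OLS(spend_growth, sm.add_constant(housing_nw_shock)).fit()
print(f"elasticity estimate: {ols.params[1]:.3f}")  # ~0.35 by construction here
```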

Meanwhile, “…the interaction between the fall in local house prices and the size of initial leverage has no statistically significant effect on nondurable expenditures, once the direct effect of the fall in local house prices has been controlled for.”

Beyond this, the study separates “the price and quantity components of the fall in nominal consumption expenditures. …When we control for …changes in prices, we find an elasticity that is 20% smaller than our baseline estimates for nominal expenditures.” In other words, deflation and moderating inflation did ameliorate the overall impact of the property price decline on consumption.

Lastly, the authors use the much more broadly-based consumption data from the Diary Survey of the Consumer Expenditure Survey to relate expenditures in the detailed scanner data set to their broader survey counterpart, and thus “to estimate the elasticity of total nondurable goods and services”. The authors “obtain an elasticity between 0.7 and 0.9 … when applied to total non-durable goods and services.”

Overall, the shock transmission channel running from declining house prices and housing wealth to household consumption is not only non-trivial in scale, but also robust to the different data sources used to estimate it. House prices do have a significant impact on household demand and, thus, on aggregate demand. And house price busts do lead to drops in economic growth.



Full paper: Kaplan, Greg and Mitman, Kurt and Violante, Giovanni L., "Non-Durable Consumption and Housing Net Worth in the Great Recession: Evidence from Easily Accessible Data" (May 2016, NBER Working Paper No. w22232: http://ssrn.com/abstract=2777320)

Sunday, May 22, 2016

22/5/16: Lying and Making an Effort at It


The Dwenger, Nadja and Lohse, Tim paper “Do Individuals Put Effort into Lying? Evidence from a Compliance Experiment” (March 10, 2016, CESifo Working Paper Series No. 5805: http://ssrn.com/abstract=2764121) looks at “…whether individuals in a face-to-face situation can successfully exert some lying effort to delude others.”

The authors use a laboratory experiment in which “participants were asked to assess videotaped statements as being rather truthful or untruthful. The statements are face-to-face tax declarations. The video clips feature each subject twice making the same declaration. But one time the subject is reporting truthfully, the other time willingly untruthfully. This allows us to investigate within-subject differences in trustworthiness.”

What the authors found is rather interesting: “a subject is perceived as more trustworthy if she deceives than if she reports truthfully. It is particularly individuals with dishonest appearance who manage to increase their perceived trustworthiness by up to 15 percent. This is evidence of individuals successfully exerting lying effort.”

So you are more likely to buy a lemon from a lemon-selling dealer than the real thing from an honest one... doh...



Some more ‘beef’ from the study:

“To deceive or not to deceive is a question that arises in basically all spheres of life. Sometimes the stakes involved are small and coming up with a lie is hardly worth it. But sometimes putting effort into lying might be rewarding, provided the deception is not detected.”

However, “whether or not a lie is detected is a matter of how trustworthy the individual is perceived to be. When interacting face-to-face two aspects determine the perceived trustworthiness:

  • First, an individual’s general appearance, and 
  • Second, the level of some kind of effort the individual may choose when trying to make the lie appear truthful. 


The authors ask a non-trivial question: “do we really perceive individuals who tell the truth as more trustworthy than individuals who deceive?”

“Despite its importance for social life, the literature has remained surprisingly silent on the issue of lying effort. This paper is the first to shed light on this issue.”

The study actually uses two types of data from two types of experiments: “An experiment with room for deception which was framed as a tax compliance experiment and a deception-assessment experiment. In the compliance experiment subjects had to declare income in face-to-face situations vis-a-vis an officer, comparable to the situation at customs. They could report honestly or try to evade taxes by deceiving. Some subjects received an audit and the audit probabilities were influenced by the tax officer, based on his impression of the subject. The compliance interviews were videotaped and some of these video clips were the basis for our deception-assessment experiment: For each subject we selected two videos both showing the same low income declaration, but once when telling the truth and once when lying. A different set of participants was asked to watch the video clips and assess whether the recorded subject was truthfully reporting her income or whether she was lying. These assessments were incentivised. Based on more than 18,000 assessments we are able to generate a trustworthiness score for each video clip (number of times the video is rated "rather truthful" divided by the total number of assessments). As each individual is assessed in two different video clips, we can exploit within-subject differences in trustworthiness. …Any difference in trustworthiness scores between situations of honesty and dishonesty can thus be traced back to the effort exerted by an individual when lying. In addition, we also investigate whether subjects appear less trustworthy if they were audited and had been caught lying shortly before. …the individuals who had to assess the trustworthiness of a tax declarer did not receive any information on previous audits.”
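The trustworthiness score and the within-subject comparison described above are mechanically simple; here is a minimal sketch with made-up assessment records (the column names are hypothetical):

```python
# Sketch: per-clip trustworthiness score (share of "rather truthful" ratings)
# and the within-subject difference between lying and truthful clips.
import pandas as pd

assessments = pd.DataFrame({
    "subject":        [1, 1, 1, 1, 2, 2, 2, 2],
    "truthful_clip":  [True, True, False, False, True, True, False, False],
    "rated_truthful": [1, 0, 1, 1, 1, 1, 0, 1],   # 1 = "rather truthful"
})

scores = (assessments
          .groupby(["subject", "truthful_clip"])["rated_truthful"]
          .mean()
          .unstack())

# Positive values: the subject appears MORE trustworthy when lying
scores["lying_gain"] = scores[False] - scores[True]
print(scores)
```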

The main results are as follows:

  • “Subjects appear as more trustworthy in compliance interviews in which they underreport than in compliance interviews in which they report truthfully. When categorizing individuals in subjects with a genuine dishonest or honest appearance, it becomes obvious that it is mainly individuals of the former category who appear more trustworthy when deceiving.”
  • “These individuals with a dishonest appearance are able to increase their perceived trustworthiness by up to 15 percent. This finding is in line with the hypothesis that players with a comparably dishonest appearance, when lying, expend effort to appear truthful.”
  • “We also find that an individual’s trustworthiness is affected by previous audit experiences. Individuals who were caught cheating in the previous period, appear significantly less trustworthy, compared to individuals who were either not audited or who reported truthfully. This effect is exacerbated for individuals with a dishonest appearance if the individual is again underreporting but is lessened if the individual is reporting truthfully.”


21/5/16: Manipulating Markets in Everything: Social Media, China, Europe


So, the Chinese Government swamps critical analysis with ‘positive’ social media posts, per this Bloomberg report: http://www.bloomberg.com/news/articles/2016-05-19/china-seen-faking-488-million-internet-posts-to-divert-criticism.

As the story notes: “stopping an argument is best done by distraction and changing the subject rather than more argument”.

So now, consider what the EU and European Governments (including Irish Government) have been doing since the start of the Global Financial Crisis.

They have hired scores of (mostly) mid-educated economists to write what effectively amounts to repetitive reports on the state of the economy, all endlessly cheering the state of ‘recovery’.

In several cases, we now have statistics agencies publishing data that was previously available in a single release across two separate releases, providing an opportunity to up-talk the figures for the media. Example: the Irish CSO release of the Live Register stats. In another example, the same data previously available in 3 files - the Irish Exchequer results - is now reported and released through numerous channels and replicated across a number of official agencies.

The result: any critical opinion is now drowned in scores of officially sanctioned presentations, statements, releases and claims, accompanied by puff pieces from complicit media and professional analysts (e.g. sell-side analysts and bond-placing desks).

Chinese manipulating social media, my eye… take a mirror and add lights: everyone’s holding the proverbial bag… 

21/5/16: Banks Deposit Insurance: Got Candy, Mate?…


Since the end of the [acute phase of the] Global Financial Crisis, European banking regulators have been pushing the idea that responding to any future [of course never to be labeled ‘systemic’] banking crises will require a new, strengthened regime based on three pillars of regulatory and balance sheet measures:

  • Pillar 1: Harmonized regulatory supervision and oversight over banking institutions (micro-prudential oversight);
  • Pillar 2: Stronger capital buffers (in quantity and quality) alongside pre-prescribed ordering of bailable capital (Tier 1, intermediate, and deposits bail-ins), buffered by harmonized depositor insurance schemes (also covered under micro-prudential oversight); and
  • Pillar 3: Harmonized risk monitoring and management (macro-prudential oversight).


All of this forms the core idea behind the European System of Financial Supervision (ESFS). Per the EU Parliament (http://www.europarl.europa.eu/atyourservice/en/displayFtu.html?ftuId=FTU_3.2.5.html): “The objectives of the ESFS include developing a common supervisory culture and facilitating a single European financial market.”

Theory aside, the above Pillars are bogus, as I have argued on this blog and elsewhere. If anything, they represent a singular, infinitely deep confidence trap whereby policymakers, supervisors, banks and banks’ clients are likely to place even more confidence in the hands of the no-wiser regulators and supervisors who cluelessly slept through the 2000-2007 build-up of massive banking sector imbalances. And there is plenty of criticism of the architecture and the very philosophical foundations of the ESFS around.

Sugar buzz!...


However, there is generally at least a strong consensus on the desirability of a deposit insurance scheme, a consensus that stretches across all sides of the political spectrum. Here’s what the EU has to say about the scheme: “DGSs [deposit guarantee schemes] are closely linked to the recovery and resolution procedure of credit institutions and provide an important safeguard for financial stability.”

But what about the evidence to support this assertion? Why, there is a fresh study, with the ink still drying on it, via NBER (see details below) that looks into that matter.

Per the NBER authors: “Economic theories posit that bank liability insurance is designed as serving the public interest by mitigating systemic risk in the banking system through liquidity risk reduction. Political theories see liability insurance as serving the private interests of banks, bank borrowers, and depositors, potentially at the expense of the public interest.” So, at the very least, there is a theoretical conflict implied in the general deposit insurance concept. Under the economic theory, deposit insurance is an important driver of risk reduction in the banking system, inducing systemic stability. Under the political theory, it is itself a source of risk and thus can result in systemic risk amplification.

“Empirical evidence – both historical and contemporary – supports the private-interest approach as liability insurance generally has been associated with increases, rather than decreases, in systemic risk.” Wait, but the EU says deposit insurance will “provide an important safeguard for financial stability”. Maybe the EU knows a trick or two for resolving that empirical regularity?

Unlikely, according to the NBER study: “Exceptions to this rule are rare, and reflect design features that prevent moral hazard and adverse selection. Prudential regulation of insured banks has generally not been a very effective tool in limiting the systemic risk increases associated with liability insurance. This likely reflects purposeful failures in regulation; if liability insurance is motivated by private interests, then there would be little point to removing the subsidies it creates through strict regulation. That same logic explains why more effective policies for addressing systemic risk are not employed in place of liability insurance.”

Aha: the EU would have to become apolitical when it comes to banking sector regulation, supervision, policies and incentives, subsidies, and market supports and interventions in order to have a chance (not even a guarantee) that the deposit insurance mechanism will work to reduce systemic risk rather than increase it. Any bets on the chances of achieving such depoliticization? Yeah, right; I would not give it anything above 10 percent.

Worse, NBER research argues that “the politics of liability insurance also should not be construed narrowly to encompass only the vested interests of bankers. Indeed, in many countries, it has been installed as a pass-through subsidy targeted to particular classes of bank borrowers.”

So, in basic terms, deposit insurance is a subsidy; it is in fact a politically targeted subsidy favoring some borrowers at the expense of system stability, and it is a perverse incentive for banks to take on more risk. Back to those three pillars, folks: still think there won’t be any [thou shalt not call them ‘systemic’] crises with bail-ins and taxpayers’ hits in the GloriEUs Future?…


Full paper: Calomiris, Charles W. and Jaremski, Matthew, “Deposit Insurance: Theories and Facts” (May 2016, NBER Working Paper No. w22223: http://ssrn.com/abstract=2777311)

21/5/16: Voters selection biases and political outcomes


A recent study based on data from Austria looked at the impact of compulsory voting laws on voter quality.

Based on state and national elections data from 1949-2010, the authors “show that compulsory voting laws with weakly enforced fines increase turnout by roughly 10 percentage points. However, we find no evidence that this change in turnout affected government spending patterns (in levels or composition) or electoral outcomes. Individual-level data on turnout and political preferences suggest these results occur because individuals swayed to vote due to compulsory voting are more likely to be non-partisan, have low interest in politics, and be uninformed.”

In other words, it looks like compulsory voting triggers a selection bias: lower-quality voters enter the process, but, due to their lower quality, these voters do not induce a bias away from the status quo. Whatever the merits of increasing voter turnout via compulsory voting requirements may be, it does not appear to bring about more enlightened policy choices.

Full study is available here: Hoffman, Mitchell and León, Gianmarco and Lombardi, María, “Compulsory Voting, Turnout, and Government Spending: Evidence from Austria” (May 2016, NBER Working Paper No. w22221: http://ssrn.com/abstract=2777309)

So can you 'vote out' stupidity?..



Saturday, May 21, 2016

20/5/16: Business Owners: Not Great With Counterfactuals


A recent paper is based on a “survey of participants in a large-scale business plan competition experiment [in Nigeria], in which winners received an average of US$50,000 each, … used to elicit beliefs about what the outcomes would have been in the alternative treatment status.”

So what exactly was done? Business owners were basically asked what would have happened to their business had an alternative business investment process taken place, as opposed to the one that took place under the competition outcome. “Winners in the treatment group are asked subjective expectations questions about what would have happened to their business should they have lost, and non‐winners in the control group asked similar questions about what would have happened should they have won.”

“Ex ante one can think of several possibilities as to the likely accuracy of the counterfactuals”:

  1. “…business owners are not systematically wrong about the impact of the program, so that the average treatment impact estimated using the counterfactuals should be similar to the experimental treatment effect. One potential reason to think this is that in applying for the competition the business owners had spent four days learning how to develop a business plan… outlining how they would use the grant to develop their business. The control group [competition losers] have therefore all had to previously make projections and plans for business growth based on what would happen if they won, so that we are asking about a counterfactual they have spent time thinking about.”
  2. ”…behavioral factors lead to systematic biases in how individuals think of these counterfactuals. For example, the treatment group may wish to attribute their success to their own hard work and talent rather than to winning the program, in which case they would underestimate the program effect. Conversely they may fail to take account of the progress they would have made anyway, attributing all their growth to the program and overstating the effect. The control group might want to make themselves feel better about missing out on the program by understating its impact (...not winning does not matter that much). Conversely they may want to make themselves feel better about their current level of business success by overstating the impact of the program (saying to themselves I may be small today, but it is only because I did not win and if I had that grant I would be very successful).”


The actual results show that business owners “do not provide accurate counterfactuals”, even in this case where the competition awards (and thus the intervention, or shock) were very large.

  • The authors found that “both the control and treatment groups systematically overestimate how important winning the program would be for firm growth… 
  • “…the control group thinks they would grow more had they won than the treatment group actually grew”
  • “…the treatment group thinks they would grow less had they lost than the control group actually grew” 

Or, in other words: losers overestimate the benefits of winning, winners overestimate the adverse impact of losing... and no one is capable of correctly analysing their own counterfactuals.
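A minimal sketch of the comparison being made, with made-up numbers that reproduce the reported bias pattern: the experimental effect is the winners' actual outcome minus the losers' actual outcome, while the belief-based estimates substitute each group's counterfactual for the missing arm.

```python
# Sketch: experimental treatment effect vs belief-implied effects.
# All figures are hypothetical illustrations of the reported bias pattern.
actual_winners = 120.0     # mean firm size, winners (actual)
actual_losers = 100.0      # mean firm size, losers (actual)
believed_if_lost = 90.0    # winners' belief had they lost (< actual_losers)
believed_if_won = 140.0    # losers' belief had they won (> actual_winners)

experimental = actual_winners - actual_losers        # 20: the benchmark
winners_view = actual_winners - believed_if_lost     # 30: overstates the effect
losers_view = believed_if_won - actual_losers        # 40: overstates the effect

print(experimental, winners_view, losers_view)
```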


Full paper is available here: McKenzie, David J., “Can Business Owners Form Accurate Counterfactuals? Eliciting Treatment and Control Beliefs About Their Outcomes in the Alternative Treatment Status” (May 10, 2016, World Bank Policy Research Working Paper No. 7668: http://ssrn.com/abstract=2779364)