
Wednesday, February 20, 2019

20/2/19: Crack and Opioids of Corporate Finance


More addictive than crack or opioids, corporate debt is the sand-castle town's equivalent of water: it holds the 'marvels of castles' together, until it no longer does...

Source: https://twitter.com/lisaabramowicz1/status/1098200828010287104/photo/1

Firstly, as @Lisaabramowicz correctly summarises: "American companies look cash-rich on paper, but average leverage ratios don't tell the story. 5% of S&P 500 companies hold more than half the overall cash; the other 95% of corporations have cash-to-debt levels that are the lowest in data going back to 2004". Which is the happy outrun of the Fed and the rest of the CBs' exercises in Quantitative Hosing of the economies with cheap credit over recent years. So much 'excess' it hurts: a 1 percentage point climb in corporate debt yields will, over the medium term (3-5 years), shave off almost USD40 billion in annual EBITDA, although tax shields on that debt are likely to siphon off some of this pain to the Federal deficits.
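For scale, here is a back-of-the-envelope sketch of that sensitivity. The USD4 trillion repricing base and the 21% corporate tax rate are illustrative assumptions of mine, not figures from the tweet:

```python
# Back-of-the-envelope: annual earnings hit from a 1pp rise in corporate
# debt yields. The repricing base and tax rate are assumed, for illustration.

def yield_shock_hit(debt_base: float, rise_pp: float, tax_rate: float = 0.21):
    """Return (gross added interest cost, cost net of the corporate tax shield)."""
    added_interest = debt_base * rise_pp / 100.0
    return added_interest, added_interest * (1.0 - tax_rate)

gross, net = yield_shock_hit(debt_base=4.0e12, rise_pp=1.0)
print(f"gross hit: ${gross/1e9:.0f}bn/yr; net of tax shield: ${net/1e9:.0f}bn/yr")
# gross hit: $40bn/yr; net of tax shield: $32bn/yr
```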

Secondly, this pile-up of corporate debt has come with little 'balance sheet rebuilding' or 'resilience to shocks' capacity. Much of the debt uptake in recent years has been squandered by corporates on dividend finance and stock repurchases, superficially boosting the book value and the market value of the companies involved, without improving their future cash flows. And, to add to that pain, without improving future growth prospects.

Thursday, January 17, 2019

17/1/19: Why limits to AI are VUCA-rich and human-centric


Why will ethics, and a proper understanding of VUCA environments (environments characterized by volatility/risk, uncertainty, complexity and ambiguity), matter even more in the future than they do today? Because AI will require human control, and that control won't happen along the programming-skills axis, but will trace ethical and VUCA-environment considerations.

Here's a neat intro: https://qz.com/1211313/artificial-intelligences-paper-clip-maximizer-metaphor-can-explain-humanitys-imminent-doom/. The examples are neat, but now consider one of them, touched upon in passing in the article: translation and interpretation. Near-perfect (native-level) language capabilities for AI are not only 'visible on the horizon', but are approaching us at break-neck speed. The hardware - a bio-tech link that can be embedded into our hearing and speech systems - is 'visible on the horizon'. With that, routine translation-requiring exchanges, such as basic meetings and discussions that do not involve complex, ambiguous and highly costly terms, are likely to be automated or outsourced to the AI. But there will remain the 'black swan' interactions - exchanges that involve huge costs of getting the meaning of the exchange exactly right, and that also trace the VUCA-type environment of the exchange (ambiguity and complexity are natural domains of semiotics). Here, human oversight over AI, and even human displacement of AI, will be required. And this oversight will not be based on the technical / terminological skills of translators or interpreters, but on their ability to manage ambiguity and complexity. That, and ethics...

Another example is even closer to our times: AI-managed trading in financial assets. In normal markets, when there is a clear, stable and historically anchored trend for asset prices, AI can't be beaten in terms of efficiency of trade placement and execution. By removing / controlling for our human behavioral biases, AI can effectively avoid big risk spillovers across traders and investors sharing the same information in the markets (although AI can also amplify some costly biases, such as herding). However, this advantage turns into a loss when markets are trading in a VUCA environment. When ambiguity about investor sentiment and/or direction, or the complexity of counterparties underlying a transaction, or uncertainty about price trends enters the decision-making equation, algorithmic trading platforms have three sets of problems they must confront simultaneously:

  1. How do we detect the need for, structure, price and execute a potential shift in investment strategy (for example, from optimizing yield to maximizing portfolio resilience)? 
  2. How do we use AI to identify the points for switching from consensus strategy to contrarian strategy, especially if algos are subject to herding risks?
  3. How do we migrate across unstable information sets (as information fades in and out of relevance or stability of core statistics is undermined)?

For a professional trader/investor, these are 'natural' spaces for decision making. They are also VUCA-rich environments. And they are environments in which errors carry significant costs. They can also coincide with ethical considerations, especially for mandated investment undertakings, such as ESG funds. As in the case of translation/interpretation, nuance can be more important than the core algorithm, and this is especially true when ambiguity and complexity rule.
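To make problem (2) above concrete, here is a toy sketch of a consensus-to-contrarian switch driven by a realized-volatility trigger. The window and threshold are arbitrary illustrative choices, not a production trading rule - and note that precisely this trigger choice is where the VUCA judgement sits:

```python
# Toy sketch of problem (2): switch from a consensus (momentum) stance to a
# contrarian one when realized volatility signals a VUCA-like regime.
import numpy as np

def choose_strategy(prices: np.ndarray, window: int = 20,
                    vol_trigger: float = 0.02) -> str:
    log_returns = np.diff(np.log(prices))
    realized_vol = log_returns[-window:].std()
    if realized_vol < vol_trigger:
        return "momentum"     # stable regime: ride the consensus trend
    return "contrarian"       # VUCA-like regime: herding risk rises, fade the crowd

rng = np.random.default_rng(42)
calm = 100 * np.exp(np.cumsum(rng.normal(0.001, 0.01, 250)))   # trending market
choppy = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.04, 250)))   # high-vol market
print(choose_strategy(calm), choose_strategy(choppy))  # momentum contrarian
```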

Wednesday, April 25, 2018

25/4/18: 90 years of Volatility: VIX & S&P


A great chart from Goldman Sachs via @Schuldensuehner showing extreme events in market volatility, using an overlay of VIX and realised volatility from 1928 through March 2018:


For all risk / implied risk metrics wonks, this is cool.

25/4/18: Dombret on the Future of Europe


An interesting speech by Dr Andreas Dombret, Member of the Executive Board of the Deutsche Bundesbank, on the future of Europe, with direct reference to the issues of systemic financial risks (although some of these should qualify as uncertainties) and the resilience of the regulatory/governance systems (I wish he had focused more on these, however).

25/4/18: Tesla: Lessons in Severe and Paired Risks and Uncertainties


Tesla, the darling of environmentally-sensible professors around academia and financially ignorant herd-following investors around the U.S. urban-suburban enclaves of Tech Roundabouts, Silicon Valleys and Alleys, and Social Media Cul-de-Sacs, has been a master of cash raising, cash burning, and target setting. To see this, read this cold-blooded analysis of Tesla's financials: https://www.forbes.com/sites/jimcollins/2018/04/25/a-brief-history-of-tesla-19-billion-raised-and-9-billion-of-negative-cash-flow/2/#3364211daf3d.

Tesla, however, isn't that great at building quality cars in sustainable and risk-resilient ways. To see that, consider this:

  1. Tesla can't procure new parts that would be consistent with the quality control norms used in the traditional automotive industry: https://www.thecarconnection.com/news/1116291_tesla-turns-to-local-machine-shops-to-fix-parts-before-theyre-installed-on-new-cars.
  2. Tesla's SCM systems are so bad, it is storing faulty components at its factory. As if lean SCM strategies have somehow bypassed the 21st century Silicon Valley: http://www.thedrive.com/news/20114/defective-tesla-parts-are-stacked-outside-of-california-machine-shop-report-shows.
  3. Its luxury vehicle line is littered with recalls relating to major faults: https://www.wired.com/story/tesla-model-s-steering-bolt-recall/. Which makes one pause and think: if Tesla can't secure quality design and execution at premium price points, what will you get for the $45,000 Model 3?
  4. Tesla burns through billions of cash year on year, and yet it cannot deliver on volume & quality mix for its 'make-or-break' Model 3: http://www.thetruthaboutcars.com/2018/04/hitting-ramp-tesla-built-nearly-21-percent-first-quarter-model-3s-last-week/.
  5. Tesla's push toward automation is an experiment within an experiment, and, as such, it is a nesting of one tail-risk uncertainty within another tail-risk uncertainty. We don't have many examples of such nesting, but here is one: https://arstechnica.com/cars/2018/04/experts-say-tesla-has-repeated-car-industry-mistakes-from-the-1980s/ and it did not end too well. The reason why? Because uncertainty is hard to deal with on its own. When two sources of uncertainty correlate positively in terms of their adverse impact, likelihood, velocity of evolution and proximity, you have a powerful conventional explosive wrapped around a tightly packed enriched uranium core. The end result can be fugly.
  6. Build quality is poor: https://cleantechnica.com/2018/02/03/munro-compares-tesla-model-3-build-quality-kia-90s/. So poor, in fact, that Tesla is running "reworking" and "remanufacturing" facilities for poor-quality cars, including a set-aside factory next to its main production facilities, which takes in faulty vehicles rolled off the main production lines: https://www.bloomberg.com/view/articles/2018-03-22/elon-musk-is-a-modern-henry-ford-that-s-bad.
  7. Meanwhile, and this is really a black eye for Tesla-promoting arm-chair tenured environmentalists, there is a pesky issue with Tesla's predatory workforce practices, ranging from allegations of discrimination https://www.sfgate.com/business/article/Tesla-Racial-Bias-Suit-Tests-the-Rights-of-12827883.php, to problems with unfair pay practices https://www.technologyreview.com/the-download/610186/tesla-says-it-has-a-plan-to-improve-working-conditions/, and union busting: http://inthesetimes.com/working/entry/21065/tesla-workers-elon-musk-factory-fremont-united-auto-workers. To be ahead of the curve here, consider Tesla an Uber-light governance minefield. The State of California, for one, is looking into some of that already: https://gizmodo.com/california-is-investigating-tesla-following-a-damning-r-1825368102.
  8. Adding insult to the injury outlined in (7) above, Tesla seems to be institutionally unable to cope with change. In 2017, Musk attempted to address working-conditions issues by setting new targets for fixing them: https://techcrunch.com/2017/02/24/elon-musk-addresses-working-condition-claims-in-tesla-staff-wide-email/. The attempt was largely an exercise in ignoring the problems, stating they don't exist, and then promising to fix them. A year later, the problems are still there and no fixes have been delivered: https://www.buzzfeed.com/carolineodonovan/tesla-fremont-factory-injuries?utm_term=.qa8EzdgEw#.dto7Dnp7A. Then again, if Tesla can't deliver on core production targets, why would anyone expect it to act differently on non-core governance issues?
Here's the problem, summed up in a tight quote:


Now, personally, I admire Musk's entrepreneurial spirit and ability. But I do not own Tesla stock and do not intend to buy its cars. Because when one strips out all the hype surrounding this company, its 'disruption' model borrows heavily from governance paradigms set up by another Silicon Valley 'disruption darling' - Uber, its financial model borrows heavily from the dot.com era pioneers, and its management model is more proximate to 20th century Detroit than to 21st century Germany.

If you hold Tesla stock, you need to decide whether all of the 8 points above can be addressed successfully, alongside the problems of production-target ramp-ups, new model launches and other core manufacturing bottlenecks, within an uncertain time frame that avoids triggering severe financial distress. If your answer is 'yes', I would love to hear from you how that can be possible for a company that never in its history delivered on a major target on time. If your answer is 'no', you should consider timing your exit.


Sunday, October 15, 2017

15/10/17: Concentration Risk & Beyond: Markets & Winners


An excellent summary of several key investment concepts, worth reading: "So Few Market Winners, So Much Dead Weight" by Barry Ritholtz of Bloomberg View. Based on an earlier NY Times article that itself profiles new research by Hendrik Bessembinder from Arizona State University, Ritholtz notes that:

  • "Only 4 percent of all publicly traded stocks account for all of the net wealth earned by investors in the stock market since 1926, he has found. A mere 30 stocks account for 30 percent of the net wealth generated by stocks in that long period, and 50 stocks account for 40 percent of the net wealth. Let that sink in a moment: Only one in 25 companies are responsible for all stock market gains. The other 24 of 25 stocks -- that’s 96 percent -- are essentially worthless ballast."
Which brings us to the key concepts related to this observation:
  1. Concentration risk: This is an obvious one. In today's markets, returns are exceptionally concentrated within just a handful of stocks. Which puts the argument in favour of diversification to a test. Traditionally, we think of diversification as a long-term protection against the risk of market declines. But it can also be seen as coming at a cost of foregone returns. Think of holding 96 stocks that have zero returns against four stocks that yield high returns, while at the same time weighting these holdings in a return-neutral fashion, e.g. by their market capitalization (see the simulation sketch after this list).
  2. Strategic approaches to capturing growth drivers in your portfolio: There are, as Ritholtz notes, two: exclusivity (active winner-picking) and inclusivity (passive market indexing). Which also rounds off to diversification.
  3. Behavioral drivers matter: Behavioral biases can wreak havoc with both selecting and holding 'winners-geared' portfolios (as noted by Ritholtz's discussion of the exclusivity approach). But inclusivity, or indexing, is also bias-prone, although Ritholtz does not dig deeper into that. In reality, the two approaches are almost symmetric in their exposure to behavioral biases. Worse, as the proliferation of index-based ETFs marches on, the two approaches to investment are becoming practically indistinguishable. In pursuit of alpha, investors are increasingly caught chasing ever more specialist ETFs (index-based funds), just as they were previously caught in a pursuit of ever more concentrated holdings of individual 'winners' shares.
  4. Statistically, markets are neither homoscedastic nor Gaussian: In most cases, there are deeper layers of statistical meaning to returns than simple "Book Profit" or "Stop-loss" heuristics can support. Which is not just a behavioral constraint, but a more fundamental point about visibility of investment returns. As Ritholtz correctly notes, long-term absolute winners do change. But that change is not gradual, even if time horizons for it can be glacial. 
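To see how point 1 can arise mechanically, here is a minimal Monte Carlo sketch of compounding-induced skew. All parameters are arbitrary illustrative assumptions, not estimates from Bessembinder's data:

```python
# Compounding turns symmetric monthly returns into a heavily skewed
# distribution of terminal wealth: a few winners carry the entire net gain.
import numpy as np

rng = np.random.default_rng(7)
n_stocks, n_months = 1000, 360                      # 30 years, monthly
log_r = rng.normal(0.005, 0.10, size=(n_stocks, n_months))
terminal = np.exp(log_r.sum(axis=1)) - 1.0          # buy-and-hold total return

gains = np.sort(terminal)[::-1]                     # best performers first
cum_share = np.cumsum(gains) / gains.sum()
winners = int(np.argmax(cum_share >= 1.0)) + 1      # stocks covering all net wealth
print(f"{winners} of {n_stocks} stocks account for the full net gain; "
      f"{(terminal < 0).mean():.0%} of stocks lost money outright")
```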
All of these points are something we cover in our Investment Theory class and Applied Investment and Trading course, and some parts we also touch upon in the Risk and Resilience course. Point 4 relates to what we briefly discuss in the Business Statistics class. So it is quite nice to have all of these important issues touched upon in a single article.




Tuesday, October 3, 2017

3/10/17: Ambiguity Fun: Perceptions of Rationality?



Here is a very insightful, and worth studying, set of plots showing the perceived ranges of probabilities under subjective measure scenarios. Source: https://github.com/zonination/perceptions




The charts above speak volumes about both our (human) behavioural biases in assessing probabilities of events and the nature of subjective distributions.

First, on the former. As our students (in all of my courses, from Introductory Statistics, to Business Economics, to the advanced courses in Behavioural Finance and Economics, Investment Analysis, and Risk & Resilience) will have learned (to varying degrees of insight and complexity), the world of rational expectations relies, amongst other assumptions, on the premise that we, as decision-makers, are capable of perfectly assessing the true probabilities of uncertain outcomes. And as we all have learned in these classes, we are not capable of doing this, in part due to informational asymmetries, in part due to behavioural biases, and so on.

The charts above clearly show this. There is a general trend of people assigning increasingly lower probabilities to less likely events, and increasingly larger probabilities to more likely ones. So far, good news for rationality. The range (spread) of assignments also becomes narrower as we move to the tails (the lowest and highest assigned probabilities), so the degree of confidence in the assessments increases. Which is also good news for rationality.

But that is where the evidence for rationality ends.

Firstly, note the S-shaped nature of the distributions, moving from higher assigned probabilities to lower ones. Clearly, our perceptions of probability are non-linear, with the decline in likelihood assignments being steeper in the middle of the range of perceptions than at the extremes. This is inconsistent with rationality, which implies a linear trend.

Secondly, there is a notable kick-back in the Assigned Probability distribution for the Highly Unlikely and Chances Are Slight types of perceptions. This can be due to ambiguity in the wording of these perceptions (their order can be viewed differently, with Highly Unlikely being precedent to Almost No Chance in one ordering, and Chances Are Slight being precedent to Highly Unlikely in another). Still, there is a lot of oscillation in other ordering pairs (e.g. Unlikely —> Probably Not —> Little Chance; and We Believe —> Probably). This is also consistent with ambiguity - which is a violation of rationality.

Thirdly, not a single distribution of assigned probabilities by perception follows a bell-shaped ‘normal’ curve. Not for a single category of perceptions. All distributions are skewed, almost all have extreme value ‘bubbles’, majority have multiple local modes etc. This is yet another piece of evidence against rational expectations.
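Those distributional claims are easy to check against the underlying survey data. A minimal sketch, assuming the zonination/perceptions repo still hosts its probly.csv file (one column per phrase, responses as assigned probabilities in percent):

```python
# Per-phrase summaries of the assigned-probability distributions: median,
# skewness and interquartile range, to eyeball the skew/dispersion claims above.
import pandas as pd
from scipy.stats import skew

URL = "https://raw.githubusercontent.com/zonination/perceptions/master/probly.csv"
df = pd.read_csv(URL)

for phrase in df.columns:
    x = df[phrase].dropna()
    iqr = x.quantile(0.75) - x.quantile(0.25)
    print(f"{phrase:>22}: median={x.median():5.1f}%  "
          f"skew={skew(x):+5.2f}  IQR={iqr:5.1f}pp")
```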

There are severe outliers in all perceptions categories. Some (e.g. in the ‘Probably Not’ category) appear to be largely due to errors that can be induced by the ambiguous ranking of the category, or due to judgement errors. Others (e.g. in the “We Doubt” category) appear to be systemic and influential. The dispersion of assignments seems to follow the ambiguity pattern, with higher-ambiguity (tail) categories inducing greater dispersion. But, interestingly, there also appears to be stronger ambiguity in the lower range of perceptions (from “We Doubt” to “Highly Unlikely”) than in the upper range. This can be ‘natural’ or ‘rational’ if we think that a less-likely-event signifier is more ambiguous. But the same holds for more likely events too (see the range from “We Believe” to “Likely” and “Highly Likely”).

There are many more points worth discussing in the context of this exercise. But on the net, the data suggest that the rational expectations view of our ability to assess true probabilities of uncertain outcomes is faulty not only at the level of tail events that are patently identifiable as ‘unlikely’, but also in the range of tail events that should be ‘nearly certain’. In other words, ambiguity is tangible in our decision making.



Note: the above evidence also suggests that we treat certainty (the tails) and uncertainty (the centre of the perceptions and assignment choices) inversely to what would be expected under rational expectations:
In a rational setting, perceptions that carry indeterminate outruns should have a greater dispersion of values for assigned probabilities: if something is "almost evenly" distributed, it should be harder for us to form a consistent judgement as to how probable such an outrun can be. Especially compared to something that is either "highly unlikely" (aka, quite certain not to occur) or something that is "highly likely" (aka, quite certain to occur). The data above suggest the opposite.

Saturday, July 29, 2017

28/7/17: Risk, Uncertainty and Markets


I have been warning about the asymmetric relationship between market volatility and the leverage inherent in low-volatility-targeting strategies, such as risk-parity, CTAs, etc., for some years now, including in a 2015 posting for GoldCore (here: http://www.goldcore.com/us/gold-blog/goldcore-quarterly-review-by-dr-constantin-gurdgiev/). And recently, JPMorgan research came out with a more dire warning:

This is apt and timely, especially because volatility (implied - VIX; realized - actual bi-directional or semi-variance based) and uncertainty (implied metrics and tail-event frequencies) have been traveling in opposite directions for some time.
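For reference, the 'bi-directional or semi-variance based' distinction above is ordinary realized volatility versus its downside-only cousin. A minimal sketch on synthetic returns:

```python
# Realized volatility two ways: bi-directional (standard deviation) vs
# downside semivolatility (only below-mean moves count). Both annualized.
import numpy as np

def realized_vol(r: np.ndarray, periods: int = 252) -> float:
    return r.std() * np.sqrt(periods)

def downside_semivol(r: np.ndarray, periods: int = 252) -> float:
    downside = np.minimum(r - r.mean(), 0.0)
    return np.sqrt((downside ** 2).mean() * periods)

rng = np.random.default_rng(1)
r = rng.normal(0.0003, 0.01, 252)          # one year of synthetic daily returns
print(f"vol={realized_vol(r):.1%}, downside semivol={downside_semivol(r):.1%}")
```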

Which means (1): increasing (trend) uncertainty is coinciding with decreasing implied risk perceptions in the markets.

Meanwhile, market indices are co-trending with uncertainty:
Which means (2): increasing market valuations are underpricing uncertainty, while focusing on decreasing risk perceptions.

In other words, both barrels of the proverbial gun are now loaded when it comes to anyone exposed to leverage.

Saturday, April 22, 2017

22/4/17: Two Regimes of Whistle-Blower Protection


“Corporate fraud is a major challenge in both developing and advanced economies, and employee whistle-blowers play an important role in uncovering it.” A truism that, despite being quite obvious, has been the subject of too little research to date. One recent study by the Association of Certified Fraud Examiners (2014) estimated losses from the impact of corporate fraud globally - occurring through financial statement fraud, asset misappropriation, and corruption - at around $3.7 trillion. Such estimates are, of course, only remotely accurate. The "Global Fraud Report" (2016) showed that 75% of surveyed senior executives stated that their company was a fraud victim in the previous year, and in 81% of those cases at least one company insider was involved, with a large share of such perpetrators (36%) coming from the ranks of company senior or middle management.

Beyond aggregate losses, whistleblowers are significantly important to the detection of fraud cases. A 2010 study showed that whistleblowers were responsible for some 17 percent of fraud discoveries over the period 1996-2004 among large U.S. corporations. And, according to the Association of Certified Fraud Examiners (2014), “employees were the source in 49% of tips leading to the detection of fraud”.


In line with this and other evidence on fraud-induced economic and social costs, whistleblower protection has been promoted and advanced across a range of countries and institutional frameworks in recent years. An even more glaring gap in our empirical knowledge arises when we attempt to analyse the extent to which such protection has been effective in creating the legal and operational conditions for whistleblowers to provide our societies with improved information disclosure, corporate governance and regulatory enforcement.

Somewhat filling the latter research gap, a recent working paper, titled “Whistle-Blower Protection: Theory and Experimental Evidence” by Lydia Mechtenberg, Gerd Muehlheusser, and Andreas Roider (CESIFO WORKING PAPER NO. 6394, March 2017) performed “a theory-guided lab experiment in which we analyze the impact of introducing whistle-blower protection. In particular, we compare different legal regimes (“belief-based" versus “fact-based") with respect to their effects on employers' misbehavior, employees' truthful and fraudulent reports, prosecutors' investigations, and employers' retaliation.”

In basic terms, there are two key approaches to structuring whistleblower protection: a belief-based regime (with “less stringent requirements for granting protection to whistle-blowers”) and a fact-based regime (with greater hurdles of proof required from whistleblowers in order to avail of the legal protection). The authors’ “results suggest that the latter lead to better outcomes in terms of reporting behavior and deterrence.” The reason is that “when protection is relatively easy to obtain (as under belief-based regimes), fraudulent claims [by whistleblowers] indeed become a prevalent issue. This reduces the informativeness of reports, to which prosecutors respond with a lower propensity to investigate. As a consequence, the introduction of such whistle-blower protection schemes might not lead to the intended reduction of misbehavior. In contrast, these effects are mitigated under a fact-based regime where the requirements for protection are more stringent.”

In a sense, the model and the argument behind it are pretty straightforward and intuitive. However, the conclusions are far-reaching, given that the recent U.S. and UK direction in advancing whistleblower protection has been in favour of belief-based systems, while the European ‘continental’ tradition has been to support fact-based thresholds. As the authors note, we need more rigorous empirical analysis of the effectiveness of the two regimes in delivering meaningful discoveries of fraud, while accounting for false disclosures; analysis that captures the financial, economic, institutional and social benefits of the former, and the costs of the latter.

Friday, April 21, 2017

21/4/17: Millennials, Property ‘Ladders’ and Defaults


In a recent report, titled “Beyond the Bricks: The meaning of home”, HSBC lauded the virtues of the millennials in actively pursuing home purchases. Mind you - the official definition of a millennial is someone born between 1981 and 1998, i.e. now aged 19-36 (with the older end of that cohort, at 28-36, being at the age when one is normally quite likely to acquire a mortgage and a first property).

So here are the HSBC stats:


As the above clearly shows, there is quite a range of variation across geographies in terms of millennials' propensity to purchase a house. However, two things jump out:

  1. The current generation is well behind the baby boomers (when the same age groups are compared) in terms of home ownership in all advanced economies; and
  2. Millennials are finding it harder to purchase homes in the countries where homeownership is seen as the basic first step on the investment and savings ladder to the upper middle class (USA, Canada, UK and Australia).


All of which suggests that the millennials are severely lagging previous generations in terms of both savings and investment. This is especially true as the issues relating to preferences (as opposed to affordability) are clearly not at play here (see the gap between ‘ownership’ and intent to own).

That point - made above - concerning the lack of evidence that millennials are forgoing home purchases because their preferences have shifted in favour of renting and away from owning is also supported by the sky-high proportion of millennials who go to such lengths as borrowing from parents and living with parents to save for the deposit on a house:


Now, normally, I would not spend so much time talking about property-related surveys by banks. But here's what is of added interest. Recent evidence suggests that millennials are quite different from previous generations in terms of their willingness to default on loans. Watch U.S. car loans (https://www.ft.com/content/0f17d002-f3c1-11e6-8758-6876151821a6 and https://www.experian.com/blogs/insights/2017/02/auto-loan-delinquencies-extending-beyond-subprime-consumers/) going South, with the millennials behind the trend (http://newsroom.transunion.com/transunion-auto-loan-growth-driven-by-millennial-originations-auto-delinquencies-remain-stable) on the origination side, and now on the default side too (http://www.zerohedge.com/news/2017-04-13/ubs-explains-whos-behind-surging-subprime-delinquencies-hint-rhymes-perennials).

Which, paired with the HSBC analysis showing the significant financial strains the millennials took on in an attempt to jump onto the homeownership ‘ladder’, suggests that we might be heading not only into another wave of high-risk borrowing for property purchases, but that this time around, such borrowings are befalling an increasingly older cohort of first-time buyers (leaving them less time to recover from any adverse shock) and an increasingly default-willing cohort of first-time buyers (meaning they will shift some of the burden of default onto the banks, faster and more resolutely than the baby boomers before them). Of course, 'never pay any attention to reality' is the motto of the financial sector, where FHA mortgage drawdowns by the same car-loan- and student-loan-defaulting millennials (https://debtorprotectors.com/lawyer/2017/04/06/Student-Loan-Debt/Student-Loan-Defaults-Rising,-Millions-Not-Making-Payments_bl29267.htm) are hitting all-time highs (http://www.heraldtribune.com/news/20170326/kenneth-r-harney-why-millennials-are-flocking-to-fha-mortgages).

Good luck having a sturdy enough umbrella for that moment when that proverbial hits the fan… Or you can always hedge that risk by shorting the millennials' favourite Snapchat... no, wait...

Tuesday, April 18, 2017

18/4/17: S&P500 Concentration Risk Impact


Recently, I posted on FactSet data relating to earnings of S&P500 companies across U.S. vs global markets, commenting on the inherent risk of the low degree of sales/revenue base diversification present across a range of S&P500 companies and industries. The original post is provided here.

Now, FactSet has provided another illustration of the 'concentration risk' within the S&P500 by mapping earnings and revenue growth across two sets of S&P500 companies: those generating more than 50% of their earnings outside the U.S., and those generating less than 50% outside the U.S.


The chart is pretty striking. More globally diversified S&P constituents (green bars) are posting vastly faster growth in earnings and notably faster growth in revenues than the S&P500 constituents with less than a 50% share of revenues from outside the U.S. (light blue bars).

The impact of concentration risk, illustrated. Now, can we have an ETF for that?..

Wednesday, April 12, 2017

12/4/17: European Economic Uncertainty Moderated in 1Q 2017


The European Policy Uncertainty Index, an indicator of economic policy risk perceptions based on media references, posted a significant moderation in the risk environment in the first quarter of 2017, falling from a 4Q 2016 average of 307.75 to a 1Q 2017 average of 265.42, with the decline driven primarily by moderating uncertainty in the UK and Italy, against rising uncertainty in France and Spain. Germany's economic policy risks remained largely in line with 4Q 2016 readings. Despite the moderation, the overall European policy uncertainty index in 1Q 2017 was still ahead of the level recorded in 1Q 2016 (221.76).

  • The German economic policy uncertainty index averaged 247.19 in 1Q 2017, up on 239.57 in 4Q 2016, but down on the 12-month peak of 331.78 in 3Q 2016. However, German economic uncertainty remained above the 1Q 2016 level of 192.15.
  • The Italian economic policy uncertainty index was running at 108.52 in 1Q 2017, down significantly from the 157.31 reading in 4Q 2016, which also marked the peak for the trailing 12-month period. The Italian uncertainty index finished 1Q 2017 at virtually identical levels to 1Q 2016 (106.92).
  • The UK economic policy uncertainty index was down sharply at 411.04 in 1Q 2017 from 609.78 in 4Q 2016, with 3Q 2016 marking the local (trailing 12-month) peak at 800.14. Nonetheless, in 1Q 2017, the UK index remained well above its 1Q 2016 reading of 347.11.
  • French economic policy uncertainty rose sharply in 1Q 2017 to 454.65 from 371.16 in 4Q 2016. The latest quarterly average is the highest in the trailing 12-month period and is well above the 273.05 reading for 1Q 2016.
  • Spain's economic policy uncertainty index moderated from 179.80 in 4Q 2016 to 137.78 in 1Q 2017, with the latest reading being the lowest over the five most recent quarters. A year ago, the index stood at 209.12.

Despite some encouraging changes and some moderation, economic policy uncertainty remains highly elevated across the European economy, as shown in the chart below:
Of the Big 4 European economies, only Italy shows recent trends consistent with a decline in uncertainty relative to the 2012-2015 period, and this moderation is rather fragile. In every other big European economy, economic uncertainty has been higher during the 2016-present period than in any other period on record.

Tuesday, April 11, 2017

11/4/17: S&P 500 Concentration Risk


Concentration risk is a concept that comes from banking. In simple terms, concentration risk reflects the extent to which a bank's assets (loans) are distributed across borrowers. Take the example of a bank which has 10 large borrowers with equivalent-sized loans extended to them. In this case, each borrower accounts for 10 percent of the bank's total assets, and the bank's concentration ratio is 10% or 0.1. Now, suppose that another bank has 5 borrowers with equivalent loans. For the second bank, the concentration ratio is 0.2 or 20%. Concentration risk (exposure to a limited number of borrowers) is obviously higher in the latter bank than in the former.
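Here is that arithmetic as a minimal sketch, alongside the Herfindahl-Hirschman index (HHI), a standard generalization for unequal exposures. For N equal loans the HHI collapses to 1/N, matching the ratios in the example above:

```python
# Borrower-concentration arithmetic: largest single-name share and HHI.

def concentration(loans: list[float]) -> tuple[float, float]:
    total = sum(loans)
    shares = [loan / total for loan in loans]
    largest = max(shares)                   # single largest borrower's share
    hhi = sum(s ** 2 for s in shares)       # equals 1/N when loans are equal
    return largest, hhi

print(concentration([10.0] * 10))   # (0.1, 0.1) -> first bank in the example
print(concentration([20.0] * 5))    # (0.2, 0.2) -> second, more concentrated bank
```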

Despite coming from banking, the concept of concentration risk applies to other organisations and sectors. For example, take suppliers of components to large companies, like Apple. For many of these suppliers, Apple represents the source of much of their revenues and, thus, they are exposed to concentration risk. See this recent article for examples.

For sectors, as opposed to individual organisations, concentration risk relates to the distribution of sector earnings. And the latest FactSet report from April 7, 2017 shows just how concentrated the geographical distributions of earnings for S&P 500 are:


In summary:

  • With the exception of Information Technology, not a single sector in the S&P 500 has aggregate revenue exposure to the U.S. market below 50%;
  • Seven out of 11 sectors covered within the S&P 500 have revenue concentration in the U.S. market in excess of 70%; and
  • In the aggregate, 70% of revenues for the entire S&P 500 arise from within the U.S. market.
In simple terms, the S&P 500 is extremely vulnerable to the fortunes of the U.S. economy. Or, put differently, there is a woeful lack of economic / revenue-source diversification across S&P 500 companies.

Friday, December 30, 2016

30/12/16: Corporate Debt Grows Faster than Cash Reserves


Based on the data from FactSet, U.S. corporate performance metrics remain weak.

On the positive side, corporate cash balances for S&P500 (ex-financials) companies were up 7.6% y/y to USD1.54 trillion in 3Q 2016. This includes short-term investments as well as cash reserves. Cash balances are now at their highest since the data records started in 2007.

But, there’s been some bad news too:

  1. The top 20 companies now account for 52.5% of total S&P500 cash holdings, up from 50.8% in 3Q 2015.
  2. The heaviest cash reserves are held by companies that favour off-shore holdings over repatriation of funds into the U.S., like Microsoft (USD136.9 billion, +37.8% y/y), Alphabet (USD83.1 billion, +14.1% y/y), Cisco (USD71 billion, +20.1% y/y), Oracle (USD68.4 billion, +22.3%) and Apple (USD67.2 billion, +61.4%). Per FactSet, “the Information Technology sector maintained the largest cash balance ($672.7 billion) at the end of the third quarter. The sector’s cash total made up 43.6% of the aggregate amount for the index, which was a jump from the 39.3% in Q3 2015”.
  3. Despite hefty cash reserves, the net debt to EBITDA ratio has reached a new high (see the green line in the first chart below), busting records for the sixth consecutive quarter - up 9.9% y/y. Again, per FactSet, “at the end of Q3, net debt to EBITDA for the S&P 500 (Ex-Financials) increased to 1.88.” So growth in debt has once again outpaced growth in cash: “At the end of the third quarter, aggregate debt for the S&P 500 (Ex-Financials) index reached its highest total in at least ten years, at $4.57 trillion. This marked a 7.8% increase from the debt amount in Q3 2015.” This is nothing new: in the last 12 quarters, growth in debt exceeded growth in cash in all but one quarter (the outlier of 4Q 2013). The 3Q 2016 cash-to-debt ratio for the S&P 500 (Ex-Financials) was 33.7%, on par with 3Q 2015 and 5.2% below the average ratio over the past 12 quarters (see the sketch after this list).
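For reference, the two leverage metrics in point 3 tie together as follows; a minimal sketch using the aggregates quoted above (the EBITDA figure is backed out from the reported ratio, not taken from FactSet directly):

```python
# Net debt/EBITDA and cash-to-debt for the S&P 500 ex-financials, 3Q 2016,
# using the aggregates quoted above. EBITDA is implied from the 1.88 ratio.

def leverage_metrics(debt: float, cash: float, ebitda: float):
    return (debt - cash) / ebitda, cash / debt

debt, cash = 4.57e12, 1.54e12               # aggregate debt and cash, USD
ebitda = (debt - cash) / 1.88               # implied by reported net debt/EBITDA
nd_ebitda, cash_to_debt = leverage_metrics(debt, cash, ebitda)
print(f"net debt/EBITDA = {nd_ebitda:.2f}; cash/debt = {cash_to_debt:.1%}")
# net debt/EBITDA = 1.88; cash/debt = 33.7%
```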



Net debt issuance is also a problem: 3Q 2016 posted the 10th highest quarterly net debt issuance in 10 years, despite a steep rise in debt costs.


While investment picked up (ex-energy sector), a large share of investment activity remains within M&A. “The amount of cash spent on assets acquired from acquisitions amounted to $85.7 billion in Q3, which was the fifth largest quarterly total in the past ten years. Looking at mergers and acquisitions for the United States, M&A volume slowed in the third quarter (August - October) compared to the same period a year ago, but deal value rose. The number of transactions fell 7.3% year-over-year to 3078, while the aggregate deal value of these transactions increased 23% to $564.2 billion.”

The above, of course, suggests that the quality of the deals being done (at least on the valuations side) remains relatively weak: larger deals signal higher risks for acquirers. This is confirmed by data from Bloomberg, which shows that the median EBITDA multiple for M&A deals of > USD1 billion declined to 12.7x in 2016, from an all-time high of 14.3x in 2015. Still, the 2016 multiple is the 5th highest on record. In part, this reduction in risk took place at the very top of the M&A distribution, as the number of so-called mega-deals (> USD10 billion) fell to 35 in 2016, compared to 51 in 2015 (an all-time record). However, 2016 was still the sixth highest mega-deal year in 20 years.

Overall, based on Bloomberg data, 2015 was the fourth highest M&A year since 1996.


So in summary:

  • Cash flow is improving, leading to some positive developments in R&D investment and general capex (ex-energy);
  • Debt levels are rising, and they are rising faster than cash reserves and earnings; and
  • Much of investment continues to flow through the M&A pipeline, and the quality of this pipeline is improving only marginally.



Source: https://www.bloomberg.com/gadfly/articles/2016-12-30/trump-set-to-refill-m-a-punch-bowl-in-2017

Sunday, June 26, 2016

26/6/16: Black Swan ain't Brexit... but


There is a lot of froth in the media opinionating on the Brexit vote. And there is a lot of nonsense.

One clearly cannot deal with all of it, so I am going to occasionally dip into the topic with some comments. These are not systematic in any way.

Let's take the myth of Brexit being a 'Black Swan'. This goes along the lines of: the lack of UK and European leaders' preparedness for the Brexit referendum outcome can be explained by the outcome being a 'Black Swan' event.

The theory of 'Black Swan' events was introduced by Nassim Taleb in his book “The Black Swan”. There are three defining characteristics of such an event:

  1. The event can be explained ex post its occurrence as either predictable or expected;
  2. The event has an extremely large impact (cost or benefit); and
  3. The event (ex ante its occurrence) is unexpected or not probable.

Let's take a look at the Brexit vote in terms of the above three characteristics.

Analysis post-event shows that Brexit does indeed conform with point 1, but only partially. There is a lot of noise around various explanations for the vote being advanced, with analysis reaching across the following major arguments:

  • 'Dumb' or 'poor' or 'uneducated' or 'older' people voted for Brexit
  • People were swayed to vote for Brexit by manipulative populists (which is an iteration of the first bullet point)
  • People wanted to punish elites for (insert any reason here)
  • Protests vote (same as bullet point above)
  • People voted to 'regain their country from EU' 
  • Brits never liked being in the EU, and so on
The multiplicity of often overlapping reasons for the Brexit vote outcome does imply significant complexity of the causes and roots of voters' preferences, but, in general, 'easy' explanations are being advanced in the wake of the vote. They are neither correct nor wrong, which means that point 1 is neither violated nor confirmed: loads of explanations are being given ex post, and loads of predictions were issued ex ante.

The Brexit event is likely to have a significant impact. The short-term impact is likely to be extremely large, although the medium- and longer-term impacts are likely to be more modest. The reasons for this (not an exhaustive list) include:
  • Likely overshooting in risk valuations in the short run;
  • Increased uncertainty in the short run that will be ameliorated by subsequent policy choices, actions and information flows; 
  • Starting of resolution process with the EU which is likely to be associated with more intransigence vis-a-vis the UK on the EU behalf at the start, gradually converging to more pragmatic and cooperative solutions over time (what we call moving along expectations curve); 
  • Pre-vote pricing in the markets that resulted in a rather significant over-pricing of the probability of 'Remain' vote, warranting a large correction to the downside post the vote (irrespective of which way the vote would have gone); 
  • Post-vote vacillations and debates in the UK as to the legal outrun of the vote; and 
  • The nature of the EU institutions and their extent in determining economic and social outcomes (the degree of integration that requires unwinding in the case of the Brexit)
These expected impacts were visible pre-vote and, in fact, have been severely overhyped in media and official analysis. Remember all the warnings of economic, social and political armageddon that the Leave vote was expected to generate. These were voiced in a number of speeches, articles, advertorials and campaigns by the Bremainers. 

So, per second point, the event was ex ante expected to generate huge impacts and these potential impacts were flagged well in advance of the vote.

The third ingredient for making of a 'Black Swan' is unpredictable (or low predictability) nature of the event. Here, the entire thesis of Brexit as a 'Black Swan' collapses. 

Let me start with an illustration: about 18 hours before the results were announced, I repeated my view (proven to be erroneous in the end) that 'Remain' would shade the vote by roughly 52% to 48%. As far as I am aware, no analyst, media outfit or 'predictions market' (aka betting shop) put the probability of 'Leave' at less than 30 percent.

Now, a 30 percent probability is not an unpredictable / unexpected outcome. It is, instead, an unlikely, but possible, event.

Let's do a mental exercise: your stock broker offers you an investment product that risks losing 30% of your pension money (say EUR100,000) with a probability of 30%. Your expected loss is EUR9,000 - not a 'Black Swan' or an improbable high-impact event, but instead a rather possible high-impact event. The conditional (on the loss materialising) impact here is, however, a EUR30,000 loss. Now, consider a risk of losing 90% of your pension money with a probability of 10%. Your expected loss is the same, but the low probability of a loss makes it a rather unexpected high-impact event, as the conditional impact of a loss here is EUR90,000 - three times the size of the conditional loss in the first case.
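The same arithmetic, spelled out:

```python
# Same expected loss, very different conditional (loss-given-event) impact -
# the ingredient that separates a Black Swan from a merely unlikely event.

def expected_and_conditional_loss(capital: float, loss_frac: float, prob: float):
    conditional = capital * loss_frac       # loss if the event materialises
    return prob * conditional, conditional

brexit_like = expected_and_conditional_loss(100_000, 0.30, 0.30)
black_swan = expected_and_conditional_loss(100_000, 0.90, 0.10)
print(brexit_like)   # (9000.0, 30000.0): expected EUR9,000, conditional EUR30,000
print(black_swan)    # (9000.0, 90000.0): same expected loss, 3x conditional hit
```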

The latter case is not Brexit, but it is a Black Swan; the former case is Brexit-like, and is not a Black Swan event.

Beyond the discussion of whether Brexit was a Black Swan event or not, however, the conditional loss (conditional on the loss materialising) in the above examples shows that, however low the probability of a loss might be, once the conditional loss becomes sizeable enough, risk assessment and management of the event that can produce such a loss is required. In other words, whether or not Brexit was probable ex ante the vote (and it was quite probable), any risk management in preparation for the vote should have included a full evaluation of responses to such a loss materialising.

It is now painfully clear (see EU case here: http://arstechnica.co.uk/tech-policy/2016/06/brexit-in-brussels-junckers-mic-drop-and-political-brexploitation/, see Irish case here: http://www.irishtimes.com/news/politics/government-publishes-brexit-contingency-plan-1.2698260) that prudent risk management procedures were not followed by the EU and the Irish State. There is no serious contingency plan. No serious road map. No serious impact assessment. No serious readiness to deploy policy responses. No serious proposals for dealing with the vote outcome.

Even if Brexit vote was a Black Swan (although it was not), European institutions should have been prepared to face the aftermath of the vote. This is especially warranted, given the hysteria whipped up by the 'Remain' campaigners as to the potential fallouts from the 'Leave' vote prior to the referendum. In fact, the EU and national institutions should have been prepared even more so because of the severely disruptive nature of Black Swan events, not despite the event being (in their post-vote minds) a Black Swan.

Saturday, June 18, 2016

18/6/16: Gamed Financial Information and Regulation Misfires


A recent interview by Insights by Stanford Business, titled “In Financial Disclosures, Not All Information Is Equal” (all references are supplied below; all emphasis in quotations is mine), touched upon a pivotal issue: the quality of information available from public disclosures by listed companies - the very heart of market fundamentals.

The interview is with Stanford professor of accounting Iván Marinovic, who states, in the words of the interviewer, that “financial statements are becoming less and less relevant compared to other sources of information, such as analysts and news outlets. ...there is a creeping trend in financial disclosures away from the reliance on verifiable assets and toward more intangible elements of a business’s operations.”

In simple terms, financial information is being gamed. It is being gamed by increasing concentration in disclosures of ‘soft’ information (information that cannot be verified) at the expense of hard information disclosures (information that can be verified). More paradoxically, the increased gaming of information is a result, in part, of increasing requirements to disclose hard information! Boom!


Let's elaborate.

In a recently published paper (see references below), Marinovic and his co-author, Jeremy Bertomeu, define ‘hard’ and ‘soft’ information slightly differently. “The coexistence of hard and soft information is a fundamental characteristic of the measurement process. A disclosure can be soft, in the form of a measure that “can easily be pushed in one direction or another”, or hard, having been subjected to a verification after which “it is difficult to disagree”."

For example, firms' asset classes can range “from tangible assets to traded securities which are subject to a formal verification procedure. Forward-looking assets are more difficult to objectively verify and are typically regarded as being soft. For example, the value of many intangibles (e.g., goodwill, patents, and brands) may require unverifiable estimates of future risks.”

The problem is that ‘soft’ information is becoming the focus of corporate reporting precisely because of the coincident increase in hard information reporting. And worse, unmentioned in the article, ‘soft’ information is now also a matter for corporate taxation systems (e.g. Ireland’s ‘Knowledge Development Box’ tax scheme). In other words, gamable metrics are now thoroughly polluting market information flows, taxation mechanisms and the policy-making environment.

Per the interview, there is a “tradeoff between reliability and the relevance of the information” that represents “a big dilemma among standard setters, who I think are feeling pressure to change the accounting system in a way that provides more information.”

Which, everyone thinks, is a good thing. But it may be exactly the opposite.

“One of the main results — and it’s a very intuitive one — shows that when markets don’t trust firms, we will tend to see a shift toward financial statements becoming harder and harder. [and] …a firm that proportionally provides more hard information is more likely to manipulate whatever soft information it does provide. In other words, you should be more wary about the soft information of a firm that is providing a lot of hard information.”

Again, best to look at the actual paper to gain better insight into what Marinovic is saying here.

Quoting from the paper: “...a manager who is more likely to misreport is more willing to verify and release hard information, even though issuing hard information reduces her ability to manipulate. To explain this key property of our model, we reiterate that not all information can be made hard. Hence, what managers lose in terms of discretion to over-report the verifiable information, they can gain in credibility for the remaining soft disclosure. Untruthful managers will tend to issue higher soft reports, naturally facing stronger market skepticism. We demonstrate that untruthful managers are always more willing to issue hard information, relative to truthful managers."

Key insight: "...situations in which managers release more hard information are also more likely to feature aggressive soft reports and have a greater likelihood of issuing overstatements.”

As a result, as noted in the interview, “…we should expect huge frauds, huge overstatements precisely in settings or markets where there is a lot of credibility. The markets believe the information because they perceive the environment as credible, which encourages more aggressive manipulations from dishonest managers who know they are trusted. In other words, there is a relationship between the frequency and magnitude of frauds, where a lower frequency should lead to a larger magnitude.”

In other words, when markets are complacent about the information disclosed and/or have greater trust in disclosure mandates (a high regulation barrier), information can be of lower quality and/or the risk of large fraud cases rises. While this is intuitive, the end game here is not as clear-cut: heavily regulating information flows might not necessarily be a productive response, because market trust has a significant positive value.

Let’s dip into the original paper once again, for more exposition on this paradox: “We consider the consequence of reducing the amount of discretion in the reporting of any verifiable information. The mandatory disclosure of hard information has the unintended consequence of reducing information about the soft, unverifiable components of firm value. In other words, there is a trade-off between the quality of hard versus soft information. Regulation cannot increase the social provision of one without reducing the other.”

Now, take European banks (U.S. banks face much the same). Under unified supervision by the ECB within the European Banking Union framework, banks are required to report more and more hard information. In the Bertomeu-Marinovic model, this can result in a reduced incidence of smaller fraud cases and an increased frequency and magnitude of large fraud cases. Which will compound the systemic risks within the financial sector (small frauds are non-systemic; large ones are). The very disclosure requirement mechanism designed to reduce large fraud cases can misfire by producing more systemic cases.

At its core, the Jeremy Bertomeu and Ivan Marinovic paper shows that “certain soft disclosures may contain as much information as hard disclosures, and we establish that: (a) exclusive reliance on soft disclosures tends to convey bad news, (b) credibility is greater when unfavorable information is reported and (c) misreporting is more likely when soft information is issued jointly with hard information. We also show that a soft report that is seemingly unbiased in expectation need not indicate truthful reporting.”

So here is a kicker: “We demonstrate that …the aggregation of hard with soft information will turn all information soft.” In other words, soft information tends to fully cancel out hard information, when both types of information are present in the same report.

Now, give this a thought: many sectors today (think ICT et al.) are full of soft information reporting and soft metrics targeting. Which, in the Bertomeu-Marinovic model, renders all information reported by these sectors, including hard corporate finance metrics, effectively soft (non-verifiable). This, in turn, puts into question all pricing frameworks based on corporate finance information whenever they are applied to these sectors and companies.



References for the above are:

The Interview with Marinovic can be read here: https://www.gsb.stanford.edu/insights/financial-disclosures-not-all-information-equal.

Peer-reviewed publication (gated version) of the paper is here: http://www.gsb.stanford.edu/faculty-research/publications/theory-hard-soft-information

Open source publication is here: Bertomeu, Jeremy and Marinovic, Ivan, A Theory of Hard and Soft Information (March 16, 2015). Accounting Review, Forthcoming; Rock Center for Corporate Governance at Stanford University Working Paper No. 194: http://ssrn.com/abstract=2497613.