
Thursday, April 30, 2020

30/4/20: No, Healthcare Systems are Not Lean Startups, Mr. Musk


A tweet from @elonmusk yesterday prompted a brief response from me:

https://twitter.com/GTCost/status/1255681426445365248?s=20

It is worth elaborating on my argument a little more, for two reasons:

  1. I have seen similar sentiment toward authorities over-providing healthcare system capacity in other countries as well, including, for example, Ireland, where the public has raised concerns about the State contracting private hospitals for surplus capacity; and
  2. Quite a few people have engaged with my response to Musk.
So here are some more thoughts on the subject:

'Lean startups' is an idea that goes hand-in-hand with the notion that a startup needs some organic growth runway. In other words, it needs to ‘nail’ parts of its business model first, before ‘scaling’ the model up. The ‘nailing’ bit is done using highly scarce resources, pre-extensive funding (which is the ‘scaling’ phase). It makes perfect sense, imo, for a startup.

But in the ‘nailing’ stage, when financial resources are scarce, the startup has another resource it relies upon to execute a ‘lean’ strategy: time. Why? Because a ‘lean’ startup is a smaller undertaking than a scaling startup. As a result, failure at that stage carries lower costs. In other words, you can be ‘lean’ because you are allowed to fail: if you do fail at that stage of development, you can re-group and re-launch. You can afford to be reactive to news flows and changes in your environment, which means you do not need to over-provide resources to be predictive or pro-active. Your startup can survive on lean funding.

As you scale a startup, you accumulate resources (investment and retained earnings) forward. In other words, you are securing your organization by over-providing capacity. Why? Because failure is more expensive for a scaling startup than for a 'lean' early-stage startup. Retained and unutilized cash is no longer a form of waste, but rather a prudential cushion. Tesla, Mr. Musk's company, carries cash reserves and lines of credit that it is NOT using at the moment precisely because not doing so risks smaller shocks to the company immediately escalating into existential shocks. And a failure of Tesla has a larger impact than a failure of a small 'lean' startup. In other words, Mr. Musk does not run a 'lean startup' for a good reason.

Now, in a public health emergency with rapid rates of evolution and a high degree of forecast uncertainty, you cannot be reactive. You must allocate resources to be pro-active, or anticipatory. In doing so, you have no choice but to over-supply resources. You cannot be ‘lean’, because the potential (and highly probable) impact of any resource under-provision is a public health threat spinning out of control into a public health emergency and a systemic shock.

‘Lean’ startup methods work when you are dealing with risk and uncertainty in de-coupled systems with a limited degree of complexity, and with the range of shock impacts limited by the size of the organization/system being shocked. Public health emergencies are the exact opposite of such an environment: we are dealing with severe uncertainty (as opposed to risk), with hugely substantial impacts of these shocks (think thousands of lives here, versus a few million dollars of investment lost in an early-stage startup failure). We are also dealing with a severe extent of complexity: a high speed of evolution of threats and shocks, uncertain and potentially ambiguous pathways for shock propagation, and highly complex shock contagion pathways that go beyond the already hard-to-model disease contagion pathways.

So a proper response to a pandemic, like the one we are witnessing today, is to apply an extreme precautionary principle in providing resources and imposing controls. This means: (1) over-providing resources before they become needed (which, by definition, means having excess capacity ex-post shock realization); (2) over-imposing controls to create brakes on shock contagion (which, by definition, means doing too much tightening in the social and economic environment); (3) doing (1) and (2) earlier in the threat evolution process rather than later (which means overpaying severely for spare capacity and controls, including - by design - at the time when these costs may appear irrational); and (4) relying on worst-case-scenario parameterization of adverse impacts in your probabilistic and forecasting analysis and planning.

On this basis, responses to a public health threat are the exact opposite of a ‘lean’ startup environment. In fact, they are not comparable to the ‘scaling up’ startup environment either. A system that has a huge surplus capacity left in it, not utilized, is, in the case of a startup, equivalent to waste. Such a system’s leadership should be penalized. A system that has a huge surplus capacity left un-utilized is, in the case of a pandemic, equivalent to the best possible practice in prudential management of the public health threat. Such a system’s leadership should be applauded.
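To see the asymmetry behind points (1)-(4) in a stylised way, here is a minimal sketch with entirely hypothetical numbers (the costs and probability below are mine, purely for illustration): even a modest probability of the severe scenario makes the 'wasteful'-looking over-provision the cheaper option in expectation, before we even account for severe uncertainty.

```python
# Entirely hypothetical numbers: the asymmetry behind precautionary over-provision.
cost_of_spare_capacity = 50        # certain cost of over-providing capacity (arbitrary units)
cost_if_under_provided = 10_000    # cost if a 'lean' system is overwhelmed and the shock escalates
prob_severe_scenario = 0.05        # even a small probability of the severe scenario...

expected_cost_precautionary = cost_of_spare_capacity
expected_cost_lean = prob_severe_scenario * cost_if_under_provided

print(f"Expected cost, precautionary over-provision: {expected_cost_precautionary}")
print(f"Expected cost, 'lean' under-provision:       {expected_cost_lean}")
# -> 50 vs 500.0: the 'wasteful'-looking option dominates in expectation.
```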

And even more so in the case of the COVID pandemic. Mr. Musk implies that something is wrong with California having secured hospital bed capacity running at more than double the rate of COVID patient arrivals. That's the great news, folks. The COVID pandemic carries infection detection rates that double the population of infected individuals every 3-30 days, depending on the stage of contagion evolution. Earlier on, doubling times are closer to 3 days; later on, they are closer to 30 days. But utilization of hospital beds follows an even more complex dynamic, because in addition to the arrival rates of new patients, you also need to account for the duration of hospital stay for patients arriving at different times in the pandemic. Let's be generous to sceptics, like Mr. Musk, and assume that duration-of-stay-adjusted arrivals of new patients into hospitals have a doubling time at the mid-point of the 3-30 day range, or close to two weeks. If the California Government had NOT secured massively excessive capacity for COVID patients in advance of their arrival, the system would not have been able to add new capacity amidst the pandemic in time to match the doubling of new case arrivals. This would have meant that some patients would be able to access beds only later in the disease progression period, arriving at hospital beds later in time, with more severe impacts from the disease and in need of longer stays and more aggressive interventions. The result would have been an even faster doubling rate in the demand for hospital beds, with a lag of a few days. You can see how the system shortages would escalate out of control.
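Here is a minimal sketch of that doubling arithmetic (the parameters are illustrative only: the two-week doubling time from the paragraph above, and capacity pre-provisioned at double the current patient load):

```python
# Stylised illustration: exponential bed demand vs. capacity secured at double current demand.
initial_patients = 1_000          # hypothetical current COVID in-patient load
doubling_time_days = 14           # mid-point of the 3-30 day range discussed above
capacity = 2 * initial_patients   # capacity secured at double the current arrivals rate

for day in range(0, 43, 7):
    demand = initial_patients * 2 ** (day / doubling_time_days)
    status = "OK" if demand <= capacity else "SHORTAGE"
    print(f"day {day:2d}: demand ~{demand:7,.0f} beds vs capacity {capacity:,} -> {status}")
```

Under these illustrative assumptions, 'double' the capacity buys only about one doubling period - roughly two weeks - of headroom, which is precisely why it has to be secured well before it looks necessary.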

Running tight supply chains in a pandemic is the exact opposite of what has to be done. Running supply capacity at more than double the rate of realized demand is exactly what needs to be done. We do not cut corners on basic safety equipment. Boeing did, with the 737 Max, and we know where that has left them. We most certainly should not treat a public health pandemic as a basis for cutting surplus safety capacity in the system.

Friday, November 24, 2017

24/11/17: Learning from the GFC: Lessons for Investors


My article summing up the key lessons from the Global Financial Crisis that investors should review before the next crisis hits is now available via the Manning Financial newsletter: http://issuu.com/publicationire/docs/mf_winter_2017?e=16572344/55685136.



Sunday, June 26, 2016

26/6/16: Black Swan ain't Brexit... but


There is a lot of froth in the media opinionating on the Brexit vote. And there is a lot of nonsense.

One clearly cannot deal with all of it, so I am going to occasionally dip into the topic with some comments. These are not systematic in any way.

Let's take the myth of Brexit being a 'Black Swan'. This goes along the following lines: the lack of preparedness among UK and European leaders for the Brexit referendum outcome can be explained by the nature of the outcome being a 'Black Swan' event.

The theory of 'Black Swan' events was introduced by Nassim Taleb in his book “The Black Swan”. There are three defining characteristics of such an event:

  1. The event can be explained ex post its occurrence as either predictable or expected;
  2. The event has an extremely large impact (cost or benefit); and
  3. The event (ex ante its occurrence) is unexpected or not probable.

Let's take a look at the Brexit vote in terms of the above three characteristics.

Analysis post-event shows that Brexit does indeed conform with point 1, but only partially. There is a lot of noise around various explanations for the vote being advanced, with analysis reaching across the following major arguments:

  • 'Dumb' or 'poor' or 'uneducated' or 'older' people voted for Brexit
  • People were swayed to vote for Brexit by manipulative populists (which is an iteration of the first bullet point)
  • People wanted to punish elites for (insert any reason here)
  • Protest vote (same as the bullet point above)
  • People voted to 'regain their country from EU' 
  • Brits never liked being in the EU, and so on
The multiplicity of often overlapping reasons for the Brexit vote outcome does imply significant complexity in the causes and roots of voters' preferences, but, in general, 'easy' explanations are being advanced in the wake of the vote. They are neither correct nor wrong, which means that point 1 is neither violated nor confirmed: loads of explanations are being given ex post, and loads of predictions were issued ex ante.

The Brexit event is likely to have a significant impact. The short-term impact is likely to be extremely large, although medium- and longer-term impacts are likely to be more modest. The reasons for this (not an exhaustive list) include: 
  • Likely overshooting in risk valuations in the short run;
  • Increased uncertainty in the short run that will be ameliorated by subsequent policy choices, actions and information flows; 
  • The start of the resolution process with the EU, which is likely to be associated with more intransigence vis-a-vis the UK on the EU's behalf at the start, gradually converging to more pragmatic and cooperative solutions over time (what we call moving along the expectations curve); 
  • Pre-vote pricing in the markets that resulted in a rather significant over-pricing of the probability of 'Remain' vote, warranting a large correction to the downside post the vote (irrespective of which way the vote would have gone); 
  • Post-vote vacillations and debates in the UK as to the legal outturn of the vote; and 
  • The nature of the EU institutions and their extent in determining economic and social outcomes (the degree of integration that requires unwinding in the case of the Brexit)
These expected impacts were visible pre-vote and, in fact, were severely overhyped in media and official analysis. Remember all the warnings of economic, social and political armageddon that the Leave vote was expected to generate. These were voiced in a number of speeches, articles, advertorials and campaigns by the Bremainers. 

So, per the second point, the event was ex ante expected to generate huge impacts, and these potential impacts were flagged well in advance of the vote.

The third ingredient in the making of a 'Black Swan' is the unpredictable (or low-predictability) nature of the event. Here, the entire thesis of Brexit as a 'Black Swan' collapses. 

Let me start with an illustration: about 18 hours before the results were announced, I repeated my view (proven to be erroneous in the end) that 'Remain' would shade the vote by roughly 52% to 48%. As far as I am aware, no analyst, media outfit or 'predictions market' (aka betting shop) put the probability of 'Leave' at less than 30 percent. 

Now, 30 percent is not an unpredictable / unexpected outcome. It is, instead, an unlikely, but possible, event. 

Let's do a mental exercise: you are offered by your stockbroker an investment product that risks losing 30% of your pension money (say, EUR100,000) with a probability of 30%. Your expected loss is EUR9,000. This is not a 'Black Swan' or an improbable high-impact event, but rather a possible high-impact event. The conditional (on the loss materialising) impact here is, however, a EUR30,000 loss. Now, consider a risk of losing 90% of your pension money with a probability of 10%. Your expected loss is the same, but the low probability of a loss makes it a rather unexpected high-impact event, as the conditional impact of a loss here is EUR90,000 - three times the size of the conditional loss in the first case. 

The latter case is not Brexit, but it is a Black Swan; the former case is Brexit-like and is not a Black Swan event. 
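For concreteness, here is a minimal sketch of the arithmetic in the two stylised pension examples above (the figures are the illustrative ones from the text, nothing more):

```python
# Illustrative only: the two stylised pension-loss scenarios discussed above.
pension = 100_000  # EUR

scenarios = {
    "Brexit-like (possible, high impact)":   {"loss_share": 0.30, "probability": 0.30},
    "Black-Swan-like (improbable, extreme)": {"loss_share": 0.90, "probability": 0.10},
}

for name, s in scenarios.items():
    conditional_loss = s["loss_share"] * pension          # loss if it materialises
    expected_loss = s["probability"] * conditional_loss   # probability-weighted loss
    print(f"{name}: conditional loss EUR{conditional_loss:,.0f}, "
          f"expected loss EUR{expected_loss:,.0f}")
```

The expected loss is EUR9,000 in both cases; what separates a Brexit-like event from a Black Swan here is the combination of ex-ante probability and conditional impact.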

Beyond the discussion of whether Brexit was a Black Swan event or not, however, the conditional loss (conditional on the loss materialising) in the above examples shows that, however low the probability of a loss might be, once the conditional loss becomes sizeable enough, risk assessment and management of the event that can result in such a loss is required. In other words, whether or not Brexit was probable ex ante the vote (and it was quite probable), any risk management in preparation for the vote should have included a full evaluation of responses to such a loss materialising. 

It is now painfully clear (see EU case here: http://arstechnica.co.uk/tech-policy/2016/06/brexit-in-brussels-junckers-mic-drop-and-political-brexploitation/, see Irish case here: http://www.irishtimes.com/news/politics/government-publishes-brexit-contingency-plan-1.2698260) that prudent risk management procedures were not followed by the EU and the Irish State. There is no serious contingency plan. No serious road map. No serious impact assessment. No serious readiness to deploy policy responses. No serious proposals for dealing with the vote outcome.

Even if the Brexit vote was a Black Swan (although it was not), European institutions should have been prepared to face the aftermath of the vote. This is especially warranted, given the hysteria whipped up by the 'Remain' campaigners as to the potential fallouts from the 'Leave' vote prior to the referendum. In fact, the EU and national institutions should have been prepared even more so because of the severely disruptive nature of Black Swan events, not despite the event being (in their post-vote minds) a Black Swan.

Friday, January 15, 2016

15/1/16: Household Debt Sustainability in One Chart?


Here is a neat chart plotting household debt against long-term interest rates in an attempt to visualise property prices in an affordability/sustainability context:

Source: @resi_analyst

The Irish progression is poor by the debt measure, and is sustained (barely) by low interest rates, even post-deleveraging.

15/1/16: Gold Bullion as Risk Diversifier: 2015 Overview


A note of mine covering the 2015 gold market and the continued role of gold bullion & coins as risk diversifiers in the current environment is now available on the GoldCore page here: http://www.goldcore.com/us/gold-blog/gold-bullion-retains-key-role-of-a-major-diversifier-dr-gurdgiev/.


Friday, November 27, 2015

27/11/15: More Tiers, Lower Risks, But Higher Costs: FSB Latest Solutions to Systemic Crises


The Financial Stability Board (a mega quango set up under the G20 cover to make policy recommendations aimed at assuring that Too-Big-To-Fail banks are brought under some international oversight) has recently issued its position on banks' capital shortfalls, based on an assessment of their balance sheets, designed to ‘prevent taxpayer bailouts of lenders’.

The FSB report, based on stress tests, stated that big international banks operating globally will have to raise anywhere between EUR 42 billion and EUR 1.1 trillion in funding by 2022 to cover the shortfall in bailable (special) tier debt that can be written down in the case of a bank running into trouble in the future. The tests explicitly covered what is known as banks’ Total Loss Absorbing Capacity (TLAC) - debt that can be converted into equity when a bank fails, in effect forcing debt holders to shoulder the cost of a bank collapse and freeing taxpayers from the need to step in. The TLAC approach to bank funding also breaks the pari passu chain of rights distribution across banks’ liabilities, separating (at last) depositors from bondholders. (1)

In releasing its estimates for TLAC shortfall, the FSB also provided final guidance as to the levels of TLAC it expects to be held by the globally important TBTF banks: 16% of total bank risk-weighted assets by 2019, rising to 18% by 2022. (2)

To be clear, the TLAC cushion is not an iron-clad guarantee that in a future crisis depositors’ bail-ins and taxpayers’ supports won’t ever arise. Instead, it is just a cushion, albeit, at the 18 percent target, a significant one. And the cost of this insurance will also be material and is likely to be shared across depositors and borrowers worldwide. Current estimates put the cost of the 16% TLAC hurdle at around 2% of the total income of the largest banks; spread over roughly 4 years, this would imply that up to 1/3 of the average bank interest margin can be swallowed by the accumulation of the cushion. Maintenance of this cushion will also require additional costs, as TLAC instruments will likely carry a higher cost of funding.

In a silver lining for Western banking groups, the hardest-hit banks amongst the 30 Globally Systemically Important Banks (GSIBs) (3) identified by the FSB are four Chinese banks - Agricultural Bank of China, Bank of China, China Construction Bank and Industrial and Commercial Bank of China - which will no longer be exempt from TLAC. These banks currently hold no senior debt liabilities that can count as part of the TLAC cushion. In total, there are 60 GSIBs covered by TLAC, but in Europe, some 6,000 smaller banks are also covered by the Minimum Requirement for Eligible Liabilities (MREL), due in January 2016.

The core point for both the MREL and TLAC is the issue of ‘loss-absorbing capital’. While the issue has been with regulators since the end of the Global Financial Crisis (2010), there is still no clarity on the mechanics of how this concept will work in the end. Currently there are three channels through which liabilities can be subordinated (bailed in) in the case of a crisis. All relate to bank-issued debt instruments:

  1. Contractual channel for subordination: banks can issue senior subordinated debt (tier-3 debt) which ranks ahead of tier-2 debt already outstanding in case of normal crises, but is bailable in the case of a structural crisis. 
  2. Statutory channel: bank-issued debt can be subordinated by statute.
  3. Structural channel: bailable debt is issued through a holding company to be subordinated to debt issued by the bank itself.


Euromoney recently covered these channels, concluding that whilst all three are complex, the contractual channel is the hardest to structure. It appears that the FSB view is that the contractual channel is the one to be pursued. In contrast, Italian authorities have pursued the statutory channel, with a legislative proposal to make un-guaranteed deposits super-senior liabilities, bailable only in the last instance. German legislation, currently in draft stage, will make all bonds subordinatable in the case of bank insolvency. Another case of a statutory instrument defining the contractual subordination channel is the Spanish regulator's introduction of legislation that will simply subordinate all tier-2 debt by creating a tier-3 debt wedged between senior and tier-2 debt. In contrast, two Swiss GSIBs - Credit Suisse and UBS - issued bonds at holding-company level in 2015, opting for the structural channel to subordination. Finally, in the U.S., the Federal Reserve has already applied (as of October 30, 2015) the TLAC standards, covering eight of the biggest U.S. banks, with the total shortfall of long-term debt arising under TLAC rules estimated at $120 billion. On November 9, U.S. giant Wells Fargo & Co announced that it will need to issue between $40 billion and $60 billion in new debt to cover TLAC requirements, with $40 billion representing the minimum required volume.

Per the Fed, U.S. GSIBs will be required to hold (a numeric sketch of this arithmetic follows below):

  • A long-term debt balance of 6% plus their respective GSIB surcharge of risk-weighted assets, or 4.5% of total leverage exposure, whichever is greater; 
  • A TLAC amount of 18% of RWAs or 9.5% of total leverage exposure, whichever is greater; and 
  • Sufficient high-quality assets (as proposed in 2014), as well as a cushion to raise capital levels by an additional $200 billion over and above the industry requirements. (4)
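
As a rough illustration of the Fed arithmetic above, here is a minimal sketch using entirely hypothetical balance-sheet figures (the function name and the numbers are mine, for illustration only):

```python
# Minimal sketch of the Fed-style TLAC arithmetic described above:
#   long-term debt >= max(6% + GSIB surcharge of RWA, 4.5% of leverage exposure)
#   TLAC           >= max(18% of RWA, 9.5% of leverage exposure)

def fed_tlac_requirements(rwa, leverage_exposure, gsib_surcharge):
    """Return (minimum long-term debt, minimum TLAC) for a hypothetical US GSIB,
    in the same units as the inputs."""
    min_ltd = max((0.06 + gsib_surcharge) * rwa, 0.045 * leverage_exposure)
    min_tlac = max(0.18 * rwa, 0.095 * leverage_exposure)
    return min_ltd, min_tlac

# Illustrative (made-up) balance sheet: USD 1,200bn RWA, USD 2,000bn leverage exposure, 2% surcharge
ltd, tlac = fed_tlac_requirements(rwa=1200, leverage_exposure=2000, gsib_surcharge=0.02)
print(f"Minimum long-term debt: ${ltd:.0f}bn, minimum TLAC: ${tlac:.0f}bn")
```

With these made-up inputs, the risk-weighted-asset leg is the binding one for both requirements.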

The key problem with the most functional - contractual and statutory - channels is that the TLAC approach requires the creation of a new tier-3 debt that has to be ‘wedged’ between the current senior and tier-2 levels. And this, as noted in the Euromoney article (5), can violate the pari passu clauses already written into existing bank debt.

In simple terms, the regulatory innovations aiming to address the need to break the link between the state and the banks, including for the systemically important banks, seem to continue going down the route of creating added tiers of risk absorption that improve on, but do not entirely remove, the problem of bank-sovereign contagion. At the same time, all these innovations continue to raise the cost of running basic banking operations - costs that are likely to translate into more expensive credit and lower credit-related activities, such as capex and household investment. On a long enough time frame, if successful, the new tier of bank debt can, if taken to higher ratios, displace the problem of pari passu vis-a-vis the depositors. The question is: at what cost?


(1) Some basic details are available here: http://www.financialstabilityboard.org/2015/11/total-loss-absorbing-capacity-tlac-principles-and-term-sheet/ and http://www.euromoney.com/Article/3408580/TLAC-what-you-should-know.html


Friday, October 16, 2015

16/10/15: Gold and Bitcoin: Adjacency and Hedging Properties


This week, I spoke at a joint Markets Technicians Association and CAIA seminar hosted by Bloomberg, covering two recent research projects I was involved with on the role of Gold and Bitcoin as safe havens and hedges for other assets.

Here are my slides (omitting section division slides):
The first section was based on the following paper: http://www.sciencedirect.com/science/article/pii/S1057521912001226



A caveat to the above: we are seeing increasing evidence that Gold's hedging properties may be changing over time, especially due to the increased financialisation of the asset. In this context, it is worth referencing a recent working paper by Brian M. Lucey et al, linked here, that I also cited at the seminar.




The Bitcoin section is based on a work-in-progress paper with Cormac Ennis: "Is Bitcoin like Gold? Hedging and Safe Haven Properties of the Virtual Currency". The results presented below should be treated with serious caution, as they are extremely preliminary.

Note: we are extending the data set to cover a longer period, although even with this extension, data coverage for Bitcoin is still suboptimal in both duration and quality. Many thanks to a seminar participant for pointing out two key caveats to the overall data coverage:

  1. The 'lumpy' nature of demand around the Cypriot banking crisis; and
  2. Potential effects on reported Bitcoin data quality from a small number of high-profile pricing events, such as technical glitches and supply/demand shifts linked to large exchange-related events (e.g. MtGox).


Summarising the two papers' findings:

Friday, May 11, 2012

11/5/2012: Ignoring that which almost happened?

In recent years, I have found myself migrating more firmly toward behavioralist views on finance and economics. Not that this view, in my mind, contradicts the classes of models and logic I am accustomed to. It is, rather, an additional enrichment of them, adding toward completeness.

With this in mind - here's a fascinating new study.

How Near-Miss Events Amplify or Attenuate Risky Decision Making, written by Catherine Tinsley, Robin Dillon and Matthew Cronin and published in the April 2012 issue of Management Science, studied the way people change their risk attitudes "in the aftermath of many natural and man-made disasters".

More specifically, "people often wonder why those affected were underprepared, especially when the disaster was the result of known or regularly occurring hazards (e.g., hurricanes). We study one contributing factor: prior near-miss experiences. Near misses are events that have some nontrivial expectation of ending in disaster but, by chance, do not."

The study shows that "when near misses are interpreted as disasters that did not occur, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions (e.g., choosing not to engage in mitigation activities for the potential hazard). On the other hand, if near misses can be recognized and interpreted as disasters that almost happened, this will counter the basic “near-miss” effect and encourage more mitigation. We illustrate the robustness of this pattern across populations with varying levels of real expertise with hazards and different hazard contexts (household evacuation for a hurricane, Caribbean cruises during hurricane season, and deep-water oil drilling). We conclude with ideas to help people manage and communicate about risk."

An interesting potential corollary to the study is that analytical conclusions formed ex post near misses (or in the wake of significant increases in risk) matter to future responses. Not only that: the above suggests that the conjecture that a 'glass half-full' type of analysis should be preferred to a 'glass half-empty' position might lead to the conclusion that an event 'did not occur' rather than that it 'almost happened'.

Fooling yourself into safety by promoting 'optimism' in interpreting reality might be a costly venture...