Tuesday, June 27, 2017

27/6/17: Millennials’ Support for Liberal Democracy is Failing


New paper is now available at SSRN: "Millennials’ Support for Liberal Democracy is Failing. An Investor Perspective" (June 27, 2017): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2993535.


Recent evidence shows a worrying trend of declining popular support for traditional liberal democracy across a range of Western societies. This decline is more pronounced for the younger cohorts of voters. The prevalent theories in political science link this phenomenon to a rise in the volatility of political and electoral outcomes, either induced by challenges from outside (e.g. Russia and China) or arising in the aftermath of the recent crises. These views miss a major point: the key drivers of the younger generations’ skepticism toward liberal democratic values are domestic intergenerational political and socio-economic imbalances that engender an environment of deep (Knightian-like) uncertainty. This distinction – between the volatility/risk framework and deep uncertainty – is non-trivial for two reasons: (1) policy and institutional responses to volatility/risk are inconsistent with those necessary to address rising deep uncertainty and may even exacerbate the negative fallout from the ongoing pressures on liberal democratic institutions; and (2) investors cannot rely on traditional risk management approaches to mitigate the effects of deep uncertainty. Viewing the current political trends through the risk/volatility framework can therefore amplify potential systemic shocks to the markets and to investors through both of these channels simultaneously. Despite touching on a much broader set of issues, this note concludes with a focus on investment strategies that can mitigate the effects of rising deep political uncertainty for investors.


Thursday, June 22, 2017

22/6/17: Efficient Markets for H-bomb Fuel - 1954


For all the detractors of the EMH - the Efficient Markets Hypothesis - and for all its fans, as well as for any fan of economic history, this paper is a must-read: http://www.terry.uga.edu/media/events/documents/Newhard_paper-9-6-13.pdf.

Back in 1954, Armen A. Alchian, an economist working at RAND, conducted the world’s first event study. His study used stock market data, publicly available at the time, to infer which fissile fuel material was used in manufacturing the then highly secret H-bomb. That study was immediately withdrawn from public view. The paper linked above replicates Alchian's results.
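For readers unfamiliar with the mechanics, here is a minimal sketch of the event-study logic (my own toy illustration with made-up company names and synthetic returns, not Alchian's data or his exact method): compute market-adjusted abnormal returns for each candidate supplier and compare cumulative abnormal returns (CAR) around the announcement date; the stock that "reacts" points to the input actually used.

```python
# Minimal event-study sketch (synthetic data and made-up names, purely illustrative).
# Idea: the candidate whose stock shows the largest cumulative abnormal return
# around the event date is the likely supplier of the secret input.
import numpy as np

rng = np.random.default_rng(0)
n_days, event_day, window = 120, 80, 10          # trading days, event index, +/- window

market = rng.normal(0.0005, 0.01, n_days)        # proxy for the market return
candidates = {"LithiumCo": 0.010, "OtherFuelCo": 0.0}   # hidden post-event drift per day

for name, drift in candidates.items():
    stock = market + rng.normal(0, 0.008, n_days)   # market exposure plus idiosyncratic noise
    stock[event_day:] += drift                      # information leaking into prices post-event
    abnormal = stock - market                       # market-adjusted abnormal return
    car = abnormal[event_day - window: event_day + window].sum()
    print(f"{name}: CAR over +/-{window} days around the event = {car:.1%}")
```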


22/6/17: Unwinding Monetary Excesses: FocusEconomics


FocusEconomics is running my comment (amongst those of other analysts) on the Fed's and the ECB's paths for unwinding QE: http://www.focus-economics.com/blog/how-will-fed-reduce-balance-sheet-how-will-ecb-end-qe.


21/6/17: Azerbaijan Bank and Irish Saga of $900 million


A Bloomberg article on the trials and tribulations of yet another 'listing' on the Irish Stock Exchange, this one from Azerbaijan: https://www.bloomberg.com/news/articles/2017-06-18/azerbaijan-bank-took-900-million-irish-detour-on-way-to-default. Includes a comment from myself.



Friday, June 16, 2017

16/6/17: Replicating Scientific Research: Ugly Truth


Continuing with the 'What I've been reading lately' theme, here is a smashing paper on the 'accuracy' of empirical economic studies.

The paper, authored by Kewei Hou, Chen Xue and Lu Zhang and titled "Replicating Anomalies" (the most recent version is from June 12, 2017, but it is also available in an earlier version via NBER), effectively blows the whistle on what is going on in empirical research in economics and finance. Per the authors, the vast literature that detects financial markets anomalies (deviations away from the efficient markets hypothesis / economic rationality) "is infested with widespread p-hacking".

What's p-hacking? Well, it's a shady practice whereby researchers manipulate sample criteria (which data points to include in or exclude from estimation) and test procedures (model specifications, plus selective reporting of favourable test results) until insignificant results become significant. In other words, under p-hacking researchers artificially inflate the apparent significance of their models and explanatory variables, or, put differently, they engineer results that confirm their intuition or biases.
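To make the mechanics concrete, here is a toy simulation (my own illustration, not from the Hou-Xue-Zhang paper): regress a pure-noise 'return' series on dozens of pure-noise candidate predictors under arbitrary sample filters, then report only the best p-value.

```python
# Toy illustration of p-hacking: the "return" series contains no real effect,
# yet searching across many specifications reliably produces "significant" results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
returns = rng.normal(size=n)                     # outcome: pure noise, no true effect anywhere

p_values = []
for spec in range(100):                          # 100 arbitrary "specifications"
    signal = rng.normal(size=n)                  # a candidate predictor (also pure noise)
    keep = rng.random(n) > 0.1                   # an ad hoc sample filter (drop ~10% of points)
    res = stats.linregress(signal[keep], returns[keep])
    p_values.append(res.pvalue)

p_values = np.array(p_values)
print(f"Best (reported) p-value: {p_values.min():.4f}")
print(f"'Significant' specifications at the 5% level: {(p_values < 0.05).sum()} of 100")
# Roughly 5 of the 100 all-noise specifications clear the 5% bar by chance alone;
# reporting only those is p-hacking in miniature.
```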

What are anomalies? Anomalies are departures of market outcomes (e.g. share prices) from the predictions of models consistent with rational expectations and the efficient markets hypothesis. In other words, anomalies occur where market efficiency fails.

Scores of anomalies have been detected in the academic literature, prompting many researchers to advocate abandoning the idea that markets are efficient (in all its forms, weak and strong).

Hou, Xue and Zhang put these anomalies to the test. They compile "a large data library with 447 anomalies". The authors then control for a key problem with the data used across many studies: microcaps. Microcaps, or small capitalization firms, are numerous in the markets (accounting for roughly 60% of all stocks) but represent only about 3% of total market capitalization. This holds across the key markets: NYSE, Amex and NASDAQ. Yet, as the authors note, the evidence shows that microcaps "not only have the highest equal-weighted returns, but also the largest cross-sectional standard deviations in returns and anomaly variables among microcaps, small stocks, and big stocks." In other words, they are a higher-risk, higher-return class of securities. Despite this, "many studies overweight microcaps with equal-weighted returns, and often together with NYSE-Amex-NASDAQ breakpoints, in portfolio sorts." Worse, hundreds of studies use a 1970s-vintage regression technique that actually assigns more weight to microcaps. In simple terms, microcaps are the most common outliers, and yet they are given either the same weight in the analysis as non-outliers or a weight elevated relative to normal assets, even though microcaps have little bearing on the actual markets (again, their share of total market capitalization is only about 3%).
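A stylized numerical sketch (my own, with made-up return distributions that only mirror the 60%-of-names versus roughly-3%-of-capitalization split described above) shows why the weighting choice matters so much:

```python
# Stylized illustration: equal- vs value-weighted portfolio returns when
# microcaps are ~60% of names but only ~3% of market capitalization.
import numpy as np

rng = np.random.default_rng(1)
n_micro, n_big = 600, 400                        # 60% of names are microcaps
ret_micro = rng.normal(0.02, 0.15, n_micro)      # higher mean, very noisy returns
ret_big = rng.normal(0.005, 0.04, n_big)         # lower mean, lower volatility

cap_micro = np.full(n_micro, 0.03 / n_micro)     # 3% of total capitalization
cap_big = np.full(n_big, 0.97 / n_big)           # 97% of total capitalization

rets = np.concatenate([ret_micro, ret_big])
caps = np.concatenate([cap_micro, cap_big])

equal_weighted = rets.mean()                     # dominated by the noisy microcap segment
value_weighted = np.average(rets, weights=caps)  # dominated by the 97% of capitalization
print(f"Equal-weighted return: {equal_weighted:.2%}")
print(f"Value-weighted return: {value_weighted:.2%}")
```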

The study corrects for these problems and finds that, once microcaps are accounted for, a grand total of 286 anomalies (64% of all anomalies studied), and under a more stringent statistical significance test 380 (or 85% of all anomalies), "including 95 out of 102 liquidity variables (93%)", are "insignificant at the 5% level." In other words, the original studies' claims that these anomalies were significant enough to warrant rejecting market efficiency do not survive once one recognizes a basic and simple problem with the data. Worse, per the authors, "even for the 161 significant anomalies, their magnitudes are often much lower than originally reported. Among the 161, the q-factor model leaves 115 alphas insignificant (150 with t < 3)."

This is pretty damning for those of us who believe, based on the empirical results published over the years, that markets are bounded-efficient, and it is outright savaging for those who claim that markets are perfectly inefficient. But this tendency of researchers to silverplate statistics is hardly new.

Hou, Xue and Zhang provide a nice summary of research on p-hacking and the non-replicability of statistical results across a range of fields. It is worth reading, because it significantly dents one's confidence in the quality of peer review and of published scientific research.

As the authors note, "in economics, Leamer (1983) exposes the fragility of empirical results to small specification changes, and proposes to “take the con out of econometrics” by reporting extensive sensitivity analysis to show how key results vary with perturbations in regression specification and in functional form." That call was never heeded by the research community.

"In an influential study, Dewald, Thursby, and Anderson (1986) attempt to replicate empirical results published at Journal of Money, Credit, and Banking [a top-tier journal], and find that inadvertent errors are so commonplace that the original results often cannot be reproduced."

"McCullough and Vinod (2003) report that nonlinear maximization routines from different software packages often produce very different estimates, and many articles published at American Economic Review [highest rated journal in economics] fail to test their solutions across different software packages."

"Chang and Li (2015) report a success rate of less than 50% from replicating 67 published papers from 13 economics journals, and Camerer et al. (2016) show a success rate of 61% from replicating 18 studies in experimental economics."

"Collecting more than 50,000 tests published in American Economic Review, Journal of Political Economy, and Quarterly Journal of Economics, [three top rated journals in economics] Brodeur, Lé, Sangnier, and Zylberberg (2016) document a troubling two-humped pattern of test statistics. The pattern features a first hump with high p-values, a sizeable under-representation of p-values just above 5%, and a second hump with p-values slightly below 5%. The evidence indicates p-hacking: authors search for specifications that deliver just-significant results and ignore those that give just-insignificant results to make their work more publishable."
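A rough simulation of that pattern (my own sketch, not Brodeur et al.'s methodology) can be produced by assuming that results landing just above the 5% threshold get re-specified until most of them slip below it, which hollows out the region just above 0.05 and piles mass just below it:

```python
# Rough simulation of the published p-value pattern: studies that land just above
# 0.05 get re-specified until they (often) slip below the threshold.
import numpy as np

rng = np.random.default_rng(7)
true_p = rng.uniform(0, 1, 10_000)                 # honest first-pass p-values
published = true_p.copy()

marginal = (published > 0.05) & (published < 0.15) # "so close" results get hacked
published[marginal] = np.where(rng.random(marginal.sum()) < 0.7,
                               rng.uniform(0.01, 0.05, marginal.sum()),  # pushed below 5%
                               published[marginal])                      # or left as reported

hist, edges = np.histogram(published, bins=np.arange(0, 0.21, 0.01))
for lo, count in zip(edges[:-1], hist):            # crude text histogram of p in [0, 0.20)
    print(f"p in [{lo:.2f},{lo + 0.01:.2f}): {'#' * (count // 20)}")
```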

If you think this phenomenon is confined to economics and finance, think again. Here are some findings from other 'hard science' disciplines where, you know, lab coats do not lie.

"...replication failures have been widely documented across scientific disciplines in the past decade. Fanelli (2010) reports that “positive” results increase down the hierarchy of sciences, with hard sciences such as space science and physics at the top and soft sciences such as psychology, economics, and business at the bottom. In oncology, Prinz, Schlange, and Asadullah (2011) report that scientists at Bayer fail to reproduce two thirds of 67 published studies. Begley and Ellis (2012) report that scientists at Amgen attempt to replicate 53 landmark studies in cancer research, but reproduce the original results in only six. Freedman, Cockburn, and Simcoe (2015) estimate the economic costs of irreproducible preclinical studies amount to about 28 billion dollars in the U.S. alone. In psychology, Open Science Collaboration (2015), which consists of about 270 researchers, conducts replications of 100 studies published in top three academic journals, and reports a success rate of only 36%."

Let's get down to the real farce: everyone in the sciences knows the above: "Baker (2016) reports that 80% of the respondents in a survey of 1,576 scientists conducted by Nature believe that there exists a reproducibility crisis in the published scientific literature. The surveyed scientists cover diverse fields such as chemistry, biology, physics and engineering, medicine, earth sciences, and others. More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than 50% have failed to reproduce their own experiments. Selective reporting, pressure to publish, and poor use of statistics are three leading causes."

Yeah, you get the idea: you need years of research, testing and re-testing, and, more often than not, you find that the results are insignificant or only weakly significant. Which means that after years of research you end up with an unpublishable paper (no journal welcomes a paper without significant results, even though absence of evidence is as important in science as evidence of presence), no tenure, no job, no pension, no prospect of a career. So what do you do then? Ah, well... p-hack the shit out of the data until the editor is happy and the referees are satisfied.

Which, for you, the reader, should mean the following: when someone says that 'scientific research has established fact A', based on reputable journals publishing high-quality peer-reviewed papers on the subject, know that around half of the findings claimed in those papers, on average, most likely cannot be replicated or verified. And then remember: it took only one or two scientists to turn the world around from believing (based on the scientific consensus of the time) that the Earth is flat and the centre of the Universe, to believing in the world as we know it today.


Full link to the paper: Charles A. Dice Center Working Paper No. 2017-10; Fisher College of Business Working Paper No. 2017-03-010. Available at SSRN: https://ssrn.com/abstract=2961979.

16/6/17: Trumpery & Knavery: New Paper on Washington's Geopolitical Rebalancing


Not normally my cup of tea, but Valdai Club work is worth following for all Russia watchers, regardless of whether you agree or disagree with its Moscow-centric worldview (and whether you agree or disagree that such a worldview even exists). So here is a recent paper on the Trump Administration and the context of Washington's search for new positioning in a geopolitical environment where asymmetric influence moves by China, Russia and India, as well as by smaller players such as Iran and Saudi Arabia, are severely constraining the neo-conservative paradigm of the early 2000s.

Making no comment on the paper and leaving it for you to read:  http://valdaiclub.com/files/14562/.


Wednesday, June 14, 2017

14/6/17: Unwinding the Mess: Fed's Road Map to QunE


As promised in the previous post, here is a quick update on the Fed's latest guidance regarding its plans to unwind its $4.5 trillion balance sheet, the Quantitative un-Easing...

First, the size and the composition of the problem:



So, as noted in the post here: http://trueeconomics.blogspot.com/2017/06/13617-unwinding-mess-ecb-vs-fed.html, the Fed is aiming to gradually unwind its asset exposures to both U.S. Treasuries and MBS (mortgage-backed securities). This is a tricky task, because simply dumping both asset classes into the markets (aka selling them to investors) risks pushing yields on Government debt up and the value of Government bonds down, along with the value of MBS assets. The problem is that all of these assets are systemically important to… err… systemically important financial institutions (banks, pension funds, investment funds and insurance companies).

Should yields on Government debt spike because of Fed selling, the U.S. Government will simultaneously: 1) pay more on its debt; and 2) receive smaller remittances from the Fed (the interest payments on Fed-held debt that are returned to the Treasury). This would be ugly. Uglier yet, the value of these bonds would fall, creating pressure on the valuations of assets held by banks, investment funds, insurance companies and pension funds. In other words, these institutions would have to accumulate more assets to cover their capital cushions and/or sustain their fund valuations. Or they would have to reduce lending and the provision of payouts.
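To put a rough number on the 'value falls' channel, here is a back-of-the-envelope bond pricing sketch with purely hypothetical parameters (a 10-year bond with a 2.25% annual coupon; not an actual Fed holding): a one-percentage-point rise in yields marks the bond down by roughly 8%, and that mark-to-market loss flows straight onto holders' balance sheets.

```python
# Back-of-the-envelope: price of a coupon bond at two yield levels, to show how
# a Fed-driven rise in yields marks down the value of existing holdings.
def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of annual coupons plus principal, discounted at the yield."""
    coupons = sum(face * coupon_rate / (1 + yield_rate) ** t
                  for t in range(1, years + 1))
    principal = face / (1 + yield_rate) ** years
    return coupons + principal

# Hypothetical 10-year bond with a 2.25% coupon (illustrative numbers only).
p_before = bond_price(100, 0.0225, 0.0225, 10)   # priced at par when yield = coupon
p_after = bond_price(100, 0.0225, 0.0325, 10)    # yield jumps by 100 basis points

print(f"Price at 2.25% yield: {p_before:.2f}")
print(f"Price at 3.25% yield: {p_after:.2f}")
print(f"Mark-to-market change: {(p_after / p_before - 1):.2%}")
```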

Should MBS assets decline in value, there will be an asset write-down for the private sector financial institutions holding them. The result will be the same as above: less lending, more expensive credit and lower profit margins.

With this in mind, today’s Fed announcement is an interesting one. The FOMC “currently expects to begin implementing a balance sheet normalization program this year, provided that the economy evolves broadly as anticipated,” according to today’s statement. And the FOMC provides some guidance to this normalization program:

Instead of dumping assets into the market, the Fed will try to gradually shrink the balance sheet by ‘rolling off’ a fixed amount of assets every month. At the start, the Fed will ‘roll off’ $10 billion a month, split between $6 billion from Treasuries and $4 billion from MBS. Three months later, the numbers will rise to $20 billion per month: $12 billion for Treasuries and $8 billion for MBS. Subsequently, the ‘roll-offs’ will rise by $10 billion per month every three months ($6 billion for Treasuries and $4 billion for MBS). The ‘roll-off’ will be capped once it reaches $30 billion per month for Treasuries and $20 billion per month for MBS.
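The implied schedule of caps is easy to lay out explicitly. A quick sketch, assuming the caps step up exactly every three months as the guidance describes, that the Fed always has enough maturing paper to hit the caps, and counting months from whenever the programme actually starts (a date the Fed has not given):

```python
# Fed balance-sheet roll-off caps implied by the June 2017 guidance:
# start at $6bn Treasuries + $4bn MBS per month, step up by the same split
# every three months, and cap at $30bn Treasuries + $20bn MBS per month.
def rolloff_caps(months=15):
    schedule = []
    for m in range(months):
        step = m // 3 + 1                        # quarterly step-ups
        treasuries = min(6 * step, 30)           # $bn per month
        mbs = min(4 * step, 20)                  # $bn per month
        schedule.append((m + 1, treasuries, mbs, treasuries + mbs))
    return schedule

cumulative = 0
for month, tsy, mbs, total in rolloff_caps():
    cumulative += total
    print(f"Month {month:2d}: Treasuries ${tsy}bn, MBS ${mbs}bn, "
          f"total ${total}bn, cumulative ${cumulative}bn")
# At the full pace of $50bn/month the maximum run-off is $600bn a year --
# a slow unwind relative to a $4.5 trillion balance sheet.
```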

This modestly-paced plan suggests that the ‘roll-off’ will concentrate on non-replacement of maturing instruments, rather than on direct sales of existing instruments.

What we do not know: 1) when the ‘roll-off’ process will begin; and 2) when it will stop (in other words, what the long-run target level is for both asset classes on the Fed’s balance sheet). But the rest is pretty much consistent with my view presented here: http://trueeconomics.blogspot.com/2017/06/13617-unwinding-mess-ecb-vs-fed.html.




PS: A neat summary of Fed decisions and votes here: http://fingfx.thomsonreuters.com/gfx/rngs/USA-FED/010030ZL253/

14/6/17: The Fed: Bravely Going Somewhere Amidst Rising Uncertainty


Predictably (and in line with the median investor's outlook), the Fed raised its base rate and provided more guidance on its plans to deleverage its balance sheet (more on the latter in a subsequent post). The moves came against a downward revision of the short-term inflation forecast (inflationary expectations moved down) and of the medium-term sustainable (or neutral) rate of unemployment (the unemployment target moved down); both revisions suggesting the Fed could have paused its rate increases.

The rate hike was modest: the Federal Open Market Committee (FOMC) increased its benchmark target by a quarter point, taking the new target range to 1 percent to 1.25 percent, against the previous effective rate of around 0.91 percent. This marks the third rate hike in six months, and the Fed signalled that it is on track to hike rates again before the end of the year (with the likely date for the next hike in September). The forecast for 2018 is for another 75 basis points of rate rises, unchanged from the March forecast.

Interestingly, the Fed statement highlights that inflation (short-term expectations) remains subdued. “Inflation on a 12-month basis is expected to remain somewhat below 2 percent in the near term but to stabilize around the committee’s 2 percent objective over the medium term,” the FOMC statement said. This changes tack from previous months’ statements, in which the Fed described the inflationary outlook as “broadly close” to target. Data released earlier today showed core consumer price inflation (ex-food and energy) slowed in May for the fourth straight month, to 1.7 percent y-o-y. This is below the Fed target rate of 2 percent and suggests that monetary policy is currently running counter-cyclically to inflation. On the expectations side, the FOMC lowered its median forecast for 2017 inflation to 1.6 percent, from the 1.9 percent forecast published in March. The FOMC left its forecasts for 2018 and 2019 unchanged at 2 percent.

The Fed, therefore, sees the inflation slump as temporary, which prompted U.S. 2-year yields to move sharply up:
Source: @Schuldensuehner

Which means that today’s hike was not about inflationary pressures, but rather about unemployment, which dropped to a 16-year low of 4.3 percent in May.

As labour markets continue to overheat (forecast 2017 unemployment is now 4.2 percent, with vacancy postings exceeding job seekers by over 1 million, suggesting a substantial and rising gap between the low quality of remaining skills on offer and the demand for higher skills), the Fed dropped its estimate of the neutral rate of unemployment (in common terms, the estimated minimum level of unemployment that can be sustained without a major uptick in wage inflation) from 4.7 percent in March to 4.6 percent today. At which point it is worth noting the surreality of this number: the estimate has nothing to do with any realistic balancing of skills supply and demand, and is mechanically adjusted to match the evolving balance between actual unemployment trends and inflation trends. In other words, the neutral rate of unemployment is the Fed’s voodoo metric for justifying anything. How do I know this? Ok, consider the following forecast & outlook figures from the FOMC:

  • 2017: GDP growth at 2.2% compared to 2.1% prior, unemployment rate at 4.2% compared to 4.5% prior, and core inflation at 2.0%, same as prior. So the growth outlook is basically stable, but unemployment is dropping and inflation is not budging.
  • 2018: GDP growth unchanged at 2.1%, inflation unchanged at 2.0%, and unemployment at 4.2% vs 4.5% prior. So unemployment drops significantly, while GDP growth comes in slightly below 2017 and inflation stays put.
  • 2019: GDP growth at 1.9% vs 1.9% prior, unemployment at 4.2% vs 4.5% prior, and inflation at 2.0% vs 2.0% prior. Same story as in 2018.

In other words, it no longer matters what the Fed forecasts for growth and unemployment: inflation stays put. It doesn’t matter what it forecasts for growth and inflation: unemployment drops. And you can stop worrying about the joint forecast for inflation and unemployment: growth remains remarkably stable. It’s the New Normal of Alan Greenspan Redux.


The FOMC next meets in six weeks, on July 25-26. Here is the dot plot of the Fed’s expectations for the benchmark rate, compared to the previous one:


Source: https://www.bloomberg.com/graphics/fomc-dot-plot/

The key takeaway from all of this is that the Fed is currently at a crossroads: uncertainty about key economic indicators remains elevated, even as the Fed compresses its 2017-2018 guidance on rates. In other words, more certainty signalled by the Fed runs against more uncertainty signalled by the economy. Go figure…

Tuesday, June 13, 2017

13/6/17: Unwinding the Mess: ECB vs Fed


My guest post on the potential paths to unwinding monetary policies excesses by the Fed and ECB is available on FocusEconomics : http://www.focus-economics.com/blog/the-fed-ecb-at-a-crossroads-unwinding-qe.


13/6/17: Four Months of the Invisible Fiscal Discipline


The U.S. Treasury's latest figures (through May 2017) for the Federal Government’s fiscal (I’m)balance make an interesting read this year for a number of reasons. One of these is the promise of fiscal responsibility and of cutting public spending and deficits made by President Trump and the Republicans during last year’s campaigns. A promise that remains, unfortunately, unfulfilled.

Through May 2017, cumulative fiscal year-to-date Federal Government receipts amounted to $2.169 trillion, $30 billion higher than over the same period of 2016. However, Federal Government gross outlays in the first 8 months of this fiscal year stood at $2.602 trillion, or $57.345 billion above the same period of last year. As a result, the Federal deficit in the first 8 months of FY 2017 rose to $432.853 billion, up 6.77% y/y or $27.44 billion.

Given that 4 of the first 8 months of FY 2017 fell under the Obama presidency, the above comparatives are incomplete. So consider the four months from February through May. Over that period of 2017, the Federal deficit stood at $274.274 billion, up 11.17%, or $27.569 billion, on February-May of FY 2016. Over these four months of 2017, the Trump Administration managed to spend $51.9 billion more than its predecessor did over the same period a year earlier.
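For transparency, the year-on-year percentage comparisons above can be reproduced directly from the cited deficit figures (a quick arithmetic check, nothing more):

```python
# Quick check of the year-on-year deficit comparisons cited above ($ billion).
deficit_fy17_ytd = 432.853               # first 8 months of FY 2017
change_ytd = 27.44                       # y/y increase
deficit_fy16_ytd = deficit_fy17_ytd - change_ytd
print(f"FY-to-date y/y change: {deficit_fy17_ytd / deficit_fy16_ytd - 1:.2%}")   # ~6.77%

deficit_feb_may_17 = 274.274             # February-May 2017
change_feb_may = 27.569                  # y/y increase
deficit_feb_may_16 = deficit_feb_may_17 - change_feb_may
print(f"Feb-May y/y change: {deficit_feb_may_17 / deficit_feb_may_16 - 1:.2%}")  # ~11.17%
```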

You can see a more detailed breakdown of expenditures and receipts here: https://www.fiscal.treasury.gov/fsreports/rpt/mthTreasStmt/mts0517.pdf, but the bottom line is simple: so far, four months into his presidency, Mr. Trump has yet to show any signs of fiscal discipline. Which raises a question about his cheerleaders in Congress: having spent the Obama White House years banging on about the need for responsible financial management in Washington, the Republicans are hardly in a rush to start balancing the books now that their party controls both the legislative and, with some hefty caveats, the executive branches.

Sunday, June 11, 2017

10/6/2017: And the Ship [of Monetary Excesses] Sails On...


The happiness and the unbearable sunshine of Spring are bathing the monetary dreamland of the advanced economies... Based on the latest data, the world's 'leading' Central Banks continue to prime the pump, flooding the carburetor of the global markets engine with more and more fuel.

According to data collated by Yardeni Research, total asset holdings of the major Central Banks (the Fed, the ECB, the BOJ, and the PBOC) grew in April (and, judging by the preliminary data, expanded further in May):


April and May dynamics have been driven by a continued aggressive build-up in asset purchases by the ECB, which has now surpassed both the Fed and the BOJ in the size of its balance sheet. In the euro area's case, the current 'miracle growth' cycle requires over 50% more in monetary steroids to sustain than the previous gargantuan effort to correct for the eruption of the Global Financial Crisis.


Meanwhile, the Fed has been holding the junkies on a steady supply of cash, having ramped up its monetary easing earlier and more aggressively than the ECB. Still, despite the economy running on overheating jobs markets (judging by official stats), the pride first of the Obama Administration and now of his successor, the Fed has yet to find the breath to tilt down:


Which is clearly unlike the case of their Chinese counterparts, who are deploying creative monetarism to paint a numbers-by-abstraction picture of the PBOC balance sheet.
To sustain the dip in its assets-held line, the PBOC has cut rates and dramatically reduced the reserve ratio for banks.

And the PBOC has simultaneously expanded its own lending programmes:

All in, the PBOC has certainly pushed some pain into the markets in recent months, but that pain is far less than the asset account dynamics suggest.

Unlike the PBOC, the BOJ can't figure out whether to shock the world's Numero Uno monetary opioid addict (Japan's economy) or to appease it. Tokyo re-primed its monetary pump in April and took a bit of a knock down in May. Either way, the most indebted economy in the advanced world still needs its Central Bank to afford its own borrowing. Which is to say, it still needs to drain future generations' resources to pay for today's retirees.

So here is the final snapshot of the 'dreamland' of global recovery:

As the chart above shows, dealing with the Global Financial Crisis (2008-2010) was cheaper, when it comes to monetary policy exertions, than dealing with the Global Recovery (2011-2013). But the Great 'Austerity' from 2014 on really made the Central Bankers' day: as Government debt across advanced economies rose, the financial markets gobbled up the surplus liquidity supplied by the Central Banks. And for all the money pumped into the bond and stock markets, for all the cash dumped into real estate and alternatives, for all the record-breaking art sales and wine auctions that this Recovery required, there is still no pulling the plug out of the monetary excesses bath.