Monday, June 4, 2018
4/6/18: Presidential Cycles and Market Returns
Cyclicality in market returns and the political environment is an interesting and under-researched topic. Here is a great post on the subject: https://fat-pitch.blogspot.com/2018/05/trading-worst-6-months-and-presidential.html.
Sunday, October 15, 2017
15/10/17: Concentration Risk & Beyond: Markets & Winners
An excellent summary of several key concepts in investment, worth reading: "So Few Market Winners, So Much Dead Weight" by Barry Ritholtz of Bloomberg View. Based on an earlier NY Times article that itself profiles new research by Hendrik Bessembinder of Arizona State University, Ritholtz notes that:
- "Only 4 percent of all publicly traded stocks account for all of the net wealth earned by investors in the stock market since 1926, he has found. A mere 30 stocks account for 30 percent of the net wealth generated by stocks in that long period, and 50 stocks account for 40 percent of the net wealth. Let that sink in a moment: Only one in 25 companies are responsible for all stock market gains. The other 24 of 25 stocks -- that’s 96 percent -- are essentially worthless ballast."
Which brings us to the key concepts related to this observation:
- Concentration risk: This is an obvious one. In today's markets, returns are exceptionally concentrated within just a handful of stocks, which puts the argument in favour of diversification to a test. Traditionally, we think of diversification as long-term protection against the risk of market declines. But it can also be seen as coming at the cost of foregone returns: think of holding 96 stocks with zero returns alongside four stocks with high returns, while weighting these holdings in a return-neutral fashion, e.g. by their market capitalization.
- Strategic approaches to capturing growth drivers in your portfolio: There are, as Ritholtz notes, two: exclusivity (active picking of winners) and inclusivity (passive market indexing). The latter also rounds off to diversification.
- Behavioral drivers matter: Behavioral biases can wreak havoc with both selecting and holding 'winners-geared' portfolios (as noted in Ritholtz's discussion of the exclusivity approach). But inclusivity, or indexing, is also bias-prone, although Ritholtz does not dig deeper into that. In reality, the two approaches are almost symmetric in the behavioral biases they invite. Worse, as the proliferation of index-based ETFs marches on, the two approaches to investment are becoming practically indistinguishable: in pursuit of alpha, investors are increasingly caught chasing ever more specialist ETFs (index-based funds), just as they were previously caught pursuing ever more concentrated holdings of individual 'winners' shares.
- Statistically, markets are neither homoscedastic nor Gaussian: In most cases, there are deeper layers of statistical structure in returns than simple 'Book Profit' or 'Stop-loss' heuristics can support. This is not just a behavioral constraint, but a more fundamental point about the visibility of investment returns. As Ritholtz correctly notes, long-term absolute winners do change. But that change is not gradual, even if the time horizons over which it unfolds can be glacial. (A minimal sketch of this statistical point follows this list.)
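On the last bullet, here is a minimal sketch of what non-Gaussian, heteroscedastic returns look like in practice. It uses a simulated return series with volatility clustering as a stand-in for real market data, then runs a Jarque-Bera test for normality and Engle's ARCH test for constant variance; all numbers are illustrative.

# A minimal sketch: test a return series for normality and constant variance.
# The series is simulated (crude GARCH(1,1)-style), standing in for real data.
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(42)

# Gaussian shocks, but clustered volatility
n, omega, alpha, beta = 2500, 1e-5, 0.08, 0.90
returns = np.empty(n)
sigma2 = omega / (1 - alpha - beta)
for t in range(n):
    returns[t] = rng.normal(0.0, np.sqrt(sigma2))
    sigma2 = omega + alpha * returns[t] ** 2 + beta * sigma2

jb_stat, jb_p = stats.jarque_bera(returns)   # normality test
lm_stat, lm_p, _, _ = het_arch(returns)      # Engle's ARCH test for constant variance

print(f"Jarque-Bera p-value: {jb_p:.4f}  (small => returns are not Gaussian)")
print(f"ARCH LM p-value:     {lm_p:.4f}  (small => returns are not homoscedastic)")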
All of these points are things we cover in our Investment Theory class and the Applied Investment and Trading course, and some parts we also touch upon in the Risk and Resilience course. Point 4 relates to what we briefly discuss in the Business Statistics class. So it is quite nice to have all of these important issues touched upon in a single article.
Thursday, June 22, 2017
22/6/17: Efficient Markets for H-bomb Fuel - 1954
For all the detractors of the EMH - the Efficient Markets Hypothesis - and for all its fans, as well as for any fan of economic history, this paper is a must-read: http://www.terry.uga.edu/media/events/documents/Newhard_paper-9-6-13.pdf.
Back in 1954, an economist, Armen A. Alchian, working at RAND, conducted the world's first event study. His study used stock market data, publicly available at the time, to infer which fissile fuel material was used in manufacturing the then highly secret H-bomb. The study was immediately withdrawn from public view. The paper linked above replicates Alchian's results.
Friday, June 16, 2017
16/6/17: Replicating Scientific Research: Ugly Truth
Continuing with the theme of 'What I've been reading lately', here is a smashing paper on the 'accuracy' of empirical economic studies.
The paper, authored by Hou, Kewei and Xue, Chen and Zhang, Lu, and titled "Replicating Anomalies" (the most recent version is from June 12, 2017, but it is also available in an earlier version via NBER), effectively blows the whistle on what is going on in empirical research in economics and finance. Per the authors, the vast literature that detects financial markets anomalies (or deviations away from the efficient markets hypothesis / economic rationality) "is infested with widespread p-hacking".
What's p-hacking? Well, it's a shady practice whereby researchers manipulate (by selective inclusion or exclusion) sample criteria (which data points to include in or exclude from estimation) and test procedures (including model specifications and selective reporting of favourable test results) until insignificant results become significant. In other words, under p-hacking, researchers attempt to superficially maximise the significance of the model and of its explanatory variables; put differently, they attempt to achieve results that confirm their intuition or biases.
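A toy simulation of the mechanism (my own illustration, not from the paper): regress pure-noise 'returns' on many candidate predictors, keep only the best-looking specification, and watch how often a 'significant anomaly' appears by chance alone.

# Toy illustration of p-hacking via specification search over pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_candidates, n_studies = 120, 20, 1000

best_pvalues = []
for _ in range(n_studies):
    y = rng.normal(size=n_obs)                       # "returns": pure noise, no true effect
    X = rng.normal(size=(n_obs, n_candidates))       # candidate "anomaly" variables
    pvals = []
    for j in range(n_candidates):
        slope, intercept, r, p, se = stats.linregress(X[:, j], y)
        pvals.append(p)
    best_pvalues.append(min(pvals))                  # report only the best specification

share_significant = np.mean(np.array(best_pvalues) < 0.05)
print(f"Share of 'studies' reporting a significant anomaly at 5%: {share_significant:.0%}")
# With 20 tries per study, roughly 1 - 0.95**20, or about 64%, come out "significant" by chance.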
What are anomalies? Anomalies are departures in the markets (e.g. in share prices) from the predictions generated by models consistent with rational expectations and the efficient markets hypothesis. In other words, anomalies occur when market efficiency fails.
There are scores of anomalies detected in the academic literature, prompting many researchers to advocate abandonment (in all its forms, weak and strong) of the idea that markets are efficient.
Hou, Xue and Zhang put these anomalies to the test. They compile "a large data library with 447 anomalies". The authors then control for a key problem with the data underlying many studies: microcaps. Microcaps - small capitalization firms - are numerous in the markets (accounting for roughly 60% of all stocks), but represent only about 3% of total market capitalization. This holds across the key markets: NYSE, Amex and NASDAQ. Yet, as the authors note, the evidence shows that microcaps "not only have the highest equal-weighted returns, but also the largest cross-sectional standard deviations in returns and anomaly variables among microcaps, small stocks, and big stocks." In other words, they are a higher-risk, higher-return class of securities. Despite this, "many studies overweight microcaps with equal-weighted returns, and often together with NYSE-Amex-NASDAQ breakpoints, in portfolio sorts." Worse, hundreds of studies use a 1970s regression technique that assigns even more weight to microcaps. In simple terms, microcaps are the most common outliers, yet analyses give them either the same weight as non-outliers or an even higher weight, despite the fact that microcaps carry little weight in driving the actual market (about 3% of total capitalization).
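As an aside, here is a minimal sketch of why equal weighting inflates the influence of microcaps relative to their share of the market. All numbers below are illustrative, not taken from the paper.

# Sketch: equal-weighted vs value-weighted returns when microcaps are numerous but tiny.
import numpy as np

rng = np.random.default_rng(1)
n_micro, n_big = 600, 400                     # microcaps are ~60% of names...
cap_micro = rng.uniform(0.1, 1.0, n_micro)    # ...but a sliver of total market cap
cap_big = rng.uniform(1.0, 50.0, n_big)

# Higher mean and much higher dispersion for microcap returns (illustrative)
ret_micro = rng.normal(0.015, 0.15, n_micro)
ret_big = rng.normal(0.008, 0.05, n_big)

caps = np.concatenate([cap_micro, cap_big])
rets = np.concatenate([ret_micro, ret_big])

ew = rets.mean()                      # equal-weighted: each name counts the same
vw = np.average(rets, weights=caps)   # value-weighted: weighted by market cap

print(f"Microcap share of names:      {n_micro / (n_micro + n_big):.0%}")
print(f"Microcap share of market cap: {cap_micro.sum() / caps.sum():.1%}")
print(f"Equal-weighted return: {ew:.2%}   Value-weighted return: {vw:.2%}")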
So the study corrects for these problems and finds that, once microcaps are properly accounted for, a grand total of 286 anomalies (64% of all anomalies studied) - and under a more stringent statistical significance test, 380 (or 85% of all anomalies), "including 95 out of 102 liquidity variables (93%)" - are insignificant at the 5% level. In other words, the original claims that these anomalies were significant enough to warrant rejection of market efficiency do not survive once one recognizes a basic and simple problem with the data. Worse, per the authors, "even for the 161 significant anomalies, their magnitudes are often much lower than originally reported. Among the 161, the q-factor model leaves 115 alphas insignificant (150 with t < 3)."
This is pretty damning for those of us who believe, based on empirical results published over the years, that markets are bounded-efficient, and it is an outright savaging of those who claim that markets are deeply inefficient. But this tendency of researchers to silverplate statistics is hardly new.
Hou, Xue and Zhang provide a nice summary of research on p-hacking and the non-replicability of statistical results across a range of fields. It is worth reading, because it significantly dents one's confidence in the quality of peer review and of scientific research.
As the authors note, "in economics, Leamer (1983) exposes the fragility of empirical results to small specification changes, and proposes to “take the con out of econometrics” by reporting extensive sensitivity analysis to show how key results vary with perturbations in regression specification and in functional form." The latter call was never implemented in the research community.
"In an influential study, Dewald, Thursby, and Anderson (1986) attempt to replicate empirical results published at Journal of Money, Credit, and Banking [a top-tier journal], and find that inadvertent errors are so commonplace that the original results often cannot be reproduced."
"McCullough and Vinod (2003) report that nonlinear maximization routines from different software packages often produce very different estimates, and many articles published at American Economic Review [highest rated journal in economics] fail to test their solutions across different software packages."
"Chang and Li (2015) report a success rate of less than 50% from replicating 67 published papers from 13 economics journals, and Camerer et al. (2016) show a success rate of 61% from replicating 18 studies in experimental economics."
"Collecting more than 50,000 tests published in American Economic Review, Journal of Political Economy, and Quarterly Journal of Economics, [three top rated journals in economics] Brodeur, L´e, Sangnier, and Zylberberg (2016) document a troubling two-humped pattern of test statistics. The pattern features a first hump with high p-values, a sizeable under-representation of p-values just above 5%, and a second hump with p-values slightly below 5%. The evidence indicates p-hacking that authors search for specifications that deliver just-significant results and ignore those that give just-insignificant results to make their work more publishable."
If you think this phenomenon is encountered only in economics and finance, think again. Here are some findings from other 'hard science' disciplines where, you know, lab coats do not lie.
"...replication failures have been widely documented across scientific disciplines in the past decade. Fanelli (2010) reports that “positive” results increase down the hierarchy of sciences, with hard sciences such as space science and physics at the top and soft sciences such as psychology, economics, and business at the bottom. In oncology, Prinz, Schlange, and Asadullah (2011) report that scientists at Bayer fail to reproduce two thirds of 67 published studies. Begley and Ellis (2012) report that scientists at Amgen attempt to replicate 53 landmark studies in cancer research, but reproduce the original results in only six. Freedman, Cockburn, and Simcoe (2015) estimate the economic costs of irreproducible preclinical studies amount to about 28 billion dollars in the U.S. alone. In psychology, Open Science Collaboration (2015), which consists of about 270 researchers, conducts replications of 100 studies published in top three academic journals, and reports a success rate of only 36%."
Let's get down to the real farce: everyone in the sciences already knows the above: "Baker (2016) reports that 80% of the respondents in a survey of 1,576 scientists conducted by Nature believe that there exists a reproducibility crisis in the published scientific literature. The surveyed scientists cover diverse fields such as chemistry, biology, physics and engineering, medicine, earth sciences, and others. More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than 50% have failed to reproduce their own experiments. Selective reporting, pressure to publish, and poor use of statistics are three leading causes."
Yeah, you get the idea: you need years of research, testing and re-testing, and, more often than not, the results come out insignificant or only weakly significant. Which means that after years of research you end up with an unpublishable paper (no journal will welcome a paper without significant results, even though absence of evidence is as important in science as evidence of presence), no tenure, no job, no pension, no prospect of a career. So what do you do then? Ah, well... p-hack the shit out of the data until the editor is happy and the referees are satisfied.
Which, for you, the reader, should mean the following: when we say that 'scientific research established fact A', based on reputable journals publishing high-quality peer-reviewed papers on the subject, know that around half of the findings claimed in these papers, on average, most likely cannot be replicated or verified. And then remember: it takes only one or two scientists to turn the world around from believing (based on the scientific consensus of the time) that the Earth is flat and sits at the centre of the Universe, to believing in the world as we know it today.
Full link to the paper: Charles A. Dice Center Working Paper No. 2017-10; Fisher College of Business Working Paper No. 2017-03-010. Available at SSRN: https://ssrn.com/abstract=2961979.
Thursday, May 12, 2016
12/5/16: Leaky Buckets of U.S. Data
Recently, ECB researchers published an interesting working paper (ECB Working Paper 1901, May 2016). Looking at U.S. data released under embargo, they found a disturbing regularity: across a range of releases, there is strong evidence of a statistical drift starting some 30 minutes prior to the official release time. In simple terms, someone is getting the data ahead of the markets and is trading on it in volumes sufficient to move the market.
Let's put this into perspective: there is a scheduled release of embargoed data that is material for pricing the market. The release time is t=0. Some 30 minutes before the official release, markets start pricing assets in line with the information contained in the data yet to be released. This process continues until the release becomes public, and it moves prices in the direction that correctly anticipates the data. The effect is so large that by the time t=0 arrives and the data is made publicly available, some 50% of the total price adjustment consistent with the data is already priced into the market.
"Seven out of 21 market-moving announcements show evidence of substantial informed trading before the official release time. The pre-announcement price drift accounts on average for about half of the total price adjustment,” according to the research note.
Pricing occurs in S&P and U.S. Treasury-note futures, and the data sample used in the study covers January 2008 through March 2014.
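For intuition, here is a minimal sketch of how such a pre-announcement drift share could be computed from minute-bar futures prices. The data below are synthetic placeholders, not the ECB authors' code or dataset.

# Sketch of the drift-share calculation on a synthetic minute-bar price series.
import pandas as pd

def drift_share(prices: pd.Series, release: pd.Timestamp,
                pre_min: int = 30, post_min: int = 60) -> float:
    """Fraction of the total price move around a release that occurs before it."""
    p_start = prices.asof(release - pd.Timedelta(minutes=pre_min))
    p_release = prices.asof(release)
    p_end = prices.asof(release + pd.Timedelta(minutes=post_min))
    total_move = p_end - p_start
    return float("nan") if total_move == 0 else (p_release - p_start) / total_move

# Synthetic example: half of the eventual price move happens before the release
idx = pd.date_range("2013-06-03 09:00", periods=180, freq="min")
prices = pd.Series(1600.0, index=idx)
release = pd.Timestamp("2013-06-03 10:00")
prices[idx > release - pd.Timedelta(minutes=30)] += 2.0   # pre-announcement drift
prices[idx > release] += 2.0                              # post-release adjustment
print(f"Pre-announcement share of total adjustment: {drift_share(prices, release):.0%}")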
Here is the list of data releases that appear to be leaked in advance to some market participants:
- ISM non-manufacturing
- Pending home sales
- ISM manufacturing
- Existing home sales
- Consumer confidence from the Conference Board (the Conference Board has recently taken steps to tighten its release policy)
- Industrial production (U.S. Fed report)
- The second reading on GDP
- There is also partial evidence of leaks in other data, such as retail sales, consumer price inflation, advance GDP estimates and initial jobless claims.
Overall, plenty of the above data are released by public (state) agencies.
The authors control for market expectations, including forecast drift (as the date of release draws nearer, forecasts should improve in accuracy, and this can affect market pricing). They find that "more up-to-date forecasts" are no "better predictors of the surprise" than older forecasts. In addition, as the authors note: "these results are robust to controlling for, among others, outliers, data snooping, nearby announcements and the choice of the event window length."
The problem is big and has gotten worse since 2008. "Extending the sample period back to 2003 with minute-by-minute data reveals both a higher announcement impact and a stronger pre-announcement drift since 2008, especially in the S&P E-mini futures market. Based on a back-of-the-envelope calculation, we estimate that since 2008 in the S&P E-mini futures market alone the profits associated with trading prior to the official announcement release time have amounted to about 20 million USD per year."
Two tables summarising their results:
The paper is available here: http://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp1901.en.pdf?ca0947cb7c6358aed9180ca2976160bf
Sunday, April 19, 2015
19/4/15: New Evidence: Ambiguity Aversion is the Exception
A fascinating behavioural economics study on ambiguity aversion by Kocher, Martin G., Lahno, Amrei Marie, and Trautmann, Stefan, titled "Ambiguity Aversion is the Exception" (March 31, 2015, CESifo Working Paper Series No. 5261: http://ssrn.com/abstract=2592313), provides empirical testing of the ambiguity aversion hypothesis.
Note: my comments within quotes are in bracketed italics
When an agent makes a decision in the presence of uncertainty, "risky prospects with known probabilities are often distinguished from ambiguous prospects with unknown or uncertain probabilities… [in economics literature] it is typically assumed that people dislike ambiguity in addition to a potential dislike of risk, and that they adjust their behavior in favor of known-probability risks, even at significant costs."
In other words, a somewhat paradoxical pattern of behaviour is commonly hypothesised: suppose an agent faces a choice between a gamble with known probabilities (uncertain, but not ambiguous) that has a low expected return, and a gamble with unknown (ambiguous) probabilities that has a high expected return. Ambiguity aversion implies that the agent will tend to select the first gamble, even if this choice is sub-optimal in a standard risk-aversion setting. A toy numerical illustration of this choice follows below.
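Here is that toy numerical illustration (my own sketch, not from the paper), using a maxmin expected-value rule as one simple way to model ambiguity aversion; all payoffs and probabilities are made up.

# Toy illustration: known-probability gamble A vs. ambiguous gamble B.
# Under a maxmin rule the agent evaluates B at the worst probability in the
# ambiguity set, so B can lose even when its true expected value is higher.
known_p = 0.50                      # known chance of winning payoff_A in gamble A
ambiguous_p_range = (0.30, 0.80)    # agent only knows B's win probability lies here
payoff_A, payoff_B = 100.0, 120.0

ev_A = known_p * payoff_A                           # 50
ev_B_true = 0.55 * payoff_B                         # suppose the true (unknown) p is 0.55 -> 66
ev_B_worst_case = ambiguous_p_range[0] * payoff_B   # maxmin evaluation: 0.30 * 120 = 36

print(f"Gamble A (known risk) expected value:            {ev_A:.0f}")
print(f"Gamble B true expected value (unknown to agent): {ev_B_true:.0f}")
print(f"Gamble B worst-case (maxmin) evaluation:         {ev_B_worst_case:.0f}")
# The ambiguity-averse agent picks A (50 > 36) even though B is better on true expected value.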
As authors note, "A large literature has studied the consequences of such ambiguity aversion for decision making in the presence of uncertainty. Building on decision theories that assume ambiguity aversion, this literature shows that ambiguity can account for empirically observed violations of expected utility based theories (“anomalies”)."
"These and many other theoretical contributions presume a universally negative attitude toward ambiguity. Such an assumption seems, at first sight, descriptively justified on the basis of a large experimental literature… However, …the predominance of ambiguity aversion in experimental findings might be due to a narrow focus on the domain of moderate likelihood gains… While fear of a bad unknown probability might prevail in this domain [of choices with low or marginal gains], people might be more optimistic in other domains [for example if faced with much greater payoffs or risks, or when choices between strategies are more complex], hoping for ambiguity to offer better odds than a known-risk alternative."
So the authors then set out to look at the evidence for ambiguity aversion "in different likelihood ranges and in the gain domain, the loss domain, and with mixed outcomes, i.e. where both gains and losses may be incurred. …Our between-subjects design with more than 500 experimental participants exposes participants to exactly one of the four domains, reducing any contrast effects that may affect the preferences in the laboratory context."
Core conclusion: "Ambiguity aversion is the exception, not the rule. We replicate the finding of ambiguity aversion for moderate likelihood gains in the classic ...design. However, once we move away from the gain domain or from the [binary] choice to more [complex set of choices], thus introducing lower likelihoods, we observe either ambiguity neutrality or even ambiguity seeking behavior. These results are robust to the elicitation procedure."
So is ambiguity hypothesis dead? Not really. "Our rejection of universal ambiguity aversion does not generally contradict ambiguity models, but it has important implications for the assumptions in applied models that use ambiguity attitudes to explain real-world phenomena. Theoretical analyses should not only consider the effects of ambiguity aversion, but also potential implications of ambiguity loving for economics and finance, particularly in contexts that involve rare events or perceived losses such as with insurance or investments. Policy implications should always be fine-tuned to the specific domain, because policy interventions based on wrong assumptions regarding the ambiguity attitudes of those targeted by the policy could be detrimental."
Wednesday, March 4, 2015
4/3/15: Core biases in Hedge Funds returns
My second post on the topic of measuring hedge fund returns for the LearnSignal blog, covering the issue of measurement biases induced by timing and risk considerations, is now available here: http://blog.learnsignal.com/?p=161
Thursday, January 29, 2015
29/1/15: Where the Models Are Wanting: Banking Sector & Modern Investment Theory
My new post for Learn Signal blog covering the shortcomings of some core equity valuation models when it comes to banking sector stocks analysis is now available here: http://blog.learnsignal.com/?p=152.
Tune in next week to read the second part, covering the impact of networks on the validity of core valuation models.
Wednesday, February 19, 2014
18/2/2014: Have Financial Markets Become More Informative since the 1960s?
In strongly efficient markets, prices of shares transmit strong information about company fundamentals, such as productivity and demand for and risk of investment. As Fama (1970) wrote: "The primary role of the capital market is allocation of ownership of the economy's capital stock. In general terms, the ideal is a market in which prices provide accurate signals for resource allocation: that is, a market in which firms can make production/investment decisions... under the assumption that security prices at any time `fully reflect' all available information."
In recent years, the quality of information in financial markets has improved significantly, while the cost of analysis has fallen, suggesting that the informational content of market prices should have risen as well.
A new paper, titled "HAVE FINANCIAL MARKETS BECOME MORE INFORMATIVE?" by Jennie Bai, Thomas Philippon, and Alexi Savov (Working Paper 19728: http://www.nber.org/papers/w19728, December 2013) measures "the information content of prices by using them to predict earnings and investment. We trace the evolution of price informativeness in the U.S. over the last five decades."
The period of analysis is not ad hoc. "During this period, a revolution in computing has transformed finance: Lower trading costs have led to a flood of liquidity. Modern information technology delivers a vast array of data instantly and at negligible cost. Concurrent with these trends, the finance industry has grown, its share of GDP more than doubling."
In this context, the authors ask "Have market prices become more informative?"
To answer this question, the authors first develop measures of informativeness in the financial markets. They do so by combining Tobin's (1969) q-theory of investment with the noisy rational expectations framework of Grossman and Stiglitz (1980). "When more information is produced, prices become stronger predictors of earnings. We define price informativeness to be the standard deviation of the predictable component of earnings and we show that it is directly related to welfare, as in Hayek (1945): information promotes the efficient allocation of investment, which leads to economic growth."
For empirical testing, the authors regress "future earnings on current valuation ratios, controlling for current earnings. We look at both equity and corporate bond markets. We include one-digit industry-year fixed effects to absorb time-varying cross-sectional differences in the cost of capital. This regression compares firms in the same sector and asks whether firms with higher market valuations tend to produce higher earnings in the future than firms with lower valuations."
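A hedged sketch of that type of forecasting regression is below, run on simulated firm-level data. The variable names, the fixed-effects implementation, and the informativeness measure are simplified stand-ins for the authors' specification, not their code.

# Sketch: future earnings on current valuation, controlling for current earnings,
# with industry-year dummies, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "firm": rng.integers(0, 500, n),
    "industry": rng.integers(0, 9, n),        # one-digit industries
    "year": rng.integers(1990, 2010, n),
    "log_m_to_b": rng.normal(0.5, 0.6, n),    # log market-to-book (valuation ratio)
    "earnings_to_assets": rng.normal(0.05, 0.08, n),
})
# Future earnings partly predictable from current valuations (illustrative)
df["future_earnings"] = (0.03 * df["log_m_to_b"] + 0.4 * df["earnings_to_assets"]
                         + rng.normal(0, 0.08, n))
df["industry_year"] = df["industry"].astype(str) + "_" + df["year"].astype(str)

model = smf.ols("future_earnings ~ log_m_to_b + earnings_to_assets + C(industry_year)",
                data=df).fit()

# "Price informativeness" in the paper's spirit: dispersion of the component of
# future earnings predicted by prices
informativeness = (model.params["log_m_to_b"] * df["log_m_to_b"]).std()
print(f"Valuation coefficient: {model.params['log_m_to_b']:.3f}")
print(f"Std of the price-predicted component of earnings: {informativeness:.4f}")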
Conclusion: "...the amount of informativeness has not changed since 1960."
This surprising result leaves some room for potential mis-specification of the tests. As the authors note: "By itself, constant price informativeness does not imply constant information production in markets. It is possible that information production has simply migrated from inside firms to markets. Hirshleifer (1971) first noted the dual role of prices in revealing new information and reflecting existing information. Bond, Edmans, and Goldstein (2012) call the revelatory component of price informativeness real price efficiency (RPE), and the forecasting component forecasting price efficiency (FPE). The financial sector adds value only to the extent that it reveals information that would otherwise be unavailable to decision makers. …the distinction between RPE and FPE is fundamental, and we seek to disentangle them."
The model provides a solution. "When managers rely on prices, they import the price noise into their investment policies. When markets reveal no new information, managers ignore them and prices remain noisy but investment does not. In the opposite case, when all information is produced in markets, managers use prices and both investment and prices are equally noisy. Information increases the predictive power of both prices and investment, but a rise in the revelatory component of prices increases price informativeness disproportionately."
Complicated thinking? You bet. But still, intuitive and testable. "To see if the constant price informativeness could mask a substitution from forecasting (FPE) to revealing (RPE) information, we check to see if the predictable component of earnings based on investment has changed."
Conclusion: the authors find that the predictable component of earnings based on investment has not changed over time. "…this implies that neither FPE nor RPE has risen over the last five decades." And furthermore, "…results show that discount rate variation has also remained stable" over time.
But there is more to the study. It turns out that informativeness is different for different types of investment. "Our strongest positive finding is that a higher equity valuation is more closely associated with R&D investment now than in the past. The same is not true of capital expenditure. However, the increased predictability of R&D is not related to increased predictability of earnings, so we cannot conclude that informativeness has increased."
And the results discussed above are sensitive to the sample of stocks studied. "For most of the paper, we examine S&P 500 stocks whose characteristics have remained stable. In contrast, running the same tests on the universe of stocks appears to show a decline in informativeness. We argue, however, that this decline is consistent with changing firm characteristics: the typical firm today is more difficult to value."
Top conclusion therefore is that having examined "the extent to which stock and bond prices predict earnings", the authors find that "the informativeness of financial market prices has not increased in the past fifty years".
In the words of Herbert Simon (1971), "An information processing subsystem (a computer) will reduce the net demand on attention of the rest of the organization only if it absorbs more information, previously received by others, than it produces -- if it listens and thinks more than it speaks."
Friday, February 14, 2014
14/2/2014: Buffett's Alpha Demystified... or not?
Warren Buffett is probably the most legendary of all investors and his Berkshire Hathaway, despite numerous statements by Buffett explaining his investment philosophy, is still shrouded in a veil of mystery and magic.
The more you wonder about Buffett's fantastic historical track record, the more you ask whether the returns he amassed are a matter of luck, skill, unique strategy or all of the above.
"Buffett’s Alpha" by Andrea Frazzini, David Kabiller, and Lasse H. Pedersen (NBER Working Paper 19681 http://www.nber.org/papers/w19681, November 2013) shows that "looking at all U.S. stocks from 1926 to 2011 that have been traded for more than 30 years, …Berkshire Hathaway has the highest Sharpe ratio among all. Similarly, Buffett has a higher Sharpe ratio than all U.S. mutual funds that have been around for more than 30 years." In fact, for the period 1976-2011, Berkshire Hathaway realized Sharpe ratio stands at impressive 0.76, and "Berkshire has a significant alpha to traditional risk factors." According to the authors, "adjusting for the market exposure, Buffett’s information ratio is even lower, 0.66. This Sharpe ratio reflects high average returns, but also significant risk and periods of losses and significant drawdowns."
According to the authors, this raises the question: "If his Sharpe ratio is very good but not super-human, then how did Buffett become among the richest in the world?"
The study looks at Buffett's performance and finds that "The answer is that Buffett has boosted his returns by using leverage, and that he has stuck to a good strategy for a very long time period, surviving rough periods where others might have been forced into a fire sale or a career shift. We estimate that Buffett applies a leverage of about 1.6-to-1, boosting both his risk and excess return in that proportion."
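The leverage arithmetic here is worth a quick sketch: levering a strategy scales its excess return and its volatility in the same proportion, so the Sharpe ratio is unchanged while the level of returns rises. The figures below are purely illustrative, chosen to be loosely in line with the numbers quoted above.

# Sketch of the leverage arithmetic: Sharpe ratio is scale-invariant,
# but levered excess returns compound wealth much faster.
excess_return = 0.12    # unlevered annual excess return (illustrative)
volatility = 0.158      # unlevered annual volatility (illustrative)
leverage = 1.6          # the paper's estimate of Buffett's average leverage

sharpe_unlevered = excess_return / volatility
sharpe_levered = (leverage * excess_return) / (leverage * volatility)

print(f"Unlevered Sharpe: {sharpe_unlevered:.2f}")
print(f"Levered Sharpe:   {sharpe_levered:.2f}  (identical)")
print(f"Unlevered excess return: {excess_return:.1%}")
print(f"Levered excess return:   {leverage * excess_return:.1%}")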
The conclusion is that "his many accomplishments include having the conviction, wherewithal, and skill to operate with leverage and significant risk over a number of decades."
But the above still leaves open a key question: "How does Buffett pick stocks to achieve this attractive return stream that can be leveraged?"
The authors "…identify several general features of his portfolio: He buys stocks that are
-- “safe” (with low beta and low volatility),
-- “cheap” (i.e., value stocks with low price-to-book ratios), and
-- high-quality (meaning stocks that are profitable, stable, growing, and with high payout ratios).
This statistical finding is certainly consistent with Graham and Dodd (1934) and Buffett’s writings, e.g.: "Whether we’re talking about socks or stocks, I like buying quality merchandise when it is marked down" – Warren Buffett, Berkshire Hathaway Inc., Annual Report, 2008."
Of course, such a strategy is not novel and Ben Graham's original factors for selection are very much in line with it, let alone more sophisticated screening factors. Everyone knows (whether they act on this knowledge or not is a different matter altogether) that low risk, cheap, and high quality stocks "tend to perform well in general, not just the ones that Buffett buys. Hence, perhaps these characteristics can explain Buffett’s investment? Or, is his performance driven by an idiosyncratic Buffett skill that cannot be quantified?"
The authors look at these questions as well. "The standard academic factors that capture the market, size, value, and momentum premia cannot explain Buffett’s performance so his success has to date been a mystery (Martin and Puthenpurackal (2008)). Given Buffett’s tendency to buy stocks with low return risk and low fundamental risk, we further adjust his performance for the Betting-Against-Beta (BAB) factor of Frazzini and Pedersen (2013) and the Quality Minus Junk (QMJ) factor of Asness, Frazzini, and Pedersen (2013)."
And then 'Eureka!': "We find that accounting for these factors explains a large part of Buffett's performance. In other words, accounting for the general tendency of high-quality, safe, and cheap stocks to outperform can explain much of Buffett’s performance and controlling for these factors makes Buffett’s alpha statistically insignificant… Buffett’s genius thus appears to be at least partly in recognizing early on, implicitly or explicitly, that these factors work, applying leverage without ever having to fire sale, and sticking to his principles. Perhaps this is what he means by his modest comment: "Ben Graham taught me 45 years ago that in investing it is not necessary to do extraordinary things to get extraordinary results." – Warren Buffett, Berkshire Hathaway Inc., Annual Report, 1994."
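As a rough illustration of the attribution exercise described above, here is a hedged sketch of a factor regression of Berkshire-style excess returns on the market, size, value, momentum, BAB and QMJ factors. The return series are simulated stand-ins; only the factor names follow the paper.

# Sketch of a factor attribution: excess returns on standard factors plus BAB and QMJ.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 420  # monthly observations, roughly 1976-2011
factors = pd.DataFrame(
    rng.normal(0.004, 0.03, size=(n, 6)),
    columns=["MKT", "SMB", "HML", "UMD", "BAB", "QMJ"],
)
# A "Buffett-like" return: loads on market, value, BAB and QMJ, little true alpha
brk_excess = (0.95 * factors["MKT"] + 0.4 * factors["HML"]
              + 0.3 * factors["BAB"] + 0.4 * factors["QMJ"]
              + rng.normal(0.0005, 0.03, n))

X = sm.add_constant(factors)
fit = sm.OLS(brk_excess, X).fit()

print(f"Annualized alpha: {fit.params['const'] * 12:.2%} (t = {fit.tvalues['const']:.2f})")
print(fit.params[["BAB", "QMJ"]])   # loadings on betting-against-beta and quality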
There is more to be asked about Warren Buffett's investment style and strategy. "…we consider whether Buffett’s skill is due to his ability to buy the right stocks versus his ability as a CEO. Said differently, is Buffett mainly an investor or a manager?"
Authors oblige: "To address this, we decompose Berkshire’s returns into a part due to investments in publicly traded stocks and another part due to private companies run within Berkshire. The idea is that the return of the public stocks is mainly driven by Buffett’s stock selection skill, whereas the private companies could also have a larger element of management."
Another 'Eureka!' moment beckons: "We find that both public and private companies contribute to Buffett’s performance, but the portfolio of public stocks performs the best, suggesting that Buffett’s skill is mostly in stock selection. Why then does Buffett rely heavily on private companies as well, including insurance and reinsurance businesses? One reason might be that this structure provides a steady source of financing, allowing him to leverage his stock selection ability. Indeed, we find that 36% of Buffett’s liabilities consist of insurance float with an average cost below the T-Bill rate."
So core conclusions on Buffett's genius: "In summary, we find that Buffett has developed a unique access to leverage that he has invested in safe, high-quality, cheap stocks and that these key characteristics can largely explain his impressive performance. Buffett’s unique access to leverage is consistent with the idea that he can earn BAB returns driven by other investors’ leverage constraints. Further, both value and quality predict returns and both are needed to explain Buffett’s performance. Buffett’s performance appears not to be luck, but an expression that value and quality investing can be implemented in an actual portfolio (although, of course, not by all investors who must collectively hold the market)."
Awesome study!
Friday, July 19, 2013
19/7/2013: Global Asset Returns: 10-year Averages
A nice infographic on 10-year average annual returns for a number of major asset classes, from Pictet:
Details of calculations and strategy discussion - as usual, with Pictet's no-nonsense straight talking analysis - is here: http://perspectives.pictet.com/wp-content/uploads/2013/07/Pictet-Horizon-EN1.pdf
What can I say, I like these guys...
Sunday, October 21, 2012
21/10/2012: Some links for Investment Analysis 2012-2013 course
For the Investment Analysis class - here are some good links on the CAPM and its applications to actual strategy formation & research, and a couple of other topics we covered in depth (a minimal CAPM estimation sketch follows after the links):
Classic:
"The Capital Asset Pricing Model: Theory and Evidence" Eugene F. Fama and Kenneth R. French : http://papers.ssrn.com/sol3/papers.cfm?abstract_id=440920
"CAPM Over the Long-Run: 1926-2001", Andrew Ang, Joseph Chen, January 21, 2003: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=346600
"Downside Risk", Joseph Chen, Andrew Ang, Yuhang Xing, The Review of Financial Studies, Vol. 19, Issue 4, pp. 1191-1239, 2006
"Mean-Variance Investing", Andrew Ang, August 10, 2012, Columbia Business School Research Paper No. 12/49 http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2131932&rec=1&srcabs=2103734&alg=1&pos=1
More related to the Spring 2013 course on HFT and Technical Models:
"A Quantitative Approach to Tactical Asset Allocation" Mebane T. Faber : http://www.mebanefaber.com/2009/02/19/a-quantitative-approach-to-tactical-asset-allocation-updated/
"The Trend is Our Friend: Risk Parity, Momentum and Trend Following in Global Asset Allocation", Andrew Clare, James Seaton, Peter N. Smith and Stephen Thomas, 11th September 2012: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2126478
"Dynamic Portfolio Choice" Andrew Ang: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2103734