Superforecasters – And being perpetual Beta

I like reading, both for pleasure and as a great way of educating myself. A nice combination is when an educational book is written in such a way that it feels like pure pleasure reading. Michael Lewis is an example of such a writer for me; his books become page-turners and I just can't get enough. Heavier educational material usually gets a bit boring and dry. Thinking, Fast and Slow by Daniel Kahneman was a typical love/hate relationship of that sort. The material was extraordinarily interesting, probably among the top three educational books I have ever read, but written in such a way that halfway through I really struggled. The latest book I read, Superforecasting: The Art and Science of Prediction, is an easier read. It draws some of its material from Kahneman's book, and for an equity investor like me it was almost more intriguing and interesting.

Superforecasting: The Art and Science of Prediction

[Image: book cover of Superforecasting: The Art and Science of Prediction]

In a very impressive 20-year study, Professor Philip Tetlock showed that even the average expert was only slightly better at predicting the future than "monkeys throwing darts". Tetlock's latest projects have since shown that there are, however, some people with real, demonstrable foresight. These are ordinary people who are nonetheless able to predict the future with 60% greater accuracy than regular forecasters. They are superforecasters.

Reading this book is about understanding what these ordinary people do that makes them such great forecasters. Surely one would like to pick up such skills! The examples are mostly related to forecasting political events, but I think almost all of it is applicable to the world of investing as well. There are many reviews of the book online, so I will not provide another one. Mostly I urge you to read the book; it is really good. But there are so many great books to read, so for those who do not have the time, here are some personal reflections:

Some of my personal takeaways from the book (interpreted in a financial context)

  • This book resonates with me on multiple levels, but what spoke to me the most was Tetlock's description of perpetual beta: "a computer program that is not intended to be released in a final version, but will instead be used, analyzed, and improved without end." As you might guess, we are not really talking about computer programs, but rather an approach to life itself. I finished my Master's studies many years ago now; the first few years working in finance were very intense and I really learned a lot. But later on, I realized I had started to fall back on old knowledge on many occasions. I was still learning new things, but at a much slower pace than at uni (or the first year working). And I realized I didn't like it. It's somewhat hard to explain: I have this need to constantly keep learning and understanding new things. I'm just very curious about understanding things, from how people think, to how a certain company operates, to how it all slots together in the bigger picture and the world we live in. Since then I have found multiple ways to keep developing and improving: proper financial studies with exams, reading books on topics I want to understand better, listening to podcasts, reading blogs, following great people on Twitter and vloggers on YouTube. The world today is filled with so many great, easily accessible ways to keep learning! My latest way of improving is my very own blog, where I can structure some of all the information I take in and make something useful out of it. One of the main points of the book, and of how to be a superforecaster, is to constantly keep learning and improving. Or in Tetlock's words about superforecaster Bill Flack: "I can't imagine a better description of the 'try, fail, analyze, adjust, try again' cycle—and of the grit to keep at it and keep improving. Bill Flack is perpetual beta." Just like Bill, I'm humbly trying to be perpetual beta as well.
  • Another important concept in becoming a great forecaster is to actually measure how you are doing. Tetlock takes the example of medicine. In the old days, there was no actual follow-up on whether what doctors did worked or not; it was just assumed, from their reputation and knowledge, that they knew what they were doing. This illusion of knowledge, and the blind acceptance of the doctor's view, was a terrible way to actually improve medicine. It was only when we started putting medicine to the test, through randomized controlled trials, that medical science really took off. In the same way in investing, one should not forget to evaluate, free from bias, what one forecast and how it actually turned out. Which leads to the next point:
  • Be specific and clear in your forecasting. The financial field is full of people going on TV every day, voicing their views and forecasts of the future. Very seldom does any TV program go back and review: of all the people we brought on multiple times over the past three years, who got it right and who was wrong? Besides the problem that it's not evaluated at all, next time you hear one of these people on TV, listen to what they really say. They use vague words that basically never make them entirely wrong (or right). To actually be able to evaluate yourself, you have to force yourself to make precise forecasts: use percentages, and as detailed percentages as possible! Example: right now, weighing all the factors available to me, there is a 78% probability that the S&P 500 is lower at year end than it is today. That we can follow up on! And if I make a hundred forecasts like that, we can start to see whether I'm a superforecaster or a dart-throwing monkey. Which leads me to my final main takeaway of the book:
  • Forecasting should be evaluated in two dimensions: calibration and resolution. To cite the book: "When we combine calibration and resolution, we get a scoring system that fully captures our sense of what good forecasters should do. Someone who says there is a 70% chance of X should do fairly well if X happens. But someone who says there is a 90% chance of X should do better. And someone bold enough to correctly predict X with 100% confidence gets top marks. But hubris must be punished. The forecaster who says X is a slam dunk should take a big hit if X does not happen. How big a hit is debatable, but it's reasonable to think of it in betting terms. If I say it is 80% likely that the Yankees will beat the Dodgers, and I am willing to put a bet on it, I am offering you 4 to 1 odds. If you take my bet, and put $100 on it, you will pay me $100 if the Yankees win and I will pay you $400 if the Yankees lose. But if I say the probability of a Yankees victory is 90%, I've upped the odds to 9 to 1. If I say a win is 95% likely, I've put the odds at 19 to 1. That's extreme. If you agree to bet $100, I will owe you $1,900 if the Yankees lose. Our scoring system for forecasting should capture that pain. The math behind this system was developed by Glenn W. Brier in 1950, hence results are called Brier scores. In effect, Brier scores measure the distance between what you forecast and what actually happened. So Brier scores are like golf scores: lower is better. Perfection is 0. A hedged fifty-fifty call, or random guessing in the aggregate, will produce a Brier score of 0.5. A forecast that is wrong to the greatest possible extent—saying there is a 100% chance that something will happen and it doesn't, every time—scores a disastrous 2.0." And the book then goes on to describe how a Brier score needs to be set in a context: "Let's suppose we discover that you have a Brier score of 0.2. That's far from godlike omniscience (0) but a lot better than chimp-like guessing (0.5), so it falls in the range of what one might expect from, say, a human being. But we can say much more than that. What a Brier score means depends on what's being forecast. For instance, it's quite easy to imagine circumstances where a Brier score of 0.2 would be disappointing. Consider the weather in Phoenix, Arizona. Each June, it gets very hot and sunny. A forecaster who followed a mindless rule like, "always assign 100% to hot and sunny" could get a Brier score close to 0, leaving 0.2 in the dust. Here, the right test of skill would be whether a forecaster can do better than mindlessly predicting no change. This is an underappreciated point. For example, after the 2012 presidential election, Nate Silver, Princeton's Sam Wang, and other poll aggregators were hailed for correctly predicting all fifty state outcomes, but almost no one noted that a crude, across-the-board prediction of "no change"—if a state went Democratic or Republican in 2008, it will do the same in 2012—would have scored forty-eight out of fifty, which suggests that the many excited exclamations of "He called all fifty states!" we heard at the time were a tad overwrought. Fortunately, poll aggregators are pros: they know that improving predictions tends to be a game of inches. Another key benchmark is other forecasters. Who can beat everyone else? Who can beat the consensus forecast? How do they pull it off? Answering these questions requires comparing Brier scores, which, in turn, requires a level playing field.
Forecasting the weather in Phoenix is just plain easier than forecasting the weather in Springfield, Missouri, where weather is notoriously variable, so comparing the Brier scores of a Phoenix meteorologist with those of a Springfield meteorologist would be unfair. A 0.2 Brier score in Springfield could be a sign that you are a world-class meteorologist. It's a simple point, with a big implication: dredging up old forecasts from newspapers will seldom yield apples-to-apples comparisons because, outside of tournaments, real-world forecasters seldom predict exactly the same developments over exactly the same time period." For those who want to try this scoring themselves, there is a small code sketch right after this list.
  • This long explanation, put in a financial context, touches nicely upon my very first post on this blog, where I argued that it's important to know what your benchmark is: Know your benchmark. Because if you do not know your benchmark, how can you even start to test how well your investment strategy is doing?
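
To make the scoring above concrete, here is a minimal sketch in Python. This is my own illustration, not code from the book: it uses the two-category Brier formulation that the book's numbers imply (0 is perfect, a hedged fifty-fifty call scores 0.5, maximally wrong scores 2.0), and every forecast in it is made up.

```python
def brier_score(forecasts, outcomes):
    """Mean Brier score over a series of binary forecasts.

    forecasts: probabilities (0..1) assigned to the event happening
    outcomes:  1 if the event happened, 0 if it did not

    Two-category form: (p - o)^2 + ((1 - p) - (1 - o))^2 = 2 * (p - o)^2,
    so 0.0 is perfect, 0.5 is fifty-fifty hedging, 2.0 is maximally wrong.
    """
    assert forecasts and len(forecasts) == len(outcomes)
    return sum(2 * (p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)


def implied_odds(p):
    """Odds you implicitly offer at probability p: 0.80 -> 4-to-1, 0.95 -> 19-to-1."""
    return p / (1 - p)


# Made-up track record: four precise forecasts and what actually happened.
# The 0.78 is the hypothetical "78% chance the S&P 500 ends the year lower"
# call from the list above.
forecasts = [0.78, 0.60, 0.95, 0.20]
outcomes = [1, 0, 1, 0]

print(f"My Brier score:     {brier_score(forecasts, outcomes):.3f}")  # ~0.225
print(f"Coin-flip baseline: {brier_score([0.5] * 4, outcomes):.3f}")  # 0.500
print(f"Odds behind a 95% call: {implied_odds(0.95):.0f}-to-1")       # 19-to-1
```

And as the Phoenix example shows, the number only means something against a benchmark: compare it with coin flipping, with a mindless "no change" rule, and with other forecasters predicting the same events over the same period.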

How good was my forecast for Tonly Electronics?

I published my analysis of Tonly just a week ago. I ended that post saying that the Q3 sales figures would soon be out; well, now they are. In the spirit of forecasting, let's look at how well I forecast the newly released Q3 sales figures in my three scenarios:

[Table: Tonly Q3 2018 sales versus my three scenarios]

So far it seems I'm very far from being a superforecaster, since none of my three investment scenarios managed to capture where Tonly's total sales for Q3 would come in. Sales came in higher than my Bull scenario! Given the great results on a revenue basis, and that the stock had traded down another 10%, I decided on Friday to allocate all the cash I had left (about another 2% of my portfolio) to Tonly. These sales results don't mean it's a home run; something could still have happened to the margins, which we will have to wait until next year to know. One could also note that the segment I thought would drive future growth, smart speakers, did not even live up to my Base case of 550 million HKD. So although New Audio Products delivered extremely strongly, it was still not a perfect report. One should not make too big a deal of a single quarter either, but it is very nice to see the execution of this company, and I can't believe the market did not take notice of this at all and actually traded the stock down during the day (it ended flat, where I took my position at 5 HKD per share).

Forecasting is hard

The book brought up how hard it is to forecast something, and the further out in time you go, the harder it gets. The message, more or less, was that forecasts further than 5-6 years out are more or less meaningless, at least in terms of having an edge in guessing outcomes. I leave this post with a letter mentioned in the book, as a way of proving this point. The letter was written in April 2001 by Linton Wells, who at the time was principal staff assistant to President George W. Bush's secretary and deputy secretary of defense.

[Image: Linton Wells's letter, from the 2001 Quadrennial Defense Review]
6 thoughts to “Superforecasters – And being perpetual Beta”

  1. http://www.channelnewsasia.com/news/business/amazon-com–qualcomm-to-put-alexa-assistant-in-more-headphones-10854130

    Thanks for that; it's never easy to forecast accurately in the short term, and we won't know until time reveals, a few years later, whether we were right. This stock reminds me of when I first looked at Valuetronics because of the LEDs I saw on the traffic lights.

    Fortunately, the company's management is very capable and managed to shift focus to other things rather than the LEDs themselves.

    Another promising stock in which, like you, I am very interested in being a long-term shareholder.

    Just sharing a short, personal investment lesson where I saw similarities.

    Lastly, thanks for the wonderful write-up that helps to shorten my due diligence 😉

  2. Where did you get the Tonly 3Q results from? I could not find them on HKEXnews.
    I'm trying to invest additional cash in high-dividend, beaten-down stocks, and this looks like a good candidate.
