Generally speaking, people seek out the type of analysis and commentary that validates their own way of thinking about the world. Confirmation bias is a powerful thing.
I try my best to “cure” people of this rather noxious habit. For one thing, the satisfaction of hearing someone else agree with you instills a false sense of confidence in your beliefs. You can only become justifiably confident in your own hypotheses by subjecting them to scrutiny (i.e., by considering the opposing viewpoint).
As distressing as it is for people to consider that they might be wrong about this or that prediction (e.g. Tesla shares are going to infinity), it’s even more distressing for these same people to ask whether the entire exercise of analyzing individual securities isn’t pointless on its face.
Unfortunately for those who desperately search for what they call “actionable” analysis, the simple fact is that no matter how many DCF models and SOTP exercises you run, you cannot predict the unpredictable. If Tesla’s autopilot causes a 100-car pile-up next week, killing a school bus full of first graders in the process, all of your calculations will amount to shit when it comes to determining where the stock trades.
It’s the same thing with macro. You can look at the numbers and spend all the time you want constructing econometric models designed to extract something meaningful from the data, but if Donald Trump wins the US presidency, all of your work might well turn out to be pointless when it comes to predicting where the economy is headed.
The point: the term “actionable” shouldn’t apply to forecasting stock prices or economic outcomes. Rather, when we assess whether a given piece of research is “actionable,” what we should be asking is whether, upon reviewing it, we have expanded our list of possible outlier events. If we have, then we can say the following with more confidence than ever: “I know that I don’t know.” We have, in effect, gained a better understanding of where the limits lie on our ability to make predictions about a given security, or an index, or the economy, and so on. Those limits tell us something about how to hedge our bets.
That’s what we should call “actionable” research. Everything else is just monkeys throwing darts at a newspaper. Don’t be a monkey.
To summarize: it’s in the explicit recognition of the inherent limits of financial and economic research that we find truly “tradable” ideas.