One of my favorite reads this year was Nate Silver's The Signal and the Noise, which has the subtitle "Why so many predictions fail, but some don't." It covers a ton of different topics, from weather to politics to gambling, and I couldn't help but read it from a startup/tech point of view.
After all, the industry of technology startups is all about prediction: we try to predict what will be a good market and what will be a good product as we "iterate" and "pivot" on our predictions. And of course the business of venture capital is even more directly about knowing how to pick winners, especially at the seed and Series A stages.
And yet, we're all so bad at predicting what will work and what won't. I've written about my embarrassing skepticism about Facebook, but hey, I'm just a random tech guy. The folks whose job it is to pick winners professionally, the venture capitalists, aren't doing very well either. It's been widely noted that the venture capital asset class, after fees, has lagged the public markets; you'd be better off buying some index funds.
Startup exceptionalism = sparse data sets = shitty prediction models
One of the most challenging aspects of predicting the next breakout startup is that there are so few of them. It's been widely discussed that 10-15 startups a year generate 97% of the returns in tech, and each one seems like a crazy exception. And as an industry we get myopically focused on each one of them.
With these kinds of odds, our brains go crazy with pattern-matching. When a once-in-a-generation startup like Google comes around, for the next few years after that, we all ask, "OK, but do you have any PhDs on the team? What's the 'PageRank' of your product?" And now that we have Airbnb, we've gone from being skeptical of designer-led companies to being huge fans of them. With so few datapoints, the prediction models we generate as a community aren't great: they're simplistic, and they get amplified by the swirl of attention-grabbing headlines and soundbites.
These simplistic models result in generic startup advice. As I wrote about earlier, there's a whole ecosystem of vendors, press, consultants, and advisors who go on advice autopilot and give the same advice regardless of situation. Invest in great UX, charge users right away, iterate quickly, measure everything, launch earlier, work long hours, raise more money, raise less money: all of these ideas are helpful to complete newbies but dangerous when applied recklessly to every situation.
We all know how to parrot this common wisdom, but how do we know when we're hearing good versus bad advice? If you think about the idea that there are 10-15 breakout companies every year, how many people really have first-hand experience making the right decisions to start and build breakout companies?
Hedgehogs and pundits
I was reminded of my dislike of generic startup advice when, in his book, Nate Silver writes about hedgehogs versus foxes and their approaches to generating predictions. Here's the Wikipedia definition of the concept:
[There are] two categories: hedgehogs, who view the world through the lens of a single defining idea, and foxes, who draw on a wide variety of experiences and for whom the world cannot be boiled down to a single idea.
Silver clearly identifies as a fox, and contrasts his approach with that of the talking-head pundits who dominate political talk shows on TV and radio. For the pundits, the more aggressive, contrarian, and certain they seem, the more attention-grabbing they are. Rather similar to what we see in the blogosphere, where people are rewarded for writing headlines like "10 reasons why [hot company] will be killed by [new product]." Or "Every startup should care about [metric X]" or whatever.
This hedgehog-like behavior is amplified by the fact that there's always pressure to articulate a thesis on what's going on in the market. People in the press are always trying to spot trends or boil down complex ideas, and investors are constantly asked, "What kinds of startups are you investing in? Why?" And entrepreneurs are always forced to fit their businesses into the broader trends of the market and to find sexy competitors, all in the chase for a simple narrative that describes what's going on.
The solution to all of this isn't easy: to be a fox means to draw from a much broader set of data, to look at the problem from multiple perspectives, and to reach a conclusion that combines all of those datapoints. There's been some great work on the science of forecasting by Philip Tetlock of UPenn, who has set up an open contest to study good forecasting here. There's an interview with him at Edge.org here and a video describing some of his academic research below. Worth watching.
My personal experience
Over my 5 years in Silicon Valley, the biggest lesson I've learned from trying to predict startups is calibration. They talk about it in the video above, but the short version is: be careful about what you think you know versus what you don't. I've found that the area where I can make good decisions is actually pretty narrow. I've done a bunch of work in online ads, analytics, and consumer communication/publishing, and I think my judgment is pretty good there, but it's much shakier outside of that area.
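Calibration has a concrete meaning in forecasting: when you say "70%," the event should actually happen about 70% of the time. As a rough illustration (the sample forecasts below are made up, and the bucketing scheme is just one simple choice), here's a minimal sketch that compares predicted probabilities to observed frequencies:

```python
from collections import defaultdict

def calibration_table(forecasts, n_buckets=4):
    """forecasts: list of (predicted_probability, outcome) pairs,
    where outcome is 1 if the event happened, else 0.
    Returns {bucket_index: (avg_prediction, observed_frequency)}."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        # Map the probability to a bucket, clamping p == 1.0 into the top one
        idx = min(int(p * n_buckets), n_buckets - 1)
        buckets[idx].append((p, outcome))
    table = {}
    for idx, pairs in sorted(buckets.items()):
        avg_pred = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(o for _, o in pairs) / len(pairs)
        table[idx] = (round(avg_pred, 2), round(observed, 2))
    return table

# Illustrative data: a well-calibrated forecaster's confident calls
# come true roughly as often as the stated probability.
sample = [(0.9, 1), (0.9, 1), (0.8, 1), (0.7, 0),
          (0.3, 0), (0.2, 0), (0.2, 1), (0.1, 0)]
print(calibration_table(sample))
```

A forecaster is well calibrated when each bucket's average prediction and observed frequency roughly match; overconfidence shows up as predictions that run hotter than the outcomes.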
When I do an analysis, I try to match my delivery with how much I think I know, and these days that means I sound a lot more tentative than the younger, brasher version of myself who first came to SF. I've also tried to be diligent about avoiding "advice autopilot": if I meet with entrepreneurs and find myself saying the same thing multiple times, I try to refine the idea to take into account the specifics and nuances of their product. It's easier and lazier, but less helpful, to just say the same thing over and over again.
Be the fox, not the hedgehog.
(Andrew Chen is an entrepreneur and blogger based in Palo Alto, CA. He blogs here.)