How Bets Among Employees Can Guide a Company's Future – Internal prediction markets enable colleagues to wager on the fate of crucial projects and the success of products in the pipeline. – Technology Review
Tag Archives: private prediction markets
If prediction markets are such a powerful tool, then why aren't we able to use them to solve [INSERT YOUR FAVORITE WORLD PROBLEM HERE]?
Justin Wolfers is asked the question, but my answer would be different from his.
The reason prediction markets are not widely used in business is that their many boosters (Robin Hanson, James Surowiecki, Justin Wolfers, etc.) have exaggerated their usefulness. Just because they are objective in their wisdom does not mean that they are very useful.
Objectivity is over-rated. This is a painful lesson for the handful of young startups who swallowed the prediction market myth. Next step: the dead pool.
MAT FOGARTY FOR PRESIDENT
CrowdCast + SAP
Hyping enterprise prediction markets in Mashable
Business leaders rely on metrics and data to inform decisions around new products and opportunities, but traditional forecasting methods suffer from bias and lack of first-hand information. That’s why business forecasting is an ideal target for the application of crowd wisdom. While bets are made anonymously, some prediction market software applications have built-in reward systems for accurate forecasters. And the accuracy of prediction markets over traditional forecasting methods is proven again and again. […] Prediction markets will then aggregate this knowledge to produce actionable, people-powered forecasts. The result is an ultra-rich information source that will lay the foundation for smarter, better-informed company decisions. […]
CrowdCast is an enterprise software platform that helps companies make better forecasts by tapping the knowledge stored in their employees.
Download this post to watch the video – if your feed reader does not show it to you.
Common pitfalls of enterprise prediction markets: participants who lack relevant information, too few participants, and too little trading.
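Setting the hype aside, the aggregation mechanism itself is simple. Below is a minimal sketch of Robin Hanson's logarithmic market scoring rule (LMSR), a common automated market maker for this kind of internal market; whether CrowdCast or any particular vendor uses LMSR specifically is an assumption on my part, and the liquidity parameter and trade size are arbitrary illustration values.

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices; they sum to 1 and read as probabilities."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def buy(quantities, outcome, shares, b=100.0):
    """Amount a trader pays to buy `shares` of one outcome (mutates quantities)."""
    before = lmsr_cost(quantities, b)
    quantities[outcome] += shares
    return lmsr_cost(quantities, b) - before

# Two-outcome market, e.g. "Will the product ship on schedule?" (yes / no).
q = [0.0, 0.0]                        # shares outstanding per outcome
print(lmsr_prices(q))                 # [0.5, 0.5] before any trading
paid = buy(q, outcome=0, shares=40)   # an informed employee buys "yes"
print(round(paid, 2), [round(p, 3) for p in lmsr_prices(q)])  # ~21.99, [0.599, 0.401]
```

The market price doubles as a probability estimate that updates with every trade; that is the entire aggregation machinery behind the marketing copy.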
The truth about CrowdClarity's extraordinary predictive power (which impresses Jed Christiansen so much)
At first blush, it appears that we finally have a bona fide prediction market success! If we're going to celebrate, however, I'd suggest Prosecco, not Champagne.
There are a number of reasons to be cautious. These represent only a couple of markets. We don't know why Urban Science people appear to be so adept at forecasting GM sales in turbulent times. There is no information on the CrowdClarity web site to indicate why the markets were successful, nor how their mechanism might have played a role in the PM accuracy. I'm guessing that it would have been really easy to beat GM's forecasts in November, as they would likely have been even more biased than usual, mainly for political reasons. I'm not sure how Edmunds.com's forecast may have been biased, or why its predictions were not accurate. Maybe they are not so good at predicting unless the market is fairly stable.
The CrowdClarity web site boasts that, a few days after the markets were opened, the predictions were fairly close to the eventual outcome. This is a good thing, but, at that point, it was not useful. No one knew, at the time, that those early predictions would turn out to be reasonably accurate. As a result, no one would have relied upon them to make decisions.
I'm even more skeptical of the company's contention that markets can be operated with as few as 13 participants. Here we go again, trying to fake diversity.
It is interesting that a prediction market composed of participants outside of the subject company did generate more accurate predictions than GM insiders (biased) and Edmunds.com (experts). The question that needs to be answered is why. Clearly, the Urban Science people did have access to better information, but why?
Unless we know why the prediction markets were successful at CrowdClarity, it is hard to get excited. There are too many examples of prediction markets that are not significantly better than traditional forecasting methods. This one could be a fluke.
I'll have more to say, soon, when I write about the prediction markets that were run at General Mills. There, the authors of the study found that prediction markets were no better than the company's internal forecasting process.
Paul Hewitt's analysis is more interesting than Jed Christiansen's naive take.
Next: Assessing the usefulness of enterprise prediction markets
Assessing the usefulness of enterprise prediction markets
Do you need to have experience in running an enterprise prediction exchange in order to assess the pertinence of enterprise prediction markets?
Hi Jed…
As for qualifications, I have been making business decisions for almost 30 years. I am a chartered accountant and a business owner. Starting in university and continuing to this day, I have been researching information needs for corporate decision making. As Chris points out, I’m not a salesperson for any of the software developers. In fact, if I have a bias, it is to be slightly in favour of prediction markets. That said, I still haven’t seen any convincing evidence that they work as promised by ANY of the vendors.
As for whether I have ever run or administered a prediction market, the answer is no. Does that mean I am not qualified to critique the cases that have been published? Hardly. You don’t have to run a PM to know that it is flawed. Those that do end up trying to justify minuscule “improvements” in the accuracy of predictions. They also fail to consider the consistency of the predictions. Without this, EPMs will never catch on. Sorry, but that is just plain common sense.
The pilot cases that have been reported are pretty poor examples of prediction market successes. In almost every case, the participants were (at least mostly) the same ones that were involved with internal forecasting. The HP markets, yes, the Holy Grail of all prediction markets, merely showed that prediction markets are good at aggregating the information already aggregated by the company forecasters! They showed that PMs are only slightly better than other traditional methods – and mainly because of the bias reduction. Being slightly better is not good enough in the corporate world.
I think I bring a healthy skepticism to the assessment of prediction markets. I truly want to believe, but I need to be convinced. I am no evangelist, and there is no place for that in scientific research. Rather than condemn me for not administering a PM, why not address the real issues that arise from my analyses?
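Hewitt's distinction between accuracy and consistency is easy to make concrete. Here is a minimal sketch of the comparison he is asking for, with made-up monthly figures invented purely for illustration: mean absolute percentage error for accuracy, the average signed error for bias, and the spread of the errors for consistency.

```python
from statistics import mean, stdev

# Hypothetical monthly unit-sales figures, invented purely for illustration.
actuals       = [980, 1040, 875, 1120, 990, 1060]
internal_fcst = [1100, 1150, 1000, 1200, 1120, 1180]  # upward-biased official forecast
market_fcst   = [1010, 1020, 900, 1150, 970, 1090]    # prediction market forecast

def pct_errors(forecasts):
    """Signed percentage error of each forecast against the actual outcome."""
    return [(f - a) / a for f, a in zip(forecasts, actuals)]

def report(name, forecasts):
    errs = pct_errors(forecasts)
    print(f"{name}: MAPE {mean(abs(e) for e in errs):.1%}, "
          f"bias {mean(errs):+.1%}, spread (std dev) {stdev(errs):.1%}")

report("Internal forecast", internal_fcst)
report("Prediction market", market_fcst)
```

The invented numbers do not matter; the point is that a marginal drop in MAPE means little unless the bias and the error spread also shrink enough to change actual decisions.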
Previously: The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)
Finally, a positive corporate prediction market case study… well, according to Jed Christiansen
To recap, the prediction market beat the official GM forecast (made at the beginning of the month) easily, which isn’t hugely surprising considering the myopic nature of internal forecasting. But the prediction market also beat the Edmunds.com forecast. This is particularly interesting, as Edmunds would have had the opportunity to review almost the entire month’s news and data before making their forecast at the end of the month. […]
Assume that even with three weeks’ early warning Chevrolet was only able to save 10% of that gap, it’s still $80 million in savings. Even if a corporate prediction market for a giant company like GM cost $200,000 a year, that would still be a return on investment of 40,000%. And again, that’s in the Chevrolet division alone. […]
Make up your own mind by reading the whole piece.
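Christiansen's back-of-the-envelope arithmetic, at least, checks out. A quick sketch using his own figures (the 10% savings assumption implies a forecast gap of roughly $800 million, and the $200,000 annual cost is his hypothetical):

```python
# Checking Christiansen's back-of-the-envelope ROI claim with his own figures.
forecast_gap   = 800_000_000   # implied: $80 million is said to be 10% of the gap
fraction_saved = 0.10          # "only able to save 10% of that gap"
annual_cost    = 200_000       # his hypothetical yearly cost of the market

savings = forecast_gap * fraction_saved          # $80,000,000
roi     = (savings - annual_cost) / annual_cost  # net return per dollar spent

print(f"savings: ${savings:,.0f}")               # savings: $80,000,000
print(f"ROI: {roi:.0%}")                         # ROI: 39900%, roughly the 40,000% quoted
```

Whether a company could actually act on a three-week early warning and capture that 10% of the gap is, of course, the claim that still needs evidence.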
Next: Assessing the usefulness of enterprise prediction markets