Tag Archives: prediction markets
CrowdCast + SAP
Predictalot is a combinatorial prediction exchange.
Dear MO reader: Why you should try Predictalot
Why should you try Predictalot?
- Gamers: Make almost any prediction you can think of about March Madness, the NCAA men's basketball tournament.
- Sports fans: Check the crowd's odds: Is St. Mary's the next Cinderella?
- Economists: Play with a true combinatorial prediction market with 9.2 quintillion outcomes and a single pool of liquidity, unlike almost any of today's financial and prediction markets.
- Geeks: Ponder some of the interesting computer science challenges, including approximating #P-hard problems and a sampling problem eerily similar to one faced by physicists (see the sketch after this list).
- Everybody's doing it.
- Barack Obama might do it, according to VentureBeat on NYTimes.com: "President Barack Obama will likely be busy this week [but]… maybe he'll be able to sneak a peek at Predictalot on his BlackBerry between meetings."
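Where does the 9.2 quintillion figure come from, and why is pricing hard? Here is a minimal back-of-the-envelope sketch (illustrative only, not Predictalot's actual code): a 64-team single-elimination bracket has 63 games, each with two possible winners, so the joint outcome space has 2^63 elements, and pricing an arbitrary combinatorial bet means summing probability mass over a subset of them.

```python
# Back-of-the-envelope sketch: size of the March Madness outcome space.
# Assumption: a 64-team single-elimination bracket, i.e., 63 games with two
# possible winners each. Illustrative only; this is not Predictalot's code.

num_games = 63
num_outcomes = 2 ** num_games

print(f"{num_outcomes:,}")                                  # 9,223,372,036,854,775,808
print(f"~{num_outcomes / 1e18:.1f} quintillion outcomes")   # ~9.2 quintillion

# Pricing a combinatorial bet requires summing probability over a subset of these
# outcomes, which is intractable to do exactly (#P-hard in general), hence the
# need for approximation techniques such as sampling.
```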
Max Keiser weighs in on potential insider trading and hypothetical manipulation in the ObamaCare prediction market at InTrade.
Max says that the political prediction markets are "routinely manipulated" and we often see "price rigging"…
At 9:57 into the video:
What has been the best InTrade prediction market ever? Has the ObamaCare prediction market at InTrade been ahead of the commentary?
Jason Ruspini (who feels that the health care reform proposal might well be adopted) wants your feedback, folks.
- Which InTrade prediction market(s) has/have been ahead of the press (rather than the other way around)? What are the best (most divergent from the commentary, and correct) InTrade prediction markets in people's memories?
- Do you sense that the ObamaCare prediction market at InTrade fits these two criteria?
UPDATE: I asked The Brain whether he meant generalist media or political media, and he meant "generalist". That makes all the difference in the world.
ADDENDUM
More info on health care reform on Memeorandum.
Previously: Insider trading in the InTrade prediction market on health care reform?
Truth in Advertising – Meet Prediction Markets
Most published papers on prediction markets (there aren't many) paint a wildly rosy picture of their accuracy. Perhaps that is because many of these papers are written by researchers affiliated with prediction market vendors.
Robin Hanson is Chief Scientist at Consensus Point. I like his ideas about combinatorial markets and market scoring rules, but I think he over-sells the accuracy and usefulness of prediction markets. His concept of Futarchy is an extreme example of this. Robin loves to cite HP's prediction markets in his presentations. Emile Servan-Schreiber (NewsFutures) is mostly level-headed but still a big fan of prediction markets. Crowdcast's Chief Scientist is Leslie Fine; its Board of Advisors includes Justin Wolfers and Andrew McAfee. Leslie seems to have a more practical understanding than most, as evidenced by this response about the types of questions that Crowdcast's prediction markets can answer well: "Questions whose outcomes will be knowable in three months to a year and where there is very dispersed knowledge in your organization tend to do well." She gets that prediction markets aren't all things to all people.
An Honest Paper
To some extent, all of these researchers over-sell the accuracy and the range of useful questions that prediction markets can answer. So it is refreshing to find an honest article written about the accuracy of prediction markets. Not too long ago, Sharad Goel, Daniel M. Reeves, Duncan J. Watts, and David M. Pennock published Prediction Without Markets. They compared prediction markets with alternative forecasting methods in three public domains: football games, baseball games, and movie box office receipts.
They found that prediction markets were just slightly more accurate than the alternative forecasting methods. As an added bonus, these researchers considered the principle that prediction market accuracy should be judged by its effect on decision-making. So few researchers have done this! A very small improvement in accuracy is not considered material (significant) if it doesn't change the decision that is made with the forecast. This is a well-established concept in public auditing, used when deciding whether an error is significant and requires correction. I have discussed this concept before.
While they acknowledge that prediction markets may have a distinct advantage over other forecasting methods, in that they can be updated much more quickly and at little additional cost, they rightly suggest that most business applications have little need for instantaneously updated forecasts. Overall, they conclude that "simple methods of aggregating individual forecasts often work reasonably well relative to more complex combinations (of methods)."
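For concreteness, here is a tiny sketch (hypothetical numbers, not drawn from the paper) of the kind of "simple method" being compared against a market: a plain average, or linear opinion pool, of individual probability forecasts.

```python
# A "simple method" of aggregation: the plain average (linear opinion pool) of
# individual probability forecasts. Numbers are hypothetical, for illustration only.

from statistics import mean

individual_forecasts = [0.62, 0.55, 0.70, 0.58]   # each person's P(home team wins)
simple_aggregate = mean(individual_forecasts)

print(simple_aggregate)   # ~0.61

# A prediction-market price plays the same role as simple_aggregate; the paper's
# point is that the accuracy gap between the two is usually small.
```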
For Extra Credit
When we compare things, it is usually so that we can select the best option. In the case of prediction markets, it is not safe to assume that the choices are mutually exclusive. Especially in enterprise applications, prediction markets depend heavily on the alternative information aggregation methods as a primary source of market information. Of course, there are other sources of information, and the markets are expected to minimize bias to generate more accurate predictions.
In the infamous HP prediction markets, the forecasts were eerily close to the company's internal forecasts. It wasn't difficult to see why. The same people were involved with both predictions! The General Mills prediction markets showed similar correlations, even when only some of the participants were common to both methods. The implication of these cases is that you cannot replace the existing forecasting system with a prediction market and expect the results to be as accurate. The two (or more) methods work together.
Not only do most researchers (Pennock et al. excepted) recommend adopting prediction markets based on insignificant improvements in accuracy, they also fail to consider the effect (or lack thereof) on decision-making in their cost/benefit analysis. Even when some do the cost/benefit math, they don't do it right.
Where a prediction market is dependent on other forecasting methods, the marginal cost is the total cost of running the market; there is no credit for eliminating the cost of the alternative forecasting methods. The marginal benefit is the benefit expected from choosing a different course of action than the one that would have been taken based on a less accurate prediction. That is, a slight improvement in prediction accuracy that does not change the course of action has no marginal benefit.
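To make that arithmetic concrete, here is a minimal sketch of the marginal cost/benefit logic (the decision rule, threshold, and dollar figures are hypothetical, not from any real deployment): the more accurate forecast only creates value when it flips the decision; otherwise the full cost of running the market is a dead loss.

```python
# Marginal cost/benefit sketch for a prediction market layered on existing forecasts.
# The decision rule, threshold, and dollar figures are hypothetical illustrations.

def decision(forecast_demand, threshold=10_000):
    """Action implied by a forecast: launch only if demand clears the threshold."""
    return "launch" if forecast_demand >= threshold else "hold"

def marginal_benefit(baseline_forecast, market_forecast, value_of_better_action=50_000):
    """The better forecast earns credit only if it changes the action taken."""
    if decision(baseline_forecast) == decision(market_forecast):
        return 0  # slightly more accurate, same decision -> no marginal benefit
    return value_of_better_action

market_cost = 20_000  # full cost of running the market; existing methods are still needed

# "Slightly" more accurate, decision unchanged: the market is a net loss.
print(marginal_benefit(11_000, 11_200) - market_cost)   # -20000
# Accurate enough to flip the decision: the market can pay for itself.
print(marginal_benefit(11_000, 9_500) - market_cost)    # 30000
```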
Using this approach, a prediction market that is only "slightly" more accurate than the alternative forecasting approaches is just not good enough. So far, there is little, if any, evidence that prediction markets are anything more than "slightly" better than existing methods. Still, most of our respected researchers continue to tout prediction markets. Even a technology guru like Andrew McAfee doesn't get it, in this little PR piece he wrote shortly after joining Crowdcast's Board of Advisors.
Is it a big snow job or just wishful thinking?
[Cross-posted from Toronto Prediction Market Blog]
Wall Street 2: Money Never Sleeps → September 24, 2010
Wall Street 2 @ HSX → Quite high flying.
The first trailer is hilarious:
About Wall Street 2:
Wall Street 1:
Frank Sinatra, "Fly Me To The Moon":
Cantor Exchange in the New York Times
Richard Jaycobs uses the adjective "tremendous". But here's what the journalist says:
But buyers beware: if “Avatar” is any indication, the public isn’t always so wise about Hollywood fortunes. Most users of HSX.com predicted a flop, and if those users had placed real money on the Cantor exchange, they would have taken a serious hit.
Libertarian journalist John Stossel explained InTrade's prediction markets, and forgot BetFair.
ABC News, in 2008: