Lance Fortnow:
[…] Notice that when we have a surprise victory in a primary, like Clinton in New Hampshire, much of the talk revolves around why the pundits, polls and prediction markets all “failed.” Meanwhile, in sports, when we see a surprise victory, like the New York Giants over Dallas and then again in Green Bay, the focus is on what the Giants did right and the Cowboys and Packers did wrong. Sports fans understand probabilities much better than political junkies—upsets happen occasionally, just as they should.
Previously: Defining Probability in Prediction Markets – by Panos Ipeirotis – 2008
[…] Interestingly enough, such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market were always the winner, then the market would be a seriously flawed mechanism. […]
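To make the point concrete, here is a minimal Python sketch (the 0.80 frontrunner price and the contest count are hypothetical, chosen purely for illustration): if a calibrated market prices each frontrunner at 80%, roughly one contest in five must end in an upset, and a market whose frontrunners never lost would thereby prove its own prices wrong.

```python
import random

random.seed(42)

N_CONTESTS = 1000          # hypothetical number of contests
FRONTRUNNER_PRICE = 0.80   # hypothetical price on each frontrunner

# A perfectly calibrated market: each frontrunner wins with exactly
# the probability the market assigns.
upsets = sum(random.random() >= FRONTRUNNER_PRICE for _ in range(N_CONTESTS))

print(f"Upsets: {upsets} / {N_CONTESTS} "
      f"(calibration demands roughly {100 * (1 - FRONTRUNNER_PRICE):.0f}%)")
# Zero upsets over many contests would mean the 0.80 price was too low,
# i.e. the market, not the occasional upset, was the failure.
```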
Previously: Can prediction markets be right too often? – by David Pennock – 2006
[…] But this raises another question: didn’t TradeSports call too many states correctly? […] The bottom line is we need more data across many elections to truly test TradeSports’s accuracy and calibration. […] The truth is, I probably just got lucky, and it’s nearly impossible to say whether TradeSports underestimated or overestimated much of anything based on a single election. Such is part of the difficulty of evaluating probabilistic forecasts. […]
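A back-of-the-envelope sketch of why one election settles so little (the price vector below is purely hypothetical, not TradeSports’s actual 2004 prices): when most state-level prices sit near certainty, even a perfectly calibrated market has a sizable chance of calling every state correctly, so a perfect scorecard barely distinguishes luck from miscalibration.

```python
import math

# Hypothetical winner-side prices: 45 "safe" states plus 5 closer races.
prices = [0.99] * 45 + [0.80] * 5

# Under perfect calibration, each call succeeds independently with
# probability equal to its price.
p_clean_sweep = math.prod(prices)
expected_misses = sum(1 - p for p in prices)

print(f"P(all 50 calls correct) = {p_clean_sweep:.2f}")   # ~0.21
print(f"Expected misses = {expected_misses:.2f}")         # ~1.45
# A clean sweep happens about a fifth of the time anyway, so a single
# election cannot show whether the favorites were under- or overpriced.
```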
Previously: Evaluating probabilistic predictions – by David Pennock – 2006
[…] Their critiques reflect a clear misunderstanding of the nature of probabilistic predictions, as many others have pointed out. Their misunderstanding is perhaps not so surprising. Evaluating probabilistic predictions is a subtle and complex endeavor, and in fact there is no single right way to do it. This may make it harder for the average person to understand and trust (probabilistic) prediction market forecasts. […] In other words, for a predictor to be considered good it must pass the calibration test, but at the same time some very poor or useless predictors may also pass the calibration test. Often a stronger test is needed to truly evaluate the accuracy of probabilistic predictions. […]
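A minimal synthetic illustration of that last point (the event probabilities are made up, and the Brier score is just one example of a “stronger test”): a predictor that always outputs the overall base rate passes the calibration test perfectly yet conveys nothing, while a proper scoring rule such as the Brier score cleanly separates it from an equally calibrated but sharper predictor.

```python
import random

random.seed(0)

# Synthetic events: half truly occur with probability 0.9, half with 0.1.
true_probs = [0.9] * 500 + [0.1] * 500
outcomes = [1 if random.random() < p else 0 for p in true_probs]
base_rate = sum(outcomes) / len(outcomes)

sharp = true_probs                       # knows each event's probability
constant = [base_rate] * len(outcomes)   # always predicts the base rate

def calibration(forecasts, outcomes):
    """Bucket forecasts and report the observed frequency per bucket."""
    buckets = {}
    for f, o in zip(forecasts, outcomes):
        buckets.setdefault(round(f, 1), []).append(o)
    return {f: round(sum(os) / len(os), 2) for f, os in sorted(buckets.items())}

def brier(forecasts, outcomes):
    """Mean squared error of probability forecasts (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

print("constant:", calibration(constant, outcomes))  # one bucket, on target
print("sharp:   ", calibration(sharp, outcomes))     # both buckets on target
print(f"Brier constant: {brier(constant, outcomes):.3f}")  # ~0.25, useless
print(f"Brier sharp:    {brier(sharp, outcomes):.3f}")     # ~0.09, informative
```

Both predictors come out (approximately) calibrated, but the Brier score, a strictly proper scoring rule, rewards the sharper one; that is the kind of stronger test the quote alludes to.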
[…] a look at his prediction market explainer shows that Leighton Vaughan-Williams lacks an understanding of the concept of probability as applied to market-generated predictions. And my readers will remember that Leighton Vaughan-Williams has stubbornly refused to disclose the […]