Tag Archives: polls
The Prediction Market Police
Flawed New Hampshire polls = inaccurate New Hampshire prediction markets
The most comprehensive analysis ever conducted of presidential primary polls:
Via the great research scientist David Pennock, who is an indispensable figure in the field of prediction markets.
As I have blogged many times, prediction markets react to polls. See the addendum below. [UPDATE: See also Jed's comment.] Prediction markets should not be hyped as crystal balls, but simply as an objective and continuous way to aggregate expectations. If you think about it, their social utility is much smaller than what the advocates of the "idea futures", "wisdom of crowds" or "collective intelligence" concepts told us. Much, much smaller. They all make the mistake of putting accuracy forward. (By the way, somewhat related to that issue, please go read the dialog between Robin Hanson and Emile Servan-Schreiber.)
–
Addendum
–
California Institute of Technology economist Charles Plott:
What you're doing is collecting bits and pieces of information and aggregating it so we can watch it and understand what people know. People picked this up and called it the "wisdom of crowds" and other things, but a lot of that is just hype.
–
New Hampshire – The Democrats
–
The Hillary Clinton event derivative was expired to 100.
–
New Hampshire – The Republicans
–
The John McCain event derivative was expired to 100.
–
Damped polls outperform prediction markets.
Forecasting Principles:
Damping polls
Evidence from the literature shows that polls, in particular early in the campaign, are not reliable in predicting election outcomes but tend to overestimate the extent to which a candidate leads. To deal with these uncertainties, we added a damping factor to the RCP poll average. Damping is used to make forecasts more conservative in situations involving high uncertainty.
Our poll damping is based on research conducted by Campbell (1996) who showed that polls have to be discounted in order to achieve more reliable forecasts. Performing a regression analysis on historical poll data for the elections from 1948 to 2004, he derived a formula for discounting the polls according to their distance from Election Day. Campbell provided Polly the formula, along with a list of damping factors that vary by the number of days left before the election.
Currently, polls are discounted with a damping factor of 0.17. Applying this factor, we calculate Polly's discounted poll-based forecast thus:
Polly's poll-based forecast = ((latest RCP polling average - 50) * (1 - damping factor)) + 50 = ((46.1 - 50) * (1 - 0.17)) + 50 = 46.8
Latest RCP polling average: 46.1
Damping factor: 0.17
Polly's poll-based forecast: 46.8
Thus, our poll damping discounts a candidate's lead in the two-party vote, depending on the number of days left before the election. The further away Election Day is, the larger the damping.
Such damped polls have been shown to outperform sophisticated forecasting approaches like prediction markets. Comparing damped polls to forecasts of the Iowa Electronic Markets, Erikson and Wlezien (2008) showed that the damped polls outperformed both the winner-take-all and the vote-share markets.
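For readers who want to check Polly's arithmetic, here is a minimal Python sketch of the damping formula quoted above (the function name and the standalone script wrapper are mine, not PollyVote's):

```python
def damped_poll_forecast(rcp_average: float, damping_factor: float) -> float:
    """Discount a candidate's lead over 50% of the two-party vote.

    Implements the formula quoted above:
    forecast = ((poll average - 50) * (1 - damping factor)) + 50
    """
    return (rcp_average - 50.0) * (1.0 - damping_factor) + 50.0


if __name__ == "__main__":
    # Numbers from the example above: RCP average 46.1, damping factor 0.17.
    print(round(damped_poll_forecast(46.1, 0.17), 1))  # 46.8
```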
Thanks to Andreas Graefe for the link.
–
Are Political Markets Really Superior to Polls as Election Predictors? – PDF file
For now, our results suggest the need for much more caution and less naive cheerleading about election markets on the part of prediction market advocates.
Previously: The truth about prediction markets
–
Damped polls are superior to prediction markets as election predictors.
Are Political Markets Really Superior to Polls as Election Predictors? – (PDF file) – by Chris Wlezien and Robert Erikson – 2007
Abstract
Election markets have been praised for their ability to forecast election outcomes, and to forecast better than trial-heat polls. This paper challenges that optimistic assessment of election markets, based on an analysis of Iowa Electronic Market (IEM) data from presidential elections between 1988 and 2004. We argue that it is inappropriate to naively compare market forecasts of an election outcome with exact poll results on the day prices are recorded, that is, market prices reflect forecasts of what will happen on Election Day whereas trial-heat polls register preferences on the day of the poll. We then show that when poll leads are properly discounted, poll-based forecasts outperform vote-share market prices. Moreover, we show that win-projections based on the polls dominate prices from winner-take-all markets. Traders in these markets generally see more uncertainty ahead in the campaign than the polling numbers warrant—in effect, they overestimate the role of election campaigns. Reasons for the performance of the IEM election markets are considered in concluding sections.
Conclusion
This paper has tested the claim that the Iowa Electronic Market offers superior predictions of election outcomes than the snapshots from public opinion polls. By our tests, the IEM election markets are not better than trial-heat polls for predicting elections. In fact, by a reasonable as opposed to naive reading of the polls, the polls dominate the markets as an election forecaster. This is true in the sense that a trader in the market can readily profit by “buying” candidates who, according to informed readings of the polls, are undervalued. Moreover, we find that market prices contain little information of value for forecasting beyond the information already available in the polls.
Where then do the markets go wrong? To begin with, consider the vote-share market. The histories of market prices show that traders tend to hold persistent beliefs about the vote division that contradict the polls and that these persistent beliefs are often wrong. Incorrect beliefs get corrected only in the last days before the election, when the polls are difficult to ignore. The winner-take-all market tracks the vote-share market but compounds its errors by overvaluing long-shot candidates’ chances of victory, as if the market expects more campaign surprises than occur in reality. The existence of persistent mistakes in the vote-share market compounded by the degree of uncertainty about the vote-share estimates makes the winner-take-all market a particularly poor forecasting tool. Based on the experience of the IEM, if the polls show a candidate to hold a decisive lead but the market is unconvinced, bet on the polls.
It should be noted that our daily poll projections are themselves rather crude instruments. Our robotic trading programs are informed by a flat prior, relying solely on the current polls and the days until the election but nothing more. Even when we compare market prices to the weekly average of poll-based forecasts, our instrument is primitive in that the week’s polls are not weighted for relative recency. But further perfection of our forecasting model from the polls would only advance our central argument. If we were to apply more rigorous modeling to obtain a properly weighted average of current polls and earlier polls, the victory of poll forecasts over the market forecast presumably would be more secure.
One could argue that the results are drawn from a limited number of election years from a toy market with thin volume and limits on trader spending. With time, the IEM record could improve, and there is some suggestion that it has. Full-blown markets like Tradesports.com [or InTrade.com or BetFair.com] might in the end achieve an efficiency that so far has eluded the Iowa Electronic Market. Additionally, studies like the present one can suggest improved strategies to traders, which in turn improve the efficiency of election markets.
Since our results are confined to a few runs of the toy Iowa market, some might claim a “so what” reaction. To such claimants, an important reminder is that the allegedly uncanny performance of the Iowa market has been touted as the primary evidence for the supposed superiority of election markets over the polls as an information source. The Iowa election market’s performance has not been so special after all. For now, our results suggest the need for much more caution and less naive cheerleading about election markets on the part of prediction market advocates.
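To make the authors' "buy the undervalued candidate" argument concrete, here is a toy Python sketch: it compares a damped poll reading with a vote-share market price and flags the side the polls say is mispriced. The threshold, the example numbers, and the function names are mine for illustration; this is not the authors' actual robotic trading program.

```python
def damped_poll_forecast(poll_average: float, damping_factor: float) -> float:
    # Same damping formula as in the PollyVote excerpt earlier on this page.
    return (poll_average - 50.0) * (1.0 - damping_factor) + 50.0


def vote_share_signal(poll_average: float, damping_factor: float,
                      market_price: float, threshold: float = 1.0) -> str:
    """Compare a damped poll forecast (percent of the two-party vote) with a
    vote-share market price quoted on the same 0-100 scale."""
    forecast = damped_poll_forecast(poll_average, damping_factor)
    gap = forecast - market_price
    if gap > threshold:
        return f"polls say buy (forecast {forecast:.1f} vs price {market_price:.1f})"
    if gap < -threshold:
        return f"polls say sell (forecast {forecast:.1f} vs price {market_price:.1f})"
    return f"no clear edge (forecast {forecast:.1f} vs price {market_price:.1f})"


if __name__ == "__main__":
    # Hypothetical mid-campaign numbers, not data from the paper.
    print(vote_share_signal(poll_average=53.5, damping_factor=0.17, market_price=50.5))
```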
Prediction markets compute facts and expertise quicker than the mass media do.
Political prediction markets react (with a small delay) to political polls, just as the political experts and the mass media do. Hence, in order to discover their true social utility, the prediction markets (which are tools of intelligence) should not be compared to the polls (which are just facts) but to similar meta-intelligence mechanisms (the averaged probabilistic predictions from a large panel of experts, or the averaged probabilistic predictions from the political reporters in the mass media, and so on). My bet is that, in complicated situations (such as the 2008 Democratic primary), the prediction markets beat the mass media (in terms of velocity), even though the prediction markets are not omniscient and not completely objective (but who is?).
–
You might remember the research article that I have blogged about:
–
Learning in Investment Decisions: Evidence from Prediction Markets and Polls – (PDF file) – David S. Lee and Enrico Moretti – 2008-12-XX
In this paper, we explore how polls and prediction markets interact in the context of the 2008 U.S. Presidential election. We begin by presenting some evidence on the relative predictive power of polls and prediction markets. If almost all of the information that is relevant for predicting electoral outcomes is not captured in polling, then there is little reason to believe that prediction market prices should co-move with contemporaneous polling. If, at the other extreme, there is no useful information beyond what is already summarized by the current polls, then market prices should react to new polling information in a particular way. Using both a random walk and a simple autoregressive model, we find that the latter view appears more consistent with the data. Rather than anticipating significant changes in voter sentiment, the market price appears to be reacting to the release of the polling information.
We then outline and test a more formal model of investor learning. In the model, investors have a prior on the probability of victory of each candidate, and in each period they update this probability after receiving a noisy signal in the form of a poll. This Bayesian model indicates that the market price should be a function of the prior and each of the available signals, with weights reflecting their relative precision. It also indicates that more precise polls (i.e. polls with larger sample size) and earlier polls should have more effect on market prices, everything else constant. The empirical evidence is generally, although not completely, supportive of the predictions of the Bayesian model.
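To make the abstract's "weights reflecting their relative precision" idea concrete, here is a small Python sketch of precision-weighted Bayesian updating under Gaussian approximations. It is my simplified reading of the model described above, not the authors' code; in particular, treating the poll as a Gaussian signal with variance p(1-p)/n is an assumption.

```python
from dataclasses import dataclass


@dataclass
class Belief:
    mean: float      # current estimate of the candidate's support (0-1 scale)
    variance: float  # uncertainty around that estimate


def update_with_poll(prior: Belief, poll_share: float, sample_size: int) -> Belief:
    """Precision-weighted update: a more precise poll (larger sample)
    pulls the posterior mean harder toward the new poll result."""
    poll_variance = poll_share * (1.0 - poll_share) / sample_size
    prior_precision = 1.0 / prior.variance
    poll_precision = 1.0 / poll_variance
    posterior_variance = 1.0 / (prior_precision + poll_precision)
    posterior_mean = posterior_variance * (
        prior_precision * prior.mean + poll_precision * poll_share
    )
    return Belief(posterior_mean, posterior_variance)


if __name__ == "__main__":
    belief = Belief(mean=0.50, variance=0.01)      # diffuse prior: a toss-up
    belief = update_with_poll(belief, 0.53, 600)   # a small poll nudges the belief
    belief = update_with_poll(belief, 0.55, 2400)  # a larger poll moves it more
    print(round(belief.mean, 3), round(belief.variance, 6))
```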
–
You might also have watched Emile Servan-Schreiber's videos. Emile is a smart man, and those videos are truly instructive.
- In the first part (the lecture), our good doctor Emile Servan-Schreiber sold the usual log lines about the prediction markets: blah blah blah blah blah.
- In the second part, Emile Servan-Schreiber took questions from the audience in the room. "Aren't political prediction markets just following the polls?", asked one guy. Emile's answer was long and confused. However, in my view, Emile actually did answer that question (before it was ever asked) in his preceding lecture when, at one point, he noted that the media were slower than the prediction markets to integrate all the facts about the 2008 Democratic primary, around May 2008. That is the right answer to give to a conference attendee who enquires about prediction markets "following" the polls. During political campaigns, both the mass media and the prediction markets follow the polls (since the polls are facts that can't be ignored). Let's compare the prediction markets with the mass media instead, and let's see who is quicker to deliver the right intelligence.
–
Lance Fortnow gives a good insight about the relationship between polls and prediction markets (see his last paragraph).
Yesterday the Electoral College delegates voted, 365 for Barack Obama and 173 for John McCain. How did the markets do?
To compare, here is my map the night before the election and the final results. The leaning category had Obama at 364. The markets leaned the wrong way for Missouri and Indiana, their 11 electoral votes canceling each other out. The extra vote for Obama came from a quirk in Nebraska that the Intrade markets didn't cover: Nebraska splits its votes based on congressional delegations, one of which went to Obama.
Indiana and Missouri were the most likely Republican and Democratic states to switch sides according to the markets, which means the markets did very well this year again. Had every state leaned the right way (again), one would wonder whether the probabilities in each state had any meaning beyond being above or below 50%.
Many argue the markets just followed the predictions based on polls like Nate Silver's fivethirtyeight.com. True to a point, Silver did amazingly well and the markets smartly trusted him. But the markets also did very well in 2004 without Silver. [Chris Masse's remark: In 2004, Electoral-Vote.com (another poll aggregator) was all the rage.] One can aggregate polls and other information using hours upon hours of analysis, or one can just trust the markets to get essentially equally good results with little effort.
–
The polls are facts. Prediction markets are meta to facts. Prediction markets are intelligence tools. Let's compare them with similar intelligence tools.
–
Lance Fortnow's post attracted an interesting comment from one of his readers:
[…] to provide an exciting collection of political and other prediction markets.
These markets are as much a "prediction" tool as a wind vane or outdoor thermometer is. They moved up and down according to the daily trends, with very little insight into the longer-term phenomena underlying them.
When the weather was hot (Palin's nomination announcement), the market swung widely towards McCain, while ignoring the cold front on the way here (the economic recession + Palin's inexperience).
The value of a weather forecast is in telling us things we didn't know. We don't need to trade securities to believe that if McCain is closing in the polls then his chances of winning are higher (duh!), which is what the markets did. We need sophisticated prediction mechanisms to tell us how the worsening economic conditions, the war in Iraq and Palin's ineptitude (which in pre-Couric days wasn't as well established) will impact this election, today's polls be damned.
Looking at the actions of the Republican team, who were trying to read past the daily trend all the way to November 4th, it is clear that they thought all along they were losing by a fair margin. Because of this they chose the moderate, maverick McCain, went for the Palin hail-mary fumble^H^H^H^H^H pass, and made the put-the-campaign-on-hold move.
A full two weeks before the election the McCain team concluded the election was unwinnable, while the electoral college market was still giving 25-35% odds to McCain.
–
As highlighted in bold, the commenter says two things:
- The prediction markets are just following the polls.
- The prediction markets have a minimal societal value.
–
My replies to his/her points:
- That's not the whole truth. The polls are just a set of facts, whereas the prediction markets are intelligence tools that aggregate both facts and expertise. The commenter picks a simple situation (the 2008 US presidential election) where, indeed, anybody reading the latest polls (highly favorable to Barack Obama) could figure out by himself/herself what the outcome would be (provided the polls didn't screw it up).
- That's true in simple situations, but it is wrong in complicated situations (such as the 2008 Democratic primary).
–
The social utility of the prediction markets will become clearer to people once we:
- Highlight the complicated situations;
- Code the mass media's analysis of those complicated situations, and compare that with the prediction markets.
–
Prediction markets feed on facts and expertise.
Via Yahoo! research scientist David Pennock (of Odd Head and YooPick), here is the dear honorable Duncan Watts:
In part because of disappointing findings such as this, an increasingly popular substitute for expert opinions are so-called "prediction markets," in which individuals buy and sell contracts on various outcomes, such as football game point spreads or presidential elections. The market prices for these contracts then effectively aggregate the knowledge and judgment of the many into a single prediction, which often turns out to be more accurate than all but the best individual guesses.
But even if these markets do perform better than experts, they don't necessarily do a good enough job to rely on. Recently, my colleagues have started tracking the performance of one popular prediction market at forecasting the outcome of weekly NFL games. So far, what they're finding is that the market predictions are better than the simple rule of always betting on the home team, but only slightly so – which, oddly, is very similar to what Tetlock found regarding his experts. Some outcomes, in other words, and possibly the outcomes we care about the most, simply aren't "predictable" in the way we would like.
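To show what such a comparison looks like in practice, here is a hedged Python sketch that scores a market's game-by-game home-win probabilities against the "always bet on the home team" baseline, using hit rate and the Brier score. The game records are invented for illustration; they are not the data Watts's colleagues collected.

```python
# Each record: (market probability that the home team wins, 1 if the home team won, else 0).
# These numbers are invented for illustration only.
GAMES = [
    (0.72, 1), (0.55, 0), (0.61, 1), (0.48, 0),
    (0.80, 1), (0.58, 1), (0.52, 0), (0.66, 1),
]


def market_hit_rate(records):
    # The market is "right" when it puts more than 50% on the team that actually won.
    hits = sum(1 for prob, won in records if (prob > 0.5) == (won == 1))
    return hits / len(records)


def home_team_hit_rate(records):
    # The baseline rule always predicts a home-team win.
    return sum(won for _, won in records) / len(records)


def brier_score(records):
    # Mean squared error of the probabilistic forecast (lower is better).
    return sum((prob - won) ** 2 for prob, won in records) / len(records)


if __name__ == "__main__":
    print("market hit rate:", market_hit_rate(GAMES))
    print("home-team rule hit rate:", home_team_hit_rate(GAMES))
    print("market Brier score:", round(brier_score(GAMES), 3))
```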
–
- Prediction markets are not "a substitute for expert opinions". They are a substitute for the averaged probabilistic predictions of a large group of experts polled the traditional way (by phone or by e-mail). In prediction markets, traders (who are not experts, most of the time) collect and aggregate facts and expertise at a lower cost than a poll or survey of experts.
- In the research cited by Duncan Watts, the prediction markets are slightly more accurate than the competing forecasting mechanism. Well, that's something we are used to.
- What Duncan Watts doesn't say is that prediction markets integrate facts and expertise faster than the group of experts polled by his research colleagues, for the very crude reason that it takes a certain time to survey a group of experts (be it by e-mail or by phone).
–
If I can count, that's three reasons why prediction markets can bring in business value:
- lower cost;
- better accuracy (relatively, and overall);
- velocity.
That said, it should be repeated that prediction markets feed on facts and expertise, so the experts remain indispensable in the general forecasting process.
No facts (e.g., political polls) -> no prediction markets.
No experts (e.g., NFL prognosticators) -> no prediction markets.
–
Are they afraid?
Bo Cowgill and Midas Oracle are the only media to have published about the Lee–Moretti paper. We are awaiting insightful takes from the following prediction market bloggers:
– Freakonomics @ New York Times
– Overcoming Bias – ("the future of humanity")
– Odd Head
– Computational Complexity
– Caveat Bettor
– Mike Linksvayer Blog
– NewsFutures Blog
– Inkling Markets Blog
– Consensus Point Blog
– Xpree Blog
– George Tziralis Blog
– Chris Hibbert Blog
– Jason Ruspini Blog
– John Delaney Blog
– James Surowiecki Blog @ New Yorker
– Felix Salmon @ Portfolio – Market Movers
– Zubin Jelveh @ Portfolio – Odd Numbers
If you are a reader of one of the blogs listed above, do e-mail their owners to demand that they feature a piece on the Lee–Moretti paper.
–
Learning in Investment Decisions: Evidence from Prediction Markets and Polls – (PDF file) – David S. Lee and Enrico Moretti – 2008-12-XX
In this paper, we explore how polls and prediction markets interact in the context of the 2008 U.S. Presidential election. We begin by presenting some evidence on the relative predictive power of polls and prediction markets. If almost all of the information that is relevant for predicting electoral outcomes is not captured in polling, then there is little reason to believe that prediction market prices should co-move with contemporaneous polling. If, at the other extreme, there is no useful information beyond what is already summarized by the current polls, then market prices should react to new polling information in a particular way. Using both a random walk and a simple autoregressive model, we find that the latter view appears more consistent with the data. Rather than anticipating significant changes in voter sentiment, the market price appears to be reacting to the release of the polling information.
We then outline and test a more formal model of investor learning. In the model, investors have a prior on the probability of victory of each candidate, and in each period they update this probability after receiving a noisy signal in the form of a poll. This Bayesian model indicates that the market price should be a function of the prior and each of the available signals, with weights reflecting their relative precision. It also indicates that more precise polls (i.e. polls with larger sample size) and earlier polls should have more effect on market prices, everything else constant. The empirical evidence is generally, although not completely, supportive of the predictions of the Bayesian model.
–
Prediction markets react to polls.
Learning in Investment Decisions: Evidence from Prediction Markets and Polls – (PDF file) – David S. Lee and Enrico Moretti – 2008-12-XX
In this paper, we explore how polls and prediction markets interact in the context of the 2008 U.S. Presidential election. We begin by presenting some evidence on the relative predictive power of polls and prediction markets. If almost all of the information that is relevant for predicting electoral outcomes is not captured in polling, then there is little reason to believe that prediction market prices should co-move with contemporaneous polling. If, at the other extreme, there is no useful information beyond what is already summarized by the current polls, then market prices should react to new polling information in a particular way. Using both a random walk and a simple autoregressive model, we find that the latter view appears more consistent with the data. Rather than anticipating significant changes in voter sentiment, the market price appears to be reacting to the release of the polling information.
We then outline and test a more formal model of investor learning. In the model, investors have a prior on the probability of victory of each candidate, and in each period they update this probability after receiving a noisy signal in the form of a poll. This Bayesian model indicates that the market price should be a function of the prior and each of the available signals, with weights reflecting their relative precision. It also indicates that more precise polls (i.e. polls with larger sample size) and earlier polls should have more effect on market prices, everything else constant. The empirical evidence is generally, although not completely, supportive of the predictions of the Bayesian model.
–
Are prediction markets useful to you?
It's "pretty clear" that the prediction markets on political elections aggregate information from the polls, and from the political experts.
–
Previously: #1 – #2 – #3 – #4 – #5 – #6
–
It's "pretty clear" that:
- InTrade has been over-selling the predictive power of its prediction markets.
- The prediction markets are information aggregation systems, not magical tools.
- The main benefit of a prediction market is to express an aggregated expected probability. Most of the time, this is of low utility.
- In complicated situations, this aggregation will contrast well with poor reporting. In these instances, the prediction market is a useful source of information.
–