Manipulation can affect prices

For the last two weeks a very interesting manipulation has been going on in Intrade’s “Hillary Clinton for President” contract.

1. The contract had been trading between 23 and 26 all year. It has consistently been about half the price of the “Hillary to get nominated” contract price. This ratio implies that, conditional on nominating Hillary, the Democrats have a 50 percent chance of winning the Presidency.

2. Comparing this with the unconditional probability of a Democratic victory (about 56 percent throughout 2007) suggests that Hillary is a slightly weaker general election candidate than the alternatives (Obama, mainly). [Note I say “suggests” because the comparison of conditional probabilities implies a correlation, but not necessarily that a Clinton nomination would cause a better outcome for the GOP. For more see the fifth question in this paper].

3. Around May 12, someone started buying “Hillary for President” pretty heavily, driving the price up to 40. This price is clearly ridiculous for two reasons:

3a. You could sell the President contracts of Hillary, Obama, Gore, and Edwards for a combined 69 (40+17+8+4) and buy the “Democrat to win” contract for 56.

3b. Since there was no movement in the nomination contract, the conditional probability of a Hillary win was now a ridiculous 40/52 = 77%, while the conditional probability of a win with a "Not Hillary" nominee was (56 - 40)/(100 - 52) = 16/48 = 33%. (A quick numerical sketch of these checks, and of the cost estimate in point 6, appears at the end of this post.)

4. Unlike past manipulation attempts, this manipulator isn’t just dumping a ton of money in to move the price once. He (or she) is moving the price, and then providing support to keep the price high. Note that the price stayed at 40 for about a week (on higher than normal volume).

5. I mentioned the manipulation at the end of my talk at the Palm Desert prediction markets conference, figuring that there was no surer way to get a $100 bill picked up than to tell that crowd about it. Someone emailed Greg Mankiw and he blogged about it the next day. (Justin and I also just tipped off Tyler Cowen). Since then there has been some downward price pressure, but the manipulator isn’t throwing in the towel. He/she keeps replenishing the bid side of the order book, albeit giving ground in the process.

6. By my calculation, the manipulator has spent about $10k to push the Hillary contract up around 12 pts on average for 2 weeks, buying about 8,500 contracts in the process. [I’m assuming 26 is fair value and just summing up volume*(price – 26)].

7. So what do we learn from this?

7a. Manipulation doesn't have to be as ham-fisted as the 2004 Bush reelection contract manipulation.

7b. The manipulators are getting smarter. This manipulator was smarter in one sense by providing price support after the fact. But of course, he/she shouldn’t have pushed the price up to such an obviously ridiculous level (and should have bought and sold other contracts to keep the pricing relationships consistent). The same mistake probably won’t be made next time.

7c. By prediction markets standards, manipulation is expensive. But by political spending standards, manipulation could be reasonably cheap. That said, I can find only one media mention of the inflated Hillary price. $10k for one blog mention probably isn’t great value for money, but the Intrade prices get cited a lot these days, so the manipulator may just have been unlucky.

7d. None of this disputes Hanson and Oprea’s point that, if anticipated, manipulation could increase average prediction market accuracy. In their model, traders all have rational expectations about how much manipulation to expect. In the real world, they may need some help (hence this post).

7e. Although the Hillary price is down to 34.5 (bid-ask midpoint at time of writing), there are about 500 contracts bid above 33, so there is still plenty of free money there if you want it.
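
For readers who want to reproduce the arithmetic in points 3a, 3b, and 6, here is a minimal sketch in plain Python. Prices are in Intrade points; the 26 fair-value figure is the same assumption used in point 6, and the $0.10-per-point contract size is the standard Intrade convention rather than anything stated above.

```python
# Point 3a: sum of the individual "X for President" contracts vs. "Democrat to win".
president_prices = {"Clinton": 40, "Obama": 17, "Gore": 8, "Edwards": 4}
democrat_to_win = 56
basket = sum(president_prices.values())
print(f"Sell the four candidates for {basket}, buy the party contract for {democrat_to_win}: "
      f"{basket - democrat_to_win} points of free money per set of positions.")

# Point 3b: implied conditional probabilities of winning the general election.
clinton_nomination = 52          # "Hillary to get nominated" price
p_win_if_clinton = president_prices["Clinton"] / clinton_nomination
p_win_if_not_clinton = (democrat_to_win - president_prices["Clinton"]) / (100 - clinton_nomination)
print(f"P(win | Clinton nominated)     = {p_win_if_clinton:.0%}")
print(f"P(win | Clinton not nominated) = {p_win_if_not_clinton:.0%}")

# Point 6: rough cost of the manipulation, assuming 26 is fair value.
def manipulation_cost(trades, fair_value=26):
    """trades is a list of (volume, price) pairs; returns sum of volume * (price - fair_value) in points."""
    return sum(volume * (price - fair_value) for volume, price in trades)

# 8,500 contracts bought roughly 12 points above fair value (i.e. around 38), at $0.10 per point:
print(f"Rough cost estimate: ${manipulation_cost([(8500, 38)]) * 0.10:,.0f}")
```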

[Chart: Intrade "Hillary Clinton for President" contract price, 2007, with the manipulation period visible]

NPD releases April sales data, prediction market and analyst compared


Last month was the first month the simExchange (the free-to-play video game stock market game) traded monthly hybrid futures contracts to predict NPD US video game sales data. In this trial run, trading on the simExchange appeared to be more accurate than the expert predictions. To quickly review, the following table shows actual sales as reported by NPD Group, the prediction from trading on the simExchange, the error of the simExchange's forecast, the prediction by the leading Wall Street firm Wedbush Morgan, and the error of Wedbush Morgan's forecast:

US Hardware March 2007

| Console | Actual Sales* | The simExchange** | Error | Wedbush Morgan*** | Error |
|---|---|---|---|---|---|
| Nintendo DS | 508K | 492.8K | -2.99% | 250K | -50.79% |
| Nintendo Wii | 259K | 385.0K | +48.65% | 400K | +54.44% |
| Microsoft Xbox 360 | 199K | 231.0K | +16.08% | 250K | +25.63% |
| Sony PlayStation Portable | 180K | 180.5K | +0.28% | 210K | +16.67% |
| Sony PlayStation 3 | 130K | 144.0K | +10.77% | 165K | +26.92% |
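
For reference, the Error columns are simply each forecast's percentage deviation from the actual NPD figure. A minimal sketch of that calculation, using the March hardware numbers above (units in thousands):

```python
# Percentage error of each forecast relative to actual NPD sales (March 2007, thousands of units).
actual = {"Nintendo DS": 508, "Nintendo Wii": 259, "Xbox 360": 199, "PSP": 180, "PlayStation 3": 130}
simexchange = {"Nintendo DS": 492.8, "Nintendo Wii": 385.0, "Xbox 360": 231.0, "PSP": 180.5, "PlayStation 3": 144.0}

for console, sold in actual.items():
    error = (simexchange[console] - sold) / sold        # e.g. Nintendo DS: (492.8 - 508) / 508 = -2.99%
    print(f"{console:15s} forecast {simexchange[console]:6.1f}K vs actual {sold}K -> {error:+.2%}")
```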

NPD has released its findings for US April 2007 video game sales. The following charts compare actual sales numbers determined by the NPD Group with forecasts from market trading on the simExchange and predictions by leading Wall Street analyst Michael Pachter of Wedbush Morgan.

US Hardware April 2007

| Rank | Console | Actual Sales* | The simExchange** | Error | Wedbush Morgan**** | Error |
|---|---|---|---|---|---|---|
| 1. | Nintendo DS | 471K | 548.9K | +16.54% | 450K | -4.46% |
| 2. | Nintendo Wii | 360K | 319.5K | -11.25% | 300K | -16.67% |
| 3. | Sony PlayStation 2 | 194K | Not listed | N/A | 250K | +29.87% |
| 4. | Sony PlayStation Portable | 183K | 190.4K | +4.04% | 200K | +9.29% |
| 5. | Microsoft Xbox 360 | 174K | 194.8K | +11.95% | 175K | +0.57% |
| 6. | Nintendo GameBoy Advance | 84K | Not listed | N/A | N/A | N/A |
| 7. | Sony PlayStation 3 | 82K | 107.3K | +30.85% | 100K | +21.95% |
| 8. | Nintendo GameCube | 13K | Not listed | N/A | N/A | N/A |

The simExchange did not trade monthly hybrid futures for game software this month. However, NPD's Top 10 is still relevant for comparing how trading on the simExchange is forecasting lifetime global sales of the games.

| Rank | Title | Publisher | April Sales* | Lifetime Forecast** |
|---|---|---|---|---|
| 1. | Pokemon Diamond (DS) | Nintendo | 1.045M | 18.21M |
| 2. | Pokemon Pearl (DS) | Nintendo | 712K | 18.21M |
| 3. | Super Paper Mario (Wii) | Nintendo | 352K | 1.83M |
| 4. | Wii Play w/ remote (Wii) | Nintendo | 249K | 4.90M |
| 5. | Guitar Hero 2 w/ guitar (Xbox 360) | Activision | 197K | 1.30M |
| 6. | Guitar Hero 2 w/ guitar (PS2) | Activision | 142K | Not listed |
| 7. | Spider-Man 3 (Xbox 360) | Activision | 117K | 1.07M |
| 8. | Spider-Man 3 (PS2) | Activision | 105K | 1.62M |
| 9. | God of War II (PS2) | Sony | 101K | 1.92M |
| 10. | MLB '07: The Show (PS2) | Sony | 79K | Not listed |

Although the data are still limited, initial predictions on the simExchange video game prediction market appear to be relatively accurate (compared with traditional forecasters) and, in some cases, absolutely accurate (close to the actual result). The prediction market outperformed the analyst on every prediction in March, and the two roughly split the results in April.

Predictions on the simExchange should become more accurate over time, as more accurate players are rewarded with more virtual currency for their accuracy (enabling them to back more predictions) and less accurate players lose virtual currency (limiting their ability to back more predictions).

Joining and playing the simExchange is completely free and does not involve any real money. To play, sign up here.

* NPD Group sales data
** The simExchange trading data
*** GameDaily Biz, April 13, 2007
**** GameDaily Biz, May 14, 2007

Cross posted from NPD releases April sales data, prediction market and analyst compared on the simExchange Official Blog.

Previously: Accounting sales of digitally downloaded games, Next lesson: so the “futures” aren’t really futures, So what exactly are these “futures?”, The structure of the simExchange stocks and An invitation to join the simExchange beta.

Critical Mass Matters.


An interesting article on the Fool notes that Yahoo! is exiting the auctions market. Even more interesting, however, is the testament to how even the biggest brands (Yahoo!), with even the most salient internet experiences (auctions), can fail by not achieving critical mass.

It's hard to believe that across all of Yahoo! Sports Cards and Memorabilia auctions (186,000 listings) there are only 326 current bids (0.2%). Given those kinds of numbers, I'm surprised they waited this long to get out.

Now look at some of the US prediction markets:

Inkling
WSX Exchange
HedgeStreet

Just looking at "the action", their respective *active* user bases seem to be in the hundreds or low thousands. All seem to suffer from the same basic malaise: no thriving, critical-mass user base.

Lesson du jour: get critical mass!

A lesson in stock trading mechanics


A simExchange player (jayen) recently asked how prices adjust in the real stock market compared to how trading on the simExchange works. This question came from a special event on April 26 following Ubisoft Entertainment's earnings announcement. Ubisoft had announced that it had sold 950,000 copies of Red Steel when the stock was forecasting only 478,600 copies (47.86 DKP). This resulted in a free arbitrage opportunity in which anyone buying the stock would be locking in guaranteed gains.

At the same time, anyone selling the stock at 47.86 DKP would be giving away money. Naturally, no rational person would sell at 47.86 DKP when the news already showed the stock should be worth over 95.00 DKP. Unfortunately, the simExchange relies on NPC market makers (NPC is a gaming term meaning "non-player character") that do not take news into account when they make markets, so prices would not immediately reflect the news unless traders bought every automated ask order up to 95 DKP.
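
As a rough illustration of the arbitrage: the numbers in this post imply that one DKP of stock price corresponds to 10,000 copies sold (47.86 DKP for a 478,600-copy forecast), so the announced sales imply a price near 95 DKP. A small sketch of that conversion; the COPIES_PER_DKP constant is inferred from the figures above, not from any official simExchange documentation.

```python
# Inferred from the post: 47.86 DKP corresponded to a forecast of 478,600 copies.
COPIES_PER_DKP = 10_000   # assumption based on the numbers above

def implied_price(copies_sold: int) -> float:
    """Stock price (in DKP) implied by an announced sales figure."""
    return copies_sold / COPIES_PER_DKP

announced = 950_000        # Red Steel sales announced by Ubisoft
last_traded = 47.86        # DKP, before the news was priced in
fair = implied_price(announced)
if fair > last_traded:
    print(f"Fair value ~{fair:.2f} DKP vs. market at {last_traded} DKP: "
          f"buying below {fair:.2f} locks in roughly {fair - last_traded:.2f} DKP per share.")
```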

Remember, stock markets operate as an auction system in which each bidder and seller has a price at which they are willing to buy or sell. When there is a match (a buyer and a seller willing to transact at the same price), a trade is filled. Because of these mechanics, a stock's price can easily jump from one trade to the next: the last traded price does not directly constrain what price buyers and sellers can trade at next.

Following large news events, such as earnings releases, you will often see a jump in the stock price. A stock may have just traded at $100. News is released that shows the company is growing much faster than previously believed. The market makers now believe the stock is worth around $120 a share. They don't just keep posting sell orders around $100 and let buyers gradually push the price of the stock to $120; they immediately post that they are willing to sell at no less than $120 a share. Buyers who believe the stock is worth more than $120 a share will immediately adjust their bid orders to $120, since they know they are not going to get the shares at $100. The stock price jumps from $100 to $120 with no trades at any price in between.

As previously mentioned, the NPC market makers on the simExchange are not aware of news that should adjust their bid and ask prices. It is unrealistic for them to keep posting sell orders below 95 DKP if the news already shows the stock should be worth 95 DKP. As a result, the bid and ask orders were manually adjusted to compensate for this new information, as would be done in the real stock market.

It is easiest to notice and understand this by viewing what are called Level II quotes (the advanced trading mode on the simExchange). This view lets you see the order book: the collection of orders people have posted as offers to buy or sell. A trade only fills when someone submits an order that matches an order in the order book. If there are no sell orders at 50 DKP, then you cannot buy at 50 DKP. You can always submit an order to buy at 50 DKP and wait for a seller willing to take your offer; however, if there are no orders to sell below 90 DKP, then 90 DKP is the lowest price at which you can buy immediately. This system is often referred to as a "double call auction."
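
To make the order-book mechanics concrete, here is a toy, self-contained matching sketch (a simplification for illustration, not the simExchange's actual engine): an incoming order fills only if it reaches the best resting price on the other side, which is why the last traded price can jump with no trades in between.

```python
import heapq

class OrderBook:
    """Toy limit order book: incoming orders fill against the best opposite price, else they rest."""
    def __init__(self):
        self.bids = []   # max-heap via negated prices
        self.asks = []   # min-heap

    def submit(self, side, price, qty):
        fills = []
        if side == "buy":
            while qty and self.asks and self.asks[0][0] <= price:
                ask_price, ask_qty = heapq.heappop(self.asks)
                traded = min(qty, ask_qty)
                fills.append((ask_price, traded))            # trade happens at the resting ask
                qty -= traded
                if ask_qty > traded:
                    heapq.heappush(self.asks, (ask_price, ask_qty - traded))
            if qty:
                heapq.heappush(self.bids, (-price, qty))     # unfilled remainder rests on the book
        else:
            while qty and self.bids and -self.bids[0][0] >= price:
                neg_bid, bid_qty = heapq.heappop(self.bids)
                traded = min(qty, bid_qty)
                fills.append((-neg_bid, traded))             # trade happens at the resting bid
                qty -= traded
                if bid_qty > traded:
                    heapq.heappush(self.bids, (neg_bid, bid_qty - traded))
            if qty:
                heapq.heappush(self.asks, (price, qty))
        return fills

book = OrderBook()
book.submit("sell", 90.0, 10)        # best ask is now 90 DKP
print(book.submit("buy", 50.0, 5))   # [] -- no seller below 90, so the 50 DKP bid just rests
print(book.submit("buy", 95.0, 5))   # [(90.0, 5)] -- fills immediately at the best ask
```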

Cross posted from A lesson in stock trading mechanics on the simExchange Official Blog.

Previously: Next lesson: so the “futures” aren’t really futures, So what exactly are these “futures?”, The structure of the simExchange stocks and An invitation to join the simExchange beta.

Recession probability index rises to 16.9%


The Bureau of Economic Analysis reported today that U.S. real GDP grew at an annual rate of 1.3% in the first quarter of 2007, moving our recession probability index up to 16.9%. This post provides some background on how that index is constructed and what the latest move up might signify.

What sort of GDP growth do we typically see during a recession? It is easy enough to answer this question just by selecting those postwar quarters that the National Bureau of Economic Research (NBER) has determined were characterized by economic recession and summarizing the probability distribution of GDP growth over those quarters. A plot of this density, estimated using nonparametric kernel methods, is provided in the following figure (the figures here are similar to those in a paper I wrote with UC Riverside Professor Marcelle Chauvet, which appeared last year in Nonlinear Time Series Analysis of Business Cycles). The horizontal axis of this figure corresponds to a possible rate of GDP growth (quoted at an annual rate) for a given quarter, while the height of the curve on the vertical axis corresponds to the probability of observing GDP growth of that magnitude when the economy is in a recession. You can see from the graph that the quarters in which the NBER says the U.S. was in a recession are often, though far from always, characterized by negative real GDP growth. Of the 45 quarters in which the NBER says the U.S. was in recession, 19 were actually characterized by at least some growth of real GDP.

[Figure: estimated density of quarterly real GDP growth (annual rate) during NBER-dated recessions]

One can also calculate, as in the blue curve below, the corresponding characterization of expansion quarters. Again, these usually show positive GDP growth, though 10 of the postwar quarters that are characterized by NBER as part of an expansion exhibited negative real GDP growth.

[Figure: estimated density of quarterly real GDP growth during NBER-dated expansions (blue curve)]

The observed data on GDP growth can be thought of as a mixture of these two distributions. Historically, about 20% of the postwar U.S. quarters are characterized as recession and 80% as expansion. If one multiplies the recession density in the first figure by 0.2, one arrives at the red curve in the figure below. Multiplying the expansion density (second figure above) by 0.8, one arrives at the blue curve in the figure below. If the two products (red and blue curves) are added together, the result is the overall density for GDP growth coming from the combined contribution of expansion and recession observations. This mixture is represented by the yellow curve in the figure below.

[Figure: recession density scaled by 0.2 (red), expansion density scaled by 0.8 (blue), and their sum, the unconditional density of GDP growth (yellow)]

It is clear that if in a particular quarter one observes a very low value of GDP growth, such as -6%, that suggests very strongly that the economy was in recession that quarter, because for such a value of GDP growth the recession distribution (red curve) is the dominant part of the mixture distribution (yellow curve). Likewise, a very high value such as +6% almost surely came from the contribution of expansions to the distribution. Intuitively, one would think that the ratio of the height of the recession contribution (the red curve) to the height of the mixture distribution (the yellow curve) corresponds to the probability that a quarter with that value of GDP growth would have been characterized by the NBER as being in a recession. Actually, this is not just intuitively sensible; it turns out to be an exact application of Bayes' Law. The height of the red curve measures the joint probability of observing GDP growth of a certain magnitude and the occurrence of a recession, whereas the height of the yellow curve measures the unconditional probability of observing the indicated level of GDP growth. The ratio between the two is therefore the conditional probability of a recession given an observed value of GDP growth. This ratio is plotted as the red curve in the figure below.
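
A minimal sketch of the calculation just described, using scipy's kernel density estimator. The growth series below are made-up placeholders purely for illustration; the figures in this post are based on the actual postwar data, not on this code.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Placeholder data: annualized real GDP growth rates, split by NBER phase (illustrative only).
recession_growth = np.array([-6.1, -3.2, -1.5, 0.4, -4.8, 1.1, -2.0, -0.7])
expansion_growth = np.array([3.5, 4.2, 2.8, 6.1, 1.9, 5.0, 3.1, 2.4, 4.7, 0.8])

f_rec = gaussian_kde(recession_growth)   # density of growth during recessions (red curve)
f_exp = gaussian_kde(expansion_growth)   # density of growth during expansions (blue curve)
p_rec, p_exp = 0.2, 0.8                  # unconditional shares of recession/expansion quarters

def prob_recession(growth):
    """Bayes' Law: P(recession | growth) = 0.2*f_rec(g) / (0.2*f_rec(g) + 0.8*f_exp(g))."""
    joint_rec = p_rec * f_rec(growth)[0]                 # height of the red curve
    mixture = joint_rec + p_exp * f_exp(growth)[0]       # height of the yellow curve
    return joint_rec / mixture

for g in (-6.0, 1.3, 6.0):
    print(f"GDP growth {g:+.1f}% -> P(recession) = {prob_recession(g):.2f}")
```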

[Figure: probability that a quarter was in recession, as a function of that quarter's GDP growth (red curve)]

Such an inference strategy seems quite reasonable and robust, but unfortunately it is not particularly useful: for most of the values one would be interested in, the implication from Bayes' Law is that it is hard to say, from just one quarter's value for GDP growth, what is going on. However, there is a second feature of recessions that is extremely useful to exploit: if the economy was in an expansion last quarter, there is a 95% chance it will continue to be in expansion this quarter, whereas if it was in a recession last quarter, there is a 75% chance the recession will persist this quarter. Thus suppose, for example, that we had observed -10% GDP growth last quarter, which would have convinced us that the economy was almost surely in a recession last quarter. Before we saw this quarter's GDP number, we would have thought in that case that there was a 0.75 probability of the recession continuing into the current quarter. In this situation, to use Bayes' Law to form an inference about the current quarter given both the current and previous quarters' GDP, we would weight the mixtures not by 0.2 and 0.8 (the unconditional probabilities of this quarter being in recession or expansion, respectively), but rather by magnitudes closer to 0.75 and 0.25 (the probabilities of being in recession or expansion this period conditional on being in recession the previous period). The ratio of the height of the resulting new red curve to the resulting new yellow curve could then be used to calculate the conditional probability of a recession in quarter t based on observations of the values of GDP for both quarters t and t-1. Starting from a position of complete ignorance at the start of the sample, we could apply this method sequentially to each observation to form a guess about whether the economy was in a recession at each date, given not just that quarter's GDP growth but all the data observed up to that point.

One can also use the same principle, which again is nothing more than Bayes' Law, working backwards in time: if this quarter we see GDP growth of -6%, that means we are very likely in a recession this quarter, and given the persistence of recessions, that raises the likelihood that a recession actually began the period before. The farther back one looks in time, the better the inference one can arrive at; seeing this quarter's GDP numbers helps me make a much better guess about whether the economy might have been in recession the previous quarter. We then work through the data iteratively in both directions: start with a state of complete ignorance about the sample, work through each date to form an inference about the current quarter given all the data up to that date, and then work backwards from the final value to form an inference about each quarter based on GDP for the entire sample.
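
Here is a hedged sketch of the recursive forward and backward passes just described, building on the f_rec and f_exp densities from the previous snippet. The transition probabilities are the ones quoted above (0.95 expansion persistence, 0.75 recession persistence); everything else is illustrative, not Professor Hamilton's actual code. The filtered output is the inference using data through quarter t; the backward pass uses the entire sample, and the one-quarter-delayed index discussed below sits in between, using one extra quarter of data.

```python
import numpy as np

# Transition probabilities from the text: P(rec_t | rec_{t-1}) = 0.75, P(exp_t | exp_{t-1}) = 0.95.
P = np.array([[0.75, 0.25],    # row 0: from recession to (recession, expansion)
              [0.05, 0.95]])   # row 1: from expansion to (recession, expansion)

def filter_and_smooth(growth_series, f_rec, f_exp):
    """Forward filter P(state_t | g_1..g_t), then backward smooth P(state_t | full sample)."""
    T = len(growth_series)
    lik = np.column_stack([f_rec(growth_series), f_exp(growth_series)])   # density of g_t in each state

    filtered = np.zeros((T, 2))
    prior = np.array([0.2, 0.8])              # "complete ignorance": the unconditional shares
    for t in range(T):
        predicted = prior @ P                 # P(state_t | data up to t-1)
        post = predicted * lik[t]
        filtered[t] = post / post.sum()       # Bayes' Law update with quarter t's GDP growth
        prior = filtered[t]

    smoothed = filtered.copy()
    for t in range(T - 2, -1, -1):            # work backwards through the sample
        predicted = filtered[t] @ P
        smoothed[t] = filtered[t] * (P @ (smoothed[t + 1] / predicted))
    return filtered, smoothed

# Example (using f_rec and f_exp from the previous snippet):
# filtered, smoothed = filter_and_smooth([-1.0, -4.5, 0.3, 3.2, 4.1], f_rec, f_exp)
```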

All this has been described here as if we took the properties of recessions and expansions as determined by the NBER as given. However, another thing one can do with this approach is to calculate the probability law for observed GDP growth itself, without conditioning at all on the NBER dates. Once we have done that calculation, we can infer the parameters, such as how long recessions usually last and how severe they are in terms of GDP growth, directly from the GDP data alone, using the principle of maximum likelihood estimation. It is interesting that when we do this, we arrive at estimates of the parameters that are in fact very similar to the ones obtained using the NBER dates directly.
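
If you want to estimate the regime parameters directly from GDP growth by maximum likelihood, as described in the previous paragraph, something along the following lines should work with the statsmodels package. This is a sketch under stated assumptions: it assumes a pandas Series called gdp_growth is already loaded, and it uses statsmodels' Markov-switching regression class rather than the exact code behind the figures here.

```python
import statsmodels.api as sm

# gdp_growth: a pandas Series of quarterly annualized real GDP growth rates (assumed already loaded).
model = sm.tsa.MarkovRegression(gdp_growth, k_regimes=2, trend="c", switching_variance=True)
result = model.fit()

print(result.summary())   # estimated regime means, variances, and transition probabilities
# Check result.params to see which regime has the lower mean; that regime plays the role of "recession".
recession_prob = result.smoothed_marginal_probabilities[0]
```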

What's the point of this, if all we do is use GDP to deduce what the NBER is eventually going to tell us anyway? The issue is that the NBER typically does not make its announcements until long after the fact. For example, the most recent release from the NBER Business Cycle Dating Committee was announced to the public in July 2003. Unfortunately, what the NBER announced in July 2003 was that the recession had actually ended in November 2001: they were telling us the situation a year and a half after it had happened.

Waiting so long to make an announcement certainly has some benefits, allowing time for data to be revised and for enough ex-post data to accumulate to make the inference sufficiently accurate. However, my research with the algorithm sketched above suggests that it performs quite satisfactorily if we just wait for one quarter's worth of additional data. Thus, for example, with the advance 2007:Q1 GDP data just released, we form an inference about whether a recession might have started in 2006:Q4. The graph below shows how well this one-quarter-delayed inference would have performed historically. Shaded areas denote the dates of NBER recessions, which were not used in any way in constructing the index. Note, moreover, that the series is entirely real-time in construction: the value for any date is based solely on information as it was reported in the advance GDP estimates available one quarter after the indicated date.

[Figure: one-quarter-delayed, real-time recession probability index, with NBER recessions shaded]

Although the sluggish GDP growth rates of the past year have produced quite an obvious move up in the recession probability index, it is still far from the point at which we would conclude that a recession has likely started. At Econbrowser we will be following the procedure recommended in the research paper mentioned above: we will not declare that a recession has begun until the probability rises above 2/3, and once it begins, we will not declare it over until the probability falls back below 1/3.

So yes, the ongoing sluggish GDP growth has reached a point where we would worry about it, but no, it is not yet at the point where we would say that a recession has likely begun.

[James Hamilton is professor of economics at the University of California, San Diego. The above is cross-posted from Econbrowser].

Leading political indicators


American politics does not suffer from a shortage of polls. Zogby. Gallup. Rasmussen. SurveyUSA. Mason-Dixon. Polimetrix… In an information-glutted world, what matters is not the supply of sources but the ability to glean trustworthy information from the larger swath of poor data.

Different polling organizations have different strengths and weaknesses. Some use "tight screens" to scope out likely voters; others simply sample registered voters, without making any attempt to tighten the survey base to "likely voters." Tight screening is especially crucial for gauging the true state of a primary, when committed base opinion can diverge significantly from that of less engaged moderate voters and, more importantly, can influence those moderates over time to converge toward the more partisan perspective. Some pollsters use human interviewers, although recently that has given way to IVR (Interactive Voice Response) polls, the kind where a computer talks to you and asks you to "press 1 if you will definitely support X, 2 if probably…"

I have found tight-screen IVR polling to be the most reliable. IVR not only has essentially no marginal cost, but it also eliminates the biases that result from respondents trying to give the most pleasant-sounding answer possible (the "sexy grad student effect" that exaggerated Kerry's margin by 15 points in Pennsylvania's 2004 exit polling, for example). IVR response options can also be randomly rotated from respondent to respondent to eliminate order biases (first and last responses in a list get exaggerated because they are at the forefront of a person's memory of the list, not because that is how he or she will vote).
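
As a toy illustration of the rotation idea (a sketch, not any pollster's actual system): shuffle the answer order independently for each respondent, so that over many calls no candidate systematically benefits from being read first or last.

```python
import random

candidates = ["Clinton", "Obama", "Edwards", "Gore"]

def prompt_order(respondent_id, seed=2007):
    """Return the candidate order read to one respondent, randomized per call but reproducible."""
    rng = random.Random(seed * 1_000_003 + respondent_id)
    order = candidates[:]
    rng.shuffle(order)
    return order

# Over many respondents, each candidate appears first (and last) about equally often.
print(prompt_order(1), prompt_order(2), prompt_order(3))
```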

The poster child of IVR tight-screen polling success is Scott Rasmussen's Rasmussen Reports. I have only tracked them over the last two election cycles (2004 and 2006), but considering that 2004 was a GOP wave and 2006 a Democratic wave, I think the data are sufficient to form a valid judgment. Rasmussen's track record is simply stupendous. It predicted 49 out of 50 states correctly in 2004, usually within two percentage points of the actual outcome. In 2006, Rasmussen achieved similarly impressive results, all the more impressive when you consider that most polling models tend to err in favor of one party or the other. ("Likely voter" models tend to favor Republicans, and registered-voter models tend to exaggerate Democratic strength.)

My other favorite sources include Gallup and Mason-Dixon. Gallup comes closer to the "registered voter" model than the tighter Rasmussen model, so Gallup usually lags tighter-screen polls. By election eve, however, the two models usually converge. Gallup's election-eve congressional generic vote is hands-down the best in the business. However, its numbers for party primaries have poor predictive value, because Gallup does not make much effort to hunt down likely voters.

Differing survey methods can yield very different results. Rasmussen has long shown a much closer Democratic nomination race than most established "registered voter" pollsters: most recently, it showed a 32-32 tie between Clinton and Obama, with Edwards wallowing 15 points behind. Gallup's latest numbers tightened drastically to a 31-26 race between Clinton and Obama (Gallup's numbers are also hard to compare with Rasmussen's because Gallup includes Gore).

Many smart Democrats, notably MyDD's Chris Bowers, believe that Gallup and others are mistakenly including lots of "low information voters" who simply lag the opinions and thought processes of more attuned Democratic partisans.

Now that the more establishmentarian polling firms are coming in line with Rasmussen's results, one can infer that the likely-voter / Chris Bowers theory has gotten the better of the argument.

A survey of pollsters wouldn't be complete without knowing which ones to stay away from. Stay away from Zogby and CNN polling. James Carville's and Stan Greenberg's DemocracyCorps polling outfit is not trustworthy either: for example, it doubled the percentage of black respondents in an October 2006 survey sample to bump the Democrats' generic advantage by 5 points and reinforce the Democratic narrative of a building wave.

Lastly, partisan pollsters in a competitive election season should always be taken with a grain of salt: they will use subtle heuristics to create the best impression possible for their party's candidates. Strategic Vision, a Republican outfit, deserves a three- or four-point handicap. Franklin Pierce generated a dubious Romney result for New Hampshire right after its lead pollster, Rich Killion, went to work for the Romney campaign. Such polls should be trusted only as a last resort.

For those of us who wish to divine movements in political futures, discerning trustworthy data from bad data is paramount. Poll-rigging is the high art of Washington, DC, and as any interest group (or candidate) knows, it is easier than easy to produce a poll that diverges wildly from reality if the heuristics are threatening enough.

(cross-posted from my blog, The Tradesports Political Maven)

There are three financial prediction exchanges in the tube.


There are three financial prediction exchanges (betting exchanges) in the tube in North America. All of them are in stealth mode.

Previous blog posts by Deep Throat:

  • Deep Throat on the idle Prediction Market Industry Association (PMIA)
  • IN-PLAY BETTING: BetFair is already compliant with the Gambling Commission’s first pointer.
  • Rumor Mill — Wednesday morning
  • Conference on Prediction Markets
  • How BetFair did treat its customers on the day that the BetFair Starting Price system crashed down
  • How BetFair markets are settled in the situation where their integrity team are unhappy about some aspect of the betting on that event
  • Who is behind the CFTC’s request?

Third E-mail to InTrade-TradeSports


Dan,

As you can see, I have greatly cut back my trading, as have many others. My desire is not to close my account. My dissatisfaction, as you label it, is not as immense as you make it out to be. I do not believe that anywhere in any of my emails I expressed a desire to close my account. My goal was to help you understand the perception of a high-volume trader.

If your final resolution to me is to suggest that I close my account, you are clearly sending a message to all as to how your company will continue to operate.

Good luck to you and the entire TEN with your company reorganization.

Todd

P.S. It was great to see that the exchange closing time was changed to 3am last night from 2am, even though there were no late events. Maybe I'm not all wrong, and what a simple fix that would have been weeks ago…

Previous: A Big Trader’s Open Letter to TradeSports-InTrade + Second E-mail to InTrade-TradeSports

Next: InTrade-TradeSports to Todd Griepenburg: GO TO HELL.

Dan Laffan to Todd: #1 – #2

Second E-mail to InTrade-TradeSports


Dear Dan,

It's a shame that TS/Intrade needs to hide behind this split-up as a reason not to solve the issues of management incompetence. As I read through your email, I laughed to myself many times at how your company takes a defensive stance on the issues rather than addressing them to give the user a friendlier experience. While I may not be 100% correct in all my opinions either, the people on my email forward list above generally agree with them. If you take the time to look at some of the names on that list, you will see that it represents millions of lots traded, along with other users who have attempted to contact you regarding these issues.

Even though your responses are short, and basically just repetitions of some generic line I can find somewhere in the website rules, I will take the time to respond to them.

1) Early closing of games. SMC/USF was not the first time. As I said to you, review the email chain between myself and live help. It needs to be explained to them on a weekly basis for the Big Monday college basketball games, or ANY game that runs late. You create the featured game schedule, so if you have a game that starts at midnight EST, I would think you would have enough common sense to set the proper exchange closing time.

You say you do not have a policy of closing early to generate more fees, when in actuality this is a blatant lie. The pause button had NEVER been used the way it is now. Your new fee structure has created an incentive to cover all in-game positions at game's end, and covering those positions is a loss of revenue for your company. Many, many times events are paused early. Before the new fee structure, I had never (in 7 months) seen you pause an over/under with 5 minutes to go in a game. Now it happens all the time. If the over is achieved while the game is still going on, the market gets paused to save a few cents. However, you don't pause a golfer in an event who has completed his round and is scored behind another golfer who has completed a round. But don't you see, TEN created this. Your fee schedule is wrong. Bottom line. Fees should be structured so there is no benefit for a winner to have to cover a position to save money, and no reason for your employees to be forced to pause a game with time remaining on the clock.

I was unable to get help because when an issue arises during your PEAK hours, no one is there to help. You are the only website with no real 'live' help available during peak hours. One person to serve the 150 or so people logged in during the a.m. EST hours is not suitable. Email, and Internet 'live' help that never comes online, is not practical in my opinion.

You noted my fees from my email. Maybe you should review them: I traded 508 total lots in that game, and expired (with no other choice but to expire them) 244. I was short 132, covered my position by buying 132, then went long 244. $24.40 was my forced expiry fee, because you did not give me a chance to trade out of my position during the last 1:00 of clock time and 5 minutes of elapsed time. So $9 in fees in that game is impossible. Once again TEN gets on the defensive. You should willingly refund all fees in that game: in-game, expiry, market maker, etc. You gave us no chance to trade during the only time in the second half when the game was close. I showed you my desire to cover by my trades on another website, so my expiry fee would have been nonexistent. The event traded 1,500 lots at most? Are those few dollars really that important to TEN? Doing the right thing should be what is important. It should be a policy that if TEN pauses or closes early, it is a fee-free event. But you are not user friendly. I'm still awaiting my refund, as are the other SMC/USF traders.

2) Your new fee structure – You are correct that it's not the only reason volume is suffering: no new deposit methods (while other sites have added many), market makers leaving events, etc., etc. Do I need to review them again?

You say trading at the extremes is desirable only for avoiding fees? Do you realize what you're saying? Trading at 97, my return is now .17, while it used to be .23. Do you realize what a difference that is? It used to be great to be able to take on someone else's low risk in an almost-decided event and make some money doing so. Now you have made that a lot less desirable.

And yes, avoiding fees is the pitfall of the structure you created.

Interesting that a high-volume discount is now being considered, with only a few days left for TEN as a whole.

3) I have also always been available to my staff members in my business via cell phone. But I always had people in each of my store locations who were authorized to make a decision. During your PEAK volume times, you have no management. Defend your position all you like, but it makes no sense at all. Overstaffed during low-volume times, a skeleton crew when your site is making its most revenue. Great business decision.

4) Grading decisions – Haste and incompetence. In order to pause a game at :00 like you do, someone must be watching. When the score from your 'source' differs from what they see, wouldn't it be worth an extra minute or two to find out why?

5) Financial contract grading – Intrade just has no desire to grade the contract fairly. The information is out there. Your grading is wrong, and at some point it will come back to bite you. You cannot pause a contract at a specified ending point (xx:00:00) and then grade it based on data from some random number of seconds later, while no one can trade on that data. Maybe your new management should think about the logic of that. We all know what can happen in a few seconds.

6) Market makers – I linked you to conversations with market makers where they openly say there are disagreements with TEN. But everything has changed from when I started here a year ago, and from when some of the long-time traders on my forward list started here. MMers now leave events during the game, they sometimes reduce their in-game size by up to 75%, and they change their in-game spreads. You are looking to have fair and accurate markets? Well, they don't exist here.

I hope you feel I represented your email fairly and intact, as you seemed concerned that I wouldn't. My goal here was not to make you look bad, but to make you understand what it's like on the other side of the computer screen. That is obviously a position from which no one at TEN has ever taken the time to view their business structure. Maybe you feel everything works great, but judging from the support for my ideas from huge-volume traders in many forums, not just the TEN forum, things are not running smoothly from our perspective.

I am aware of the split. I honestly feel bad for whichever company keeps the current management. Hopefully, whichever site gets new management will be more open to suggestions that would benefit customer satisfaction and the corporate bottom line. The current management does not seem to have the desire to create the industry leader that is so often referred to in statements from management. Only when your users/customers call you the industry leader will it be a reality.

Sincerely,
Todd Griepenburg

Previous: A Big Trader’s Open Letter to TradeSports-InTrade

Next: Third E-mail to InTrade-TradeSports + InTrade-TradeSports to Todd Griepenburg: GO TO HELL.

Dan Laffan to Todd: #1 – #2

XM-Sirius merger


So at Justin's and my suggestion, Intrade has just listed a contract on whether the XM-Sirius merger will close.

(We've been waiting for a nice, juicy, controversial merger like this ever since HP-Compaq.)

Interestingly, the market maker has it at 60 bid / 70 ask to close by June 2008, but if you look at the stock prices of XMSR and SIRI, they have lost some of their initial announcement effect. There might be some free money on the table for the quick (I am abstaining).
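
For anyone who wants to compare the Intrade quote with what the equity market is saying, the standard back-of-the-envelope merger-arbitrage calculation looks like the sketch below. All of the dollar figures here are hypothetical placeholders, not actual XMSR/SIRI quotes; plug in real pre-announcement, current, and implied deal values before drawing any conclusions.

```python
def implied_close_probability(current, standalone, deal_value):
    """Probability of the merger closing implied by the target's stock price,
    assuming the price is a probability-weighted average of the deal and standalone values."""
    return (current - standalone) / (deal_value - standalone)

# Hypothetical placeholder numbers, NOT actual quotes.
p = implied_close_probability(current=12.50, standalone=11.00, deal_value=16.00)
print(f"Implied probability of closing: {p:.0%}  vs. an Intrade midpoint of ~65%")
```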

If there is interest in this contract, I might be able to get them to run a contract on future subscription prices and subscribers (either conditional on the merger closing or just straight up). If you think about it, this could be an interesting tool for evaluating mergers in the future.

Like the idea? Spread the word.

Sirius 5 day chart

XM 5 day chart