Please make WordPress a bit like Wikipedia.


Folks, here is my proposal to the WordPress developers:

Assign a great number of editors to some specific pages

Right now, if you are an editor in WordPress, you can edit any post or page. As a result, the administrator of a big group blog would not want many editors, because the blog's authors would not like the idea that their colleagues can edit their posts.

But it would be great to be able to assign a great number of editors to some specific pages. That way, any group blog powered by WordPress could tap into the "wisdom of crowds" (see James Surowiecki's book of the same name), the same way Wikipedia does. For more on Wikipedia, see these two posts.

Collective intelligence (a.k.a. the wisdom of crowds) is the mechanism at the heart of Google's PageRank, Wikipedia, open-source software, prediction markets, etc. It is very powerful, and WordPress could tap into it very easily by allowing a page-by-page editing role.

The WP admin would set who the editors of a particular page are: one registered person, two, a bunch of blog authors… or any internet citizen, as on Wikipedia.

Thanks a lot for your attention. Contact me for more info, or leave a comment below.

NEXT: WordPress is now a bit like MediaWiki (the software powering Wikipedia).


Amateur Journalists (Bloggers) Vs. Professional Journalists (Media) Vs. Wisdom Of Crowds & Collective Intelligence (Wikipedia)


And the wisdom of crowds won, of course. That's the conclusion I draw from reading Rogers Cadenhead at WorkBench, who assessed what the settlement of the LongBets wager would be:

In a Google search of five keywords or phrases representing the top five news stories of 2007, weblogs will rank higher than the New York Times' Web site.

AGREE: Dave Winer
DISAGREE: Martin Nisenholtz
Stakes: $2,000 ($1,000 each)

For Rogers Cadenhead, Dave Winer will win the bet. But he also says that the overall winner is… WIKIPEDIA.

[…] So Winer wins the bet 3-2, but his premise of blog triumphalism is challenged by the fact that on all five stories, a major U.S. media outlet ranks above the leading weblog in Google search. Also, the results for the top story of the year reflect poorly on both sides. In the five years since the bet was made, a clear winner did emerge, but it was neither blogs nor the Times. Wikipedia, which was only one year old in 2002, ranks higher today on four of the five news stories: 12th for Chinese exports, fifth for oil prices, first for the Iraq war, fourth for the mortgage crisis and first for the Virginia Tech killings. Winer predicted a news environment "changed so thoroughly that informed people will look to amateurs they trust for the information they want." Nisenholtz expected the professional media to remain the authoritative source for "unbiased, accurate, and coherent" information. Instead, our most trusted source on the biggest news stories of 2007 is a horde of nameless, faceless amateurs who are not required to prove expertise in the subjects they cover.

So the real winner is Wikipedia, a news and knowledge aggregator… run by anonymous volunteers. But Wikipedia is only an information aggregator: it feeds on both the media and the blogs to gather its facts. Wikipedia is the common denominator of knowledge, not the primary source of reporting. Just as prediction markets feed on polls and other advanced indicators.

External Link: See a previous assessment of the bet by Jason Kottke.

NEXT: Amateur Experts (Yahoo! Answers) Vs. Wisdom Of Crowds & Collective Intelligence (Wikipedia)

UPDATE: An empty comment from Read & Write Web.

James Surowiecki's The Wisdom of Crowds… still stands.


James Surowiecki's four comments at Overcoming Bias (in October 2007), responding to accusations that he got it all wrong about Francis Galton:

---

James Surowiecki's 1st comment:

"Galton did not even bother to calculate a mean, as he saw his data was clearly not normally distributed. He used the median (of 1207), which was much further off than the mean, but by modern standards clearly the better estimator. It was Karl Pearson in 1924 who calculated the mean."

Robin [Hanson], before repeating falsehoods, you might want to go back to the original sources or, in this case, to the footnotes of my book. Galton did, in fact, calculate the mean, long before Karl Pearson did. Galton's calculation appeared in Nature, Vol. 35, No. 1952 (3/28/07), in a response to letters regarding his original article. One of the correspondents had gone ahead and calculated a mean from the data that Galton had provided in his original piece, and had come up with the number 1196. Galton writes, "he makes it [the mean] 1196 lb. … whereas it should have been 1197 lb."

I find it astounding in its own right that Levy and Peart wrote an entire article about Galton (and, to a lesser extent, about my use of him) and never went back and checked the original sources. (They actually wonder in the paper, "However the new estimate of location came to be part of Surowieki's account," as if the answer isn't listed right there in the footnotes.) What makes it even more astounding, though, is that they've written an entire paper about the diffusion of errors by experts who "pass along false information (wittingly or unwittingly)" while passing along false information themselves.

It also seems bizarre that Levy and Peart caution, "The expectation of being careful seems to substitute for actually being careful," and yet they were somehow unable to figure out how to spell "Surowiecki" correctly. The article is a parody of itself.

I'm happy to enter into a discussion of whether the median or the mean should be used in aggregating the wisdom of crowds. But whether Galton himself thought the mean or the median was better was and is irrelevant to the argument of my book. I was interested in the story of the ox-weighing competition because it captures, in a single example, just how powerful group judgments can be. Galton did calculate the mean. It was 1197 lbs., and it was 1 lb. away from the actual weight of the ox. The only "falsehoods" being perpetrated here are the ones Levy and Peart are putting out there, and the ones that you uncritically reprinted.

---

James Surowiecki's 2nd comment:

Here are the links to the letter from Galton, where he reports the mean:

http://galton.org/cgi-bin/search/images/galton/search/essays/pages/galton-1907-ballot-box_1.htm
http://galton.org/cgi-bin/search/images/galton/search/essays/pages/galton-1907-ballot-box_2.htm

There's no reason for debate here. Levy and Peart say "Pearson's retelling of the ox judging tale apparently served as a starting point for the 2004 popular account of the modern economics of information aggregation, James Surowieki's Wisdom of Crowds." It wasn't the starting point. The starting point was Galton's own experiment, and his own reporting of the mean in "The Ballot Box." Robin writes: "Galton did not even bother to calculate a mean." He did calculate it, and he did report it. This fact shouldn't be listed as an "addendum" to the original post. The original post should be rewritten completely, perhaps along the lines of "Surowiecki and Galton disagree about which estimate is a better representation of group judgment" rather than "Author Misreads Expert," or else scrapped.

---

James Surowiecki's 3rd comment:

I appreciate Levy and Peart admitting their mistake. But they seem not to recognize that their mistake undermines the critique at the center of their paper. Their paper, they write, is about the misconstruing of Galton's experiment. "A key question," they write, "is whether the tale was changed deliberately (falsified) or whether, not knowing the truth, the retold (and different) tale was passed on unwittingly." But the account of Galton's experiment was not changed deliberately and was not falsified. It was recounted accurately. Levy and Peart want to use my retelling of the Galton story as evidence of how "experts pass along false information (wittingly or unwittingly) [and] become part of a process by which errors are diffused." But there's no false information here, and no diffusion of errors, which rather demolishes their thesis. If they really want to write a paper about how "experts" pass along false information, they'd be better off using themselves as Exhibit A and telling the story of how they managed to publish such incredibly shoddy work and have prominent economists uncritically link to it.

---

James Surowiecki's 4th comment:

To finish, Levy and Peart insist that their really important point still stands, which is that "When people quote Galton through Surowiecki, they tell Surowiecki's tale, not Galton's," and that this is a problem because Galton's thinking is being misrepresented. But as I said earlier, "The Wisdom of Crowds" was not intended to be a discussion of Francis Galton's opinions on the best method to capture group judgment, nor, as far as I know, has anyone who has retold "Surowiecki's tale" of the Galton example used it to analyze Galton's opinions. People aren't quoting the Galton story because they're interested in what Galton himself thought about the median vs. the mean. They're quoting it because they're interested in the bigger idea, which is that group judgments (and this is true whether you use the median, the mean, or a method like parimutuel markets) are often exceptionally accurate. Levy and Peart have constructed a straw man, in this case a straw man based on a falsehood, and then tried to knock it down.

Robin [Hanson] writes: "it is ironic that Galton made quite an effort to emphasize and prefer the median, in part because the data did not look like a bell curve, while your retelling focuses on him calculating a mean after checking for a bell curve." What's ironic about this? He did check for a bell curve, and he did calculate the mean. It's the data themselves, not Galton's interpretation of them, that I was writing about. (If he hadn't calculated the mean, I would have happily told the story with the median, since it was also remarkably accurate, and demonstrated the same point about the wisdom of crowds.)

Finally, on the substantive question: Robin (and Levy and Peart) seem to think that because the distribution of guesses wasn't normal, using the mean was a mistake. But this is precisely what's so interesting: if the group is large enough, then even if the distribution isn't normal, the mean of the group's guesses is often exceptionally good.
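Surowiecki's last point is easy to check numerically. The simulation below is my own illustration, not anything from his comments: it draws deliberately skewed, non-normal guesses around a true ox weight of 1,198 lb (the weight in Galton's experiment) and shows that both the mean and the median of a large crowd land close to the truth.

```python
import random
import statistics

random.seed(42)
TRUE_WEIGHT = 1198  # dressed weight of Galton's ox, in pounds

# Simulate a crowd whose individual guesses have multiplicative,
# log-normally distributed errors -- a right-skewed distribution
# that is clearly not a bell curve.
guesses = [TRUE_WEIGHT * random.lognormvariate(0, 0.15) for _ in range(800)]

crowd_mean = statistics.mean(guesses)
crowd_median = statistics.median(guesses)

print(f"mean:   {crowd_mean:.0f} lb")
print(f"median: {crowd_median:.0f} lb")
```

Despite the skew, both estimators typically come out within a percent or two of the true weight; individual guesses miss by far more.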

Prediction Markets = Clear Expiry + Disperse Information + Participation Incentives


Jed Christiansen at Forbes (just after John Delaney's ill-written and pointless comment):

A market effectively aggregates the information from everyone participating. So anything where:

  • there is a clear result
  • information is dispersed between people and/or locations
  • people have an incentive to participate in the market

will likely provide better results than any other forecasting method. Experts just aren't as good as they (or anyone else) think they are. It's simply better to ask the crowd in these cases.

Missing from Jed Christiansen's comment is the emphasis on long series for comparison. It takes time and hundreds of prediction markets to prove the wisdom of crowds.

---

UPDATE: Jed Christiansen comments…

Chris, I agree that for probability assessments, a number of measurements are required to assess success. However, for metrics (i.e., sales of widget X, rating of product Y), it doesn't require a long series at all. Depending on how poorly the current forecasting model is performing, a prediction market could prove successful after just a few measurements.

Austan Goolsbee on Iraq and the Collective Wisdom of Bond Markets


Austan Goolsbee, writing in the New York Times, discusses Michael Greenstone's paper (discussed here at Midas Oracle in September), which examines the market for Iraq's bonds for an assessment of the long-term future of the Iraqi government. Goolsbee's quick conclusion: "But global financial markets have been monitoring the war for months, and with remarkable consistency, they have concluded that the long-term prospects for a stable Iraq are very bleak."

It wasn’t until Professor Greenstone began examining the financial markets’ pricing of Iraqi government debt that he had his eureka moment. It was immediately clear that the bond market — which, historically, has often been an early indicator of the demise of a political system — was pessimistic about the Iraqi government’s chances for survival.

First, some background… the Iraqi government issued about $3 billion of new bonds in January 2006. These dollar-denominated bonds pay 2.9 percent twice a year and mature in 2028, paying the face value of $100.

To say the least, the market for these bonds is not robust: as of last week, a bond with a face value of $100 was trading at around $60. Professor Greenstone calculated that, from the markets’ standpoint, the implied default risk over the life of the bond was about 80 percent.

The important point is that anyone who owns one of these Iraqi bonds has to decide each day whether the Iraqi government is likely to be functional enough to make its debt payments, or will default along the way. All else being equal, if the surge policy is effective, it ought to be raising the market price of these bonds.

Bondholders “aren’t politically motivated,” Professor Greenstone said. “They don’t have to rationalize their previous statements or justify their votes from years past. All they care about is whether there will be a functioning Iraq in the future such that they will receive their payments.” At a certain price, most securities will find a buyer, and there are still buyers for Iraqi bonds. But the price they are willing to pay is very low.
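Greenstone's 80 percent figure can be roughly reproduced with a back-of-the-envelope calculation from the numbers quoted above. The sketch below is not his actual model: it assumes zero recovery in default and a flat 5 percent annual risk-free rate (both assumptions of mine, so the resulting number will differ somewhat from his).

```python
# Infer a per-period survival probability from the bond's market price.
# Figures marked "from the article" are quoted above; the rest are
# illustrative assumptions.

FACE = 100.0    # face value, from the article
COUPON = 2.90   # semiannual coupon: 2.9 percent of face, paid twice a year
PERIODS = 42    # ~21 years to the 2028 maturity, two periods per year
PRICE = 60.0    # observed market price, from the article
RF = 1.025      # assumed semiannual risk-free discount factor (5%/year)

def model_price(p):
    """Bond price if the issuer survives each半... each semiannual period
    with probability p, with zero recovery on default."""
    pv = 0.0
    for t in range(1, PERIODS + 1):
        pv += COUPON * p**t / RF**t        # coupon paid only if still solvent
    pv += FACE * p**PERIODS / RF**PERIODS  # principal repaid at maturity
    return pv

# Bisect on p so the model price matches the market price
# (model_price is increasing in p).
lo, hi = 0.5, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if model_price(mid) < PRICE:
        lo = mid
    else:
        hi = mid

p = (lo + hi) / 2
default_prob = 1 - p**PERIODS  # cumulative default risk over the bond's life
print(f"implied cumulative default probability: {default_prob:.0%}")
```

Under these assumptions the implied cumulative default probability comes out in the high-60s percent range; allowing for partial recovery on default (as a fuller model would) pushes the implied probability higher, toward Greenstone's 80 percent.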

Goolsbee tosses in a few examples showing, in his words, that "the collective wisdom of financial markets has proved remarkably adept at evaluating events and predicting the future, even the turning points of war":

During the American Civil War, for example, when Confederate forces lost at Gettysburg, Confederate cotton bonds traded in England dropped by about 14 percent. During World War II, German government bonds fell 7 percent when the Russians started their counterattack at Stalingrad in 1942, and French government bonds rose 16 percent after the Allied invasion at Normandy in 1944. Many such examples of the prescience of financial markets have been documented by economic historians.

Of course, a few cherry-picked examples, while suggestive, should not be considered conclusive.

Chris Masse, in a post about negative comments on the war by a just-retired, high-ranking military officer, said:

We can’t rely on retirees to tell us the truth. We need an anonymous information aggregation mechanism that gives an incentive to people who come forward with advanced information: the prediction markets.

While bond markets might be useful as a stand-in for prediction markets, presumably well-designed prediction markets could provide a somewhat more articulated position than can be extracted from a twenty-year bond market.

NOTE: Greenstone's paper, "Is the 'Surge' Working? Some New Facts," is available from the SSRN.

MIT Center for Collective Intelligence – Play-money prediction exchange


Yesterday, I blogged about the MIT CCI's collective book project, "We Are Smarter Than Me," which will be presented today in a live webcast (at lunchtime, EST).

I completely overlooked that the MIT CCI is launching a play-money prediction exchange. The topics are all about the CCI itself, and thus totally uninteresting.

PREDICTION TOOL FAQs

What is a "Prediction Tool"?

The Prediction Tool on this site is based on the idea of prediction markets. "Prediction markets are speculative markets created for the purpose of making predictions. Assets are created whose final cash value is tied to a particular event or parameter (e.g., Will there be at least 10,000 registered community members by March 31, 2007?). The current market prices can then be interpreted as predictions of the probability of the event or the expected value of the parameter. Other names for prediction markets include information markets, decision markets, idea futures, and virtual markets." (Source: Wikipedia)

OK, I get it, sort of, but what does that mean?

We have made a set of predictions about the success of the "We" community. You get to buy and sell stock in these predictions based on how likely you think they are to come true. If the prediction turns out to be true, the stock will pay out $100 per share. If it turns out not to be true, the stock will pay out $0 per share.

The hope is that through trading stocks back and forth, the market value of the stocks will eventually closely match the probability of each event coming true.
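The price-to-probability mechanic described in the FAQ can be sketched in a few lines. This is a minimal illustration of binary prediction-market contracts in general, not the MIT CCI tool's actual code; the $62 price and the trader's 75 percent belief are hypothetical numbers.

```python
# A binary contract pays a fixed amount if the event happens, nothing
# otherwise, so its market price maps directly to an implied probability.

PAYOUT = 100.0  # payout per share if the prediction comes true

def implied_probability(price):
    """Interpret the market price of a binary contract as a probability."""
    return price / PAYOUT

def expected_profit(price, your_probability, shares=1):
    """Expected profit from buying `shares` at `price` if you believe
    the event's true probability is `your_probability`."""
    return shares * (your_probability * PAYOUT - price)

price = 62.0  # hypothetical current market price
print(f"implied probability: {implied_probability(price):.0%}")
print(f"edge at a 75% belief: ${expected_profit(price, 0.75):.2f} per share")
```

Traders who think the market price is too low relative to their own probability estimate buy (and sell in the opposite case), which is exactly the mechanism that pushes the price toward the crowd's consensus probability.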

My Question: Does anybody know which software/design the MIT CCI is using here?

The Answer (added October 25): Shared Insights runs the MIT CCI's play-money prediction exchange with software provided by Consensus Point.