One experimental result doesn’t mean much in science. To truly know whether a result is valid, it needs to be reproduced the same way, over and over again. Yet research that may not hold up under replication often finds its way into well-regarded journals, due to limited resources, human error or, rarely, outright fraud.
Irreproducible research is especially problematic for drug trials and other clinical research. A recent estimate put the cost of irreproducible preclinical research in the United States at $28 billion a year. Yet short of spending the money to run a published experiment again, there is no mechanism for quickly identifying findings that are unlikely to replicate.
New research from the Harvard John A. Paulson School of Engineering and Applied Sciences takes a page from economics to predict whether experiments can be replicated.
Yiling Chen, the Gordon McKay Professor of Computer Science, is part of an international team of researchers who used prediction markets — investment platforms that reward traders for correctly predicting future events — to estimate the reproducibility of more than 40 experiments published in prominent psychology journals. The researchers found that prediction markets correctly predicted replicability in 71 percent of the cases studied.
“This research shows for the first time that prediction markets can help us estimate the likelihood of whether or not the results of a given experiment are true,” said Chen. “This could save institutions and companies time and millions of dollars in costly replication trials and help identify which experiments are a priority to re-test.”
Sixty-one percent of the replications in this study did not reproduce the original results — further highlighting the need for a timely and cost-effective method of sussing out these reproducibility problems.
"Top psychology journals seem to focus on publishing surprising results rather than true results,” said Anna Dreber, of the Stockholm School of Economics and one of two first-authors of the paper. “Surprising results do not always hold up under re-testing. There are different stages at which an hypothesis can be evaluated and given a probability that it is true. The prediction market helps us get at these probabilities.”
The research was published in The Proceedings of the National Academy of Sciences (PNAS).
Prediction markets are gaining popularity in a number of realms beyond economics, especially in politics. In a prediction market, traders bet on future events by buying shares tied to an event’s outcome, and the market price reflects the crowd’s collective estimate of the probability that the event will occur.
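To make the price-to-probability link concrete, here is a minimal sketch (an illustration, not code from the study, with hypothetical function names): a binary share pays $1 if the event occurs and nothing otherwise, so its price in cents directly encodes the crowd’s probability estimate.

```python
# Minimal sketch of a binary prediction-market contract (illustrative only,
# not the study's trading platform). A share pays $1 if the event occurs
# and $0 otherwise, so a price of p cents implies a probability of p/100.

def implied_probability(share_price_cents: float) -> float:
    """Convert a binary-contract price (in cents) to an implied probability."""
    return share_price_cents / 100.0

def expected_profit(share_price_cents: float, my_probability: float) -> float:
    """Expected profit per share, in dollars, for a trader whose own
    probability estimate differs from the market's implied probability."""
    return my_probability - implied_probability(share_price_cents)

# If 'Reproducible' shares trade at 70 cents, the crowd implies a 70 percent
# chance the study replicates; a trader who believes the chance is 90 percent
# expects to earn 20 cents per share by buying.
print(implied_probability(70))    # 0.7
print(expected_profit(70, 0.90))  # 0.2
```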
Pollsters and pundits increasingly rely on prediction markets to forecast elections and other events because these markets aggregate the judgments of many well-informed participants, otherwise known as the wisdom of the crowd.
Chen and the rest of the team harnessed that wisdom to predict the reproducibility of scientific research. Partnering with The Reproducibility Project: Psychology — an open science project that tests the reproducibility of psychological research — the team chose 44 studies published in prestigious journals that were being re-tested or whose replication results were not yet known.
Then they set up a market for each study and provided their pool of traders — all psychologists — with $100 to invest. Armed with information about each market, including the original publication and their own knowledge of the field, participants invested anywhere between 1 and 99 cents per share on the outcome of the event — in this case, whether or not the research could be reproduced.
For example, suppose investor Beth specializes in post-traumatic stress disorder (PTSD) and there are two markets involving PTSD research. After reading the papers and drawing on her knowledge of the field, Beth thinks the findings of one study probably cannot be reproduced and is very confident that the findings of the other can be replicated. So she buys ‘Reproducible’ shares in one market and ‘Not-Reproducible’ shares in the other. If Beth’s thinking is in line with the wisdom of the crowd, the market prices of those shares will settle close to the prices she paid.
If Beth sees that the price of ‘Reproducible’ shares is very low in a market she knows is reproducible, it’s in her best interest to buy a lot of those cheap shares and drive up the price of the contract in the market.
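The article doesn’t specify which market mechanism the study’s platform used, but a common choice in prediction-market research is Hanson’s logarithmic market scoring rule (LMSR). The hypothetical sketch below shows how a purchase like Beth’s mechanically pushes the quoted price toward her belief.

```python
# Hypothetical illustration (assumed mechanism, not confirmed by the article):
# an LMSR automated market maker quotes a 'Reproducible' price based on how
# many shares are outstanding on each side; buying shares raises the price.
import math

B = 100.0  # liquidity parameter (assumed; larger values mean prices move more slowly)

def price_reproducible(q_yes: float, q_no: float) -> float:
    """Instantaneous LMSR price of a 'Reproducible' share, given the
    outstanding share quantities on the yes and no sides."""
    e_yes, e_no = math.exp(q_yes / B), math.exp(q_no / B)
    return e_yes / (e_yes + e_no)

q_yes = q_no = 0.0
print(f"before: {price_reproducible(q_yes, q_no):.2f}")  # 0.50

# Beth buys 100 'Reproducible' shares in a market she thinks is underpriced;
# her purchase drives the quoted price up toward her own estimate.
q_yes += 100
print(f"after:  {price_reproducible(q_yes, q_no):.2f}")  # ~0.73
```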
“One of the advantages of the market is that participants can pick the most attractive investment opportunities,” said Thomas Pfeiffer, co-first author and professor of computational biology at the New Zealand Institute for Advanced Study. “If the price is wrong and I’m confident I have better information than anyone else, I have a strong incentive to correct the price so I can make more money. It’s all about who has the best information.”
If the price of ‘Reproducible’ shares is low when the market closes, it means that most people in the field don’t believe the experiment can be replicated.
“Our research showed that there is some ‘wisdom of the crowd’ among psychology researchers,” said Brian Nosek, co-author and professor of psychology at the University of Virginia. “Prediction accuracy of 70 percent offers an opportunity for the research community to identify areas to focus reproducibility efforts to improve confidence and credibility of all findings.”
The next step in the research is to test whether or not prediction markets are accurate forecasters for the reproducibility of results in other fields, such as economics and cell biology.
This research was supported by the Jan Wallander and Tom Hedelius Foundation, the Knut and Alice Wallenberg Foundation, and the US National Science Foundation. Additional authors include Johan Almenberg of Sveriges Riksbank, Siri Isaksson of the Stockholm School of Economics, Brad Wilson of Consensus Point, and Magnus Johannesson of the Stockholm School of Economics.
Press Contact
Leah Burrows | 617-496-1351 | lburrows@seas.harvard.edu