Risk of being scooped drives scientists to shoddy methods

In the race for a COVID-19 vaccine, second place still offers glory—unlike some scientific fields.


Leonid Tiokhin, a metascientist at Eindhoven University of Technology, learned early on to fear being scooped. He recalls emails from his undergraduate adviser that stressed the importance of being first to publish: “We’d better hurry, we’d better rush.”

A new analysis by Tiokhin and his colleagues demonstrates how risky that competition is for science. Rewarding researchers who publish first pushes them to cut corners, their model shows. And although some proposed reforms in science might help, the model suggests others could unintentionally exacerbate the problem.

Tiokhin’s team is not the first to argue that competition poses risks to science, says Paul Smaldino, a cognitive scientist at the University of California (UC), Merced, who was not involved in the research. But the model is the first detailed enough to explore precisely how those risks play out, he says. “I think that’s very powerful.”

In the digital model, Tiokhin and his colleagues built a toy world of 120 scientist “bots” competing for rewards. Each scientist in the simulation toiled away, collecting data on a series of research questions. The bots were programmed with different strategies: Some were more likely than others to collect large, meaningful data sets. And some tended to abandon a research question if someone else published on it first, whereas others held on stubbornly. As the bots made discoveries and published, they accrued rewards—and those with the most rewards passed on their methods more often to the next generation of researchers.

Tiokhin and his colleagues documented the successful tactics that evolved across 500 generations of scientists, for different simulation settings. When they gave the bots bigger rewards for publishing first, the populations tended to rush their research and collect less data. That led to research filled with shaky results, they report in Nature Human Behaviour today. When the difference in reward wasn’t so high, the scientists veered toward larger sample sizes and a slower publishing pace.
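The dynamics described above can be sketched as a toy agent-based simulation. This is an illustrative reconstruction, not the published model: the heritable trait (a target sample size), the `n/100` success probability, the group size, and the payoff-proportional selection rule are all assumptions chosen only to show how a first-mover bonus can reshape the evolved strategy.

```python
import random

def race(samples, first_bonus):
    """One research question contested by a few bots. Each bot's study
    succeeds with probability n/100 (bigger samples give more reliable
    findings); among the successes, the smallest sample finishes first
    and takes first_bonus, while later publishers each get 1."""
    payoffs = [0.0] * len(samples)
    done = [i for i, n in enumerate(samples) if random.random() < n / 100]
    done.sort(key=lambda i: samples[i])  # smallest sample publishes first
    for rank, i in enumerate(done):
        payoffs[i] = first_bonus if rank == 0 else 1.0
    return payoffs

def evolve(first_bonus, generations=500, pop_size=120, group=4):
    """Heritable trait: target sample size in 1..100. Each generation,
    bots compete in small races; selection is payoff-proportional,
    with small mutations in the offspring."""
    pop = [random.randint(1, 100) for _ in range(pop_size)]
    for _ in range(generations):
        random.shuffle(pop)
        payoffs = []
        for g in range(0, pop_size, group):
            payoffs += race(pop[g:g + group], first_bonus)
        pop = random.choices(pop, weights=[p + 0.01 for p in payoffs], k=pop_size)
        pop = [min(100, max(1, n + random.randint(-3, 3))) for n in pop]
    return sum(pop) / pop_size  # mean sample size that evolved

random.seed(1)
print(f"mean sample size, big first-mover bonus:   {evolve(8.0):.0f}")
print(f"mean sample size, small first-mover bonus: {evolve(1.2):.0f}")
```

In this sketch, a large first-mover bonus lets small, fast, unreliable studies outcompete careful ones, pulling the evolved mean sample size down; when the reward for being first is only slightly better than for being second, reliability dominates and larger samples win out.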

The simulations also allowed Tiokhin and colleagues to test out the effects of reforms to improve the quality of scientific research. For example, PLOS journals, as well as the journal eLife, offer “scoop protection” that gives researchers the chance to publish their work even if they come in second. There is no evidence yet that these policies work in the real world, but the model suggests they should: Larger rewards for scooped research led the bots to settle on bigger data sets as their winning tactic.

But there was also a surprise lurking in the results. Rewarding scientists for publishing negative findings—an oft-discussed reform—lowered research quality, as the bots figured out that they could run studies with small sample sizes, find nothing of interest, and still get rewarded. Advocates of publishing negative findings often highlight the danger of publishing only positive results, which drives publication bias and hides the negative results that help build a full picture of reality. But Tiokhin says the modeling suggests rewarding researchers for publishing negative results, without focusing on research quality, will incentivize scientists to “run the crappiest studies that they can.”
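The incentive flip shows up even in a back-of-the-envelope payoff calculation. The reward values, the `n/100` detection probability, and the per-subject cost below are illustrative assumptions, not figures from the paper:

```python
def expected_payoff(n, positive_reward=1.0, null_reward=0.0, cost_per_subject=0.005):
    """Expected reward for a study with sample size n: a true effect is
    detected with probability n/100, otherwise the study reports a null."""
    p_hit = n / 100
    return p_hit * positive_reward + (1 - p_hit) * null_reward - n * cost_per_subject

# Only positive results rewarded: the biggest study is optimal.
print(max(range(1, 101), key=lambda n: expected_payoff(n, null_reward=0.0)))  # -> 100
# Null results rewarded equally: the one-subject study is optimal.
print(max(range(1, 101), key=lambda n: expected_payoff(n, null_reward=1.0)))  # -> 1
```

Once a null pays as much as a hit, the expected reward no longer depends on finding anything, so the only pressure left is to minimize cost, exactly the "crappiest studies" incentive Tiokhin warns about.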

In the simulations, making it more difficult for scientist bots to run reams of low-cost studies helped correct the problem. Tiokhin says that points to the value of real-world reforms like registered reports: study plans that are peer reviewed before data collection, which force researchers to invest more effort at the start of their projects and discourage cherry-picking data.

Science is meant to pursue truth and self-correct, but the model helps explain why science sometimes veers in the wrong direction, says Cailin O’Connor, a philosopher of science at UC Irvine who wasn’t involved with the work. The simulations—with bots gathering data points and testing for significance—reflect fields like psychology, animal research, and medicine more than others, she says. But the patterns ought to be similar across disciplines: “It’s not based on some tricky little details of the model.”

Scientific disciplines vary just like the simulated worlds—in how much they reward being first to publish, how likely they are to publish negative results, and how difficult it is to get a project off the ground. Now, Tiokhin hopes metaresearchers will use the model to guide research into how these patterns play out with flesh-and-blood scientists.
