Science Has Its Problems, But the Web Could Be the Fix

The software a group of scientists used to replicate 100 psychology studies is a framework for the future of science.

For many researchers, the scientific method is as close to a religion as they'll ever get. The appeal is similar: The scientific method, rigorously followed, provides disciples with a hint of the objective truth an all-knowing god might impart. But the path to that truth is rocky. While the rules of the scientific method are wonderful guidelines, they, like religious commandments, can be broken. Scientists can make simple mistakes or be subtly biased by their desire for prestige, interfering with the sanctity of their results.

It doesn't have to be that way. Right now, science is undergoing a correction of sorts—trying as hard as it can to remove all the little ways scientists get in the way of their own work. Today, in a big step toward that correction, the Open Science Collaboration published the results of its replications of 100 psychology studies—studies that had already been done. By redoing those studies and checking whether their results could be reproduced, the project seeks to understand the ways in which science's current procedures are flawed.

The results, published in Science, may at first glance seem discouraging. The collaboration successfully reproduced fewer than half of the results of those 100 studies. Only 36 percent of the replications showed significant results, compared to 97 percent of the originals. But the results themselves are not so much the point here. What's more important is the framework used to conduct those replications, which points toward a revolution—made possible by the new, Internet-connected world—in how science is conducted, reviewed, and consumed.

Over the last three years, the Open Science Collaboration brought together 270 co-authors and 86 contributing volunteers to replicate the studies, all of which were first published in 2008 by three large psychology journals. That level of coordination was made possible by a custom-built science-doing environment called the Open Science Framework—free, open-source software in which researchers can compile materials, study designs, and data. “It makes it very easy to make parts or all of that data publicly available, to increase transparency and reproducibility,” says psychologist Brian Nosek, leader of the study and executive director of the Center for Open Science, which supports the framework.

Such communication will be essential in the type of scientific utopia Nosek and other scientists envision: One in which studies are conducted without an eye toward the earthly rewards of a gee-whiz result. In the current scientific publishing system, journals are more likely to publish novel, positive results—which subtly incentivizes researchers to conduct studies that will produce exciting outcomes. In an ideal world, those incentives would disappear. Scientists would conduct experiments with an emphasis on methodological and statistical rigor, including frequent replication of studies to monitor their reproducibility.

In a culture that rewards previously thankless replications, a solid organizational scaffold like the Open Science Framework would be indispensable. Replicating research, it turns out, is really hard—especially if you don’t plan for it. Many of the original 100 studies in this analysis started in 2006 or earlier, and so their materials were tied to the technology of that era. “But in 10 years, technology advances and analytical techniques advance,” says Johanna Cohoon, one of the project's coordinators.

Carmel Levitan, a cognitive scientist at Occidental College, ran into technological issues in one of the two studies she helped replicate. “The software was basically obsolete,” she recalls. “So they just sent us the computer.” The old Mac was close to crashing every time she used it, so between participants, Levitan would put data on a memory stick for safekeeping. Her other study fared even worse: The original author lost his materials entirely when a building collapsed on top of his hard drives.

If original studies were built in a framework like the OSF from the bottom up, those problems would go away. It also would make it easier to keep track of who does what. With a GitHub-like version control system, it’s clear exactly who takes responsibility for what part of a research project, and when—helping resolve problems of ownership and first publication. “Publication and research is a very vertical process,” says Leslie Alvarez, a psychologist at Adams State University who contributed to a replication. “This makes it a little more horizontal.”

A framework like the Open Science Framework would support another element on a scientific utopia’s wishlist: preregistration. Right now, the method for evaluating research is shamefully subjective. Because people want novel claims, peer review—the step before publication when scientific colleagues comment on a paper—is likely to be influenced by the results of those studies. The solution, says Nosek, is to move peer review up, having a collection of scientists evaluate an experiment’s design before the experiment is even completed.

You might see where this is going. If a scientist is judged by the quality of his research and not the results of that research, the final moment of publishing a paper becomes less important. “The framework becomes an environment for doing science, and the momentous occasion of publishing a paper becomes a decision of the original researcher,” says Nosek. “Ultimately, the conclusion is journals go away.”

In the end, the software that these 356 researchers used to examine the state of scientific inquiry will be more than an experiment—it will be a framework for restructuring that inquiry. “If it actually works well, it is really going to revolutionize how we think about publication,” says Nosek. And maybe bring science a step closer to the truth.