
Fighting Scientific Bias Through Crowdsourcing

“Humans desire certainty, and science infrequently provides it.”

You’ve probably seen (and even posted) these sorts of questions on social media—queries like “Does anyone near me know whether they finished the construction work at the post office yet?” or “Help me win an argument: What are the first words that come to mind when you hear the name ‘Ferris Bueller’?”

That’s crowdsourcing, of course, and it can be a great way to seek advice or take an informal poll. But can it also be used to make science better?

Raphael Silberzahn, an assistant professor at IESE Business School in Barcelona, and Eric L. Uhlmann, an associate professor of organizational behavior at the Singapore campus of INSEAD, another international business school, say yes.

Their recent article in the journal Nature, which recommends that scientific research be crowdsourced, was one of several proposed remedies published after an August article in Science laid bare the results of an international project that revealed a rather alarming reality: out of 100 studies published in three respected psychology journals in 2008, scientists were able to replicate or mostly replicate the results of fewer than half. That effort was part of the Reproducibility Project: Psychology, led by University of Virginia psychologist Brian Nosek.

“The present results suggest that there is room to improve reproducibility in psychology,” the study finds. “A large portion of replications produced weaker evidence for the original findings”—despite using the same materials and methodology as the original authors.

So what’s the next step? Recommendations include the seemingly straightforward move of insisting on rigorous research standards, which is easier said than done. Researchers could also make greater use of blind analysis, in which the scientists themselves don’t know which groups or conditions the values represent until the analysis is complete and the blind is lifted. But these days, a uniquely modern solution has presented itself: crowdsourcing.
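To make the idea concrete, here is a minimal, hypothetical sketch of one way blinding can work, written in Python. The field names, labels, and shuffling scheme are assumptions for illustration only; they are not drawn from any of the studies discussed here.

import random

def blind(records, label_key="group", seed=0):
    """Return a blinded copy of the records (group labels shuffled) plus the
    true labels, which stay sealed until the analysis plan is frozen."""
    rng = random.Random(seed)
    true_labels = [r[label_key] for r in records]
    shuffled = true_labels[:]
    rng.shuffle(shuffled)
    blinded = [dict(r, **{label_key: s}) for r, s in zip(records, shuffled)]
    return blinded, true_labels

def unblind(blinded_records, true_labels, label_key="group"):
    """Restore the true labels once the analysis code is locked down."""
    return [dict(r, **{label_key: t}) for r, t in zip(blinded_records, true_labels)]

# Develop and debug the analysis on the masked data; unblind only after the
# statistical model and exclusion rules are fixed.
data = [{"group": "treatment", "score": 3.1}, {"group": "control", "score": 2.7}]
masked, sealed = blind(data)
final = unblind(masked, sealed)

The point of the discipline is simply that no analytical choice can be steered, even unconsciously, by how the results are shaping up.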

Silberzahn and Uhlmann aren’t exactly recommending that we start crowdsourcing science on Facebook or Twitter. Instead, they’re advocating that scientists and researchers crowdsource with their peers. Right now, most researchers attempt to serve as their own devil’s advocates: a single team comes up with its own findings and also tries to poke holes in them. With human beings in the mix, that task is challenging at best; at worst, it veers toward the unethical.

So how would crowdsourcing as a bias-check work in practice, and what would it mean for science?

Well, let’s look at the crowdsourced experiment Silberzahn and Uhlmann conducted last year. They asked 29 teams of researchers to use the same data set to figure out whether soccer referees are more likely to give red cards to dark-skinned players than to light-skinned ones. Each team came up with its own method of analysis and had a chance to revise its analytical technique based on feedback from the other researchers.

The findings of all the research teams taken together were decidedly more tentative than the results of any one study would have been. Though there was general agreement that darker-skinned players did receive more red cards, the size of the effect varied widely, with estimates ranging from a strong tendency for dark-skinned players to be more heavily penalized to a slight (and, notably, not statistically significant) tendency for referees to give more red cards to light-skinned players.
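In code, the “many analysts, one data set” idea can be sketched roughly as follows; the records, field names, and the two toy “teams” below are invented for illustration (the actual project involved 29 independent teams using far richer statistical models):

from statistics import mean, median

def rate_gap(records, cutoff=0.5, center=mean):
    """Difference in red cards per game between darker- and lighter-skinned
    players, using whatever cutoff and summary statistic a team prefers."""
    dark = [r["red_cards"] / r["games"] for r in records if r["skin_tone"] >= cutoff]
    light = [r["red_cards"] / r["games"] for r in records if r["skin_tone"] < cutoff]
    return center(dark) - center(light)

# Each "team" is just a differently configured analysis of the same records.
teams = {
    "team_a": lambda recs: rate_gap(recs, cutoff=0.5, center=mean),
    "team_b": lambda recs: rate_gap(recs, cutoff=0.6, center=median),
}

records = [
    {"skin_tone": 0.9, "red_cards": 2, "games": 100},
    {"skin_tone": 0.7, "red_cards": 1, "games": 90},
    {"skin_tone": 0.2, "red_cards": 1, "games": 120},
    {"skin_tone": 0.1, "red_cards": 0, "games": 110},
]

estimates = {name: fn(records) for name, fn in teams.items()}
print(estimates, "spread:", min(estimates.values()), "to", max(estimates.values()))

What gets reported is the whole spread of estimates rather than any single team’s point estimate, which is exactly why the pooled conclusion looks more tentative.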

Basically, crowdsourcing can be expected to lead to results that might be a little less sexy, but are a lot more reliable. And because this method is resource-intensive, it may be best reserved for occasions when research is likely to serve as the basis of real-life policy decisions. “The transparency resulting from a crowdsourced approach should be particularly beneficial when important policy issues are at stake,” Silberzahn and Uhlmann write. “The uncertainty of scientific conclusions about, for example, the effects of the minimum wage on unemployment, and the consequences of economic austerity policies, should be investigated by crowds of researchers rather than left to single teams of analysts.”

The issue extends beyond the choice of analytical model and into actual bias, even if unintended. In the soccer example, let’s say that before embarking on the study, some of the scientists involved were expecting that referees would be harsher with darker-skinned players, while others expected race to have no bearing on what happens on the field. Couldn’t this affect how they interpreted the results, even assuming they had no conscious intention of skewing their findings?

That’s more or less what a September study on gender bias found. The study, led by Montana State University psychologist Ian M. Handley and published in Proceedings of the National Academy of Sciences, asked members of the general public as well as academics (both STEM and non-STEM) to evaluate the quality of research on gender bias. Participants read either the abstract of a 2012 PNAS study that found bias against women in the sciences, or an altered abstract that purported to find no bias. Handley’s examination of how people react to a study indicating gender bias found what appears to be, well, gender bias: men viewed the findings less favorably than women, and, of greatest concern, this difference was especially prominent among male STEM faculty members.

But it’s not just racial and gender biases that can motivate scientists to distort the inferences they draw from the data. The authors note that other prejudicial factors can include the desire, whether conscious or unconscious, to support your own theory or refute someone else’s—or to be the first to report what seems to be a new phenomenon.

Maybe the core problem is that people often have unrealistic expectations of science. “Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims,” write the authors of the Science article that kicked off all this important talk about increasing reproducibility and reducing bias. “Humans desire certainty, and science infrequently provides it.”
