We rely on scientific research findings to inform a variety of important decisions, such as how money should be spent, how to keep people safe, and what kinds of foods to eat. Findings that the public widely accepts as true shape the decisions we make; a study of how lay readers were confused by conflicting news reports on cholesterol and diet provides one example (http://onlinelibrary.wiley.com/doi/10.1111/1467-9566.ep10932547/abstract). When published research findings conflict with each other and there's no consensus on the truth, the trustworthiness of that research is called into question.
Lately, the popular media have summarized a number of debates among academic researchers about the trustworthiness of published research. An article on attitudes toward same-sex marriage was retracted by Science last year, and Slate reported that "a whole field of psychology research may be bunk," which led to an ongoing exchange in Science over how to interpret a report that re-tested the findings of 100 published studies.
To date, a number of large-scale efforts have been undertaken to verify past psychological research findings. These include the Reproducibility Project: Psychology, which repeated 100 experimental and correlational studies and evaluated the extent to which the original results were found again. Another, the Many Labs Replication Project (MLRP), involved over six thousand participants in 12 countries and reported successful replications of 10 out of 13 effects. As scientists debate whether a failure to replicate nullifies the original study or studies (in one case, 83 of them), some say that psychology research is in a "replicability crisis," while others attribute replication rates short of 100% to other factors, such as the skill of the replicating researchers or how closely their studies matched the originals. As the debate continues, consumers of research will need to evaluate not only whether an effect was significant or of a particular size, but also the quality of the evidence behind it.
Headlines across pop-culture blogs and publications tend toward sensationalism, using terms like "debunk," "crumble," and "crisis."
Even John Oliver dedicated an entire segment to ‘science’ and shared clips of some of the most ridiculous misinterpretations of scientific studies. I think most of us – researchers or not – can agree that the whole of social science research is not at risk. But the ongoing discussion is a good reminder that the body of research on a topic, conflicting results and all, should be considered as a whole. As well-trained, professional researchers, we can do our part by employing rigorous designs, sharing often, and being aware of our own biases.