
4.21: Is Science Even Trustworthy?


    As information becomes more and more accessible, it can also become more confusing. It almost seems that people can use the phrase “studies say ____” and then insert whatever factoid they want to be true. This line works in sales pitches for everything from mattresses to coffee, in online arguments over which diet is best, and of course in starting new drama at the family gathering over the effectiveness of vaccines. To be fair, anyone with access to Google can probably dig up a random research article that “proves” their point, no matter what that point is. But do we really know whether it is quality research, or whether it is being interpreted correctly? This is where a lot of health “influencers” - and even researchers themselves - can end up spreading misinformation, even with good intentions.

    First, let’s be clear that “the science” - or what we should more accurately call scientific consensus - does change over time, and it should. Scientific consensus is simply what scientists and researchers can mostly agree on, based on the evidence available at a given point in time. As new research is performed, new methods of investigation become available, and we learn more about ourselves and the world, the scientific consensus should be updated. A hallmark of the scientific method is that it recognizes the limitations of research and promotes an open mind toward learning and testing new discoveries. “Good science” means constantly testing our conclusions and trying to prove ourselves wrong in as many ways as possible before we consider those conclusions “correct”.

    Two ways to ensure that a particular research study is done well are to make sure that its research methods are valid and reliable. Validity refers to how we measure outcomes: does the testing method actually measure what it is supposed to? Another way to think of validity is the accuracy of an arrow hitting a target. If the test consistently hits the “bullseye” of the target (the thing we are trying to measure), then it is accurate, or valid. Most testing methods have to be validated, or compared to a “gold standard” - the best measurement we have at the time. For example, if we are using the 1.5-mile run to estimate a person’s cardiorespiratory fitness (CRF), it should have been validated against a VO2max test, the gold standard for measuring CRF. All of our study participants who score high on the 1.5-mile run would presumably also have scored high on a VO2max test, and there needs to be ample evidence that the 1.5-mile run score reflects the subject’s ability to perform cardio exercise.

    Reliability simply means that the test will give us the same results each time it is used. There are two kinds of reliability: inter-tester and intra-tester reliability. Inter-tester means between testers: for example, if we had two coaches at the finish line with stopwatches measuring the 1.5-mile run, they should get the same result for the same subject. If the test is too subjective or open to interpretation by the tester, we are going to get different scores from different testers, and therefore the test is unreliable in the way it is performed. We should also be able to get the same score from the same tester if the test is repeated. This is called intra-tester reliability. If the same researcher gets a significantly different outcome each time they repeat the test on a subject, then the test itself is not reliable and shouldn’t be used. To use our previous analogy of arrows and the bullseye on a target, a reliable test is one in which all of the arrows cluster closely around the same area. See Figure \(\PageIndex{1}\) for a visual representation of this analogy.

    Diagram of validity and reliability.
    Figure \(\PageIndex{1}\): Validity and Reliability Analogy.

    Another requirement for good research lies in the interpretation of the research by the researchers themselves. Researchers are human just like everyone else, and the conclusions they draw from their own studies may contain errors. The goal of the peer-review process is to identify errors and weaknesses in the study methods, or in the interpretation of the results, and correct them before the study is published. This is one reason why research articles often aren’t published until months, or even years, after the research study was completed. And even though other scientists - the peers of the researchers, who are experts in the area being studied - review these articles before publication, sometimes mistakes still slip through, or the conclusions are debated. Scientists often have lengthy debates over the interpretation of research results. Over time, these studies may be repeated, and new studies will be published, perhaps providing new insights. Once a large body of research supports a particular conclusion, scientists will often come together and develop a consensus. This consensus is simply an agreement among the people who know the most about a particular topic that they are confident in their conclusions. Sometimes a consensus isn’t reached, because there are too many research studies with conflicting evidence, or because the evidence is equivocal - which means it is open to several different interpretations.

    Image of human colorectal cancer cells.
    Figure \(\PageIndex{2}\): Human colorectal cancer cells treated with a topoisomerase inhibitor and an inhibitor of the protein kinase ATR (ataxia telangiectasia and Rad3 related), a drug combination under study as a cancer therapy. Cell nuclei are stained blue; the chromosomal protein histone gamma-H2AX marks DNA damage in red and foci of DNA replication in green. (Copyright; created by Yves Pommier and Rozenn Josse via National Cancer Institute on Unsplash, 2014)

    A historical problem with high-quality research is that it is often behind a paywall. The most reputable scientific journals often require expensive annual professional membership fees from readers. Students can sometimes avoid these fees while enrolled in higher education, through their college or university’s library access to scholarly journals - but then the college or university is footing the bill. Once students graduate, they no longer have access to these library databases and must pay professional fees to access a particular journal’s current or back issues. Researchers also typically have to pay a fee to submit their research for publication, and peer reviewers scrutinize their work without compensation. All of these practices amount to an inequitable “gatekeeping” of information, restricting it to the professional communities and academic institutions that can afford membership and the researchers who can afford publication. In response to this inequity, many publications are becoming open access, or freely available online. While this is a positive development, open-access publishers still face challenges. In an effort to collect revenue, some have reduced (or entirely eliminated) their peer-review process and require researchers to pay even more exorbitant fees for publication. This calls into question the scientific rigor of these “predatory” publications, even though they are freely accessible. Not all open-access publications are this untrustworthy, but it can be hard for the average internet surfer to tell the difference. Academics, librarians, and universities across the globe are calling for a revolution of sorts to make high-quality, peer-reviewed scientific research publications free to the public (Resnick & Belluz, 2019).


    This page titled 4.21: Is Science Even Trustworthy? is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Erin Calderone.
