
By: Steven Long, DO, MS-HSA, NASM-CPT
In an age of health podcasts, social media influencers, supplement start-ups, and rapid-fire studies with clickbait headlines, it’s becoming harder—not easier—to know what to trust. Whether you’re evaluating a new therapy, medication, diet, or “biohack,” the real question remains: Does this intervention actually cause the outcome it claims to?
At Beyond Health, we welcome curiosity. Patients today are more informed, more inquisitive, and more invested in their own health journeys than ever before—and that’s a good thing. But curiosity without a framework can lead to confusion or even harm. To separate science from speculation, we rely on tools like the Bradford Hill criteria—a timeless and methodical approach to evaluating whether a relationship between two things is likely to be causal.
We teach this framework to our patients, especially because we are so often asked to evaluate articles and protocols shared by friends, social media, or marketing-driven platforms. This is how you become a more discerning, empowered, and evidence-savvy consumer.
Originally proposed by British epidemiologist Sir Austin Bradford Hill in 1965, the Bradford Hill criteria help determine whether an observed association is likely to be causal. These nine principles continue to be a cornerstone of public health, clinical research, and evidence-based medicine.
Let’s break them down—one by one—with examples.
1. Strength of Association
A strong association between two variables increases the likelihood that one is causing the other. For instance, smokers are more than 20 times as likely to develop lung cancer as nonsmokers. That’s not subtle. That’s robust.
Why it matters: Weak associations (e.g., “5% improvement in memory from a supplement”) may be due to chance, bias, or confounding variables. Strong associations hold up better across populations.
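To make “strength of association” concrete, here is a minimal sketch in Python of how epidemiologists quantify it as a relative risk. The counts are invented for illustration, not real study data:

```python
# Illustrative only: relative risk from a hypothetical 2x2 exposure/outcome table.
# All counts below are made up for demonstration purposes.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of the outcome in the exposed group divided by risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 200 of 1,000 smokers vs. 10 of 1,000 nonsmokers develop disease.
rr = relative_risk(200, 1000, 10, 1000)
print(f"Relative risk: {rr:.1f}")  # about a 20-fold difference in risk
```

A relative risk near 1 means the exposure and outcome move together weakly, if at all; a relative risk of 20 is the kind of strong association Hill had in mind.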
2. Consistency
Have the findings been replicated across different studies, populations, and conditions? The link between hypertension and stroke risk has been shown consistently across decades of research.
Why it matters: One-off findings from a single lab, especially if industry-funded, are more suspect. Reproducibility is a hallmark of credible science.
3. Specificity
This asks whether a particular exposure leads to a specific outcome. While few health conditions are this straightforward, specificity increases our confidence in a causal link. A classic example: thalidomide exposure in pregnant women led to a very specific pattern of limb malformations (phocomelia).
Why it matters: If a single factor seems to cause 10 unrelated symptoms, skepticism is warranted.
4. Temporality
The cause must occur before the effect. If you lose weight before starting a supplement, that supplement likely didn’t help.
Why it matters: Many health claims fall into the trap of post hoc reasoning. Temporality protects us from making backward assumptions.
5. Biological Gradient (Dose-Response)
Does more of the exposure produce more of the effect? For example, the more cigarettes someone smokes, the higher their risk of lung cancer. Likewise, higher statin doses correlate with greater LDL reduction.
Why it matters: Dose-response supports the idea that the variable is not only related to the outcome, but may be driving it.
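As a simple illustration of the dose-response idea, this short Python sketch checks whether risk rises step by step with dose. The doses and risk figures are invented for demonstration, not taken from any study:

```python
# Illustrative only: checking for a dose-response (monotonic) pattern.
# Doses and relative risks below are hypothetical numbers for demonstration.

doses = [0, 5, 10, 20, 40]            # e.g., cigarettes per day (hypothetical)
risk = [1.0, 4.0, 8.5, 14.0, 22.0]    # relative risk at each dose (hypothetical)

# A dose-response relationship implies risk rises as the dose rises.
monotonic = all(r2 > r1 for r1, r2 in zip(risk, risk[1:]))
print("Risk increases with dose:", monotonic)
```

If risk jumped around randomly as the dose increased, the gradient criterion would count against a causal interpretation.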
6. Plausibility
Is there a biologically sound mechanism? For example, we understand how beta-blockers slow heart rate: they block beta-adrenergic receptors.
Why it matters: If the mechanism defies physiology, it’s unlikely to be true—even if a study reports statistical significance. Implausible claims require extraordinary evidence.
7. Coherence
Does the relationship fit with what we already know about biology, chemistry, and related research? If a supplement supposedly “reverses aging” but contradicts decades of aging science, coherence is lacking.
Why it matters: If something feels too good to be true—and doesn’t fit the larger evidence base—it usually is.
8. Experiment
Randomized controlled trials (RCTs) are the gold standard. If changing the exposure (e.g., removing a toxin, starting a medication) produces a predictable change in the outcome, that’s strong evidence.
Why it matters: Observational studies are useful but limited. Experimental evidence helps us account for placebo effects, confounding variables, and regression to the mean.
9. Analogy
Do similar, well-established relationships make this one more believable? For instance, knowing that fluoroquinolone antibiotics can cause tendon rupture makes us more cautious about other drugs in the same class.
Why it matters: Analogy doesn’t prove causation but supports cautious interpretation when direct data is limited.
Many of our patients arrive with links to articles, influencer posts, or supplement company blogs claiming amazing benefits. While curiosity is welcome, many of these sources fail even the most basic Bradford Hill standards.
Research suggests a significant portion of health-related information on the internet is misleading or incorrect. A study by Cuan-Baltazar et al. (2020) in JMIR Public Health and Surveillance found that only 30% of online COVID-19 health information met HONcode standards for quality and transparency.
Another widely cited paper, published in PLoS Medicine by Ioannidis (2005), argued that most published research findings are false, especially when studies are small, effects are weak, or conflicts of interest exist.
We use this framework every day.
More importantly—we teach it to you. You don’t need a PhD to ask smart questions. You just need a framework.
If you’re curious, analytical, and want to make informed decisions, this is your roadmap. Don’t settle for headlines. Ask better questions.
We believe in exploration. But we insist on structure. Whether you're exploring peptides, new diets, CGMs, or personalized protocols, let’s run it through the Bradford Hill filter—together.
Because real progress doesn’t come from hype. It comes from asking the right questions.
Schedule a free consultation and see how Beyond Health helps turn curiosity into clarity.
Cuan-Baltazar, J. Y., Muñoz-Perez, M. J., Robledo-Vega, C., Pérez-Zepeda, M. F., & Soto-Vega, E. (2020). Misinformation of COVID-19 on the Internet: Infodemiology Study. JMIR Public Health and Surveillance, 6(2), e18444. https://doi.org/10.2196/18444
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Hill, A. B. (1965). The Environment and Disease: Association or Causation? Proceedings of the Royal Society of Medicine, 58(5), 295–300.