During the first three months of the COVID-19 pandemic, researchers identified over 2,300 distinct pieces of health misinformation circulating on social media in 25 languages. The WHO called it an "infodemic": false cure claims, fabricated death counts, conspiracy theories about virus origins.
All of it spread faster than the virus itself. A 2020 study in the American Journal of Tropical Medicine and Hygiene directly linked misinformation to approximately 800 deaths worldwide during that period, including fatal poisonings from methanol promoted as a COVID cure.
Every outbreak since has followed the same pattern. Mpox in 2022, avian influenza in 2024, Marburg scares in 2025. The disease changes. The misinformation playbook doesn't.
Why does misinformation spread faster during outbreaks?
Fear, uncertainty, and information gaps create the perfect conditions. When a new pathogen emerges, legitimate science can't produce answers as fast as people demand them. Social media fills that vacuum with speculation, and algorithms reward engagement over accuracy.
MIT researchers found in 2018 that false news stories on Twitter were 70% more likely to be retweeted than true stories and reached 1,500 people six times faster. That research predated the pandemic era. Add genuine fear of illness and death, and sharing velocity increases further. You don't carefully evaluate a post about a "proven natural cure" when you're scared for your family. You share it just in case.
Outbreak misinformation also spreads because it offers simplicity. Real epidemiology is complicated, full of confidence intervals, caveats, and evolving understanding. Misinformation offers certainty: this causes it, this cures it, these people are to blame. Certainty feels better than ambiguity, even when it's fabricated.
What patterns show up every time?
Misinformation during outbreaks follows recognizable templates. Once you can name them, you start spotting them immediately.
Miracle cure claims. Bleach, colloidal silver, ivermectin, hydroxychloroquine, essential oils, alkaline water. Every outbreak produces a wave of cure claims for substances with no demonstrated efficacy against the pathogen in question. The ivermectin cycle during COVID-19 is the clearest example: a single in vitro study, showing antiviral activity at concentrations impossible to achieve in a human body, became the basis for a global movement; ivermectin-related calls to US poison control centers rose 245% in August 2021.
Case count manipulation. "The real numbers are 10x higher" or "these deaths are actually from something else." Both directions serve different narratives, but both distort your understanding of actual risk. Real undercount estimates exist and are published by WHO and academic researchers with methodology you can verify. Social media posts claiming specific alternate numbers almost never cite verifiable sources.
Source fabrication. "A Harvard study found..." or "CDC insiders say..." with no link, no author name, no journal citation. Fabricated authority is the cheapest misinformation tactic. It costs nothing to type "according to researchers at Johns Hopkins" before a false claim. It takes 30 seconds on Google Scholar to confirm whether that study exists.
The "just asking questions" tactic. Phrasing false claims as questions creates plausible deniability. "Why isn't anyone talking about the connection between [unrelated thing] and [outbreak]?" People are talking about it. It's been investigated and found to have no evidence. Framing misinformation as suppressed knowledge makes it feel more credible, not less.
How do you verify a health claim in 60 seconds?
You don't need a medical degree. You need a process.
Step 1: Check the source. Does the claim link to a named author, a specific study, or an identifiable institution? If the source is "a doctor on TikTok" or a screenshot of text with no attribution, stop sharing. Legitimate health findings are published in identifiable places by people willing to attach their names.
Step 2: Search WHO and CDC directly. Go to who.int and cdc.gov and search for the specific claim. If a pathogen is supposedly spreading uncontrollably or a treatment has been proven effective, these agencies will have statements. Absence of confirmation from primary health authorities is itself a data point.
Step 3: Check Google Scholar. For claims referencing scientific research, paste the key terms into Google Scholar. Find the actual paper. Read the abstract. A study showing a substance killed a virus in a petri dish does not mean it works as a treatment in humans, a distinction that misinformation consistently ignores.
Step 4: Look for the red flags. Emotional language designed to provoke outrage or fear. Phrases like "they don't want you to know" or "what the media won't tell you." Urgency to share before you verify. Anonymous sources. Claims that an entire scientific establishment is wrong and one outsider has the answer.
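If you want to see that checklist as logic, here is a minimal sketch in Python. The phrase lists, the scoring, and the "unlinked study" heuristic are illustrative assumptions, not a validated classifier; real misinformation screening needs far more than string matching.

```python
# Illustrative heuristic screen for outbreak posts. The phrase lists and
# scoring below are assumptions for demonstration, not a validated classifier.

RED_FLAG_PHRASES = [
    "they don't want you to know",
    "what the media won't tell you",
    "share this before it gets deleted",
    "proven natural cure",
    "doctors are hiding",
]

URGENCY_MARKERS = ["share now", "before it's too late"]


def red_flag_score(post: str) -> int:
    """Count how many heuristic red flags appear in a post."""
    text = post.lower()
    score = sum(phrase in text for phrase in RED_FLAG_PHRASES)
    score += sum(marker in text for marker in URGENCY_MARKERS)
    # Fabricated authority: invokes a "study" with no link and no author.
    if "study" in text and "http" not in text:
        score += 1
    return score


if __name__ == "__main__":
    sample = ("A Harvard study PROVES this natural cure works. "
              "They don't want you to know. Share now!")
    print(red_flag_score(sample))  # 3: conspiracy phrase, urgency, unlinked study
```

The point is not the code but the habit: each red flag is a cheap, checkable signal, and a post that trips several at once deserves verification before a share.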
Which sources can you actually trust?
No source is infallible, but some have institutional accountability, peer review processes, and track records that make them fundamentally more reliable than anonymous social media accounts.
Tier 1, Primary authorities: WHO Disease Outbreak News, CDC Morbidity and Mortality Weekly Report (MMWR), ECDC Communicable Disease Threats Report. These agencies make errors and sometimes move slowly, but they correct errors publicly and their data collection methods are transparent.
Tier 2, Peer-reviewed journals: The Lancet, New England Journal of Medicine (NEJM), Nature, Science, JAMA. Peer review is imperfect, and fraudulent papers do get published. But the retraction and correction process means bad science gets identified and publicly flagged, sometimes within weeks of publication.
Tier 3, Aggregated surveillance: PandemicAlarm pulls data from WHO, CDC, ECDC, ProMED, and verified news sources. Aggregation from multiple authoritative feeds reduces single-source bias and gives you a consolidated view of what's actually confirmed versus what's rumored; a sketch of that corroboration idea follows this list.
Tier 4, Responsible journalism: Reuters Health, AP Medical, STAT News, BBC Health. Professional health journalists verify claims before publishing and issue corrections when wrong. They're not perfect, but they have editorial processes that social media posts do not.
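To illustrate why Tier 3 aggregation helps, here is a small Python sketch that keeps only events reported by at least two independent feeds. The report format, source names, and two-source threshold are hypothetical; this shows the principle of cross-source corroboration, not PandemicAlarm's actual pipeline.

```python
# Illustrative cross-source corroboration. The report tuples and source
# names are hypothetical examples, not PandemicAlarm's actual data model.

from collections import defaultdict

# Each report is (source, disease, country) -- a minimal stand-in for a feed item.
reports = [
    ("WHO", "mpox", "DRC"),
    ("CDC", "mpox", "DRC"),
    ("ProMED", "mpox", "DRC"),
    ("anonymous_post", "bleach cure", "unknown"),  # one unverified source
]


def corroborated_events(reports, min_sources=2):
    """Keep events that at least `min_sources` distinct feeds have reported."""
    sources_by_event = defaultdict(set)
    for source, disease, country in reports:
        sources_by_event[(disease, country)].add(source)
    return {event: sources
            for event, sources in sources_by_event.items()
            if len(sources) >= min_sources}


print(corroborated_events(reports))
# {('mpox', 'DRC'): {'WHO', 'CDC', 'ProMED'}} -- the single-source rumor drops out
```

Two independent sources is an arbitrary cutoff here, but it captures the idea: a claim carried by one anonymous feed is treated differently from an event that WHO, CDC, and ProMED all report.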
Why doesn't correcting people always work?
Presenting someone with factual evidence that contradicts their belief can actually strengthen that belief. Psychologists call this the backfire effect. Later research suggests it is not universal, but corrections are most likely to fail when health beliefs are tied to identity, political affiliation, or distrust of institutions.
A 2014 study in Pediatrics by Brendan Nyhan, Jason Reifler, and colleagues found that corrective information about vaccine safety made the most vaccine-skeptical parents less likely to say they would vaccinate, not more. The correction triggered a defensive reaction that reinforced the original belief.
What does work, according to subsequent research: acknowledging uncertainty honestly rather than presenting science as infallible, sharing information from within someone's trusted community rather than from outside authorities they've already rejected, and reducing shame. People who shared misinformation are more likely to update their views when they aren't made to feel stupid for believing it in the first place.
You can't fact-check your way out of an infodemic. But you can control what you share, where you get your information, and how you talk about outbreaks with the people around you. Start with your own information diet. Check PandemicAlarm for verified outbreak data. Verify before you forward. And when you're uncertain, say so. Honest uncertainty is more valuable than confident misinformation.