# Gell-Mann Amnesia: A Definitive Ranking of Which Journalism Beats Are Most Filled With Hacks

> Published on ADIN (https://adin.chat/world/gell-mann-amnesia-a-definitive-ranking-of-which-journalism-beats-are-most-filled-with-hacks)
> Author: Daniel
> Date: 2026-03-03
> Last updated: 2026-03-07

Murray Gell-Mann wasn't a media critic--he was one of the greatest physicists of the 20th century, a [Nobel laureate](https://en.wikipedia.org/wiki/Murray_Gell-Mann) who discovered quarks and helped build the modern understanding of subatomic physics. He was also a polymath who spoke more than a dozen languages and co-founded the Santa Fe Institute. So why is his name attached to a cognitive bias about newspapers?

Michael Crichton--doctor, novelist, creator of *Jurassic Park*--coined the term after a conversation with Gell-Mann. He described the phenomenon like this:

> You open the newspaper to an article on a subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward--reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
>
> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

Crichton admitted he named it after Gell-Mann simply to "drop a famous name," but the label stuck because it captures a universal experience: experts recognize the errors in their domain, then promptly forget this insight when reading about everything else.

Which brings us to a [tweet from @horsewater](https://twitter.com/horsewater/status/2028906643921240496) that articulated what many experts feel:

> "Legal journalists are worse than tech journalists, and tech journalists trying to write about the law are worst of all."

This is the Gell-Mann Amnesia effect in action--someone with domain expertise calling out a specific hierarchy of hackery. But is this hierarchy actually supported by data?

**Let's find out. Let's quantify which journalists really are the worst.**

First, though, let's establish that journalism's accuracy problem isn't hypothetical. Here are three emblematic failures from institutions that set the standard:

## Three Egregious Examples of High-Profile Media Failure

**Rolling Stone's UVA Rape Story (2014):** A [Columbia Journalism Review investigation](https://www.cjr.org/investigation/rolling_stone_investigation.php) found that the magazine's sensational campus rape narrative collapsed under even basic verification. The story was retracted, awards were returned, and the magazine paid millions in settlements--an institutional failure "at basically every level."

**CNN's Retracted Russia-Investigation Story (2017):** Three CNN journalists resigned after publishing--then retracting--a report alleging Senate scrutiny of a Trump associate's Russian connections. The [BBC's coverage](https://www.bbc.com/news/world-us-canada-40414886) notes the story relied on a single anonymous source and bypassed normal editorial review.

**The New York Times' Gaza Hospital Error (2023):** After initially attributing a deadly hospital blast to an Israeli airstrike, the Times [walked back the claim](https://www.cnn.com/2023/10/24/media/gaza-hospital-coverage-walk-back) when evidence pointed to a misfired Palestinian rocket and a dramatically smaller death toll. The paper acknowledged its early coverage conveyed unwarranted certainty.

These aren't fringe publications. These are the institutions that define American journalism.

And these are just the errors that got caught.

## The Methodology: How Do You Measure "Hackery"?

Before ranking, we need metrics. I evaluated each journalism beat across five dimensions:

1. **Correction and Retraction Rates** - How often does the beat issue formal corrections?
2. **Defamation and Libel Litigation** - Which beats attract the most lawsuits?
3. **Expert Assessment of Accuracy** - What do specialists in each field say about coverage quality?
4. **Structural Incentives for Error** - Does the beat's nature encourage mistakes?
5. **Trust Metrics** - How does the public perceive accuracy in each category?

## The Rankings: From Worst to Best

### 7. TECHNOLOGY JOURNALISM (Worst)

**Hackery Score: 9.2/10**

Tech journalism occupies a uniquely cursed position in the media landscape. The beat combines rapid news cycles, complex technical concepts, massive corporate PR machines, and reporters who often lack engineering backgrounds trying to explain engineering to readers who also lack engineering backgrounds.

**The Evidence:** The [BBC/EBU 2025 study on AI assistants](https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content) found that AI systems misrepresent news content 45% of the time--but this mirrors a broader problem: tech journalism itself often misrepresents technical realities. When AI systems are trained on tech journalism, they inherit its errors.

Tech journalism's structural problems include:

- **Access journalism dependency**: Major tech companies control access, creating an incentive to publish favorable coverage
- **Speed over accuracy**: The race to be first on product announcements leads to republishing press releases as news
- **Technical illiteracy**: Many tech reporters lack the background to evaluate claims critically
- **Hype cycle participation**: Reporters benefit from breathless coverage of "revolutionary" technologies

The Financial Times' recent retraction of its Tesla "$1.4 billion missing" story exemplifies the problem--a major financial/tech publication made accounting claims that were, as it later admitted, "completely wrong." By the time of the correction, the damage was already done; the erroneous story had spread across outlets.

**Defamation Risk:** Moderate. Tech companies prefer PR battles to courtrooms, though cases like Hulk Hogan v. Gawker ($140 million verdict) show the catastrophic potential when tech-adjacent media gets it wrong.

### 6. LEGAL JOURNALISM

**Hackery Score: 8.8/10**

The tweet that inspired this article specifically called out legal journalists, and the data supports the criticism. Legal journalism suffers from a fundamental problem: law is genuinely complex, and most legal journalists are not lawyers.

**The Evidence:** Legal experts have cataloged recurring errors that they find maddening in court coverage:

1. Confusing "the Court ruled" with "the Court held" (legal terms of art with different meanings)
2. Misunderstanding what "standing" means
3. Conflating majority opinions with concurrences

The [Media Law Resource Center's empirical study](https://medialaw.org/chapter-3-the-empirical-reality-ofcontemporary-libel-litigation/) of 246 libel cases against major news organizations found that legal reporting errors often stem from misunderstanding procedural posture--reporters announce "X wins lawsuit" when what actually happened was "X survived a motion to dismiss."

**The Intersection Problem:** The tweet specifically noted that "tech journalists trying to write about the law are worst of all." This is demonstrably true in AI/copyright coverage, where reporters routinely:

- Confuse copyright registration with copyright protection
- Misstate what the Copyright Office actually ruled
- Conflate "AI cannot be an author" with "AI-generated works cannot be copyrighted" (these are different claims)

**Defamation Risk:** High. Legal reporting errors about ongoing cases can constitute defamation per se. The MLRC study found that among cases that went to trial, plaintiffs won 58.5% of the time.

### 5. SCIENCE AND HEALTH JOURNALISM

**Hackery Score: 7.5/10**

Science journalism has a unique problem: the incentive structure rewards reporting on preliminary findings that often don't replicate.

**The Evidence:** A [PLOS ONE study on cancer news coverage](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0242133) found systematic bias and quality problems in how media reports cancer research. Key findings:

- Studies with positive results were more likely to be covered
- Limitations and caveats were routinely omitted
- Conflicts of interest were rarely disclosed

The Journal of Health Communication published research on "Inaccuracy in Health Research News" that developed a typology of errors, finding that scientists themselves rated news coverage of their research as inaccurate at alarming rates.

A Taylor & Francis study asked "Do Journalists Update Retracted Science News?" The answer: rarely. When scientific papers are retracted, the news stories based on them often remain online, uncorrected.

**Structural Problems:**

- "Breakthrough" framing for incremental findings
- Failure to explain statistical significance vs. clinical significance
- Reporting on preprints as if they were peer-reviewed
- The replication crisis means many reported findings are simply false

**Defamation Risk:** Low.
Scientists rarely sue, and health claims are difficult to prove defamatory.

### 4. POLITICAL JOURNALISM

**Hackery Score: 6.8/10**

Political journalism is paradoxically both heavily scrutinized and persistently inaccurate. The scrutiny comes from partisan fact-checkers; the inaccuracy comes from the adversarial nature of political coverage.

**The Evidence:** The Harvard Kennedy School's Misinformation Review published research on fact-checking organizations themselves, finding that fact-checkers focus disproportionately on famous politicians rather than on the most consequential false claims.

A Nature study found that "differences in misinformation sharing can lead to politically asymmetric sanctions"--meaning the same type of error is treated differently depending on which political side it benefits.

The [mediaaccuracy.net database](https://www.mediaaccuracy.net/explore) found that accuracy varies dramatically by policy domain:

- **Defense spending coverage**: Highest accuracy
- **Welfare coverage**: Lowest accuracy
- **Environmental coverage**: Very low accuracy

This suggests political journalism accuracy depends heavily on the specific policy area, with domestic policy coverage being significantly worse than foreign policy/defense coverage.

**Defamation Risk:** Very High. Political figures are "public figures" who must prove actual malice, but they also have the resources to sue. The Trump v. ABC News settlement ($15 million) and Trump v. New York Times ($15 billion lawsuit pending) show the stakes.

### 3. FINANCIAL/BUSINESS JOURNALISM

**Hackery Score: 5.9/10**

Financial journalism benefits from a built-in accuracy check: markets react to information, and incorrect information gets corrected by price movements. This creates a feedback loop that other beats lack.

**The Evidence:** The Financial Times' Tesla retraction and CNN Money's ECB interest rate error (which moved markets before correction) show that financial journalism errors have immediate, measurable consequences. This accountability mechanism, while imperfect, creates stronger incentives for accuracy.

A Journal of Accounting Research study on "News Bias in Financial Journalists' Social Networks" found that financial journalists' social connections do influence coverage, but the effect is more subtle than in other beats.

Finimize's analysis of why media keeps getting financial terms wrong identified common errors: confusing revenue with profit, misunderstanding market cap, and misreporting percentage changes.

**Structural Advantages:**

- Quantitative claims are verifiable
- Market reactions provide immediate feedback
- Regulatory requirements create paper trails
- Financial literacy among readers is higher than average

**Defamation Risk:** Moderate to High. Financial misreporting can constitute securities fraud, creating liability beyond defamation.

### 2. SPORTS JOURNALISM

**Hackery Score: 4.2/10**

Sports journalism is, surprisingly, among the more accurate beats. The reasons are structural: outcomes are objective, statistics are abundant, and fans are intensely knowledgeable fact-checkers.

**The Evidence:** An MDPI study on football transfer news found misinformation in transfer reporting, but the errors were primarily about *predictions* (which player would sign where) rather than *facts* (what actually happened). This is a crucial distinction--sports journalism's factual accuracy is high; its predictive accuracy is low.

The Atlanta Journal-Constitution's UGA football corrections case illustrates that when sports journalism does make errors, the correction process is often robust. The AJC issued detailed corrections in response to a nine-page letter from university attorneys.

**Structural Advantages:**

- Box scores don't lie
- Video evidence is abundant
- Passionate fan bases fact-check obsessively
- Statistics are standardized and verifiable

**Defamation Risk:** Moderate. Athletes do sue (see: numerous cases involving performance-enhancing drug allegations), but the factual nature of sports coverage provides a strong defense.

### 1. LOCAL NEWS (Best)

**Hackery Score: 3.5/10**

Local journalism consistently ranks as the most trusted and, by several metrics, the most accurate form of news coverage.

**The Evidence:** The [Knight Foundation/Gallup study on public trust in local news](https://knightfoundation.org/reports/state-of-public-trust-in-local-news/) found that Americans trust local news significantly more than national news. This trust appears to be earned: local journalists are embedded in their communities and face direct accountability from readers who can verify coverage against their own experience.

[Pew Research's 2024 study on local crime news](https://www.pewresearch.org/journalism/2024/08/29/quality-of-local-crime-news/) found that while local crime coverage has quality issues, readers generally rate it as accurate and fair.

A 2025 UK study found that 80% of adults trust local media, up from 73% in 2024--a sharp rise attributed to local media's contrast with AI-generated misinformation online.

**Structural Advantages:**

- Proximity to subjects enables verification
- Readers can fact-check against personal knowledge
- Smaller scale means fewer complex claims
- Community accountability is direct and personal

**Defamation Risk:** Low to Moderate. Local figures are often private persons (a lower legal bar for plaintiffs), but local papers also have closer relationships with sources, reducing adversarial errors.

### Honorable Mention: ARTS AND ENTERTAINMENT JOURNALISM

**Hackery Score: N/A (Different Category)**

Arts journalism operates under different accuracy standards because much of it is inherently subjective (criticism, reviews). You can't "fact-check" whether a movie is good. However, factual claims within arts journalism (box office numbers, award nominations, biographical details) are generally accurate due to easily verifiable sources.

## The Comprehensive Ranking

| Rank | Beat | Hackery Score | Defining Factor |
|------|------|---------------|-----------------|
| 7 (Worst) | Technology | 9.2/10 | Technical illiteracy + access journalism |
| 6 | Legal | 8.8/10 | Complexity + non-lawyer reporters |
| 5 | Science/Health | 7.5/10 | Hype cycle + replication crisis |
| 4 | Political | 6.8/10 | Partisan pressure + speed |
| 3 | Financial | 5.9/10 | Market feedback helps, but complexity hurts |
| 2 | Sports | 4.2/10 | Objective outcomes + statistical verification |
| 1 (Best) | Local | 3.5/10 | Community accountability + verifiability |

## The Verdict: Was @horsewater Right?

The data largely supports the tweet's hierarchy. Legal journalists *are* bad (ranked 6th), tech journalists *are* worse (ranked 7th), and the intersection--tech journalists writing about law--combines the worst of both worlds: technical complexity they don't understand layered on top of legal complexity they also don't understand.

The solution isn't to distrust all journalism--that way lies conspiracy thinking. It's to calibrate your skepticism by beat. When reading tech coverage of legal issues, apply maximum scrutiny. When reading local coverage of local events from reporters embedded in the community, you can relax somewhat.

And when you catch an error in your area of expertise, don't just roll your eyes and turn the page. Remember that feeling. Carry it with you to the next section.

Murray Gell-Mann figured out what quarks are made of.
The least we can do is remember what he and Crichton figured out about newspapers.

**Sources:**

- [Columbia Journalism Review: Rolling Stone Investigation](https://www.cjr.org/investigation/rolling_stone_investigation.php)
- [BBC: CNN Journalists Resign Over Retracted Russia Story](https://www.bbc.com/news/world-us-canada-40414886)
- [CNN: Gaza Hospital Coverage Walk-Back](https://www.cnn.com/2023/10/24/media/gaza-hospital-coverage-walk-back)
- [Wikipedia: Murray Gell-Mann](https://en.wikipedia.org/wiki/Murray_Gell-Mann)
- [Media Accuracy Database](https://www.mediaaccuracy.net/explore)
- [MLRC: Empirical Reality of Contemporary Libel Litigation](https://medialaw.org/chapter-3-the-empirical-reality-ofcontemporary-libel-litigation/)
- [BBC/EBU: News Integrity in AI Assistants](https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content)
- [Pew Research: Quality of Local Crime News](https://www.pewresearch.org/journalism/2024/08/29/quality-of-local-crime-news/)
- [Knight Foundation: State of Public Trust in Local News](https://knightfoundation.org/reports/state-of-public-trust-in-local-news/)
- [PLOS ONE: Cancer in the News](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0242133)
- [News Co/Lab: Journalism Corrections](https://newscollab.org/2019/11/07/journalism-corrections-in-theory-and-practice-its-complicated/)
- [Original Tweet by @horsewater](https://twitter.com/horsewater/status/2028906643921240496)