Gell-Mann Amnesia: A Definitive Ranking of Which Journalism Beats Are Most Filled With Hacks

Murray Gell-Mann wasn't a media critic--he was one of the greatest physicists of the 20th century, a Nobel laureate who discovered quarks and helped build the modern understanding of subatomic physics. He was also a polymath who spoke more than a dozen languages and co-founded the Santa Fe Institute.
So why is his name attached to a cognitive bias about newspapers?
Michael Crichton--doctor, novelist, creator of Jurassic Park--coined the term after a conversation with Gell-Mann. He described the phenomenon like this:
You open the newspaper to an article on a subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward--reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
Crichton admitted he named it after Gell-Mann simply to "drop a famous name," but the label stuck because it captures a universal experience: experts recognize the errors in their domain, then promptly forget this insight when reading about everything else.
Which brings us to a tweet from @horsewater that articulated what many experts feel:
"Legal journalists are worse than tech journalists, and tech journalists trying to write about the law are worst of all."
This is the Gell-Mann Amnesia effect in action--someone with domain expertise calling out a specific hierarchy of hackery. But is this hierarchy actually supported by data? Let's find out by quantifying which journalism beats really are the worst.
First, though, let's establish that journalism's accuracy problem isn't hypothetical. Here are three emblematic failures from institutions that set the standard:
Three Egregious Examples of High-Profile Media Failure
Rolling Stone's UVA Rape Story (2014): A Columbia Journalism Review investigation found that the magazine's sensational campus rape narrative collapsed under even basic verification. The story was retracted, awards were returned, and the magazine paid millions in settlements. An institutional failure "at basically every level."
CNN's Retracted Russia-Investigation Story (2017): Three CNN journalists resigned after publishing--then retracting--a report alleging Senate scrutiny of a Trump associate's Russian connections. The BBC's coverage notes the story relied on a single anonymous source and bypassed normal editorial review.
The New York Times' Gaza Hospital Error (2023): Initially attributing a deadly hospital blast to an Israeli airstrike, the Times later walked back the claim after evidence pointed to a misfired Palestinian rocket and a dramatically smaller death toll. The paper acknowledged its early coverage conveyed unwarranted certainty.
These aren't fringe publications. These are the institutions that define American journalism. And these are just the errors that got caught.
The Methodology: How Do You Measure "Hackery"?
Before ranking, we need metrics. I evaluated each journalism beat across five dimensions:
- Correction and Retraction Rates - How often does the beat issue formal corrections?
- Defamation and Libel Litigation - Which beats attract the most lawsuits?
- Expert Assessment of Accuracy - What do specialists in each field say about coverage quality?
- Structural Incentives for Error - Does the beat's nature encourage mistakes?
- Trust Metrics - How does the public perceive accuracy in each category?
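To make the methodology concrete, here is a minimal sketch of how five dimension ratings might be combined into a single 0-10 hackery score. The weights and example inputs are illustrative assumptions of mine, not figures drawn from the article's sources:

```python
# Hypothetical scoring sketch. Dimension names follow the five criteria
# above; the equal weighting and the example ratings are assumptions.

DIMENSIONS = [
    "correction_rate",       # formal corrections and retractions
    "litigation",            # defamation and libel exposure
    "expert_assessment",     # specialist judgments of coverage quality
    "structural_incentives", # does the beat's nature encourage error?
    "trust",                 # inverse of public trust in the beat
]

def hackery_score(ratings, weights=None):
    """Weighted average of per-dimension ratings, each on a 0-10 scale."""
    if weights is None:
        weights = {d: 1.0 for d in DIMENSIONS}  # equal weighting by default
    total = sum(weights[d] for d in DIMENSIONS)
    return round(sum(ratings[d] * weights[d] for d in DIMENSIONS) / total, 1)

# Illustrative only: a beat rated poorly on every dimension scores high overall.
tech = {"correction_rate": 9, "litigation": 7, "expert_assessment": 10,
        "structural_incentives": 10, "trust": 10}
print(hackery_score(tech))  # 9.2
```

The point of the sketch is simply that a composite score hides judgment calls: changing the weights changes the ranking, which is worth keeping in mind when reading the numbers below.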
The Rankings: From Worst to Best
7. TECHNOLOGY JOURNALISM (Worst)
Hackery Score: 9.2/10
Tech journalism occupies a uniquely cursed position in the media landscape. The beat combines rapid news cycles, complex technical concepts, massive corporate PR machines, and reporters who often lack engineering backgrounds trying to explain engineering to readers who also lack engineering backgrounds.
The Evidence:
The BBC/EBU 2025 study on AI assistants found that AI systems misrepresent news content 45% of the time--but this mirrors a broader problem: tech journalism itself often misrepresents technical realities. When AI systems are trained on tech journalism, they inherit its errors.
Tech journalism's structural problems include:
- Access journalism dependency: Major tech companies control access, creating incentive to publish favorable coverage
- Speed over accuracy: The race to be first on product announcements leads to republishing press releases as news
- Technical illiteracy: Many tech reporters lack the background to evaluate claims critically
- Hype cycle participation: Reporters benefit from breathless coverage of "revolutionary" technologies
Defamation Risk: Moderate. Tech companies prefer PR battles to courtrooms, though cases like Bollea v. Gawker (Hulk Hogan's suit, which ended in a $140 million verdict) show the catastrophic potential when tech-adjacent media gets it wrong.
6. LEGAL JOURNALISM
Hackery Score: 8.8/10
The tweet that inspired this article specifically called out legal journalists, and the data supports the criticism. Legal journalism suffers from a fundamental problem: law is genuinely complex, and most legal journalists are not lawyers.
The Evidence:
Legal experts have cataloged recurring errors that they find maddening in court coverage:
- Confusing "the Court ruled" with "the Court held" (legal terms of art with different meanings)
- Misunderstanding what "standing" means
- Conflating majority opinions with concurrences
The Intersection Problem: The tweet specifically noted that "tech journalists trying to write about the law are worst of all." This is demonstrably true in AI/copyright coverage, where reporters routinely:
- Confuse copyright registration with copyright protection
- Misstate what the Copyright Office actually ruled
- Conflate "AI cannot be an author" with "AI-generated works cannot be copyrighted" (these are different claims)
5. SCIENCE AND HEALTH JOURNALISM
Hackery Score: 7.5/10
Science journalism has a unique problem: the incentive structure rewards reporting on preliminary findings that often don't replicate.
The Evidence:
A PLOS ONE study on cancer news coverage found systematic bias and quality problems in how media reports cancer research. Key findings:
- Studies with positive results were more likely to be covered
- Limitations and caveats were routinely omitted
- Conflicts of interest were rarely disclosed
A Taylor & Francis study asked "Do Journalists Update Retracted Science News?" The answer: rarely. When scientific papers are retracted, the news stories based on them often remain online, uncorrected.
Structural Problems:
- "Breakthrough" framing for incremental findings
- Failure to explain statistical significance vs. clinical significance
- Reporting on pre-prints as if they were peer-reviewed
- The replication crisis means many reported findings are simply false
4. POLITICAL JOURNALISM
Hackery Score: 6.8/10
Political journalism is paradoxically both heavily scrutinized and persistently inaccurate. The scrutiny comes from partisan fact-checkers; the inaccuracy comes from the adversarial nature of political coverage.
The Evidence:
The Harvard Kennedy School's Misinformation Review published research on fact-checking organizations themselves, finding that fact-checkers focus disproportionately on famous politicians rather than the most consequential false claims.
A Nature study found that "differences in misinformation sharing can lead to politically asymmetric sanctions"--meaning the same type of error is treated differently depending on which political side it benefits.
The mediaaccuracy.net database found that accuracy varies dramatically by policy domain:
- Defense spending coverage: Highest accuracy
- Welfare coverage: Lowest accuracy
- Environmental coverage: Very low accuracy
Defamation Risk: Very High. Political figures are "public figures" requiring actual malice proof, but they also have resources to sue. The Trump v. ABC News settlement ($15 million) and Trump v. New York Times ($15 billion lawsuit pending) show the stakes.
3. FINANCIAL/BUSINESS JOURNALISM
Hackery Score: 5.9/10
Financial journalism benefits from a built-in accuracy check: markets react to information, and incorrect information gets corrected by price movements. This creates a feedback loop that other beats lack.
The Evidence:
The Financial Times' Tesla retraction and CNN Money's ECB interest rate error (which moved markets before correction) show that financial journalism errors have immediate, measurable consequences. This accountability mechanism, while imperfect, creates stronger incentives for accuracy.
A Journal of Accounting Research study on "News Bias in Financial Journalists' Social Networks" found that financial journalists' social connections do influence coverage, but the effect is more subtle than in other beats.
Finimize's analysis of why media keeps getting financial terms wrong identified common errors: confusing revenue with profit, misunderstanding market cap, and misreporting percentage changes.
Structural Advantages:
- Quantitative claims are verifiable
- Market reactions provide immediate feedback
- Regulatory requirements create paper trails
- Financial literacy among readers is higher than average
2. SPORTS JOURNALISM
Hackery Score: 4.2/10
Sports journalism is, surprisingly, among the more accurate beats. The reasons are structural: outcomes are objective, statistics are abundant, and fans are intensely knowledgeable fact-checkers.
The Evidence:
An MDPI study on football transfer news found misinformation in transfer reporting, but the errors were primarily about predictions (which player would sign where) rather than facts (what actually happened). This is a crucial distinction--sports journalism's factual accuracy is high; its predictive accuracy is low.
The Atlanta Journal-Constitution's UGA football corrections case illustrates that when sports journalism does make errors, the correction process is often robust. The AJC issued detailed corrections in response to a nine-page letter from university attorneys.
Structural Advantages:
- Box scores don't lie
- Video evidence is abundant
- Passionate fan bases fact-check obsessively
- Statistics are standardized and verifiable
1. LOCAL NEWS (Best)
Hackery Score: 3.5/10
Local journalism consistently ranks as the most trusted and, by several metrics, the most accurate form of news coverage.
The Evidence:
The Knight Foundation/Gallup study on public trust in local news found that Americans trust local news significantly more than national news. This trust appears to be earned: local journalists are embedded in their communities and face direct accountability from readers who can verify coverage against their own experience.
Pew Research's 2024 study on local crime news found that while local crime coverage has quality issues, readers generally rate it as accurate and fair.
A 2025 UK study found that 80% of adults trust local media, up from 73% in 2024--a sharp rise attributed to local media's contrast with AI-generated misinformation online.
Structural Advantages:
- Proximity to subjects enables verification
- Readers can fact-check against personal knowledge
- Smaller scale means fewer complex claims
- Community accountability is direct and personal
Honorable Mention: ARTS AND ENTERTAINMENT JOURNALISM
Hackery Score: N/A (Different Category)
Arts journalism operates under different accuracy standards because much of it is inherently subjective (criticism, reviews). You can't "fact-check" whether a movie is good. However, factual claims within arts journalism (box office numbers, award nominations, biographical details) are generally accurate due to easily verifiable sources.
The Comprehensive Ranking
| Rank | Beat | Hackery Score | Primary Problem |
|---|---|---|---|
| 7 (Worst) | Technology | 9.2/10 | Technical illiteracy + access journalism |
| 6 | Legal | 8.8/10 | Complexity + non-lawyer reporters |
| 5 | Science/Health | 7.5/10 | Hype cycle + replication crisis |
| 4 | Political | 6.8/10 | Partisan pressure + speed |
| 3 | Financial | 5.9/10 | Market feedback helps, but complexity hurts |
| 2 | Sports | 4.2/10 | Objective outcomes + statistical verification |
| 1 (Best) | Local | 3.5/10 | Community accountability + verifiability |
The Verdict: Was @horsewater Right?
The data partially supports the tweet's hierarchy--and partially reverses it. By these metrics, tech journalists (9.2) actually rank worse than legal journalists (8.8), the opposite of the tweet's first claim. But the tweet's second claim holds up: the intersection--tech journalists writing about law--combines the worst of both worlds: technical complexity they don't understand layered on top of legal complexity they also don't understand.
The solution isn't to distrust all journalism--that way lies conspiracy thinking. It's to calibrate your skepticism by beat. When reading tech coverage of legal issues, apply maximum scrutiny. When reading local coverage of local events from reporters embedded in the community, you can relax somewhat.
And when you catch an error in your area of expertise, don't just roll your eyes and turn the page. Remember that feeling. Carry it with you to the next section.
Murray Gell-Mann figured out what quarks are made of. The least we can do is remember what he and Crichton figured out about newspapers.