# The AI Accent: A Linguistic History

> Published on ADIN (https://adin.chat/s/the-ai-accent-a-linguistic-history)
> Type: Article
> Date: 2026-05-06

*How Institutional English Became the Most Suspicious Way to Write.*

## A day care worker and a word he learned in school

Last winter, Jared Hewitt's coworker accused him of using AI to write an incident report. She did it in front of the children. She read it out loud, pointing to the words *juxtaposition* and *circumstantial* as evidence.

Hewitt has a stutter. Writing is how he says everything he means without interruption. He'd spent years working on his prose. "I don't write in a casual way but a much more serious, professional way," he told *New York* magazine. He's been accused multiple times since.

This is not a story about AI detectors being imperfect (though they are). It's a story about what AI actually learned, and what that reveals about who "good writing" was designed to include.

---

## The fingerprint is grammatical, not lexical

In February 2025, Carnegie Mellon published a study in *PNAS* asking whether LLMs write like humans. The answer: not quite, and not randomly.

Instruction-tuned models (ChatGPT, Claude, Gemini) produce writing that is distinctly **noun-heavy and informational**. Human writing is comparatively verb-heavy and casual. The pattern holds regardless of topic.

What does noun-heavy prose sound like? It sounds like a document. *"The importance of X lies in its capacity to..."* rather than *"X matters because..."* The information is all there. The sense that someone wanted you to understand something is gone.

This matters because it locates the problem in sentence architecture, not word choice. You can't fix it by swapping out "delve."
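The measurement behind that finding is easy to sketch, even if the study's own pipeline is more involved. The snippet below is a minimal illustration, not the PNAS method: it uses spaCy's part-of-speech tagger (an assumed tool choice) to compare how noun-dense two phrasings of the same claim are. The example sentences mirror the article's own contrast.

```python
# Minimal sketch of the noun-vs-verb measurement described above.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
# The actual study aggregates over large samples of model and human text.
import spacy

nlp = spacy.load("en_core_web_sm")

def noun_verb_ratio(text: str) -> float:
    """Return the ratio of nouns to verbs in a passage."""
    doc = nlp(text)
    nouns = sum(1 for tok in doc if tok.pos_ in ("NOUN", "PROPN"))
    verbs = sum(1 for tok in doc if tok.pos_ in ("VERB", "AUX"))
    return nouns / max(verbs, 1)

institutional = "The importance of exercise lies in its capacity to improve cardiovascular health."
casual = "Exercise matters because it keeps your heart working the way it should."

print(noun_verb_ratio(institutional))  # noticeably higher: noun-heavy, document-like
print(noun_verb_ratio(casual))         # lower: verb-heavy, closer to speech
```

The single-sentence comparison is only a toy; the point is that the fingerprint lives in tag counts, not in any particular word.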
---

## The vocabulary spike

That said, the word-level data is real. A 2024 Stanford study (Liang et al.) tracked 15 million PubMed abstracts before and after ChatGPT's release. The vocabulary spike it found is unlike any prior shift in the corpus. COVID drove content words: "vaccine," "mortality." The AI shift drove *register* words:

- **"delve"**: 25.2x its pre-AI frequency
- **"showcasing"**: 9.2x
- **"underscores"** (as a verb): 9.1x

The full Liang marker set: *across, additionally, comprehensive, crucial, enhancing, exhibited, insights, notably, particularly, within.*

Not jargon. Not technical terms. The vocabulary of performing seriousness: the words a formal English education trains you to use. The words that earn essays their A's.
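The ratios quoted above are relative frequencies compared across two time slices. Here is a minimal sketch of that before/after arithmetic, assuming two hypothetical plain-text corpora of abstracts; the file names and marker list are illustrative, not the study's actual data or pipeline.

```python
# Illustrative before/after frequency-ratio check, in the spirit of the
# Liang et al. comparison described above. File names are hypothetical;
# the study itself covers roughly 15 million PubMed abstracts.
import re
from collections import Counter

MARKERS = ["delve", "showcasing", "underscores", "additionally",
           "comprehensive", "crucial", "notably", "insights"]

def relative_freq(path: str) -> Counter:
    """Word counts normalized to occurrences per million tokens."""
    with open(path, encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())
    scale = 1_000_000 / max(len(tokens), 1)
    return Counter({w: c * scale for w, c in Counter(tokens).items()})

before = relative_freq("pre_2022_abstracts.txt")    # hypothetical corpus
after = relative_freq("post_2023_abstracts.txt")    # hypothetical corpus

for word in MARKERS:
    if before[word]:  # skip words absent from the earlier corpus
        print(f"{word}: {after[word] / before[word]:.1f}x its pre-AI frequency")
```

A real replication would also need part-of-speech tagging (the study counts "underscores" only as a verb) and per-year bucketing, which this sketch skips.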
---

## The structural tells

Beyond word choice, linguists point to a few patterns that are harder to mask.

**The compulsive tricolon.** AI organizes arguments into three approximately equal points. Every claim, three supports. Every section, three examples. Sam Kriss, writing in *The New York Times Magazine*, called this more diagnostic than any individual word. He's right.

**Discourse markers on autopilot.** *Additionally. Furthermore. Notably.* A human who writes *Moreover* once is making a choice. A system that opens every third paragraph with *Furthermore* is running a template.

**No unresolved tension.** Every question AI raises gets answered. Every paragraph lands somewhere settled. Real writing has threads that trail off, ideas introduced and dropped, moments where the writer clearly got distracted or ran out of certainty. AI's compulsive completeness is, paradoxically, what makes it feel hollow.

---

## Who gets accused

The groups most affected share one thing: they learned to write formally.

Non-native English speakers trained in post-colonial curricula are among the most heavily penalized. The Kenyan writer Marcus Olang' put it plainly: ChatGPT "accidentally replicated the linguistic ghost of the British Empire." The formal English taught in those schools (careful grammar, textbook vocabulary, structured sentences modeled on canonical British prose) is exactly the English AI reproduced.

A business professor who trained in Asia had an academic paper flagged. The vocabulary he'd spent decades acquiring (*boast, testament, foster*) is Commonwealth English. His reviewers read it as ChatGPT.

Autistic writers are disproportionately accused. One researcher observed that autistic readers and AI models may have similar media consumption patterns: systematic, high-volume, text-heavy. The writing that results (detailed, pattern-consistent, attentive to completeness) looks alike from the outside.

One fantasy author now has readers join her writing sessions by video call so she has witnesses. Genre fiction writers follow conventions. Conventions look like patterns. Patterns look like AI. Kerry Chaput, a historical novelist, had a social media post about her health flagged. "There are word-count conventions, there are sentence conventions," she said. "There are rules to writing that we all follow." Those rules predate ChatGPT by decades.

---

## The institutional English problem

AI didn't invent a new kind of writing. It concentrated and reproduced the writing that institutional power already endorsed.

LLM training data is skewed toward formal, edited, institutionally produced text: the kind that gets published, indexed, cited. The voice that emerged is the house style of Western knowledge institutions: organized, thorough, neutral, hedged, comprehensive. Academic journals. Corporate memos. Wikipedia. The educated middle register that English instruction has taught as the ideal for over a century.

The irony is almost too neat. The writing accused of being inhuman is the writing institutions defined as the standard for human communication. The people accused of writing like machines are often the ones who worked hardest to master that standard: in a second language, against neurodivergent instincts, as a class credential. They achieved formal fluency. They now face suspicion for it.

The writing least likely to trigger a detector is writing that would have gotten a C in high school English: personal, digressive, inconsistent, specific in ways that serve no structural purpose.

---

## What holds up

Linguists studying what AI can't replicate reliably point to a few things.

**Hyper-specific personal knowledge.** Not "the memory was vivid" but the actual content of the memory, with its irreproducible details. AI can approximate grief. It can't own the name of the street.

**Commitment.** AI hedges. A human with an actual opinion sounds different. "X is wrong." Full stop. Rather than "while X has merit in some contexts, there are important considerations on both sides." Taking a position without immediately softening it is still, apparently, a human tell.

**Dialect and code-switching.** AAVE, regional vernacular, multilingual slippage: these are genuinely difficult to reproduce at scale, and detectors trained on formal English corpora are bad at reading them. Writing that carries genuine community voice reads as human for the same reason a perfect accent is hard to fake: what's behind it is too complex and too implicit to be learned from observation.

**The incomplete thought.** AI always resolves. Real writing sometimes doesn't.

---

The practical upshot is uncomfortable: writing that sounds most human right now is writing that sounds least educated in the traditional sense. That's not a reason to write worse. It is a reason to be suspicious of any system that treats formal fluency as evidence of inauthenticity, because those systems have a history of knowing exactly whose fluency they distrust.