Books
Danny Bate Explains Why English Isn’t Broken
Danny Bate is a linguist, writer, broadcaster, and podcaster whose work sits at the intersection of language history and etymology. He holds a BA from the University of York, an MPhil from the University of Cambridge, and a PhD in linguistics from the University of Edinburgh. Bate is also the host of the podcast A Language I Love Is… and is based in Prague. His debut book, Why Q Needs U: A History of Our Letters and How We Use Them, was published in the United Kingdom by Blink, an imprint of Bonnier Books, in October 2025, with a U.S. release slated for 2026.
In this conversation, Bate pushes back against the familiar complaint that English spelling is chaotic, recasting it instead as a densely layered system shaped by centuries of migration, empire, trade, and accumulated convention. The alphabet, in his telling, is less a neutral instrument than a living archive, preserving the sediment of linguistic change even as it frustrates modern users. He explains how readers process patterns that extend beyond individual letters, why multiple “Englishes” are not aberrations but inevitabilities, and how technologies such as spellcheckers and artificial intelligence may inadvertently harden what was once fluid.
Along the way, he reflects on what endures within alphabetic systems, why the Great Vowel Shift continues to echo through contemporary spelling, and how linguistics can be rendered accessible to a broad audience without sacrificing its intellectual weight.
Scott Douglas Jacobsen: We tend to treat language as if it were a fixed system—something like a coordinate grid into which time is later introduced. But your work suggests the opposite: that time is embedded within language itself, even at the level of individual letters. When did you begin to see the alphabet not as a neutral tool, but as a historical archive shaped by migration, empire, and exchange?
Danny Bate: My appreciation of the alphabet and of English has improved because I have dived into how amazing this thing is that unites the world, and we find it everywhere; so many of its members are currently flashing before my eyes on my computer. And yet we do not really think about it, which is a sign of the alphabet’s success: it has slotted into our lives so unobtrusively. Thinking more specifically about English, there is a sense nowadays—and this is reinforced by modern technology, particularly spell checkers and AI—that English, written English in particular, is, as you say, static and fixed and can be taken for granted. But so much of it has changed. In fact, everything has changed. Everything old about English was once new, and the alphabet is a prime example. There is basically no aspect of the alphabet that can be taken for granted. So the process of really getting familiar with the alphabet and turning that familiarity into a book has made me appreciate it, marvel at it, and see it as something we cannot take for granted.
Jacobsen: English spelling is often dismissed as chaotic. You argue instead that it is structured, but layered—more like a record of accumulated histories than a broken system. What gives English its underlying coherence, and why does it only appear irregular on the surface?
Bate: Why is it not chaotic? True chaos in a language would look completely different. In any aspect of English—whether its sounds, its word order, or, in my case, its spelling—what we do not see is randomness. Rather, we see a collection of regularities: overlapping rules that, in combination, can appear irregular because they compete with one another. There are many ways to construct a spelling system and apply its rules, and these different approaches have all contributed to English. They accumulate into a system that may seem chaotic at first glance but is in fact structured through layers of historical influence. For example, there is an ongoing debate in English over the name of its last letter. For many speakers, it is “zed,” while for others it is “zee.” Both forms have their own logic. “Zed” reflects the historical origin of the letter, which comes from the Greek zeta. The alternative, “zee,” fits the pattern of other letter names such as B, C, D, G, P, and T.
This illustrates two competing principles: historical continuity and internal patterning. Because English is an international language with no single regulatory authority that can definitively impose one standard, these forms coexist. What appears to be an inconsistency is often the visible result of competing consistencies.
Jacobsen: The English alphabet consists of 26 letters, a number so familiar that it feels almost inevitable. But is it? To what extent is the size and composition of an alphabet shaped by historical contingency rather than linguistic necessity?
Bate: It is a product of history. If you think of the alphabet in its basic form, many languages have added letters or modified them with additional lines or dots—diacritics—which increases the number of symbols. But if you think of the core set, which English has maintained relatively faithfully, that number is largely defined by the Roman alphabet derived from Latin. The early history of the alphabet shows considerable flexibility as it spread around the Mediterranean.
It likely originated in ancient Egypt, derived from Egyptian writing systems, then developed through Phoenician, moved into Greek, and later into Italy with Latin. At each stage, there was innovation: letters were added, removed, or adapted to suit the sounds of different languages. The Latin alphabet worked reasonably well for Latin, which had a more limited set of sounds. However, it was not designed for English. English developed a wider, shifting range of sounds over the centuries, leading to mismatches between letters and pronunciation. As a result, there are sounds without a single dedicated letter and letters that represent multiple sounds. The reason we have roughly 26 letters is largely historical, reflecting the Latin system adopted and then preserved. Many languages using this alphabet have been reluctant to alter that inherited structure, in part because of Rome’s enduring cultural and historical influence.
Jacobsen: Many people have seen those viral examples—often circulating on social media—where words with scrambled internal letters remain surprisingly readable, while others quickly dissolve into something closer to Finnegans Wake.
With only 26 letters, the number of possible combinations is vast, yet our ability to recognize meaning is highly uneven. What does this reveal about how we actually process written language—and to what extent is that capacity shaped by the historical development of English itself?
Bate: That is a great question. First, I should come to the defence of James Joyce. I love him.
Jacobsen: I first read Finnegans Wake at 16—without reading the preface. In one edition, it apparently begins by warning the reader that the book is, in some sense, essentially unreadable. I suspect that might have been useful to know beforehand. What does a text like that reveal about the limits of legibility—and how far language can be stretched before meaning begins to break down?
Bate: It can be a very challenging experience. At the same time, approaching it without preconceptions can make it feel like an adventure into an extraordinary range of language. I do not claim that I fully understand the book, but I appreciate it. I also appreciate your example of widely shared images where words are scrambled but still readable. We are very good at reading because we have strong cognitive abilities that allow us to recognize words not simply as sequences of sounds, but as units of meaning—recognizable clusters of symbols that function together to represent something in language or in the world. That is how proficient readers operate. We do not process every letter sequentially as beginners do.
Early readers often map letters to sounds one by one, but experienced readers recognize larger patterns and whole words. This makes reading highly efficient. This capacity also has historical precedents. Early alphabetic systems, such as the Phoenician script, primarily represented consonants, leaving readers to infer vowels. Modern scripts like Arabic and Hebrew still operate largely on this principle, though they can optionally mark all vowels. Readers are accustomed to reconstructing meaning even when not all phonetic information is explicitly written. This perspective also offers a partial defence of English spelling. Consider silent letters, such as the “gh” in night or the “k” in know. These are often criticized as unnecessary. However, if reading is not purely about decoding sounds but also about recognizing meaning, such letters can serve a purpose. For example, the “k” in know helps distinguish it from now. These features assist the reader in quickly identifying words and their relationships, especially when reading at speed. In that sense, English spelling, despite its irregularities, can support efficient recognition. I am, reluctantly, a defender of it in its current form.
Jacobsen: Audiences often encounter parodies of “ye olde English”—whether in British comedy or shows like Saturday Night Live—and can still make sense of fragments like “thou” or “thine,” even while speaking modern English themselves. What explains that partial continuity across time, and why does English evolve into multiple “Englishes” rather than remaining a single, unified language?
Bate: That is a big question. The central answer is that language changes. This may sound simple, but it is fundamental: language naturally evolves. Part of this change comes from internal linguistic pressures. Sound change often occurs because speakers favour efficiency—merging sounds, simplifying pronunciations, or altering how sounds are learned and reproduced across generations. These small shifts accumulate over time into noticeable differences. But language does not exist in a vacuum. External pressures also play a major role. Social change, technological developments, and cultural shifts introduce new vocabulary and influence how people speak. Social identity and group belonging can shape which forms of language gain prominence.
These processes vary across time and geography. When communities are separated—historically by distance, such as across the Atlantic Ocean in the case of British and American English—they develop independently. This leads to distinct varieties emerging over time. There is also a chronological dimension. Changes can occur in pronunciation, vocabulary, grammar, and word order. These shifts are gradual, so speakers rarely notice them as they happen. Yet they accumulate, producing new forms of English. In that sense, English is not a single fixed entity, but a continuum of evolving varieties—past, present, and still emerging.
Jacobsen: You suggest that linguistic change unfolds gradually, but not uniformly—emerging at different speeds over time and spreading unevenly across communities. How do these asynchronous and asymmetric patterns of change shape the English we speak and write today?
Bate: I like that. That is very well put. It is not neat. Linguists do not have an easy time identifying clear-cut boundaries. Sometimes we can point to changes that are complete—for example, where one variety consistently differs from another—but the situation is rarely tidy. More often, there is extensive variation. This can occur across regions, even within a single country. In the United States, accents in places like Boston or New York are distinct from other regional patterns.
These differences may reflect ongoing changes rather than fully settled ones. Variation also exists at smaller scales. Individuals shift how they speak depending on context—what linguists call register or style. I do not speak the same way in every situation. Speaking with you, I may use a more formal style; speaking with family, my pronunciation and rhythm may relax. What we see is not a uniform system but a complex and dynamic mixture of variation. From this, new forms of English gradually emerge over time. The alphabet, despite its limitations, represents this variation reasonably well, which I respect.
Jacobsen: We are now seeing rapid advances in what is broadly termed artificial intelligence—particularly large language models—alongside more familiar tools like spellcheckers and grammar checkers. At a deeper, structural level, how well do these systems actually perform in everyday use, and does the very idea of “fixing” language run counter to how language naturally evolves?
Bate: They may be “fixing” language in another sense—reinforcing and standardizing what they recognize as correct. They are not as creative with language as humans are. Instead, they tend to produce language—often English—in a highly standardized form, consistent with expectations for public or formal communication. These systems do not generate dialects organically or experiment with variation. They rely on established norms, and those norms have consequences beyond the systems themselves. They can reinforce conventions that might otherwise be under pressure to change.
In that sense, technology can act as a brake on certain kinds of linguistic evolution. For example, in spelling, many features of standard English may gradually shift over time. However, tools like spellcheckers often flag deviations as incorrect. This discourages variation, even when those variations reflect natural changes in speech or emerging usage patterns. These technologies are not well-suited to recognizing or adapting to subtle, ongoing linguistic change. Instead, they tend to reflect dominant standards. As a result, they may help stabilize certain forms of language, sometimes at the expense of innovation or variation.
Jacobsen: Your work sits at the intersection of academic research and public communication—a space long shaped, in the British tradition, by accessible scholarship and public lectures, and in the United States by figures like Carl Sagan and Neil deGrasse Tyson. In an era increasingly influenced by non-specialists and online personalities, what advice would you offer to those seeking to communicate serious linguistic scholarship to a broader audience without sacrificing rigor?
Bate: Firstly, thank you. That is kind, and it is how I hope to come across. It is a big question. There are many reasons to engage in public academic communication—often called science communication. One is that it is enjoyable. I take satisfaction in talking to people about language, seeing their reactions, and helping spark moments where ideas click. There is also a broader context. Coming from an academic background, I am aware that academia faces pressures that were less pronounced in the past. Some fields, including linguistics and language studies, can be especially vulnerable. There is a growing need to demonstrate their value and relevance to a wider audience.
As for recommendations, it is more of an art than a science. In a field like linguistics, you cannot assume pre-existing interest or background knowledge. Language is something people use effortlessly, often without reflecting on it. That makes it both familiar and invisible. The task is to meet people where they are. Focus on aspects that are immediately engaging—words, for example—rather than more abstract features like syntax. Make ideas accessible and take responsibility for explaining them clearly. I try to balance complex concepts with humour and memorable examples, so the material is both informative and enjoyable. Finding the right level is a matter of practice. You have to pay attention to what people respond to, what they find confusing, and what they already understand. It takes time to develop that sense, and experience helps refine it.
Jacobsen: Individual letters can change dramatically over time, sometimes becoming almost unrecognizable from their earlier forms. Yet certain features persist across centuries. What, in your view, proves most durable within an alphabet—and why?
Bate: The most durable aspects of an alphabet may be those that are not directly visible. Letters have multiple properties: their shape, the direction in which they are written, and the sounds they represent. All of these features can change over time. This is true across writing systems and throughout the historical development of the alphabetic tradition to which English belongs. Take the letter A, for example. Its shape, its associated sound, and its orientation in early forms have all changed over centuries. None of these features is identical to the earliest known versions of the letter.
What has remained consistent is the underlying principle of the alphabet: representing the sounds of a language with a set of symbols. More precisely, alphabetic systems aim—at least in principle—to map individual sounds (phonemes) to individual symbols (letters). This is not the only way writing systems can function, but it is the defining feature of alphabets like the one used for English. In practice, English has diverged significantly from a strict one-to-one correspondence between sounds and letters, but the foundational principle remains the most durable aspect.
Jacobsen: What is the oldest letter?
Bate: It is difficult to identify a single “oldest” letter. A group of early letters can be traced back to the earliest alphabetic systems, which emerged in the ancient Near East and were influenced by Egyptian writing. These early alphabets, such as the Proto-Sinaitic and Phoenician scripts, contained a core set of characters—roughly 20 to 30 signs. As the alphabet spread across the Mediterranean—from Phoenician to Greek and then to Latin—letters were added, adapted, or sometimes dropped to suit different languages. For example, the Greeks introduced additional letters to represent vowel sounds explicitly, while the Romans later modified and expanded the Latin alphabet, including the addition of the letter G. There is a recognizable continuity in many of the earliest letters—such as those corresponding to A, B, C, and D—through to T. However, it is not accurate to say that T marked a definitive original endpoint; the composition and ordering of letters evolved. Instead of a single oldest letter, it is more accurate to speak of a foundational set of early alphabetic signs, many of which have recognizable descendants in the modern English alphabet.
Jacobsen: What is the Great Vowel Shift?
Bate: What a question to end on. I will take an opportunity to talk about the Great Vowel Shift. It refers to a major set of changes in English pronunciation that took place roughly between 1400 and 1700, during the transition from Middle English to Early Modern English. It primarily affected the long vowels of English. In Middle English—the language of Geoffrey Chaucer—long vowels were pronounced differently from how they are today. During the Great Vowel Shift, these vowels changed systematically, generally moving “upward” in the mouth.
In cases where a vowel was already at the top of the vowel space, it shifted toward a diphthong. This created a chain reaction across the vowel system. Although this may sound abstract, its effects are still present in modern English. It marks a key dividing line between Middle and Modern English pronunciation. Because it occurred before English spread globally, most major varieties of English reflect its outcomes. One consequence is the mismatch between spelling and pronunciation. The writing system preserved older spellings, while pronunciation changed. As a result, letters such as “a” or “i” represent a wider range of sounds than they once did. This is one of the main reasons English spelling can appear inconsistent today. Understanding the Great Vowel Shift helps explain why.
Jacobsen: Thank you very much for your time. I appreciate your expertise.
Bate: Thank you for your questions. They were excellent, and I thoroughly enjoyed answering them. Please keep in touch and let me know if you need anything further.