Typst Optimized for the Wrong Reader
Typst isn't losing to LaTeX because of missing features. It's losing because it built for the wrong customer. The complexity it's scrambling to add now is proof that it's discovering this too late.
That's a strong claim. Here's how I got there.
The Consensus Being Challenged
The current narrative in academic tooling circles goes something like this: LaTeX is a legacy system. Its macro syntax is an archaeological artifact. Its error messages are malicious. It compiles through four engines depending on which decade you fell into the ecosystem. Typst arrives with clean syntax, incremental compilation, fast feedback loops, readable error messages, and a proper scripting language. The old guard defends LaTeX out of sunk-cost nostalgia. The future belongs to Typst.
This argument isn't wrong exactly. It's wrong precisely enough to be dangerous.
The Battle Scar: An IEEE Template and an Uncomfortable Observation
I compiled the same paper twice: once with pdfLaTeX plus the microtype package, once with Typst. IEEE two-column format, identical content, fonts matched as closely as possible. The difference was subtle. If you weren't looking for it, you'd miss it. But once you see it, you can't unsee it.
The pdfLaTeX output had immaculate interword spacing. The Typst output had faint but real white rivers running through paragraphs, the telltale sign of a greedy line breaker. In two-column layouts with narrow line measures, this matters more than anywhere else: every extra pixel of stretch across a short line is visible.
When I looked into why, the answer was specific: microtype performs automatic font expansion. The engine analyzes each paragraph and stretches or compresses individual glyphs by imperceptible amounts (one to three percent) to achieve uniform interword spacing without sacrificing the typeset block's rectangular integrity. Typst, at the time, had no equivalent. Font expansion was a manual parameter. The automatic paragraph-level optimization simply didn't exist.
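For reference, turning this on in a pdfLaTeX preamble is nearly a one-liner. The option names below are microtype's real ones; the explicit limits are illustrative, since the package's defaults already enable expansion under pdfLaTeX:

```latex
% Character protrusion plus automatic font expansion; pdfLaTeX feeds
% the expanded glyph widths into Knuth-Plass paragraph optimization.
% stretch/shrink are per-mille limits, so 20 = the ~2% mentioned above.
\usepackage[activate={true,nocompatibility},
            expansion=true,
            stretch=20,shrink=20,step=5]{microtype}
```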
That gap has since been addressed in Typst 0.14 with justification limits on par. But the fact that it had to be added at all is the real data point: users had to produce side-by-side comparisons and file issues before the problem was fixed.
Why LaTeX Works the Way It Does
Before tearing into Typst's design decisions, the steel-man case deserves full treatment.
Typst is genuinely excellent. For the overwhelming majority of documents (reports, theses, anything not requiring publisher-specific formatting contraptions), it produces output indistinguishable from LaTeX to any reader who hasn't calibrated their eyes by comparing rendered paragraphs under direct light. Its math typesetting is exceptional. Its error messages don't require a cryptography background to parse. You can compile a document in milliseconds instead of watching a CPU tortured for three seconds through decades-old macro expansions. These are real advantages, not marketing copy.
And LaTeX is genuinely hostile. The \expandafter\expandafter\expandafter sequences that pepper production LaTeX code aren't accumulated wisdom. They're a 1970s token expansion model that nobody fixed because fixing it would break everything that depends on it. The distinction between pdflatex, xelatex, and lualatex shouldn't exist from a user's perspective. Default Computer Modern renders poorly in certain PDF viewers because it uses bitmap representations instead of true vector fonts. A well-configured Word document can outperform default LaTeX output. These criticisms land.
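To make the \expandafter complaint concrete, here is the canonical LaTeX2e idiom (the command name \mycmd is hypothetical; the pattern itself is standard):

```latex
% \def grabs the next token literally, so without help it would try to
% redefine \csname itself. \expandafter expands the token *after* the
% next one first, turning \csname mycmd\endcsname into \mycmd before
% \def ever sees it. Reaching further ahead requires chained
% \expandafter\expandafter\expandafter sequences.
\expandafter\def\csname mycmd\endcsname{Hello}
\mycmd  % expands to "Hello"
```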
So why does LaTeX win anyway at the frontier of academic typography?
The Teardown: Three Overlapping Problems
Problem One: Essential Complexity Doesn't Disappear
Frederick Brooks drew the distinction forty years ago between accidental complexity and essential complexity. Accidental complexity is what bad tooling introduces: hostile syntax, opaque errors, unnecessary configuration ceremony. It can be eliminated with better engineering. Essential complexity is inherent to the problem itself. It can be distributed, hidden, or absorbed by the tool, but it cannot be removed.
Professional academic typography is a domain of very high essential complexity.
The Knuth-Plass algorithm illustrates this directly. Where greedy line-breakers (Word, LibreOffice, early Typst) place as many words on a line as fit and move on (a strictly local, myopic decision), Knuth-Plass constructs an abstract network of all possible paragraph breakpoints and runs dynamic programming across it to find the minimum-badness path for the entire paragraph as a single entity. A large word on line eight actively alters the spacing calculations for line one. The badness function penalizes deformation of invisible elastic springs ("glue") connecting typographic boxes, and the optimal solution minimizes total deformation across the whole block.
This is computationally expensive. That's not a flaw. That's what the problem actually costs to solve correctly.
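In miniature, the contrast between the two strategies looks like this. It's a deliberately toy sketch, not the full Knuth-Plass model: characters have unit width, badness is just the squared leftover space per line, and there is no glue stretch/shrink, hyphenation, or demerit system:

```python
def line_width(words, i, j):
    """Width of words[i:j] set on one line, with single spaces."""
    return sum(len(w) for w in words[i:j]) + (j - i - 1)

def greedy_breaks(words, measure):
    """Greedy: pack each line until the next word no longer fits.
    A strictly local decision; earlier lines never change."""
    lines, start = [], 0
    for k in range(1, len(words) + 1):
        if line_width(words, start, k) > measure:
            lines.append(words[start:k - 1])
            start = k - 1
    lines.append(words[start:])
    return lines

def optimal_breaks(words, measure):
    """Total-fit: dynamic programming over all breakpoints,
    minimizing summed badness for the paragraph as a whole."""
    n = len(words)
    best = [float("inf")] * (n + 1)  # best[j] = min cost to set words[:j]
    back = [0] * (n + 1)
    best[0] = 0
    for j in range(1, n + 1):
        for i in range(j):
            w = line_width(words, i, j)
            if w > measure:
                continue
            # Last line is exempt from badness, as in TeX.
            cost = 0 if j == n else (measure - w) ** 2
            if best[i] + cost < best[j]:
                best[j], back[j] = best[i] + cost, i
    # Walk the backpointers to recover the minimum-badness path.
    lines, j = [], n
    while j > 0:
        lines.append(words[back[j]:j])
        j = back[j]
    return lines[::-1]
```

On the input "aaa bb cc ddddd" with a measure of 6, the greedy version crams "aaa bb" onto the first line and leaves a ragged "cc" stranded; the total-fit version accepts a slightly looser first line to make every line better, which is exactly the global trade-off the paragraph-as-a-whole model buys.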
When Typst began adding character-level justification algorithms, iterative relayout engines, and automatic microtypography, it wasn't improving on its original design. It was discovering, version by version, the exact complexity map that TeX spent the 1970s and 1980s exploring under Donald Knuth. The difference: Knuth was a mathematician and computer scientist who spent years in the domain before writing a line of compiler code, making decisions with accumulated typographic knowledge. Typst is making equivalent decisions reactively, under pressure from GitHub issues filed after users noticed the output was wrong.
This explains why Typst's progress toward LaTeX-quality output feels slower than it should: it's not implementing features. It's re-learning without domain expertise what Knuth learned in productive isolation over a decade.
Problem Two: The Wrong Premise Was Imported From the Wrong Domain
This is the sharper argument, and the one Typst's design philosophy never fully reckoned with.
The foundational premise behind Typst's design is recognizable: code is read more times than it is written. This is an accepted engineering truth in software development. Systems live for decades. They're maintained by engineers who didn't build them. The dominant cost is comprehension, not creation. Optimizing for the author who writes the code therefore optimizes for the wrong person. The system should be optimized for the engineer who reads and modifies it years later.
This premise is correct in the context where it was formed: production software with long maintenance cycles and rotating teams.
Typst imported the premise wholesale and applied it to academic documents. The problem is that academic documents have a completely different phenomenology:
- A paper is written once, revised a small number of times, submitted, and then rarely touched again.
- The "reader" of the .tex or .typ source file is almost always the original author, who has full context.
- The final product, the PDF, is read thousands of times by people who will never see the source file.
The read/write asymmetry is inverted. The person who reads the output the most is not the author; it's the reader holding the published PDF. LaTeX was built to make that reader's experience as close to perfect as possible. Implicitly so: Knuth built it for the book whose typesetting had embarrassed him. Everything else was secondary.
Typst built for the author. That's the wrong customer for this domain.
This isn't an incompetence critique. Typst's engineers are clearly talented. The error is epistemological: a design premise was transplanted from one domain to another without verifying that the conditions making it true still held. "Optimize for the person who reads the system's output" is correct in both cases, but who that person is differs completely between production software and academic publishing. In software, the author and the primary reader are often the same team. In academic typography, they're almost never the same person.
Problem Three: The Abstraction Leaks Where It Matters Most
Joel Spolsky's law of leaky abstractions holds that every non-trivial abstraction eventually fails to hide the underlying complexity it was built to conceal. In normal conditions, Typst's clean engine hides typography's essential complexity gracefully. The gaps appear at the frontier: real academic publishing with irrational, non-standardized publisher requirements.
LaTeX's response to an impossible publisher demand is different in kind from Typst's. Because TeX is Turing-complete at the macro level, you can redefine internal macros, create hooks, monkey-patch at any depth. The CTAN archive, which reads as a catalogue of LaTeX's accumulated failings, is actually evidence of this architectural property: the community has been able to express arbitrary typographic requirements as packages precisely because the underlying system allows it. When a journal demands page layout behavior that violates every expectation, LaTeX can be made to comply.
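A small taste of what that looks like in practice. The publisher requirement here is invented, but \@seccntformat is a genuine LaTeX2e internal, and the pattern of reaching past the public API to patch it is everyday LaTeX:

```latex
% Suppose a journal demands a period and extra space after every
% section number. No public API covers this, but the internals are
% open to redefinition:
\makeatletter
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname.\quad}
\makeatother
```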
Typst's scripting layer is a modern, pleasant language. But it's constrained by the engine's architecture. You can write sophisticated packages in Typst; you can't express concepts the engine's design didn't anticipate. Architectural constraint limits what the language layer can fix. LaTeX's constraint (archaic macro syntax) is paradoxically its freedom: Turing completeness means any computable transformation is expressible. Typst's freedom (a modern language) is bounded by the engine beneath it.
The CTAN archive didn't accumulate because academics enjoy unnecessary complexity. It accumulated because real-world academic publishing requirements are a chaotic, irrational, historically stratified mess, and LaTeX is the only tool with the architectural properties to satisfy them at all.
The Paradigm Shift: Ask the Right Question Before Designing the System
The new mental model isn't "choose LaTeX over Typst." Typst is the right tool for a large fraction of use cases: anything short of the frontier where publisher requirements become arbitrary and micrographic precision becomes measurable. The paradigm shift is in the question you ask before designing any system that moves between domains:
Who reads this system's output, and how many times? What is the read/write asymmetry for the artifact this system produces?
In software: code is the primary artifact. Maintenance engineers read it more than authors write it. Optimize for comprehension.
In academic publishing: the PDF is the primary artifact. Thousands of readers encounter it; one author produced it. Optimize for the PDF reader's experience.
These sound like the same question. They produce opposite answers.
Knuth never needed to ask it explicitly because the question answered itself: he was building a system for a book whose printed quality had embarrassed him. The customer was the reader holding that book. Everything followed from that framing: the Knuth-Plass algorithm, the glue model, the microtypographic precision, even the years of solitary mathematical work before a usable compiler existed.
Typst answered it implicitly too, but incorrectly. And as it discovers the correct answer by accumulating complexity under market pressure, it's validating the original design it sought to replace.
The analogy that captures this precisely: a junior developer finds a function they find ugly, spends three weeks refactoring it, and ends up re-implementing the original logic after finally understanding why it was written that way. The complexity wasn't the author's failure to write clearly. The complexity was the problem.
LaTeX's ugliness is, to a significant degree, the ugliness of professional typography itself. The question is whether you encountered that ugliness before or after designing your system. Knuth encountered it first. Typst is encountering it now.