<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>hekke.io</title>
    <link>https://hekke.io</link>
    <description>Technical dispatches on software architecture, game development, and the craft of building digital systems.</description>
    <language>en</language>
    <lastBuildDate>Wed, 22 Apr 2026 11:26:24 GMT</lastBuildDate>
    <atom:link href="https://hekke.io/feed.xml" rel="self" type="application/rss+xml"/>
    
    <item>
      <title><![CDATA[Static Is Enough: The Case for Hugo Over WordPress]]></title>
      <link>https://hekke.io/blog/static-is-enough</link>
      <guid isPermaLink="true">https://hekke.io/blog/static-is-enough</guid>
      <description><![CDATA[Why most websites running WordPress don't need it, how Hugo wins on performance, cost, security, and complexity, and why LLMs just closed the last gap that was keeping WordPress in play.]]></description>
      <content:encoded><![CDATA[<p>Most WordPress sites don&#39;t need WordPress. They need a few pages of text, a contact form that emails someone, a picture on the homepage, and a blog index that almost nobody scrolls past page one of. The stack they&#39;re running to serve that content is a PHP interpreter, a MySQL database, a theme framework, twenty plugins on staggered update cycles, and a continuous low-level war against the bots knocking on <code>/wp-login.php</code>. The server costs fifteen dollars a month when things are quiet, and occasionally falls over when they aren&#39;t.</p>
<p>A static site built with Hugo serves the exact same content from a CDN for nothing, loads in a fraction of the time, and has no admin URL to defend. That&#39;s not a niche case. It&#39;s most of the web.</p>
<p>Static generators like Hugo, Zola, and Eleventy have been the right default for brochure sites, blogs, docs, and portfolios for years. The one argument that held back wider adoption was that WordPress was easier to set up for someone who didn&#39;t want to learn anything. That argument is gone now, because an LLM can scaffold a themed Hugo site in a single prompt. What&#39;s left is the comparison on the things that always mattered: performance, cost, security, and complexity. WordPress loses all four.</p>
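<p>For reference, the manual version of that scaffold is only a handful of commands. A sketch, assuming the Ananke theme and a site folder named <code>my-site</code> (any theme repository works the same way):</p>
<pre><code class="language-bash"># Create the skeleton and pull in a theme as a git submodule
hugo new site my-site &amp;&amp; cd my-site
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
echo &quot;theme = &#39;ananke&#39;&quot; &gt;&gt; config.toml

# Write a first post and preview locally with live reload
hugo new posts/hello.md
hugo server -D
</code></pre>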
<h2>01. FOUR PAGES, FORTY KILOBYTES</h2>
<p>Load a vanilla WordPress install in a browser. The initial HTML is around 60KB before anything dynamic has happened. Add the theme&#39;s CSS and JS and the page crosses 400KB before a single image. Time to first byte on a cold request against a modest VPS sits somewhere between 400 and 700ms, because the server is booting the PHP interpreter, opening a database connection, executing the theme&#39;s templates, and rendering markup on every request that doesn&#39;t hit cache.</p>
<p>A Hugo site with a stock theme serves a similar-looking page in roughly 40KB of HTML and CSS combined. The server isn&#39;t rendering anything. The CDN hands over a prebuilt file. TTFB is typically under 50ms because the response is already cached at an edge node near the user. Core Web Vitals scores fall out of this for free. LCP under 2.5 seconds is trivial when the HTML is pre-rendered and the critical CSS is inline.</p>
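<p>You can check this from any shell. <code>curl</code>&#39;s write-out variables report time-to-first-byte directly (substitute the URLs you want to compare):</p>
<pre><code class="language-bash"># time_starttransfer is TTFB: DNS + connect + TLS + server think time
curl -o /dev/null -s -w &#39;TTFB: %{time_starttransfer}s\n&#39; https://example.com/
</code></pre>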
<p>This isn&#39;t a framework vs framework benchmark. It&#39;s rendering vs not rendering. WordPress decides what the page looks like on every request. Hugo decided at build time and the decision is frozen into the file you&#39;re reading. For a site that updates once a week, rendering on every request is pure waste.</p>
<h2>02. THE HOSTING BILL</h2>
<p>Cheapest plausible WordPress hosting is not free. Managed WordPress starts at eight to fifteen dollars a month for a single site, and that&#39;s before you hit a traffic tier, decide you want automatic backups, or add a staging environment. A raw VPS is cheaper per box, but now you own the security updates, the database backup job, and the 3am reboot when the LAMP stack falls over.</p>
<p>Static hosting is free until the traffic is absurd.</p>
<p>Cloudflare Pages, Netlify, and Vercel all have free tiers that cover roughly 100GB of monthly bandwidth and unlimited requests. For a personal site, a small business brochure, or a developer blog, you will never reach those limits. If your site gets linked on Hacker News and takes a million requests in two hours, the static bill is still zero. A WordPress site on a fifteen dollar VPS is unreachable for the entire time the link is trending, because the PHP workers are busy rendering the same homepage over and over.</p>
<p>The scale curve is what&#39;s interesting. WordPress costs grow roughly linearly with traffic, because you&#39;re buying CPU to render pages. Static costs are flat, then step up once you cross a free tier. Most sites never cross it. The ones that do are paying tens of dollars a month for traffic that would have cost hundreds on managed WordPress.</p>
<h2>03. THE ATTACK SURFACE</h2>
<p>Patchstack&#39;s <a href="https://patchstack.com/whitepaper/state-of-wordpress-security-in-2026/">State of WordPress Security in 2026</a> report logged 11,334 new WordPress-related vulnerabilities in 2025, a 42% jump on the year before. Ninety-one percent of them landed in plugins. Core took six in total, all low priority. And 46% of the disclosures arrived with no patch available on the day they were published, meaning nearly half of the year&#39;s findings are live in the wild with nothing to install to fix them. A typical install runs twenty plugins from nineteen different authors, each on their own release cadence, each able to execute arbitrary PHP on your server the day one of them ships a bad version.</p>
<p>The attack surface of a Hugo site is the CDN.</p>
<p>That&#39;s the whole sentence. There is no database, no PHP runtime, no admin login, no XML-RPC endpoint, no user accounts, no plugin ecosystem. Compromising a static site means compromising Cloudflare or Netlify, and if that happens, the WordPress installs on those same providers have bigger problems too.</p>
<p>Maintenance reduces to the same shape. WordPress asks for attention in two places. Core updates, which are usually fast and safe. And plugin updates, which are neither. A plugin abandoned by its author turns into a silent liability that nobody on the support forum is going to fix for you. You either audit the code yourself, swap the plugin for a maintained alternative, or accept the risk.</p>
<p>A Hugo site needs the Hugo binary updated occasionally. That&#39;s a binary swap. If a theme is abandoned, the site still works forever, because the theme has already done its job at build time. The generated HTML has no runtime dependency on anything, and nobody can push a compromised update to a file that&#39;s already sitting on a CDN.</p>
<p>This is the axis where the WordPress argument is weakest, and it&#39;s the one people routinely underweight. Every running WordPress install is an ongoing commitment to patch discipline. Every static site is a folder of files that will still load in ten years if the hosting account still exists.</p>
<h2>04. WHAT YOU ACTUALLY OWN</h2>
<p>Open the hood on a WordPress site and count the moving parts. The platform is PHP, at a specific version range that you probably didn&#39;t pick. A web server, usually Apache or nginx, configured to hand requests to PHP. A database, usually MySQL or MariaDB, tuned for whatever plugins you installed. A theme, which is itself a collection of PHP templates and a compiled CSS bundle. Plugins, typically between fifteen and thirty, each carrying JavaScript, CSS, and database tables. A caching layer, because the default rendering path is too slow to be used unaided.</p>
<p>A Hugo site is a single binary, a folder of Markdown, and a theme that&#39;s usually a Git submodule.</p>
<pre><code>my-site/
├── config.toml
├── content/
│   ├── _index.md
│   ├── about.md
│   └── posts/
│       └── 2026-04-15-hello.md
├── layouts/
├── static/
│   └── images/
└── themes/
    └── ananke/        (git submodule)
</code></pre>
<p>You can list every file that contributes to the output. <code>hugo</code> reads the content directory, applies the theme&#39;s templates, and writes HTML to <code>public/</code>. That&#39;s the entire build system. There is no state outside those files.</p>
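<p>The full pipeline from source to production fits in two commands. The deploy step here assumes Cloudflare Pages via <code>wrangler</code>; Netlify and Vercel have equivalent one-liners, or you can drag <code>public/</code> into their dashboards:</p>
<pre><code class="language-bash">hugo --minify                       # render the site into public/
npx wrangler pages deploy public    # push public/ to the CDN
</code></pre>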
<p>The architect&#39;s question isn&#39;t which is more powerful. WordPress is more powerful. The question is which you can still understand in two years. WordPress drifts. Plugin versions move independently, the database schema accumulates rows from plugins you removed, the PHP version gets forced up by your host, the theme gets patched or doesn&#39;t. By year three, nobody on earth can tell you exactly what&#39;s running on the server. A Hugo project looks the same in year three as it did on day one, because it&#39;s a directory of text files and a tool that converts them.</p>
<p>That&#39;s what local-first means for websites. You own the canonical content as Markdown, the build tool is a single open source binary, and the output is vanilla HTML. You never bought anyone&#39;s platform.</p>
<h2>05. WHERE STATIC ISN&#39;T ENOUGH</h2>
<p>Worth being honest about where the argument breaks down.</p>
<p>There are legitimate reasons to reach for a dynamic runtime, and WordPress is one option among several for each of them. Membership and paywalled content need server-side logic. WordPress can handle that, and so can Ghost, Memberful, or a small custom application. Busy comment sections need real-time moderation. WordPress can do it, and so can Disqus, Discourse, or hosted widgets like Giscus bolted onto a static site. E-commerce at scale needs catalogs, cart state, and payment flows, and for most storefronts Shopify is the more pragmatic answer than WooCommerce. It ships payments, inventory, and tax compliance as a managed service instead of a plugin stack. BigCommerce and Squarespace Commerce cover similar ground.</p>
<p>The last real pull toward a full CMS is the non-technical editor. When the person maintaining the site is going to live inside the admin panel every week and a Git workflow is the wrong abstraction for them, WordPress works. So do Ghost, Webflow, and Squarespace. Headless CMSs like Sanity, Decap, and TinaCMS sit in the middle ground, pairing a familiar admin UI with a static build pipeline so the output is still a folder of HTML on a CDN.</p>
<p>These are all real use cases. None of them has a single correct answer. And they&#39;re still a small fraction of the WordPress installs currently running on the internet. The rest are blogs that update once a month, brochures that don&#39;t update at all, and small business pages that were set up once and haven&#39;t been touched since. For all of those, static is enough.</p>
<h2>06. THE LAST MOAT</h2>
<p>Until recently, the honest answer to &quot;why WordPress&quot; for a non-technical user was that it shipped with a theme picker, a visual editor, and a plugin store. Setting up a Hugo site required cloning a repo, editing TOML, and understanding how templates worked. That friction was the real moat.</p>
<p>The moat is drying up fast. A single prompt to a capable LLM now produces a Hugo site with a chosen theme, a styled homepage, a blog index, an about page, and an RSS feed, ready to drop onto Netlify by dragging the folder into a browser. The entire setup that used to take a technical evening now takes a chat. The LLM is also the one who debugs the site when the user reports &quot;it looks broken on my phone&quot;, and the user never needs to learn how templates work because they can ask for changes in English.</p>
<p>This doesn&#39;t kill WordPress outright. Plenty of people will keep using it, and the WordPress economy around themes, hosting, and agency work is not going anywhere soon. But the one argument that defended WordPress against static for most of its addressable market is gone. WordPress was easier because humans were slow. The humans aren&#39;t slow anymore.</p>
<h2>THE LESSON</h2>
<p>Static is enough for most of the web because most of the web is text with occasional pictures. The reasons to pick WordPress are real, but they describe a minority of the sites currently running it. Every other site is paying for a dynamic runtime to serve content that doesn&#39;t change between requests. That runtime gets paid for in hosting costs, update cycles, and a security posture that needs tending on a schedule.</p>
<p>If you&#39;re starting a new personal site, a blog, a docs page, or a small business brochure, pick a static generator. Ship it on a free CDN. Update it when there&#39;s something to say, and leave it alone when there isn&#39;t. That used to be the harder path. It isn&#39;t anymore.</p>
]]></content:encoded>
      <pubDate>Sun, 19 Apr 2026 00:00:00 GMT</pubDate>
      <category>architecture</category>
    </item>
    <item>
      <title><![CDATA[Ghostty 1.3: The Terminal That Zigs]]></title>
      <link>https://hekke.io/blog/ghostty-1-3-the-terminal-that-zigs</link>
      <guid isPermaLink="true">https://hekke.io/blog/ghostty-1-3-the-terminal-that-zigs</guid>
      <description><![CDATA[Mitchell Hashimoto's GPU-accelerated terminal emulator ships its biggest release yet. Six months of work, 2,858 commits, and the most-requested missing feature from 1.0 finally arrives.]]></description>
      <content:encoded><![CDATA[<p>Mitchell Hashimoto built Ghostty because he wanted to learn Zig. That is the origin story. He had just stepped back from HashiCorp, the company he co-founded and where he spent a decade building Vagrant, Terraform, Vault, and Consul. He wanted to do some graphics programming, understand how terminals actually work, and play with a language he was curious about.</p>
<p>He never planned to release it.</p>
<p>That changed when he kept running into the same tradeoffs in every other terminal. Speed came at the cost of features. Features came at the cost of native platform behavior. Native behavior came at the cost of cross-platform consistency. His thesis became: these are not actual tradeoffs. You should not have to pick.</p>
<h2>01. THE ARCHITECTURE</h2>
<p>The center of Ghostty is <code>libghostty</code>: a cross-platform, zero-dependency, C-ABI-compatible library written in Zig. It handles terminal emulation, font rendering, and GPU rendering logic. It exposes a pure C API, which means any language that can call C can embed a terminal. A proof-of-concept project called <code>ghostling</code> demonstrates this with a minimal host.</p>
<p>On top of <code>libghostty</code> sit platform-native UI layers, built specifically for each operating system rather than abstracted away.</p>
<p>On macOS, the UI layer is Swift using AppKit and SwiftUI. Not because it was the easy choice. Mitchell initially wrote it in a single Zig file, then realized Apple&#39;s new APIs are Swift-only and Objective-C is clearly being deprecated. The boundary between Zig and Swift required its own engineering work, which he documented in detail. The result is that macOS users get actual macOS behavior: native window chrome, native menus, system scrollbars, dark/light mode following system preferences.</p>
<p>On Linux, the UI layer is Zig calling GTK4&#39;s C API directly. This was completely rewritten for 1.2, fully embracing the GObject type system from Zig and validating every change with Valgrind. The rewrite fixed a class of memory safety bugs where Zig-owned and GTK-owned memory would get freed independently, and enabled modern GTK features including animated borders and Blueprint UI.</p>
<p>Rendering uses Metal on macOS and OpenGL on Linux. Each terminal surface runs three dedicated threads: one for reading, one for writing, one for rendering. As of 1.3.0, the render thread has been rearchitected to hold the terminal lock for 2-5x less time, and in most frames it avoids the lock entirely thanks to improved dirty and damage tracking.</p>
<h2>02. WHY ZIG</h2>
<p>The language choice is not aesthetic. Zig was genuinely useful for what Ghostty needed to be.</p>
<p>The build system made it trivial to compile as both an executable and a library simultaneously. This is not a given: most build systems require significant configuration to support dual output modes. For <code>libghostty</code> to exist, the build had to produce a <code>.so</code>/<code>.dylib</code>/<code>.dll</code> and an executable from the same codebase without a separate build definition.</p>
<p>Tagged unions with exhaustive switch statements gave the compiler the ability to report errors when a new case is added and not handled everywhere. For a terminal emulator that is essentially a large state machine parsing decades of escape sequence history, that property is directly useful.</p>
<p>SIMD without inline assembly allowed modern vectorized operations in the hot path of terminal parsing. No hidden allocations and explicit memory management mean predictable performance for a real-time rendering application.</p>
<p>And the C interop story is clean: prepend <code>export</code> to a function definition and it gets C calling convention. No FFI layer, no binding generator. For GTK on Linux, which exposes a C API, this is the difference between calling GTK directly and going through an indirection layer.</p>
<h2>03. GHOSTTY 1.3.0</h2>
<p>Released March 9, 2026. Six months of work. 2,858 commits. 180 contributors.</p>
<p><strong>Scrollback Search</strong> was the most-requested missing feature from the 1.0 launch, and it shipped here. Cmd+F on macOS, Ctrl+Shift+F on Linux. Search runs on a dedicated thread concurrent with terminal I/O. When you close the search bar, that thread terminates and resources are freed.</p>
<p><strong>Native Scrollbars</strong> follow the system scrollbar setting on both platforms. On Ubuntu with overlay scrollbars enabled, you get overlay scrollbars. On a system with persistent scrollbars set, you get persistent scrollbars. The behavior matches what the OS expects rather than imposing its own.</p>
<p><strong>Click-to-Move-Cursor</strong> lets you click anywhere in a shell prompt to move the cursor there. This works in Fish, Nushell, Zsh, and other shells via OSC 133 shell integration. A small feature that turns out to be constantly useful.</p>
<p><strong>Command Palette</strong> on macOS got custom entries via <code>command-palette-entry</code>, session search to jump to any running terminal by title or working directory, and tab color indicators. The Linux GTK palette has had this since 1.2.0.</p>
<p><strong>Shell Integration</strong> received a much more complete OSC 133 implementation. Fish 4.1 and Nushell 0.111 support it natively. The jump-to-prompt and copy-command-output features are now more accurate as a result.</p>
<p><strong>SSH Integration</strong> adds <code>ssh-terminfo</code>, which auto-wraps SSH to transmit Ghostty terminfo to the remote host. When you SSH into a server that does not have Ghostty&#39;s terminfo installed, things break in quiet ways: color support drops, keybindings misbehave, <code>$TERM</code> lookups fail. This wraps the handshake to transmit the terminfo entry automatically.</p>
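<p>What this automates is, roughly, the familiar manual fix of compiling the terminfo entry on the remote host yourself. A sketch, assuming <code>infocmp</code> locally and <code>tic</code> remotely, with a placeholder hostname:</p>
<pre><code class="language-bash"># Copy Ghostty&#39;s terminfo entry to the remote host and compile it there
infocmp -x xterm-ghostty | ssh user@host -- tic -x -
</code></pre>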
<p><strong>Command Completion Notifications</strong> surface a native OS notification when a long-running command finishes. Kick off a test suite or a build, switch to another window, and get notified when it is done. This sounds minor but it is the kind of quality-of-life feature that is hard to add without native platform integration.</p>
<p><strong>I/O Performance</strong> is the change that does not show up in the feature list. The 1.3.0 release dramatically improved I/O processing. Replaying an asciinema dataset went from taking minutes to taking tens of seconds. These gains propagate into synthetic benchmarks too: Ghostty is now competitive with Alacritty on vtebench while running a significantly richer feature set.</p>
<h2>04. WHERE IT STANDS</h2>
<p>Ghostty&#39;s position in the terminal landscape is specific. Alacritty is faster at idle and lighter on RAM (~30MB vs 60-100MB for feature-rich terminals), but it has no built-in splits, no ligatures, and no image protocol. WezTerm has deep multiplexing and Lua-based configuration but runs 2-5x slower in throughput benchmarks. kitty has a rich extension ecosystem but uses a custom rendering approach that feels less native on macOS.</p>
<p>Ghostty is the terminal that treats native platform behavior as a first-class concern without sacrificing performance or feature depth. That is a rarer combination than it should be.</p>
<p>In December 2025, Hashimoto moved Ghostty to a non-profit structure, fiscally sponsored by Hack Club (a 501(c)(3)). He donated $150K personally and transferred all IP to Hack Club. The explicit goal was to make Ghostty something that outlives its creator rather than a project that stalls if he moves on to something else.</p>
<p>v1.3.0 is on GitHub at <a href="https://github.com/ghostty-org/ghostty">ghostty-org/ghostty</a>, MIT licensed.</p>
]]></content:encoded>
      <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
      <category>tools</category>
    </item>
    <item>
      <title><![CDATA[go-blueprint: The create-next-app for Go]]></title>
      <link>https://hekke.io/blog/go-blueprint-scaffolding-go</link>
      <guid isPermaLink="true">https://hekke.io/blog/go-blueprint-scaffolding-go</guid>
      <description><![CDATA[If you've ever started a TypeScript project with npx create-next-app and wondered why Go doesn't have an equivalent, go-blueprint is your answer. One command, production-ready structure, no setup overhead.]]></description>
      <content:encoded><![CDATA[<p>Every TypeScript developer knows the ritual. <code>npx create-next-app</code>, answer a few questions about TypeScript and Tailwind, and you have a working project structure in under a minute. The framework is wired up. The config files are in place. You can write code immediately.</p>
<p>Starting a Go project from scratch does not work that way. You have <code>go mod init</code> and an empty <code>main.go</code>, and then a sequence of decisions: which router, which database driver, how to structure packages, where to put the server setup, how to wire in Docker, whether to use Air for hot reload. Community estimates put that manual overhead at 17 to 33 hours per new service. That is before you have written a single line of business logic.</p>
<p>go-blueprint collapses that to minutes.</p>
<h2>01. WHAT IT DOES</h2>
<p>go-blueprint is a CLI scaffolding tool that generates a production-ready Go project structure from a single command. Pick a framework, a database driver, and a set of optional features, and you get a project that compiles, connects, and handles HTTP requests out of the box.</p>
<p>Installation is available in three ways. The npm path is deliberate: it exists specifically for JavaScript developers who may not have <code>$GOPATH</code> configured yet.</p>
<pre><code class="language-bash"># If you have Go installed
go install github.com/melkeydev/go-blueprint@latest

# If you are coming from the JS ecosystem
npm install -g @melkeydev/go-blueprint

# If you use Homebrew
brew install go-blueprint
</code></pre>
<p>Running <code>go-blueprint create</code> without flags launches an interactive TUI that walks through every option. The fully non-interactive equivalent:</p>
<pre><code class="language-bash">go-blueprint create \
  --name my-api \
  --framework chi \
  --driver postgres \
  --advanced \
  --feature docker \
  --feature githubaction \
  --git commit
</code></pre>
<h2>02. WHAT GETS GENERATED</h2>
<p>The base structure is consistent across all configurations:</p>
<pre><code>my-api/
├── .github/
│   └── workflows/
│       ├── go-test.yml
│       └── release.yml
├── cmd/
│   └── api/
│       └── main.go
├── internal/
│   ├── database/
│   │   ├── database.go
│   │   └── database_test.go
│   └── server/
│       ├── server.go
│       ├── routes.go
│       └── routes_test.go
├── .air.toml
├── docker-compose.yml
├── Dockerfile
├── go.mod
├── Makefile
└── README.md
</code></pre>
<p>The package layout follows Go conventions: <code>cmd/</code> for the entrypoint, <code>internal/</code> for packages that should not be imported externally. The <code>database.go</code> file includes connection setup, a ping-based health check, and environment variable loading. The <code>routes.go</code> file has the framework wired up and ready for your first route. Air is configured for hot reload. The Makefile includes targets for running, building, testing, and linting.</p>
<p>The generated code is intentionally minimal. There is no deep framework abstraction layer, no opaque generated boilerplate. You can read every file and understand exactly what it does.</p>
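<p>A plausible first session with the generated project looks something like this. The exact Makefile target names, the port, and the health route are assumptions to check against your generated files:</p>
<pre><code class="language-bash">cd my-api
docker compose up -d        # start the database from docker-compose.yml
make run                    # or run air for hot reload via .air.toml
curl localhost:8080/health  # hit the generated health-check route
</code></pre>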
<h2>03. FRAMEWORK AND DATABASE OPTIONS</h2>
<p>Seven router options are supported:</p>
<table>
<thead>
<tr>
<th>Framework</th>
<th>Character</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Chi</strong></td>
<td>Lightweight, composable middleware, idiomatic Go</td>
</tr>
<tr>
<td><strong>Gin</strong></td>
<td>Most popular Go web framework, fast</td>
</tr>
<tr>
<td><strong>Fiber</strong></td>
<td>Express-inspired, built on fasthttp</td>
</tr>
<tr>
<td><strong>Echo</strong></td>
<td>High-performance with built-in middleware</td>
</tr>
<tr>
<td><strong>Gorilla/mux</strong></td>
<td>Classic, battle-tested</td>
</tr>
<tr>
<td><strong>HttpRouter</strong></td>
<td>Minimalist radix-tree router</td>
</tr>
<tr>
<td><strong>net/http</strong></td>
<td>Standard library only, zero dependencies</td>
</tr>
</tbody></table>
<p>Database drivers: PostgreSQL, MySQL, SQLite, MongoDB, Redis, and ScyllaDB via GoCQL.</p>
<p>Advanced features via <code>--feature</code>: HTMX with Templ templates, GitHub Actions pipelines, WebSocket setup, Tailwind, Docker, and a React frontend option. The React option generates a full TypeScript Vite frontend alongside the Go backend. For TypeScript developers wanting a Go API without abandoning the frontend tooling they know, this is a useful starting point.</p>
<h2>04. BLUEPRINT UI</h2>
<p>The web interface at <code>go-blueprint.dev</code> lets you compose your project configuration visually. Toggle framework, database, and features via dropdowns and checkboxes, watch the file tree preview update in real time, and copy the exact CLI command to run. This is the equivalent of Next.js&#39;s <code>create-next-app</code> web playground: a low-friction way to explore what you will get before committing to running anything.</p>
<h2>05. THE CREATOR</h2>
<p>go-blueprint was built by Melkey, a Senior ML Infrastructure Engineer at Twitch who has been writing Go professionally for several years. He streams live coding at <code>twitch.tv/melkey</code> and teaches on Frontend Masters: a complete Go for professional developers course and a follow-up on building Go applications that scale on AWS.</p>
<p>The creator profile matters for understanding why the tool exists. Melkey bridges the gap between JavaScript ecosystem thinking and idiomatic Go. He is familiar with <code>create-react-app</code>, with <code>npm publish</code>, with build tooling that prioritizes developer experience. go-blueprint was partly a teaching tool, partly a community utility. The npm install path, the Blueprint UI web app, the React frontend option: these are all nods to developers who are coming from the JavaScript world and need a lower-friction entry point into Go.</p>
<p>The project is listed in <a href="https://awesome-go.com">awesome-go</a>, has roughly 8,800 GitHub stars, and is actively maintained with regular releases.</p>
<h2>06. HOW IT COMPARES TO STARTING FROM SCRATCH</h2>
<p>The honest comparison is not about what go-blueprint generates versus what you could write by hand. A senior Go developer can set up a clean project structure in a couple of hours. The comparison is about what you skip: the research time, the framework evaluation, the decision fatigue about package layout conventions, the Docker Compose boilerplate you have written a dozen times before.</p>
<p>go-blueprint&#39;s value is the same as any scaffolding tool: it handles the decisions that have already been made well enough so that your first commit contains something meaningful rather than infrastructure setup.</p>
<p>The source is at <a href="https://github.com/Melkeydev/go-blueprint">Melkeydev/go-blueprint</a>, MIT licensed.</p>
]]></content:encoded>
      <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
      <category>tools</category>
    </item>
    <item>
      <title><![CDATA[Loomdraft: Architecture of a Local-First Writing App]]></title>
      <link>https://hekke.io/projects/loomdraft/loomdraft-architecture</link>
      <guid isPermaLink="true">https://hekke.io/projects/loomdraft/loomdraft-architecture</guid>
      <description><![CDATA[How Tauri, Rust, and React combine to deliver a native desktop writing experience with no cloud dependencies, no accounts, and complete data ownership.]]></description>
      <content:encoded><![CDATA[<p>Most writing software makes a quiet assumption: your words belong on their servers. Loomdraft starts from the opposite premise. Your manuscript is a folder of plain Markdown files on your own machine. The application is a shell, fast and capable and entirely optional. Delete it tomorrow and your words are still there, readable in any text editor that has ever existed.</p>
<p>Getting there required a specific set of architectural decisions, and each one has consequences that run all the way down the stack.</p>
<h2>01. THE PLATFORM CHOICE</h2>
<p>The modern web is a capable runtime, but building a privacy-first application on top of it introduces a structural tension. Browser-based apps implicitly reach toward the network for authentication, for sync, for storage quotas that assume cloud backup. Even with service workers and IndexedDB, the model assumes connectivity.</p>
<p><strong>Tauri</strong> resolves this tension by using the OS webview as a rendering surface while the application logic runs in Rust. The result looks like an Electron app from the outside: a native window, file menus, OS-level shortcuts. But without the 150MB Chromium bundle. A production Tauri binary ships under 15MB.</p>
<pre><code class="language-rust">// src-tauri/src/main.rs
fn main() {
    tauri::Builder::default()
        .plugin(tauri_plugin_fs::init())
        .plugin(tauri_plugin_dialog::init())
        .invoke_handler(tauri::generate_handler![
            read_project,
            write_document,
            search_fulltext,
            export_manuscript,
        ])
        .run(tauri::generate_context!())
        .expect(&quot;error running Loomdraft&quot;);
}
</code></pre>
<p>The <strong>invoke handler</strong> pattern is the boundary between worlds. React components call <code>invoke(&quot;write_document&quot;, { path, content })</code> and Rust handles the actual filesystem I/O. The frontend never touches the disk directly. All file access goes through typed Rust commands that validate paths, handle errors, and enforce the application&#39;s data model.</p>
<h2>02. THE DATA MODEL</h2>
<p>Every Loomdraft project is a directory tree. Open the project folder in Finder and you see exactly what you&#39;d expect: folders named <code>manuscript/</code> and <code>kb/</code> containing <code>.md</code> files with YAML frontmatter.</p>
<pre><code>my-novel/
├── manuscript/
│   ├── chapter-01.md
│   ├── chapter-02.md
│   └── scenes/
│       ├── opening.md
│       └── inciting-incident.md
├── kb/
│   ├── characters/
│   │   ├── protagonist.md
│   │   └── antagonist.md
│   └── locations/
│       └── city.md
└── .app/
    ├── index.db        ← SQLite FTS5 index
    └── backups/        ← Auto-saves (up to 20 per file)
</code></pre>
<p>The frontmatter on each document carries just enough metadata to reconstruct the tree:</p>
<pre><code class="language-yaml">---
type: chapter
title: &quot;The First Night&quot;
order: 1
created: 2026-01-12T09:14:00Z
wordcount: 4823
---
</code></pre>
<p>Fourteen document types are supported: chapters, scenes, acts, characters, locations, factions, items, lore, timelines, notes, outlines, templates, references, and miscellaneous. The type determines the icon in the sidebar, the default fields in the frontmatter, and where in the tree the document is permitted to nest.</p>
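<p>In TypeScript terms, that type set can be modeled as a closed union. A minimal sketch, with identifiers assumed from the list above rather than taken from Loomdraft&#39;s actual code:</p>
<pre><code class="language-typescript">// Hypothetical sketch of the document-type union (names assumed).
const DOC_TYPES = [
  &quot;chapter&quot;, &quot;scene&quot;, &quot;act&quot;,
  &quot;character&quot;, &quot;location&quot;, &quot;faction&quot;, &quot;item&quot;,
  &quot;lore&quot;, &quot;timeline&quot;, &quot;note&quot;, &quot;outline&quot;,
  &quot;template&quot;, &quot;reference&quot;, &quot;misc&quot;,
] as const;

type DocType = (typeof DOC_TYPES)[number];

// Each type carries its sidebar icon, default frontmatter fields,
// and the parents it is permitted to nest under.
interface DocTypeMeta {
  icon: string;
  defaultFields: string[];
  allowedParents: DocType[] | &quot;any&quot;;
}
</code></pre>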
<h2>03. THE FRONTEND LAYER</h2>
<p>The UI is React 19 with TypeScript, built by Vite. Component architecture follows a straightforward split: layout components (sidebar, editor shell, toolbar), feature components (document tree, search overlay, theme picker), and editor primitives (the CodeMirror instance and its plugins).</p>
<p>The sidebar tree is the primary navigation surface. It maintains a flattened representation of the project hierarchy, supports drag-and-drop reordering, and updates optimistically on any write operation. The Rust backend confirms asynchronously, and the tree reconciles if there&#39;s a discrepancy.</p>
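<p>The flattening step is the usual depth-first walk. A minimal sketch, with illustrative type names rather than Loomdraft&#39;s actual ones:</p>
<pre><code class="language-typescript">interface TreeNode {
  id: string;
  children: TreeNode[];
}

interface FlatRow {
  id: string;
  depth: number; // indentation level in the sidebar
}

// Depth-first flatten: parents precede children, depth tracks nesting.
function flatten(nodes: TreeNode[], depth = 0): FlatRow[] {
  return nodes.flatMap((n) =&gt; [
    { id: n.id, depth },
    ...flatten(n.children, depth + 1),
  ]);
}
</code></pre>
<p>A flat list like this is what makes virtualized rendering and index-based drag-and-drop straightforward.</p>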
<p>State is managed locally, no Redux, no Zustand. Each feature area owns its state through React&#39;s <code>useReducer</code> and context, kept shallow enough that prop drilling is rarely a problem. The editor is the sole exception. It maintains a complex internal state covering cursor position, selection, and undo history that lives entirely within the CodeMirror instance and is never surfaced to React&#39;s state tree.</p>
<h2>04. THE IPC BOUNDARY</h2>
<p>Every filesystem operation crosses the Tauri IPC bridge. The bridge is typed end-to-end: TypeScript interfaces on the frontend match Rust structs on the backend, with <code>serde</code> handling serialization.</p>
<pre><code class="language-typescript">// src/lib/tauri.ts
interface WriteDocumentArgs {
  projectPath: string;
  relativePath: string;
  content: string;
}

export async function writeDocument(args: WriteDocumentArgs): Promise&lt;void&gt; {
  await invoke&lt;void&gt;(&quot;write_document&quot;, args);
}
</code></pre>
<pre><code class="language-rust">// src-tauri/src/commands/documents.rs
use std::fs;
use std::path::Path;

// Path validation (keeping relative_path inside the project root) is elided here.
#[tauri::command]
pub async fn write_document(
    project_path: String,
    relative_path: String,
    content: String,
) -&gt; Result&lt;(), String&gt; {
    let full_path = Path::new(&amp;project_path).join(&amp;relative_path);
    fs::write(&amp;full_path, content)
        .map_err(|e| e.to_string())?;
    Ok(())
}
</code></pre>
<p>Auto-save fires every ten seconds from a React <code>useEffect</code> that diffs the current editor content against the last persisted snapshot. If there&#39;s a change, it calls <code>writeDocument</code>, updates the word count in the sidebar, and queues an FTS index refresh.</p>
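<p>The diff-before-write logic is small enough to sketch outside React. This is an illustration of the shape, not Loomdraft&#39;s actual code; the hash and function names are assumptions:</p>
<pre><code class="language-typescript">type Writer = (content: string) =&gt; Promise&lt;void&gt;;

// djb2-style string hash: cheap, and good enough for change detection.
function hashOf(s: string): number {
  let h = 5381;
  for (let i = 0; i &lt; s.length; i++) h = ((h * 33) ^ s.charCodeAt(i)) &gt;&gt;&gt; 0;
  return h;
}

// Returns a tick function; call it on each interval with the current text.
function createAutoSaver(write: Writer) {
  let lastHash = hashOf(&quot;&quot;); // seeded empty: the first tick with real content saves
  return async function tick(current: string): Promise&lt;boolean&gt; {
    const h = hashOf(current);
    if (h === lastHash) return false; // no change since last save
    await write(current);             // persist via the Tauri command
    lastHash = h;
    return true;
  };
}
</code></pre>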
]]></content:encoded>
      <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
      <category>architecture</category>
    </item>
    <item>
      <title><![CDATA[Loomdraft: Building a Pro Writing Experience with CodeMirror 6]]></title>
      <link>https://hekke.io/projects/loomdraft/loomdraft-editor-engine</link>
      <guid isPermaLink="true">https://hekke.io/projects/loomdraft/loomdraft-editor-engine</guid>
      <description><![CDATA[How CodeMirror 6's extension system powers distraction-free writing modes, wiki-link previews, real-time word counts, and typewriter scroll — without fighting the framework.]]></description>
<content:encoded><![CDATA[<p>The editor is the product. Everything else in Loomdraft (the sidebar, the search, the export pipeline) exists to serve the moments when a writer is staring at the cursor and trying to put the right words in the right order. Getting the editor wrong means the application doesn&#39;t work, no matter how good the rest of it is.</p>
<p>CodeMirror 6 is the right foundation for this, but not for the obvious reason. It isn&#39;t chosen because it ships as a React component you drop into a form. It&#39;s chosen because it&#39;s an extension system first and an editor second. That distinction determines everything about how Loomdraft&#39;s writing modes work.</p>
<h2>01. WHY CODEMIRROR 6</h2>
<p>Version 5 was a monolith. You got a feature set and customized around the edges. Version 6 was a full rewrite organized around a composable extension model. Every capability (syntax highlighting, keybindings, autocomplete, decorations, line numbers) is an extension. You assemble the editor you need from small pieces.</p>
<p>For a writing application, this matters because the feature set required for a novelist is genuinely different from the feature set required for a programmer. Loomdraft needs real-time word counts, not line numbers. It needs paragraph-level soft wrapping with a maximum prose width, not horizontal scrolling. It needs wiki-link hover previews, not LSP completions.</p>
<pre><code class="language-typescript">// src/editor/extensions.ts
import {
  EditorView,
  keymap,
  drawSelection,
} from &quot;@codemirror/view&quot;;
import type { Extension } from &quot;@codemirror/state&quot;;
import { markdown, markdownLanguage } from &quot;@codemirror/lang-markdown&quot;;
import { syntaxHighlighting } from &quot;@codemirror/language&quot;;
// Plus local modules: EditorConfig, proseTheme, proseKeymap, wordCountField,
// wikiLinkPlugin, typewriterScroll, dimSurroundingParagraphs, maxProseWidth.

export function buildExtensions(config: EditorConfig): Extension[] {
  return [
    EditorView.lineWrapping,
    markdown({ base: markdownLanguage }),
    syntaxHighlighting(proseTheme),
    drawSelection(),
    keymap.of(proseKeymap),
    wordCountField,
    wikiLinkPlugin,
    config.typewriterMode ? typewriterScroll : [],
    config.focusMode ? dimSurroundingParagraphs : [],
    config.readingWidth ? maxProseWidth(config.readingWidth) : [],
  ].flat();
}
</code></pre>
<p>The <code>buildExtensions</code> function returns a fresh array whenever the writing mode changes. Dispatching that array through CodeMirror&#39;s <code>StateEffect.reconfigure</code> effect swaps in the new extension set without destroying the document or losing cursor position. Switching from standard mode to typewriter mode is a state transaction, not a remount.</p>
<h2>02. WRITING MODES</h2>
<p>Loomdraft ships four modes. Each is a composition of extensions added or removed from the base set.</p>
<p><strong>Standard</strong> is the default. Syntax-highlighted Markdown, document outline in the right gutter, word count in the status bar.</p>
<p><strong>Focus</strong> activates <code>dimSurroundingParagraphs</code>, a decoration extension that applies reduced opacity to every paragraph except the one containing the cursor. The active paragraph renders at full opacity and everything else at 25%. The effect is subtle but immediate. It collapses the peripheral visual field and anchors attention.</p>
<pre><code class="language-typescript">// src/editor/plugins/focus-mode.ts
const dimSurroundingParagraphs = ViewPlugin.fromClass(
  class {
    decorations: DecorationSet;
    constructor(view: EditorView) {
      this.decorations = this.compute(view);
    }
    update(update: ViewUpdate) {
      if (update.selectionSet || update.docChanged) {
        this.decorations = this.compute(update.view);
      }
    }
    compute(view: EditorView): DecorationSet {
      const cursor = view.state.selection.main.head;
      const activeLine = view.state.doc.lineAt(cursor).number;
      const builder = new RangeSetBuilder&lt;Decoration&gt;();
      for (let i = 1; i &lt;= view.state.doc.lines; i++) {
        const line = view.state.doc.line(i);
        // Mark decorations cannot be empty, so skip blank lines.
        if (i !== activeLine &amp;&amp; line.to &gt; line.from) {
          builder.add(
            line.from,
            line.to,
            Decoration.mark({ class: &quot;cm-dim-paragraph&quot; })
          );
        }
      }
      return builder.finish();
    }
  },
  { decorations: (v) =&gt; v.decorations }
);
</code></pre>
<p><strong>Typewriter</strong> keeps the active line vertically centered in the viewport regardless of scroll position. It&#39;s implemented as a <code>scrollIntoView</code> that fires on every selection change, offset to the viewport midpoint.</p>
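<p>The centering arithmetic behind that offset is a one-liner. A sketch, with an illustrative function name:</p>
<pre><code class="language-typescript">// Scroll offset that places the active line at the viewport midpoint.
function typewriterOffset(
  lineTop: number,       // document-space y of the active line
  lineHeight: number,
  viewportHeight: number
): number {
  return lineTop - (viewportHeight - lineHeight) / 2;
}
</code></pre>
<p>In practice CodeMirror&#39;s <code>EditorView.scrollIntoView</code> effect accepts a <code>y: &quot;center&quot;</code> option that does this job, so the plugin mostly decides <em>when</em> to dispatch it.</p>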
<p><strong>Manuscript</strong> removes all chrome. No sidebar, no status bar, no toolbar. The editor fills the window with a constrained prose width (65ch by default, configurable), large line height, and no syntax decoration. Just text on the page. It&#39;s the mode you switch into when you&#39;re not optimizing the tool, you&#39;re using it.</p>
<h2>03. WIKI-LINKS</h2>
<p>The wiki-link system is a hover preview plugin. Type <code>[[character-name]]</code> anywhere in a document and the bracketed text becomes a clickable link. Hovering shows a popover with the first 200 characters of the linked document and clicking navigates to it, adding a backlink entry to the target&#39;s frontmatter.</p>
<p>The plugin works in two layers. The first is a syntax extension that teaches CodeMirror to parse <code>[[...]]</code> patterns as a distinct token type within the Markdown grammar:</p>
<pre><code class="language-typescript">// src/editor/plugins/wiki-links.ts
import type { MarkdownConfig } from &quot;@lezer/markdown&quot;;

const wikiLinkSyntax: MarkdownConfig = {
  defineNodes: [&quot;WikiLink&quot;],
  parseInline: [
    {
      name: &quot;WikiLink&quot;,
      parse(cx, next, pos) {
        if (next !== 91 || cx.char(pos + 1) !== 91) return -1; // [[
        const end = cx.slice(pos + 2, cx.end).indexOf(&quot;]]&quot;);
        if (end &lt; 0) return -1;
        // Element spans &quot;[[&quot;, the link text, and &quot;]]&quot;.
        return cx.addElement(
          cx.elt(&quot;WikiLink&quot;, pos, pos + end + 4)
        );
      },
    },
  ],
};
</code></pre>
<p>The second layer is a decoration plugin that applies a <code>cm-wiki-link</code> class to all <code>WikiLink</code> nodes, enabling CSS hover effects, and registers a <code>hoverTooltip</code> that fetches document previews from the Rust backend on demand.</p>
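<p>The preview itself is just the document body with frontmatter stripped, truncated to the 200-character budget. A sketch of that helper; the name and regex are assumptions, not Loomdraft&#39;s actual code:</p>
<pre><code class="language-typescript">// Strip YAML frontmatter, then truncate for the hover popover.
function previewOf(content: string, limit = 200): string {
  const body = content.replace(/^---[\s\S]*?---\s*/, &quot;&quot;);
  return body.length &lt;= limit ? body : body.slice(0, limit) + &quot;…&quot;;
}
</code></pre>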
<h2>04. WORD COUNT AND AUTO-SAVE</h2>
<p>The word count is a CodeMirror <code>StateField</code>, a piece of state stored in the editor state object and recomputed whenever the document changes.</p>
<pre><code class="language-typescript">const wordCountField = StateField.define&lt;number&gt;({
  create(state) {
    return countWords(state.doc.toString());
  },
  update(count, tr) {
    return tr.docChanged
      ? countWords(tr.newDoc.toString())
      : count;
  },
});

function countWords(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}
</code></pre>
<p>React reads the word count from the editor via <code>EditorView.state.field(wordCountField)</code> in a <code>useEffect</code> that subscribes to view updates. This keeps the count in sync without requiring React re-renders on every keystroke. The state field computes inside CodeMirror&#39;s own transaction system, and React is only notified when the component needs to display a new number.</p>
<p>Auto-save runs in a <code>useEffect</code> with a 10-second interval. It reads the current document string, diffs it against the last saved snapshot using a simple hash comparison, and invokes <code>write_document</code> only if something changed. On every save, a backup copy is written to <code>.app/backups/[filename]-[timestamp].md</code> and the oldest backup is pruned if the count exceeds 20.</p>
<h2>05. THE LESSON</h2>
<p>CodeMirror 6 rewards the investment in understanding its extension model. The early hours of learning <code>StateField</code>, <code>ViewPlugin</code>, <code>Decoration</code>, and <code>Transaction</code> pay back directly in capability. Features that would require hacking the DOM in most editors are first-class extension points here.</p>
<p>For a writing application, that means the editor grows alongside the requirements. New writing modes, new document annotations, new keyboard behaviors: they&#39;re all extensions, not patches.</p>
]]></content:encoded>
      <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
      <category>software</category>
    </item>
    <item>
      <title><![CDATA[Loomdraft: SQLite, Plain Markdown, and the Local-First Model]]></title>
      <link>https://hekke.io/projects/loomdraft/loomdraft-local-first</link>
      <guid isPermaLink="true">https://hekke.io/projects/loomdraft/loomdraft-local-first</guid>
      <description><![CDATA[Why Loomdraft stores everything in plain files, how SQLite FTS5 delivers instant full-text search without a server, and what 'local-first' actually costs in practice.]]></description>
      <content:encoded><![CDATA[<p>Local-first software is a set of tradeoffs, not a moral position. You give up effortless sync across devices and gain complete ownership of your data. You give up the server&#39;s ability to index everything for you and gain the guarantee that your words exist independently of any company&#39;s continued operation.</p>
<p>Loomdraft makes those tradeoffs consciously. This post is about the specific technical decisions that implement the local-first model, and where they required more work than the cloud alternative would have.</p>
<h2>01. PLAIN FILES AS THE CANONICAL FORMAT</h2>
<p>The project directory is the database. This is a deliberate choice with a specific failure mode in mind: vendor lock-in through proprietary format.</p>
<p>Scrivener stores projects in a binary <code>.scriv</code> bundle. Notion&#39;s export is lossy. Obsidian&#39;s format is close to plain Markdown but depends on specific plugin behaviors for some features. If any of these companies shut down tomorrow, recovery ranges from difficult to impossible.</p>
<p>Loomdraft&#39;s canonical format is a folder of <code>.md</code> files with YAML frontmatter. The application reads and writes this format directly. The SQLite database at <code>.app/index.db</code> is explicitly a derivative artifact, a cache built from the source files, always rebuildable from scratch.</p>
<pre><code># .app/index.db is a derived index, not the source of truth.
# Delete it and the app recreates an empty schema on next launch:

SELECT count(*) FROM documents;  -- 0, the index is gone

# Trigger a rebuild from the Markdown source files:
invoke(&quot;rebuild_index&quot;, { projectPath })

SELECT count(*) FROM documents;  -- 47, everything is back
</code></pre>
<p>The practical consequence: you can open a Loomdraft project in VS Code, edit the Markdown directly, and the next time you open the project in the app, your changes are reflected. The app detects file modification timestamps on startup and reconciles any external edits.</p>
<h2>02. THE SQLITE FTS5 INDEX</h2>
<p>Full-text search across a long-form project requires an index. Scanning every file on every keystroke would be too slow, and a proper search engine like Elasticsearch would be absurdly heavy for a desktop app. SQLite&#39;s FTS5 extension is exactly the right size.</p>
<p>The schema is minimal:</p>
<pre><code class="language-sql">CREATE TABLE documents (
    id TEXT PRIMARY KEY,        -- relative path from project root
    title TEXT NOT NULL,
    type TEXT NOT NULL,
    content TEXT NOT NULL,
    modified_at INTEGER NOT NULL,
    wordcount INTEGER DEFAULT 0
);

CREATE VIRTUAL TABLE documents_fts USING fts5(
    title,
    content,
    content=&#39;documents&#39;,
    content_rowid=&#39;rowid&#39;,
    tokenize=&#39;unicode61 remove_diacritics 2&#39;
);
</code></pre>
<p>The <code>content=</code> parameter makes <code>documents_fts</code> an <em>external content table</em>: it stores only the token index and retrieves the actual text from <code>documents</code> at query time. Storage is more efficient, and a small set of triggers on the base table keeps the index in sync as rows are inserted, updated, and deleted.</p>
<p>Search queries use BM25 ranking (FTS5&#39;s default) with a boost on title matches:</p>
<pre><code class="language-rust">// src-tauri/src/search.rs
pub fn search(conn: &amp;Connection, query: &amp;str) -&gt; Result&lt;Vec&lt;SearchResult&gt;&gt; {
    let sql = &quot;
        SELECT
            d.id,
            d.title,
            d.type,
            snippet(documents_fts, 1, &#39;&lt;mark&gt;&#39;, &#39;&lt;/mark&gt;&#39;, &#39;...&#39;, 20) AS excerpt,
            bm25(documents_fts, 3.0, 1.0) AS rank
        FROM documents_fts
        JOIN documents d ON documents_fts.rowid = d.rowid
        WHERE documents_fts MATCH ?1
        ORDER BY rank
        LIMIT 50
    &quot;;
    // bm25 column weights: title = 3x, content = 1x
    let mut stmt = conn.prepare(sql)?;
    let results = stmt.query_map([query], |row| {
        Ok(SearchResult {
            id: row.get(0)?,
            title: row.get(1)?,
            doc_type: row.get(2)?,
            excerpt: row.get(3)?,
        })
    })?
    .collect::&lt;Result&lt;Vec&lt;_&gt;, _&gt;&gt;()?;
    Ok(results)
}
</code></pre>
<p>The <code>snippet()</code> function returns context around the match with configurable highlight markers. The frontend renders these as styled HTML, with <code>&lt;mark&gt;</code> elements picking up a blue background from the design system.</p>
<h2>03. THE BACKUP SYSTEM</h2>
<p>Every save writes a backup. The backup path is derived from the source file path and a timestamp:</p>
<pre><code class="language-rust">// src-tauri/src/backup.rs
pub fn write_backup(
    project_path: &amp;Path,
    relative_path: &amp;str,
    content: &amp;str,
) -&gt; Result&lt;()&gt; {
    let stem = Path::new(relative_path)
        .file_stem()
        .and_then(|s| s.to_str())
        .ok_or(&quot;invalid path&quot;)?;

    let timestamp = SystemTime::now()
        .duration_since(UNIX_EPOCH)?
        .as_secs();

    let backup_name = format!(&quot;{}-{}.md&quot;, stem, timestamp);
    let backup_dir = project_path
        .join(&quot;.app/backups&quot;)
        .join(relative_path)
        .parent()
        .unwrap()
        .to_owned();

    fs::create_dir_all(&amp;backup_dir)?;
    fs::write(backup_dir.join(&amp;backup_name), content)?;

    // Prune: keep most recent 20, delete the rest
    prune_backups(&amp;backup_dir, stem, 20)?;
    Ok(())
}
</code></pre>
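<p>The pruning rule (keep the newest 20, delete the rest) reduces to a sort over the timestamps embedded in the filenames. Purely as an illustration of what <code>prune_backups</code> does, here it is in TypeScript terms:</p>
<pre><code class="language-typescript">// Given backup filenames like &quot;chapter-01-1712345678.md&quot;, return the ones to delete.
// Illustrative only; the real logic lives in the Rust prune_backups above.
function backupsToPrune(names: string[], keep: number): string[] {
  const stamp = (n: string) =&gt; Number(n.match(/-(\d+)\.md$/)?.[1] ?? 0);
  return [...names]
    .sort((a, b) =&gt; stamp(b) - stamp(a)) // newest first
    .slice(keep);                        // everything past the newest N
}
</code></pre>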
<p>Backups are browsable from the UI as a version timeline. Each entry shows a timestamp and the word count delta since the previous version. Restoration replaces the current document content with the backup version and triggers a new save. The restored version becomes the new head, and the restoration is itself backed up.</p>
<h2>04. WHAT LOCAL-FIRST ACTUALLY COSTS</h2>
<p>The honest answer is index management.</p>
<p>In a cloud-first application, the server owns the index. Full-text search, backlinks, word counts, document relationships are all computed once and served to all clients. The application developer writes a few API calls.</p>
<p>In a local-first application, every client builds and maintains its own index. Loomdraft handles several failure cases that a cloud app would never encounter.</p>
<p><strong>Stale index after external edits.</strong> A writer opens the project folder in their text editor, edits three files, then returns to Loomdraft. The app compares modification timestamps on startup and rebuilds index entries for changed files. For large projects (200+ documents) this adds about 300ms to project load time.</p>
<p><strong>Concurrent writes.</strong> On some platforms, the OS may deliver file change notifications while the app is writing. Loomdraft serializes all writes through a Tokio async runtime with a single-writer mutex on the project directory, avoiding races.</p>
<p><strong>Index corruption.</strong> SQLite is remarkably resilient but <code>.app/index.db</code> can be corrupted by a force-quit during a write. The app checks database integrity on startup with <code>PRAGMA integrity_check</code> and offers to rebuild from source files if corruption is detected.</p>
<p>None of these problems are hard. But they&#39;re problems you own entirely, with no support from a platform.</p>
<hr>
<p>The application is open source under the MIT license. The full source is at <a href="https://github.com/H3kk3/Loomdraft">github.com/H3kk3/Loomdraft</a> and the live demo runs at <a href="https://h3kk3.github.io/Loomdraft">h3kk3.github.io/Loomdraft</a>.</p>
]]></content:encoded>
      <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
      <category>software</category>
    </item>
    <item>
      <title><![CDATA[Pretext: 300x Faster Text Layout Without DOM Reflow]]></title>
      <link>https://hekke.io/blog/pretext-text-layout</link>
      <guid isPermaLink="true">https://hekke.io/blog/pretext-text-layout</guid>
      <description><![CDATA[Cheng Lou's new 15KB library rewrites the rules of text measurement. Pure arithmetic, no DOM thrashing, and it's already going viral.]]></description>
<content:encoded><![CDATA[<p>Text layout on the web has always been a quiet tax. Every time your browser needs to know how tall a paragraph is, it stops everything, recalculates the entire document geometry, and hands you the answer. Do it a thousand times a second, in an animation, a drag handler, or a live editor, and you&#39;ve just handed the browser a performance death sentence. Cheng Lou decided to fix that.</p>
<h2>01. THE PROBLEM</h2>
<p>The browser&#39;s layout engine is both your best friend and your worst enemy. Layout reads like <code>getBoundingClientRect()</code>, <code>offsetHeight</code>, and <code>scrollHeight</code> return accurate answers, but at a steep cost: each one can force a <strong>synchronous layout recalculation</strong> of the entire page. Interleave those reads with writes and you get <em>layout thrashing</em>.</p>
<pre><code class="language-javascript">// This looks innocent. It&#39;s not.
for (const el of elements) {
  const height = el.offsetHeight;  // forces reflow
  el.style.top = height + &quot;px&quot;;    // invalidates layout again
}
</code></pre>
<p>Each read-write cycle triggers a full reflow. In a tight loop or animation frame, this cascades into dropped frames, jank, and UI that visibly struggles to keep up. The standard workarounds (debouncing, batching reads before writes, <code>ResizeObserver</code>) reduce the damage but don&#39;t eliminate it.</p>
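<p>The batching workaround is worth seeing concretely. A sketch, with a minimal stand-in interface so the two phases are explicit outside a browser:</p>
<pre><code class="language-typescript">// `Box` stands in for a DOM element so the read/write phases are visible.
interface Box {
  offsetHeight: number;
  style: { top: string };
}

function positionAll(elements: Box[]): void {
  // Phase 1: all reads first (at most one forced reflow in a real DOM).
  const heights = elements.map((el) =&gt; el.offsetHeight);
  // Phase 2: all writes after (layout is invalidated once, not per element).
  elements.forEach((el, i) =&gt; {
    el.style.top = heights[i] + &quot;px&quot;;
  });
}
</code></pre>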
<p>For anyone building fluid editorial layouts, real-time text editors, multi-column magazine grids, or interactive typography, the DOM is simply the wrong tool. The measurements come too late and cost too much.</p>
<h2>02. THE APPROACH</h2>
<p>Pretext sidesteps the DOM entirely. Instead of asking the browser <em>&quot;how tall is this text?&quot;</em> after rendering it, Pretext computes the answer through <strong>pure arithmetic</strong>, the same way a typesetting engine would.</p>
<p>The key insight: browsers already expose fast, non-reflow-inducing font metrics through the Canvas 2D API (<code>context.measureText()</code>). Pretext uses these measurements as ground truth, then implements the full CSS text layout algorithm (word wrapping, line breaking, <code>overflow-wrap</code>, <code>word-break</code>, bidirectional text) in pure JavaScript.</p>
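<p>The core of that algorithm is arithmetic over per-word widths. A toy greedy line-breaker shows the shape; this is an illustration of the idea, not Pretext&#39;s code, which also handles the CSS breaking rules, bidi, and more:</p>
<pre><code class="language-typescript">// Greedy wrap: pack words onto a line until the next word would overflow.
function breakLines(
  words: string[],
  widthOf: (w: string) =&gt; number, // e.g. cached measureText widths
  maxWidth: number,
  spaceWidth: number
): string[][] {
  const lines: string[][] = [[]];
  let lineWidth = 0;
  for (const word of words) {
    const w = widthOf(word);
    const line = lines[lines.length - 1];
    const needed = line.length === 0 ? w : lineWidth + spaceWidth + w;
    if (needed &gt; maxWidth &amp;&amp; line.length &gt; 0) {
      lines.push([word]); // start a new line
      lineWidth = w;
    } else {
      line.push(word);
      lineWidth = needed;
    }
  }
  return lines;
}

// Height is then pure arithmetic: lines.length * lineHeight.
</code></pre>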
<pre><code class="language-javascript">// One-time analysis per text + font combo (~19ms for 500 texts)
const handle = prepare(&quot;The quick brown fox...&quot;, {
  font: &quot;16px Inter&quot;,
  lineHeight: 1.5,
});

// Zero-reflow height calculation at any width (0.09ms)
const { height } = layout(handle, { width: 640 });
</code></pre>
<p>The <code>prepare()</code> call does the expensive work once: it measures all the individual glyphs and builds internal lookup tables. After that, <code>layout()</code> is pure math, no DOM access, no reflow, no waiting.</p>
<h2>03. THE NUMBERS</h2>
<p>The performance gap is not marginal.</p>
<table>
<thead>
<tr>
<th>Method</th>
<th>500 texts</th>
<th>Per layout</th>
</tr>
</thead>
<tbody><tr>
<td><code>offsetHeight</code> (DOM)</td>
<td>~27,000ms</td>
<td>~54ms</td>
</tr>
<tr>
<td><code>getBoundingClientRect</code></td>
<td>~8,500ms</td>
<td>~17ms</td>
</tr>
<tr>
<td><strong>Pretext</strong></td>
<td><strong>~19ms prepare + ~45ms layout</strong></td>
<td><strong>~0.09ms</strong></td>
</tr>
</tbody></table>
<p>At 0.09ms per layout call, a hundred text elements cost 9ms, which fits inside a single 16ms animation frame with 7ms left over; the same hundred calls through <code>offsetHeight</code> would take over five seconds. The library itself is 15KB, smaller than most icon fonts.</p>
<h2>04. WHO BUILT IT</h2>
<p>Cheng Lou is not a newcomer. He was a core contributor to the React team at Facebook, the creator of ReasonML (a typed functional language targeting JavaScript), and later a founding engineer at Midjourney where he worked on real-time UI at scale.</p>
<p>When he posted Pretext on GitHub in March 2026, the reaction was immediate. Vercel CEO Guillermo Rauch called it <em>&quot;so good.&quot;</em> The repo hit trending within hours. Engineers who had spent years fighting DOM reflow in production applications recognized the problem instantly and recognized that this was a real solution, not a workaround.</p>
<p>Lou described the motivation bluntly: text layout had become <em>&quot;hellish infrastructure&quot;</em> that nobody wanted to own. Pretext is his attempt to make it infrastructure you don&#39;t have to think about.</p>
<h2>05. LIVE DEMO</h2>
<p>The demo below shows what Pretext&#39;s <code>layoutNextLine()</code> API makes possible: text reflowing around a moving shape in real time, at 60fps, with no DOM involvement. Move your cursor over the canvas and the circle follows it, while the paragraph reroutes itself around the obstacle on every frame.</p>
<p>This kind of interaction was previously impractical in a browser. The moment the circle moves, every line must be re-measured and re-laid-out. With DOM-based measurement, that&#39;s hundreds of reflows per second. With Pretext&#39;s approach, it&#39;s arithmetic that completes in under a millisecond.</p>
<h2>06. WHAT THIS UNLOCKS</h2>
<p>The performance headroom Pretext creates isn&#39;t just about speed; it&#39;s about what becomes <strong>possible</strong> for the first time.</p>
<ul>
<li><strong>Fluid multi-column editorial layouts</strong> that reflow live as the user resizes the window, with no observable lag</li>
<li><strong>Text-around-shape wrapping</strong> at interactive framerates (as shown above)</li>
<li><strong>Server-side text layout</strong>: run the same code in Node.js to pre-calculate column heights before the page renders, eliminating layout shift</li>
<li><strong>ASCII particle systems</strong> where thousands of characters are individually positioned using calculated typographic metrics</li>
<li><strong>Variable-width text streams</strong> for typographic animations that would otherwise require a canvas renderer and a custom font parser</li>
</ul>
<p>The library also handles everything the web throws at text: RTL and bidirectional scripts, platform-specific emoji width quirks, <code>white-space: pre-wrap</code> semantics, and the full CSS word-breaking algorithm. </p>
<p>Text layout has been a solved problem in desktop software for thirty years. On the web, it&#39;s been a recurring nightmare. Pretext is the first library that treats it as the infrastructure problem it actually is and solves it properly.</p>
]]></content:encoded>
      <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
      <category>software</category>
    </item>
  </channel>
</rss>