Patrick Dubroy's Avatar

Patrick Dubroy

@dubroy.com

Programmer & researcher, co-creator of https://ohmjs.org. 🇨🇦 🇩🇪 🇪🇺 Co-author of https://wasmgroundup.com — learn Wasm by building a simple compiler in JavaScript. Prev: CDG/HARC, Google, BumpTop

1,562
Followers
293
Following
650
Posts
04.05.2023
Joined

Latest posts by Patrick Dubroy @dubroy.com

Just like in JavaScript, you can do shared memory multithreading in WebAssembly! I've long known this was possible, but until the other day, had never actually played with it myself, so I decided to put together a small, self-contained example.

(This is for Node, but it's pretty much the same in the browser.)


Structured cloning of WebAssembly.Module
Normally you'd instantiate a Wasm module with WebAssembly.instantiate, which gives you a module instance. Here, we use WebAssembly.compile, which gives us a WebAssembly.Module. This is a stateless object that is structured-cloneable, which allows it to be safely shared across realm boundaries.
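For instance, here's a minimal sketch using the synchronous WebAssembly.Module constructor (the sync counterpart of WebAssembly.compile); the bytes below are the smallest valid module, just the magic number and version:

```javascript
// The 8-byte preamble (magic number "\0asm" + version 1) is the smallest
// valid Wasm module. Compiling it yields a stateless WebAssembly.Module.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
const module = new WebAssembly.Module(bytes);
console.log(module instanceof WebAssembly.Module); // true

// Because it's structured-cloneable, it could then be posted to a worker, e.g.:
//   new Worker("./worker.js").postMessage({ module });
```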

Serialization (an implicit part of structured cloning) of WebAssembly modules is defined in §3 of the WebAssembly Web API, which says:

Engines should attempt to share/reuse internal compiled code when performing a structured serialization, although in corner cases like CPU upgrade or browser update, this might not be possible and full recompilation may be necessary.

Shared memory
WebAssembly.Memory also supports structured cloning. When we pass shared: true, the buffer property is a SharedArrayBuffer:

The structured clone algorithm accepts SharedArrayBuffer objects and typed arrays mapped onto SharedArrayBuffer objects. In both cases, the SharedArrayBuffer object is transmitted to the receiver resulting in a new, private SharedArrayBuffer object in the receiving agent (just as for ArrayBuffer). However, the shared data block referenced by the two SharedArrayBuffer objects is the same data block, and a side effect to the block in one agent will eventually become visible in the other agent.
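A quick way to see this in Node (a sketch; note that a shared memory must declare a maximum size):

```javascript
// A shared Wasm memory exposes a SharedArrayBuffer; a non-shared
// one exposes a plain ArrayBuffer.
const shared = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
console.log(shared.buffer instanceof SharedArrayBuffer); // true

const plain = new WebAssembly.Memory({ initial: 1 });
console.log(plain.buffer instanceof SharedArrayBuffer); // false
```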

Atomic add
The last piece of the puzzle is the i32.atomic.rmw.add instruction used in the addId function:

  (func (export "add") (result i32)
    ;; mem[0] += workerId
    i32.const 0
    global.get 0
    i32.atomic.rmw.add))
This instruction is defined in the threads proposal (a Phase 4 proposal, so not yet finalized), which defines "a new shared linear memory type and some new operations for atomic memory access".

i32.atomic.rmw.add is equivalent to LOCK XADD on x86. As described in the threads proposal, atomic read-modify-write operators atomically load a value from an address, modify it, store the result back, and return the loaded (old) value.
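On the JS side, the same read-modify-write is available as Atomics.add. Here's a sketch of the equivalent operation on a shared Wasm memory (workerId here is just a stand-in for the module's global 0):

```javascript
// Sketch: Atomics.add is the JS counterpart of i32.atomic.rmw.add.
// It atomically performs view[0] += workerId and returns the old value.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
const view = new Int32Array(memory.buffer);

const workerId = 3; // stand-in for the module's global 0
const oldValue = Atomics.add(view, 0, workerId);
console.log(oldValue, view[0]); // 0 3
```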


TIL: Multithreaded WebAssembly
โ†’ github.com/pdubroy/til/...

(corrected)

07.03.2026 09:40 👍 1 🔁 1 💬 0 📌 0
Post image

The playground is awesome! Btw you might want to change the CSS for the shortcuts… it's resolving to Fira Code for me, which has ligatures for many of these things, which makes it confusing.

Adding `font-variant-ligatures: none` seems to fix it.

07.03.2026 08:28 👍 0 🔁 0 💬 0 📌 0
L14: Natural Deduction for IfArith (YouTube video by Kristopher Micinski)

And @krismicinski.bsky.social's "Natural Deduction for IfArith" lecture is also great: www.youtube.com/watch?v=neCr...

05.03.2026 11:15 👍 1 🔁 0 💬 1 📌 0
Crash Course on Notation in Programming Language Theory This blog post is meant to help my friends get started in reading my other blog posts, that is, this post is a crash course on the notation ...

A while back, someone in the @wasmgroundup.com Discord asked about resources for learning the formal notation used in the WebAssembly spec.

One I like is Jeremy Siek's "Crash Course on Notation in Programming Language Theory": siek.blogspot.com/2012/07/cras...

05.03.2026 11:01 👍 6 🔁 1 💬 2 📌 0
It comes up rarely, but on a few projects I've wanted a dead simple hash table implementation. Most recently, it was for an experiment in the Ohm WebAssembly compiler. When I'm compiling a grammar, I assign each rule name a unique ID, but I wanted a fixed-size cache (e.g. 8 or 32 items) keyed by rule ID.

I discovered Fibonacci hashing, aka "Knuth's multiplicative method":

So here's the idea: Let's say our hash table is 1024 slots large, and we want to map an arbitrarily large hash value into that range. The first thing we do is we map it using the above trick into the full 64 bit range of numbers. So we multiply the incoming hash value with 2^64/φ ≈ 11400714819323198485. (the number 11400714819323198486 is closer but we don't want multiples of two because that would throw away one bit) Multiplying with that number will overflow, but just as we wrapped around the circle in the flower example above, this will wrap around the whole 64 bit range in a nice pattern, giving us an even distribution across the whole range from 0 to 2^64. To illustrate, let's just look at the upper three bits. So we'll do this:

size_t fibonacci_hash_3_bits(size_t hash)
{
    return (hash * 11400714819323198485llu) >> 61;
}
All we have to do to get an arbitrary power of two range is to change the shift amount. So if my hash table is size 1024, then instead of just looking at the top 3 bits I want to look at the top 10 bits. So I shift by 54 instead of 61. Easy enough.

It turns out the Linux kernel has used this for ~6 years; here's the comment from include/linux/hash.h:

/*
 * This hash multiplies the input by a large odd number and takes the
 * high bits.  Since multiplication propagates changes to the most
 * significant end only, it is essential that the high bits of the
 * product be used for the hash value.
 *
 * Chuck Lever verified the effectiveness of this technique:
 * http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf
 *
 * Although a random odd number will do, it turns out that the golden
 * ratio phi = (sqrt(5)-1)/2, or its negative, has particularly nice
 * properties.  (See Knuth vol 3, section 6.4, exercise 9.)
 *
 * These are the negative, (1 - phi) = phi**2 = (3 - sqrt(5))/2,
 * which is very slightly easier to multiply by and makes no
 * difference to the hash distribution.
 */
#define GOLDEN_RATIO_32 0x61C88647
#define GOLDEN_RATIO_64 0x61C8864680B583EBull
Why would you use this?
(I may get some details of this explanation wrong, because hashing and hash table sizing are a surprisingly complex subject!)

If I understand correctly, it makes sense to use this if (a) you don't have access to a good hash function, and (b) you want power-of-two (not prime) table sizes; and/or (c) you want the bucket calculation operation to be as fast as possible. (A multiplication plus a shift is significantly faster than modulo/division.)
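Here's a sketch of the same idea in JavaScript, using BigInt to get the wrapping 64-bit multiply (the names here are mine, not from any library):

```javascript
// Fibonacci hashing: multiply by ~2^64/φ with 64-bit wraparound, keep the top bits.
const GOLDEN_RATIO_64 = 11400714819323198485n;
const MASK_64 = (1n << 64n) - 1n;

// Map `key` into a power-of-two range [0, 2^bits).
function fibonacciHash(key, bits) {
  const product = (BigInt(key) * GOLDEN_RATIO_64) & MASK_64; // wrapping multiply
  return Number(product >> (64n - BigInt(bits)));
}

// A 1024-slot table uses the top 10 bits of the product:
const slot = fibonacciHash(42, 10);
console.log(slot >= 0 && slot < 1024); // true
```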


TIL: Fibonacci hashing
โ†’ https://github.com/pdubr...

03.03.2026 16:01 👍 3 🔁 0 💬 0 📌 0

I should also mention that these are just big, long, mega-notes in chronological order. So pretty easy to find things by visually scanning, and (rarely) searching.

01.03.2026 11:53 👍 2 🔁 0 💬 0 📌 0

It's just a single note for all cooking/baking stuff. So I can do Cmd+F or just visually scan. I don't try that many new things (maybe a few times a month) so it's pretty easy to find.

01.03.2026 11:50 👍 1 🔁 0 💬 0 📌 0
Raycast Notes search interface, with three notes highlighted with blue rectangles: 'Useful stuff' (preview shows "Replacement parts for Roborock…"), 'Cool stuff', and 'Cooked' (preview shows "2026-02-12: Made gyoza again…").


My 80/20, grug-brained personal productivity system:

- Cool stuff: URLs, books, movies, etc. I want to remember.
- Useful stuff: how/where/etc for things I do a few times a year.
- Cooked: for cooking/baking: what recipe (URL or book), any adjustments I made, how it turned out.

01.03.2026 11:10 👍 10 🔁 0 💬 2 📌 0

Oh, thanks, didn't know about that! Would have been eligible already on GitHub stars

27.02.2026 19:40 👍 0 🔁 0 💬 0 📌 0

Heh. Thanks! Hope this is a good thing :-)

27.02.2026 16:01 👍 1 🔁 0 💬 0 📌 0

TIL that everyone who installs the Vercel CLI now gets a copy of @ohmjs.org

27.02.2026 11:11 👍 20 🔁 1 💬 3 📌 0
Ahoy!

Hope your February has been swell. Here in southern Germany, it's been a relatively snowy winter… but it's finally starting to feel like spring.

First and foremost, we wanted to let you know that we published a new blog post last week, A WebAssembly interpreter (Part 2). In Part 1, we created a simple Wasm interpreter from scratch, but it was only able to evaluate expressions consisting of literals. In the latest post, we add support for local and global variables. Give it a look!

And here are your Wasm tidbits for February:
 โ€ข "WebCC is a lightweight, zero-dependency C++ toolchain and framework for building WebAssembly applications. It provides a direct, high-performance bridge between C++ and HTML5 APIs." And then there's Coi, "a modern, component-based language for building reactive web apps", which is built on WebCC. 
 โ€ข Marimo is an open-source reactive Python notebook; like Jupyter, but better in many ways (no hidden state, stored as pure Python files, โ€ฆ). And it also supports WebAssembly notebooks, powered by Pyodide; in other words, Wasm notebooks execute entirely in the browser, without a backend executing Python. 
 โ€ข Along the same lines: Pandoc for the People is a fully-featured GUI interface for Pandoc (probably the most-used Haskell program ever). It lets you run any kind of conversion that pandoc supports, without the documents ever leaving your computer. It's based on the recent Pandoc 3.9 release, which supports Wasm via the GHC wasm backend.


Is it the end of February already??

Yes, yes it is. Which means we just sent out our #Wasm tidbits.

Sign up here to get it in your inbox once a month (ish): sendfox.com/wasmgroundup

26.02.2026 14:56 👍 4 🔁 2 💬 0 📌 0
Video thumbnail

Here's my first creature.

25.02.2026 10:55 👍 5 🔁 0 💬 0 📌 0
Programmierung kรผnstlichen Lebens in Scratch

Lerne Programmieren und erstelle dein eigenes, interaktives, digitales Wesen. Zuerst entwirfst du deine Figur (auf Papier oder auf dem iPad). Dann lernst du, wie du ihr in Scratch Verhalten gibst. Lass sie รผber den Bildschirm laufen, nach Futter suchen, entscheiden, wann sie schlafen muss. Dein iPad wird zu einer virtuellen Welt!

Programming Artificial Life in Scratch
Learn programming and create your own interactive digital creature. First, you design your character (on paper or on the iPad). Then you learn how to give it behaviour in Scratch. Let it walk across the screen, search for food, decide when it needs to sleep. Your iPad becomes a virtual world!


Starting another Scratch course at my kids' (Montessori) school today.

A bit different this time — the theme is "artificial life". Taking some inspiration from @shiffman.lol's natureofcode.com

25.02.2026 10:53 👍 13 🔁 0 💬 1 📌 0
 * This library intercepts time at multiple levels to slow down (or speed up)
 * all animations on a web page.
 *
 * ## How it works:
 *
 * 1. **requestAnimationFrame patching**: We replace window.requestAnimationFrame
 *    with a wrapper that passes modified timestamps to callbacks. Time-based
 *    animations that use the timestamp parameter will automatically slow down.
 *
 * 2. **performance.now() patching**: We replace performance.now() to return
 *    virtual time. Libraries that use this for timing will be affected.
 *
 * 3. **Date.now() patching**: We replace Date.now() to return virtual epoch
 *    milliseconds. Libraries like Motion/Framer Motion use this for timing.
 *
 * 4. **setTimeout/setInterval patching**: We scale delays by inverse of speed
 *    so timed callbacks fire at the expected virtual time.
 *
 * 5. **Web Animations API**: We poll document.getAnimations() and modify the
 *    playbackRate of all Animation objects. This affects CSS animations,
 *    CSS transitions, and element.animate() calls.
 *
 * 6. **Media elements**: We set playbackRate on video/audio elements.
 *
 * ## Limitations:
 *
 * - Frame-based animations (that increment by a fixed amount per frame without
 *   using timestamps) cannot be smoothly slowed down.
 *
 * - Animations created by libraries that cache their own time references
 *   before we patch may not be affected. The Chrome extension runs at
 *   document_start to minimize this issue.


slowmo.dev by @seflless.bsky.social is pretty damn cool — "Slow down, pause, or speed up time of any web content."

Here's how it works.
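The core trick behind several of those patches can be sketched as a virtual clock (this is an illustrative sketch, not slowmo's actual code):

```javascript
// Illustrative sketch: a virtual clock that advances at `speed` times
// the rate of the underlying clock.
function makeVirtualClock(realNow, speed) {
  const start = realNow();
  return () => start + (realNow() - start) * speed;
}

// Patching performance.now with a half-speed clock would then look like:
//   performance.now = makeVirtualClock(performance.now.bind(performance), 0.5);
```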

25.02.2026 05:19 👍 7 🔁 0 💬 0 📌 0

You can see it here: github.com/wasmgroundup...

Ended up using mostly branded types, as that seemed to provide the best ergonomics.

24.02.2026 18:11 👍 1 🔁 1 💬 0 📌 0
Post image

people often don't appreciate how dramatically the intensity of knowledge work, especially in STEM, has changed in the last 60 years even prior to the advent of AI. tough to imagine that a scientist would spend most of their time literally plotting data

24.02.2026 14:47 👍 91 🔁 13 💬 6 📌 6
I just spent an annoying 30 minutes debugging an issue caused by a silly mistake —

I wanted to verify that two sets had the same contents. So I wrote something like this:

assert(newSet.difference(oldSet).size === 0, 'sets are different!')
But this doesn't detect if oldSet has some items that aren't in newSet! What I should have been using was symmetricDifference:

assert(newSet.symmetricDifference(oldSet).size === 0, 'sets are different!')
Both of these methods are Baseline 2024 features.

On the names
Swift's name for difference is subtracting, which is less confusing imo.

I decided to see where the naming was discussed on the original TC39 proposal, and found tc39/proposal-set-methods#7, with the following comment from tabatkins:

Sorry for this being back-and-forth, but difference has the same lack of implicit ordering as complement did — it's not immediately, intuitively clear which element's values are retained. minus and subtract are both good; removeAll, while it implies mutation semantics, is also extremely clear and good in this regard.

Related: "symmetricDifference" vastly exceeds my design instincts for what is an allowable level of spelling difficulty in an API. "symmetric" is not an easy word to spell (my fingers just now tried to type it with a single "m"!), and combined with another 10 letters after, it's huge and terrible. xor has a non-obvious meaning for many people, including native English speakers, but it's short and easy to spell; unsure if it's good enough or not.

Interestingly, the conclusion in that thread was to use except. There was further discussion in #24: Method names should pick a theme, but I couldn't figure out where (or why) they decided on difference.


TIL: Set difference vs symmetric difference
โ†’ https://github.com/pdubr...

24.02.2026 13:10 👍 2 🔁 0 💬 1 📌 0

Finally! 😅

23.02.2026 16:52 👍 8 🔁 0 💬 1 📌 0
The new version of ohm-js (v18) is now in beta. It's written in TypeScript, whereas v17 was written in JavaScript with manually-updated type definitions.

I was looking for a way to make sure that I don't accidentally make changes to the API, which led me to api-extractor:

API Extractor is a TypeScript analysis tool that produces three different output types:

API Report - API Extractor can trace all exports from your project's main entry point and generate a report to be used as the basis for an API review workflow.

.d.ts Rollups - Similar to how Webpack can "roll up" all your JavaScript files into a single bundle for distribution, API Extractor can roll up your TypeScript declarations into a single .d.ts file.

API Documentation - API Extractor can generate a "doc model" JSON file for each of your projects. This JSON file contains the extracted type signatures and doc comments. The api-documenter companion tool can use these files to generate an API reference website, or you can use them as inputs for a custom documentation pipeline.

How I'm using it
In each package, I have an api-extractor.json like this:

{
  "$schema": "https://developer.microsoft.com/json-schemas/api-extractor/v7/api-extractor.schema.json",
  "mainEntryPointFilePath": "./dist/index.d.ts",
  "apiReport": {
    "enabled": true,
    "reportFolder": "./"
  },
  "newlineKind": "lf",
  "dtsRollup": {
    "enabled": false
  },
  "docModel": {
    "enabled": false
  },
  "messages": {
    "extractorMessageReporting": {
      "ae-missing-release-tag": { "logLevel": "none" },
      "ae-forgotten-export": { "logLevel": "none" }
    }
  }
}
Note that I'm only using API reports for now โ€” no .d.ts rollup or documentation. (I'm using tsdown as well, which already bundles my types into a single .d.ts.)


Then, in my package.json, I have a package script named api-report:

{
  "name": "@ohm-js/compiler",
  "version": "18.0.0-beta.8",
  // ...
  "scripts": {
    "api-report": "api-extractor run",
    // ...
  },
  // ...
}
This script runs in CI. When it's run, if the API report has changed, I get an error like this:

> @ohm-js/compiler@18.0.0-beta.8 api-report /Users/pdubroy/dev/ohmjs/ohm/packages/compiler
> api-extractor run


api-extractor 7.56.3  - https://api-extractor.com/

Using configuration from ./api-extractor.json
Analysis will use the bundled TypeScript version 5.8.2
Warning: You have changed the API signature for this project. Please copy the file "temp/compiler.api.md" to "compiler.api.md", or perform a local build (which does this automatically). See the Git repo documentation for more info.

API Extractor completed with warnings
 ELIFECYCLE  Command failed with exit code 1


TIL: api-extractor
โ†’ github.com/pdubroy/til/...

23.02.2026 13:28 👍 8 🔁 1 💬 0 📌 0

Looking forward to this — I'll be doing an invited talk at MoreVMs in Munich on March 17.

Hope to see some of you there!

20.02.2026 20:38 👍 2 🔁 0 💬 0 📌 0

So excited to have officially released the beta version! It's been a lot of work over the past ~year.

20.02.2026 16:56 👍 13 🔁 1 💬 0 📌 0
Preview
A WebAssembly interpreter (Part 2) Adding local and global variable support to our Wasm Interpreter

Happy Friday! We just published a new blog post —

A WebAssembly Interpreter: Part 2
โ†’ wasmgroundup.com/blog/wasm-vm...

In the first post, we wrote a small #Wasm interpreter from scratch in JS. In Part 2, we extend the interpreter to support local and global variables.

20.02.2026 15:20 👍 9 🔁 4 💬 0 📌 0
Performance comparison tables showing WebAssembly significantly outperforming JavaScript. In the "After" state: JS matching takes 3267ms with 1090.7MB memory while Wasm matching takes only 146ms with 174.5MB memory (22.4x faster). Overall, JS total is 2291ms compared to Wasm total of 100ms (22.99x speedup). Wasm uses dramatically less memory: 6.88 MB heap vs 247.52 MB for JS.


I'm extremely hyped about the performance of the upcoming (#Wasm-based) @ohmjs.org v18.

!!!

(please don't let this be a mistake in my benchmarking)

19.02.2026 20:52 👍 19 🔁 1 💬 0 📌 0
Preview
‹Programming› 2026 The International Conference on the Art, Science, and Engineering of Programming — or ‹Programming› for short — is a new conference focused on programming topics including the experience of programming. ‹...

Final 24h! Early bird for #prog26 ends tomorrow, Feb 20. Don't miss the art & science of programming in Munich (Mar 16–20)!

Register: 2026.programming-conference.org
Submit to the Substrates workshop: 2026.programming-conference.org/home/substra...

19.02.2026 12:34 👍 2 🔁 2 💬 0 📌 0

This is exactly one of the findings of the study* on tabbed browsing I did ~15 yrs ago — tab usage is bimodal. Funny to see that it (anecdotally at least) still holds up!

* "A Study of Tabbed Browsing Among Mozilla Firefox Users" from CHI 2010: dl.acm.org/doi/pdf/10.1...

14.02.2026 12:32 👍 42 🔁 5 💬 5 📌 5
There is a natural tendency in designing interfaces to try to make them as fast as possible, to rapidly, and seamlessly [36], automate tasks that are not essential facets of the task at hand. Yet, this speed can lead users to race past useful experiences, particularly ones that are artistically or pedagogically helpful.

5.1.1 Slowing Things Down. Reducing the rate at which different tasks can be performed gives space for both artistry and learning, giving users time for reflection and personal growth.

Reflection. Slowness seemed to have value for the production of and critical engagement with art. PTega suggested that “in the arts, there’s a real value to slowing down and taking the hood off things. Because it lets you ask critical questions, [such as] if you’re truly engaging with it as a medium”. PBaku argued for the value of integrating tedium into his workflow, noting that it is useful to “integrate procedural ways of thinking with more manual or repetitive or more tedious works”. He went on to describe how a photographer friend intentionally used an older and slower computer to guide the type of works he could create. Emphasizing the importance of user agency in this


"This speed can lead users to race past useful experiences, particularly ones that are artistically or pedagogically helpful."

Slowness, Politics, and Joy: Values That Guide Technology Choices in Creative Coding Classrooms
→ www.mcnutt.in/assets/tatto...

14.02.2026 11:05 👍 9 🔁 1 💬 0 📌 0
A large letter 'C' on the left side with the title 'Is Not a Low-level Language' where 'Not' is emphasized in brown italics. Author name 'David Chisnall' appears below, with 'Your Computer Is Not A Fast PDP-11' in brown text at the bottom right.


"The root cause of the Spectre and Meltdown vulnerabilities
was that processor architects were trying to build not just
fast processors, but fast processors that expose the same
abstract machine as a PDP-11."

C Is Not a Low-level Language: spawn-queue.acm.org/doi/pdf/10.1...

11.02.2026 15:29 👍 6 🔁 1 💬 1 📌 0
Preview
GitHub - lynaghk/vibe: Easy Linux virtual machine on MacOS to sandbox LLM agents. Easy Linux virtual machine on MacOS to sandbox LLM agents. - lynaghk/vibe

if you're on macos: github.com/lynaghk/vibe/

08.02.2026 09:13 👍 6 🔁 0 💬 1 📌 0
I’ll be introducing some breaking changes in the next major version of Ohm and I’d like to make the upgrade path as smooth as possible. So I’ve been investigating patterns for specific “migration” and “compat” packages in the JS ecosystem.

I found a few interesting examples. Most of them aim to support incremental upgrades, allowing you to migrate to the new API piece by piece. This isn’t really a concern for Ohm. Mainly I’m interested in making it easy for folks to try the new version behind a feature flag, and easily revert to the stable version if they run into any bugs.

Anyways, here’s what I found —

react-router-dom-v5-compat
Instead of upgrading and updating all of your code at once (which is incredibly difficult and prone to bugs), the backwards compatibility package enables you to upgrade one component, one hook, and one route at a time by running both v5 and v6 in parallel. Any code you haven’t touched is still running the very same code it was before. Once all components are exclusively using the v6 APIs, your app no longer needs the compatibility package and is running on v6.
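The feature-flag approach I have in mind could be sketched like this. Everything here is hypothetical — the flag name, the stand-in "versions", and the selector function are illustrative, not an actual Ohm or react-router API:

```javascript
// Hypothetical sketch: a wrapper that picks an implementation at load
// time based on an opt-in flag, so users can try the new version and
// revert by just unsetting the flag.
function pickImplementation(env, stableImpl, betaImpl) {
  // Opt in to the beta via an environment variable (name is made up);
  // anything else falls back to the stable version.
  return env.OHM_BETA === "1" ? betaImpl : stableImpl;
}

// Stand-in "versions" of an API, for illustration only.
const stable = { version: "17.x", match: (s) => `stable:${s}` };
const beta = { version: "18.0.0-beta", match: (s) => `beta:${s}` };

const ohm = pickImplementation(process.env, stable, beta);
```

In a real compat package, the two implementations would be the published stable and beta packages, loaded via dynamic import so only the selected one is actually evaluated.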


I wrote up what I found in a short blog post โ€”

devlog: compatibility packages
→ dubroy.com/blog/compati...

06.02.2026 18:30 👍 2 🔁 0 💬 0 📌 0