Conversation

@clason (Contributor) commented Jan 23, 2026

Now that we run `wasm-opt` explicitly after compilation, skip it during
the `clang` compile and link phase.

Also disable `-Os` optimization (this saves time, and the final wasm blob
is even slightly smaller this way).
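
For illustration, the pipeline now looks roughly like this (a minimal sketch; the exact flags the CLI passes differ, and the `wasm-opt` level shown is an assumption):

```sh
# 1. Compile and link without size optimization (-O0 instead of -Os);
#    the linker flag is illustrative for a wasi-sdk build.
clang --target=wasm32-wasi -O0 -Wl,--no-entry -o parser.wasm src/parser.c

# 2. Run binaryen's wasm-opt once, explicitly, as a post-link step
#    (-Os here is an assumed choice of optimization level).
wasm-opt -Os parser.wasm -o parser.wasm
```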

@clason (Contributor, author) commented Jan 23, 2026

A quick test didn't show a meaningful performance difference for the generated parser between `-Os` and `-O0`, but more tests are needed.
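
(For anyone who wants to reproduce such a check, something along these lines works; the sample file is a placeholder, and `--wasm` on `parse` assumes a CLI built with wasmtime support:)

```sh
# Build the wasm parser, then parse a sample file with timing enabled.
tree-sitter build --wasm
tree-sitter parse --wasm --time examples/sample.vim
```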

@clason (Contributor, author) commented Jan 23, 2026

(Some parsers -- not gonna name and shame -- take an extraordinary amount of time and memory to compile to wasm, so trying to reduce that is worth it.)

@maxbrunsfeld (Contributor)

The point of `-Os` is code size, not performance. Did you compare the sizes of the binaries?

@clason (Contributor, author) commented Jan 23, 2026

Yes. The result with `-O0` is actually smaller (by 2 kB, so no big difference). Of course, that was just a single badly behaved parser, so I'd like to check a few more.

@clason (Contributor, author) commented Jan 24, 2026

Ok, here's a small test of 0.26.3 (with binaryen installed) vs. master vs. this PR vs. the `--no-wasm-opt` change only (i.e., keeping `-Os`), all using wasi-sdk rather than emscripten. Sizes in bytes, times in seconds or milliseconds as noted:

| parser   | 0.26.3            | master            | PR                | PR (`-Os`)        |
|----------|-------------------|-------------------|-------------------|-------------------|
| haskell  | 3,840,486 (1.5 s) | 3,840,449 (1.3 s) | 3,870,921 (1.1 s) | 3,840,486 (1.2 s) |
| julia    | 2,636,604 (2.4 s) | 2,636,599 (2.7 s) | 2,636,126 (1.8 s) | 2,636,604 (2.5 s) |
| lua      | 49,575 (212 ms)   | 49,559 (260 ms)   | 57,219 (197 ms)   | 49,575 (211 ms)   |
| markdown | 380,394 (5.7 s)   | 380,394 (6.3 s)   | 381,607 (2.7 s)   | 380,394 (6 s)     |
| query    | 17,646 (155 ms)   | 17,636 (199 ms)   | 20,622 (142 ms)   | 17,646 (152 ms)   |
| vim      | 1,234,071 (230 s) | 1,234,071 (270 s) | 1,232,578 (164 s) | 1,234,071 (222 s) |
| vimdoc   | 207,152 (7.1 s)   | 207,152 (9.3 s)   | 206,002 (5.9 s)   | 207,152 (8.9 s)   |
| vue      | 28,714 (240 ms)   | 28,707 (291 ms)   | 36,707 (197 ms)   | 28,714 (238 ms)   |

So... not very conclusive. For complex parsers, `-O0` makes a noticeable difference in compilation time, and it's a toss-up whether that increases or decreases size (and by how much). The double `wasm-opt` run does seem to make a difference in some cases, but only by a (IMO) negligible amount.

In any case, the real issue here is memory pressure: the vim parser requires ~9 GB sustained, spiking to >20 GB, which completely throttles the GitHub runner for the release workflow (which takes 40 minutes o.O). Note that this is specific to wasm; a normal `tree-sitter build` takes ~1 s and ~300 MB. So it's probably worth looking into wasm-specific options (and trying to find out why that parser tanks clang so catastrophically...)
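
(For the record, a sketch of how to gather these numbers on Linux; GNU time and the output file name are assumptions:)

```sh
# Wall-clock time plus peak memory of the wasm build; GNU time's -v
# output includes "Maximum resident set size".
/usr/bin/time -v tree-sitter build --wasm

# Size of the resulting blob in bytes (file name is a placeholder).
stat -c %s tree-sitter-vim.wasm
```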

@WillLillis (Member)

> So... not very conclusive. For complex parsers, `-O0` makes a noticeable difference in compilation time, and it's a toss-up whether that increases or decreases size (and by how much). The double `wasm-opt` run does seem to make a difference in some cases, but only by a (IMO) negligible amount.

Would it make sense/be helpful to accept the optimization level as a flag? The double `-Os` seems to make a fairly small difference in final size but can greatly increase build time, so dropping it seems like a fair tradeoff. Maybe we could make `-O0` the default but allow `-Os` via the CLI?
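
(Hypothetically, to make the idea concrete; `--opt-level` is made up here, not an existing flag:)

```sh
# Default: fast -O0 build.
tree-sitter build --wasm

# Opt back into size optimization for release builds
# (--opt-level is a hypothetical flag, not part of the CLI).
tree-sitter build --wasm --opt-level=s
```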

@clason (Contributor, author) commented Jan 26, 2026

Since (in a perfect world) the wasm artifacts are built with a standard release workflow, I don't think that would be worth it. I think the best compromise for now is to keep `-Os` in the build and instead try to find out for which parsers the memory usage explodes, and why. Ideally, this can then be fixed on the parser side.

@maxbrunsfeld (Contributor)

Thanks for collecting this data, @clason. I agree that we want it to be standardized (and building wasm artifacts as release artifacts is gonna be fantastic).

@WillLillis (Member) commented Jan 27, 2026

I think this upstream issue matches what we're seeing in tree-sitter-vim: llvm/llvm-project#47793
