Replied in the PR, thanks for contributing :)
What about Rolldown? It will be the default bundler in Vite 8, so it's well supported. I don't know how the performance compares to Farm, but Rolldown is also written in Rust and quite fast.
github.com/rolldown/rol...
Test where `// @strict: false` has been added to the beginning of the file.
Fun story from today. I'm currently trying to get `--strict` on by default in TypeScript 6.0.
Our test suite has many tests that were written with `--strict false`, so I'm updating them by adding a special comment our test suite recognizes for options:
// @strict: false
However...
I like it! Red squiggly is overused anyway.
Try it out!
It took a little while, but I feel like using `go tool pprof` is starting to grow on me.
This has been in the works for a few months now! Excited for more people to try this out when it supports config options.
A special thanks to the @typescript-eslint.io team who have set the standard for typed lint rules here. Much of this work is based on what has come before and on their rules.
Finally had time to put more effort into implementing oxlint<->tsgolint configuration for rules. Very soon, you'll be able to configure type-aware rules like any other rules. Should be as easy as just upgrading to the latest versions once it is available.
It's only a subset of crates, but it's some of the most useful crates! Such a great feature ❤️
Thank you so much for your help on this! It's a tremendous improvement and I hope it inspires more people to contribute 🙏
So I started working on it in the evenings when I had time. Then last week, while I was off work recovering from eye surgery, I worked through most of the rule updates in between naps. Now almost all of them have documentation for their config options :)
github.com/oxc-project/...
I had been exploring ESLint alternatives and was bothered by the lack of consistent documentation for rule configs in oxlint. So I opened an issue about it and talked with the maintainers about the right way to solve it, and found out there was a system for auto-generating documentation with types.
Despite the warning, comments should be supported!
I believe they get stripped out before parsing, so they don't cause errors even if the file isn't named as JSONC.
oxc.rs/docs/guide/u...
I love that this actually works.
*Oh yeah, is your list actually UNordered? Prove it.*
use no memo
hear no memo
speak no memo
I am looking for a full-time job.
Being independent in open source for 3.5+ years has been wonderful. I've accomplished most of the high-level goals I set out to, and I miss having people & structure around me.
If you know of a role for a staff-level TypeScript+web developer, let me know! 🙂
Vite and Vitest imply the existence of Viter
Thanks! Maybe I will give Go benchmarks another try. I have tried codspeed, but it only supports walltime benchmarks for now. Although the CPU simulation benchmarks aren't accurate to real-world performance, they've been useful for getting some consistent numbers. Have you used this successfully in CI?
Does anyone have experience with tools for benchmarking on every PR for Go projects? I'm looking to get a rough estimate of perf regressions/improvements in each PR for github.com/oxc-project/.... Looking into building something custom with `go tool pprof` currently.
If you'd like to reproduce these results:
Benchmark command: `hyperfine --ignore-failure --warmup 10 --runs 20 'pnpx oxlint@1.24 --silent' 'pnpx oxlint@1.23 --silent' 'pnpx oxlint@1.22 --silent' 'pnpx oxlint@1.21 --silent'`
elastic/kibana@2169baef
microsoft/vscode@fa994275
Hyperfine benchmark showing that oxlint 1.24 is 1.11 times faster than v1.23, 1.26 times faster than v1.22, and 1.27 times faster than v1.21.
On the Kibana repository, oxlint 1.24 is 11% faster than v1.23, and up to 26% faster than v1.22 and below!
On my laptop, oxlint 1.24 is 3% faster than 1.23 on the `vscode` repository, with even larger improvements for very large codebases.
That means if you haven't updated to one of the latest versions in a few weeks, your linting step could be >10% slower than it should be!
Making something 1% faster 20 times > making something 20% faster once
But that doesn't stop me from trying to get that juicy big one 😅
Big up to new #oxc contributor @arsh.sh, who showed up out of nowhere and is tearing through our issue list! He's just implemented support for all the comment-based APIs in Oxlint JS plugins. github.com/oxc-project/...
Benchmark command, if you'd like to reproduce this (microsoft/vscode@fa99427534301606eb84c30cd8f7701ac2c35f90):
`hyperfine --ignore-failure --warmup 5 --runs 20 'pnpx oxlint@1.23 --silent' 'pnpx oxlint@1.22 --silent' 'pnpx oxlint@1.21 --silent' 'pnpx oxlint@1.20 --silent' 'pnpx oxlint@1.19 --silent'`
A benchmark of oxlint runs against the vscode repository. The final summary shows 'pnpx oxlint@1.23 --silent' ran 7% faster than 1.21, 8% faster than 1.22, 9% faster than 1.20, and 9% faster than 1.19. The average time for 1.23 was 844.8ms.
Oxlint 1.23.0 just got released, which includes the latest round of performance optimization work I've been doing.
Running on the vscode repository on my M1 laptop, 1.23.0 is ~7-9% faster than previous versions of oxlint with no changes other than just bumping the dependency.
must-use-promises? I feel like misused promises could be broken into separate rules like no-unawaited-conditions, no-await-void-return, and no-spread-promises. It's felt like 3 separate rules to me when reading the docs and configuring it.
Just finished writing up an auto-fix for this in Oxlint, in case this was a blocker for anyone: github.com/oxc-project/...
Oh no, that sounds exhausting. Compared to C, I feel like my brain is already adapting to this style. Having all of the type declarations be extremely uniform is really nice for reducing cognitive load.