owickstrom 2 days ago [-]
I'm using oxc_traverse and friends to implement on-the-fly JS instrumentation for https://github.com/antithesishq/bombadil and it has been awesome. That in combination with boa_engine lets me build a statically linked executable rather than a hodgepodge of Node tools to shell out to. Respect to the tools that came before but this is way nicer for distribution. Good times for web tech IMO.
pier25 2 days ago [-]
All the Void Zero projects are super cool although I still wonder how they’re going to monetize all this.
rk06 2 days ago [-]
They are going to use Vite Plus for monetization.
conartist6 2 days ago [-]
The vite plus idea is that you'll pay for visual tools. What's odd to me is it makes their paid product kind of a bet against their open product. If their open platform were as powerful as it should be, it would be easy to use it to recreate the kinds of experiences they propose to sell.
The paradox gains another layer when you consider that their whole mission is to build tools for the JavaScript ecosystem, yet by moving to Rust they are betting that JS-the-language is so broken that it cannot even host its own tools. And because JS is still a stronger language for building UIs in than Rust, their business strategy now makes them hard-committed to their bet that JS tools in JS are a dead end.
nindalf 2 days ago [-]
> it cannot even host its own tools
You say this like this is the basic requirement for a language. But languages make tradeoffs that make them more appropriate for some domains and not others. There's no shade if a language isn't ideal for developer tools, just like there's no shade if a language isn't perfect for web frontends, web backends, embedded development, safety critical code (think pacemakers), mobile development, neural networks and on and on.
Seriously, go to https://astral.sh and scroll down to "Linting the CPython code base from scratch". It would be easy to look at that and conclude that Python's best days are behind it because it's so slow. In reality Python is an even better language at its core domains now that its developer tools have been rewritten in Rust. It's the same excellent language, but now developers can iterate faster.
It's the same with JavaScript. Just because it's not the best language for linters and formatters doesn't mean it's broken.
rk06 2 days ago [-]
> they are betting that JS-the-language is so broken that it cannot even host its own tools.
Evan Wallace proved it by building esbuild. This is no longer a bet.
> If their open platform were as powerful as it should be, it would be easy to use it to recreate the kinds of experiences they propose to sell.
You would be surprised to know that tech companies often find it cheaper to pay money than to spend developer bandwidth on stuff beyond their core competency.
Dropbox was also considered trivially implementable, but end users rarely try to reinvent it.
lioeters 2 days ago [-]
> esbuild
Another example is the TypeScript compiler being rewritten in Go instead of self-hosting. It's an admission that the language is not performant enough, and more, it can never be enough for building its own tooling. It might be that the tooling situation is the problem, not the language itself, though. I do see hopeful signs that JavaScript ecosystem is continuing to evolve, like the recent release of MicroQuickJS by Bellard, or Bun which is fast(er) and really fun to use.
MrJohz 2 days ago [-]
I don't think that's necessarily a bad thing, though. JavaScript isn't performant enough for its own tooling, but that's just one class of program that can be written. There are plenty of other classes of program where JavaScript is perfectly fast enough, and the ease of e.g. writing plugins or having a fast feedback loop outweighs the benefits of other languages.
I quite like Roc's philosophy here: https://www.roc-lang.org/faq#self-hosted-compiler. The developers of the language want to build a language that has a high performance compiler, but they don't want to build a language that one would use to build a high performance compiler (because that imposes a whole bunch of constraints when it comes to things like handling memory). In my head, JavaScript is very similar. If you need a high performance compiler, maybe look elsewhere? If you need the sort of fast development loop you can get by having a high performance compiler, then JS is just the right thing.
lioeters 2 days ago [-]
True, I agree. It's a good thing to accept a language's limitations and areas of suitability, without any judgement about whether the language is good for all purposes - which is likely not a good goal for a language to have anyway. I like that example of Roc, how it's explicitly planned to be not self-hosting. It makes sense to use different languages to suit the context, as all tools have particular strengths and weaknesses.
Off topic but I wonder if this applies to human languages, whether some are more suited for particular purposes - like German to express rigorous scientific thinking with compound words created just-in-time; Spanish for romantic lyrical situations; or Chinese for dense ideographs. People say languages can expand or limit not only what you can express but what you can think. That's certainly true of programming languages.
pjmlp 2 days ago [-]
Which also proves the point that not everything needs to be Rust.
lioeters 2 days ago [-]
I agree and foresee a future, maybe a decade from now, when the trend shifts to everyone rewriting all the Rust written or generated in the meantime to something else, a newer hopefully simpler language that accomplishes the same thing.
pjmlp 2 days ago [-]
Just wait until Zig reaches 1.0
VPenkov 2 days ago [-]
> The vite plus idea is that you'll pay for visual tools.
From what I understand, Vite+ seems like an all-in-one toolchain. Instead of maintaining multiple configurations with various degrees of intercompatibility, you maintain only one.
This has the added benefit that linters and such can share information about your dependency graph, and even ASTs, so your tools don't have to compute them individually. That has decent potential to improve your overall pre-merge pipeline. Then, on top of that, caching.
The focus here is of course enterprise customers and looks like it is supposed to compete with the likes of Nx/Moonrepo/Turborepo/Rush. Nx and Rush are big beasts and can be somewhat unwieldy and quirky. Nx lost some trust with its community by retracting some open-source features and took a very long time to (partially) address the backlash.
Vite+ has a good chance to be a contender on the market with clearer positioning if it manages to nail monorepo support.
panstromek 2 days ago [-]
I don't see the idea as being visual tools; I've never even heard anybody talk about it like that. The plan is to target enterprise customers with advanced features. I feel like you should just go and watch some interviews where they talk about their plan; Evan You was recently on a few podcasts mentioning their plans.
Also, the paradox is not really even there. The JS ecosystem largely gave up on JS tools a long time ago. Pretty much all major build tools are migrating to native, or have already migrated at least partially. This has been going on for the last four years or so.
But the key to all of this is that most of these tools still support JS plugins. Rolldown/Vite is compatible with Rollup JS plugins, and OXLint has an ESLint-compatible API (it's in preview atm). So it's not really even a bet at all.
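As a sketch of how small that compatibility surface is: a Rollup-style plugin is just an object with a `name` and hooks like `transform(code, id)`, and that's the shape Rolldown/Vite accept. The plugin below is illustrative, not from any real project:

```javascript
// Minimal Rollup-style plugin: an object with a `name` and hooks such
// as `transform(code, id)`. Rolldown/Vite consume the same shape.
function bannerPlugin(banner) {
  return {
    name: 'banner',
    transform(code, id) {
      if (!id.endsWith('.js')) return null; // skip non-JS modules
      return { code: `// ${banner}\n${code}`, map: null };
    },
  };
}

// The hook can be exercised directly, outside any bundler:
const plugin = bannerPlugin('built with rolldown');
const out = plugin.transform('export const x = 1;', 'entry.js');
// out.code now starts with the banner comment
```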
pier25 2 days ago [-]
Yes but is that going to be enough?
Doesn’t look super interesting to me tbh.
TheAlexLichter 2 days ago [-]
There will be more
shawn_w 2 days ago [-]
Why should it be monetized?
leosanchez 2 days ago [-]
IIRC they took VC money
pier25 2 days ago [-]
It’s a private company with VC money.
hiuioejfjkf 2 days ago [-]
[dead]
carlos22 2 days ago [-]
In the beginning, yes, but VCs want to cash out eventually. Look at MongoDB, Redis, and whatnot, which did everything to get money at a certain point. For VCs, open source is a vehicle to get relevant in a space where you would never be relevant otherwise.
Grom_PE 2 days ago [-]
I thought oxfmt would just be a faster drop-in replacement for "biome format"... It wasn't.
Let this be a warning: running oxfmt without any arguments recursively scans the directory tree from the current directory for all *.js and *.ts files and silently reformats them.
Thanks to that, I got a few of my Allman-formatted JavaScript files I care about messed up with no option to format them back from K&R style.
tomashubelbauer 2 days ago [-]
> running oxfmt without any arguments recursively scans directory tree from the current directory for all .js and .ts files and silently reformats them
I've got to say this is what I would have expected and wanted to happen. I'd say it is wise to not run tools designed to edit files on files you don't have a backup for (like Git) without doing a dry-run or a small scope experiment first.
vladvasiliu 2 days ago [-]
While I can get behind things such as "use version control," "use backups", etc. this is definitely not what I'd expect from a program run without arguments, especially when it will go and change stuff.
Tadpole9181 2 days ago [-]
What? The very first page of documentation tells you this. The help screen clearly shows a `--check` argument. This is a formatter and uses the same arguments as many others - in particular Prettier, the most popular formatter in the ecosystem.
How were you not expecting this? Did you not bother to read anything before installing and running this command on a sensitive codebase?
vladvasiliu 2 days ago [-]
I do usually run new tools from somewhere harmless, like ~/tmp, just in case they do something unexpected.
But most formatters I'm used to absolutely don't do this. For example, `rustfmt` will read input from stdin if no argument is given. It can traverse modules in a project, but it won't start modifying everything under your CWD.
Most unix tools will either wait for some stdin or dump some kind of help when no argument is given. Hell, according to this tool's docs, even `prettier` seems to expect an argument:
> Running oxfmt without arguments formats the current directory (*equivalent to prettier --write .*)
I'm not familiar with prettier, so I may be wrong, but from the above, I understand that prettier doesn't start rewriting files if no argument is given?
Looking up prettier's docs, they have this to say:
> --write
This rewrites all processed files in place. *This is comparable to the eslint --fix* workflow.
So eslint also doesn't automatically overwrite everything?
So yeah, I can't say this is expected behaviour, even if it's documented.
johnny22 2 days ago [-]
A more closely related tool would be prettier, which also has a --write option
nindalf 2 days ago [-]
> with no option to format them back
Try git reset --hard, that should work.
jagged-chisel 2 days ago [-]
These files are under version control, right? Or backed up. Right?
watt 3 hours ago [-]
until the day you accidentally run it in your home directory without any arguments.
Sammi 2 days ago [-]
This is user error. oxfmt did what you asked it to do.
rk06 2 days ago [-]
I don't think so. If someone runs a tool without args, the tool should do the equivalent of "tool --help"
It is bad UX.
Sammi 2 days ago [-]
I expect a file formatter to format the files when I call it. Anything else would be surprising to me.
rk06 2 days ago [-]
A new user should not be expected to know whether to use "--info", "--help", "-info", or "/info"
A power user can just pass the right params. Besides, it is not that hard to support a "--yolo" parameter for that use case
xigoi 2 days ago [-]
Would you enjoy writing `rm --yolo file` instead of `rm file` every time?
watt 3 hours ago [-]
Would you enjoy it if running "rm" in any folder recursively deleted all files in it?
phcreery 2 days ago [-]
In this case, "file" is the arg, not --yolo. `rm` without any args returns
```
rm: missing operand
Try 'rm --help' for more information.
```
`oxfmt` should have done the same, and `oxfmt .`, with the desired dir ".", should have been the required usage.
xigoi 2 days ago [-]
I expect invoking a command-line tool without any arguments to perform the most common action. Displaying the help should only be a fallback if there is no most common action. For example, `git init` modifies the current directory instead of asking you, because that’s what you want to do most of the time.
vladvasiliu 2 days ago [-]
No, but we're not talking about `oxfmt file` here, but `oxfmt` with no argument.
I don't expect `rm` with no argument to trash everything in my CWD. Which it doesn't, see sibling's comment.
user3939382 2 days ago [-]
Not taking a position, but the design of rm strengthens the argument that recursive-by-default without flags isn't OK. rm makes you pass an explicit -r flag when you want it to recurse into directories.
aquariusDue 2 days ago [-]
I know feels aren't objective truth, but I feel like most people would default to running "new-cli-tool --help" first thing as a learned (defensive) habit. After all, quite a bit of stuff that runs in a terminal emulator does something when run without arguments or flags.
rk06 1 days ago [-]
I feel most people should refer to the manual before running an arbitrary command, but that's because "crontab -r" taught me this the hard way.
New devs should not have to learn these things the hard way.
ctmnt 2 days ago [-]
I assume you mean what’s more properly called Java style [1], where the first curly brace is on the same line as the function declaration (or class declaration, but if you’re using Allman style you’re probably not using classes; no shade, I’m a JS class hater myself) [2] or control statement [3], the elses (etc) are cuddled, and single statement blocks are enclosed in curly braces. Except I also assume that oxfmt’s default indentation is 2 spaces, following Prettier [4], whereas Java style specified 4.
So maybe we should call it JavaScript style? Modern JS style? Do we have a good name for it?
Also, does anyone know when and why “K&R style” [5] started being used to refer to Java style? Meaning K&R statement block style (“Egyptian braces” [6]) being used for all braces and single statement blocks getting treated the same as multi-statement blocks. Setting aside the eternal indentation question.
It always comes as a surprise to me how the same group of people who go out of their way to shave off the last milliseconds or microseconds in their tooling care so little about the performance of the code they ship to browsers.
Not to discredit OP's work of course.
wiseowise 2 days ago [-]
People shaving off the last milliseconds or microseconds in their tooling aren't the same people shipping slow code to browsers. Say thanks to POs, PMs, stakeholders, etc.
littlestymaar 2 days ago [-]
Sometimes they are the same person.
It just takes having poor empathy towards your users to ship slow software that you don't use yourself.
wiseowise 2 days ago [-]
I've never met a single person obsessed with performance who goes half the way. You either have a performance junkie or a slob who will be fine with 20 minutes compile times.
littlestymaar 2 days ago [-]
I have. They cared a lot about performance for them because they hated waiting, but gave literally no shit about anyone else.
staticassertion 2 days ago [-]
TBH I don't know how to do that work. If I'm in the backend it's very easy for me. I can think about allocations, I can think about threading, concurrency, etc, so easily. In browser land I'm probably picking up some confusing framework, I don't have any of the straightforward ways to reason about performance at the language level, etc.
Maybe one day we can use wasm or whatever and I can write fast code for the frontend, but not today, and it's a bit unsurprising that others face similar issues.
Also, if I'm building a CLI, maybe I think that 1ms matters. But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".
philippta 2 days ago [-]
It's not too difficult in the browser either. Consider how often you're making copies of your data and try to reduce it. For example, prefer:
- for loops over map/filter
- maps over objects
- .sort() over .toSorted()
- mutable over immutable data
- inline over callbacks
- function over const = () => {}
Pretty much, as if you wrote in ES3 (instead of ES5/6)
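A concrete sketch of a couple of the items above (the data here is made up): the chained version allocates an intermediate array per step, the loop allocates only the output, and `.sort()` works in place where `.toSorted()` would copy first.

```javascript
const nums = [3, 1, 4, 1, 5, 9, 2, 6];

// Chained style: .filter() builds one intermediate array, .map() another.
const doubledEvensA = nums.filter(n => n % 2 === 0).map(n => n * 2);

// Loop style: same result, a single allocation for the output array.
const doubledEvensB = [];
for (let i = 0; i < nums.length; i++) {
  if (nums[i] % 2 === 0) doubledEvensB.push(nums[i] * 2);
}
// both are [8, 4, 12]

// .sort() mutates in place; .toSorted() copies the whole array first.
const copy = nums.slice(); // one deliberate copy, just for the demo
copy.sort((a, b) => a - b); // no further allocation
```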
staticassertion 2 days ago [-]
Yes but it's not really fair to expect me to know how to do that. Just because I know how to do it for backend code, where it's often a lot easier to see those copies, doesn't mean I'm just a negligent asshole for not doing it on the frontend. I don't know how, it's a different skillset.
philippta 1 days ago [-]
Nobody expects you to know that, but I'm curious to hear how you know it for backend code but not frontend code. Have any examples?
staticassertion 1 days ago [-]
The parent commenter earlier seems to be implying that it's only a matter of not caring.
> care so little about the performance of the code they ship to browsers.
> but I'm curious to hear how do you know it for backend code but not frontend code.
Because I find backend languages extremely easy to reason about for performance. It seems to me that when I write in a language like rust I can largely "grep for allocations". I find that hard to see in javascript etc. This is doubly the case because frontend code seems to be extremely framework heavy and abstract, so it makes it very hard to reason about performance just by reading the code.
philippta 16 hours ago [-]
That's completely relatable, and it's also a major point in my original argument. Using heavily abstracted frameworks will automatically cap your performance. The only way out is to not use a framework, or to use one that's known to be lightweight. In backend work, or tooling like the JS compiler from OP, one tends not to use heavy frameworks in the first place.
WorldMaker 2 days ago [-]
The work is largely the same.
You think about allocations: JS is a garbage-collected language and allocations are "cheap", so they're extremely common. The GC is powerful and in most JS engines quite fast, but it's not omniscient and sometimes needs a hand (just like in any GC language). Of course, the easiest intervention is to remove allocations entirely; just because it is cheap to over-allocate, and the GC will mostly smooth out the flaws of such an approach, doesn't mean you can ignore the memory complexity of your chosen algorithms. Most browser dev tools today have allocation profilers equal to or better than their backend cousins.
You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers). On the flipside, JS is a little harder to reason about threading than many backend languages because it is extensively cooperatively threaded. Code has to yield to other code frequently and regularly. Shaving milliseconds off a routine yields more time to other things that need to happen (browser events, user input, etc). That starts to add up. JS encourages you to do things in short, tight "bursts" rather than long-running algorithms. Here again, most browser dev tools today have strong stack trace/flame chart profilers that equal or exceed backend cousins. Often in JS "tall" flames are fine but "wide" flames are things to avoid/try to improve. (That's a bit reversed from some backend languages where shallow is overall less overhead and long-running tasks are sometimes better amortized than lots of short ones.)
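The "short, tight bursts" idea can be sketched without a browser: a generator slices a long computation so a scheduler can interleave other work between resumptions. In a real page the driver loop below would be something like requestIdleCallback or requestAnimationFrame rather than a plain while loop; all names here are illustrative.

```javascript
// Slice a long-running task into short bursts so other work (events,
// rendering) can run in between resumptions.
function* sumInBursts(items, burstSize) {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i];
    // Yield control back to the scheduler after each burst.
    if ((i + 1) % burstSize === 0) yield;
  }
  return total;
}

// Stand-in for the browser's scheduler: resume the task until done.
function drive(task) {
  let step = task.next();
  let bursts = 1;
  while (!step.done) {
    step = task.next();
    bursts++;
  }
  return { result: step.value, bursts };
}

const { result, bursts } = drive(sumInBursts([...Array(10).keys()], 3));
// result === 45 (sum of 0..9), computed across 4 separate bursts
```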
> But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".
The heavily event-driven architecture of the browser often means that just sitting on a webpage is "browsing in a hot loop". Browsers have gotten better and better at sleeping inactive tabs and multi-threading tabs to not interfere with each other, but things are still a bit of a "tragedy of the commons" that the average performance of a website still directly and indirectly drags everyone else down. It might not matter to you that your webpage is slow because you only expect a user to visit it once, but you also aren't taking into account that is probably not the only website that user is browsing at that moment. Smart users do directly and indirectly notice when the bad performance of one webpage impacts their experiences of other web pages or crashes their browser. Depending on your business model and what the purpose of that webpage is for, that can be a bad impression that leads to things like lost sales/customers.
staticassertion 2 days ago [-]
I don't think it's the same tbh. In Rust I can often just `rg '\.clone'` and immediately see wins. Allocations are far easier to track statically. I don't have a good sense for "seeing" allocations when I look at JS, it feels like it's unfair to expect me to have that tbh. As for profilers, yes I could see things like "this code is allocating a lot" but JS hardly feels like a language where it's smooth to then fix that, and again, frameworks are so common that I doubt I'd be in a position to do so. This is really in contrast to systems languages again where I also have profilers but fixing the problem is often trivial.
> You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers).
My issue isn't with being able to write concurrent code that has no bugs, my issue is having access to primitives where I have tight control over concurrency and parallelism. The primitives in JS do not provide that control and are often very heavy in and of themselves.
I think it's perhaps worth noting that I am not saying "it's impossible to write fast code for the browser", I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.
WorldMaker 1 days ago [-]
> I don't have a good sense for "seeing" allocations when I look at JS, it feels like it's unfair to expect me to have that tbh.
I still think that's a training/familiarity problem more than a language issue? You can just as easily start with `rg '\bnew\b'` as you can `rg '\.clone'`. The `new` operator is a useful thing to start with in both C++ and C#, too. (Even though JS's `new` is technically a different operator than both C++'s and C#'s.) After that, the JSON syntax is a decent start. Something like `rg {\s*["\.']` and `rg [` are places to start. Curly brackets and square brackets in "data position" are useful in Python and now some of C#, too.
After that the next biggest culprits are common library things like `.filter()` and `.map()` which JS defaults to reified/eager versions for historic reasons. (There are now lazier versions, but migrating to them will take time.) That sort of library allocations knowledge is mostly just enough familiarity with standard library, a need that remains universal in any language.
> JS hardly feels like a language where it's smooth to then fix that
Again, perhaps this is just a familiarity issue, but having done plenty of both, at the end of the day I still see this process as the same: move allocations out of tight loops, use object pools if necessary, examine the O-Notation/Omega-Notation of an algorithm for its space requirements and evaluate alternatives with better mean or worst cases, etc. It mostly doesn't matter what language I'm working in the basics and fundamentals are the same. Everything is as "smooth" as you feel comfortable refactoring code or switching to alternate algorithm implementations.
> frameworks are so common that I doubt I'd be in a position to do so
Do you treat all your backend library dependencies as black boxes as well?
Even if that is the case and you want to avoid profiling your framework dependencies themselves and simply hope someone else is doing that, there's still so much in your control.
I find JS is one of the few languages where you can somewhat transparently profile even all of your dependencies. Most JS dependencies are distributed as JS source and you generally don't have missing symbol files or pre-compiled binary bricks that are inscrutable to inspection. (WASM is changing that, for the worse, but so far there are very few WASM-only frameworks and most of them have other debugging and profiling tools.)
I can choose which frameworks to use based on how their profiler results look. (I can tell you that I don't particularly like Angular and one of the reasons why is I've caught it with truly abysmal profiles more than once, where I could prove the allocations or the CPU clock time were entirely framework code and not my app's business logic.)
I've used profilers to guide building my own "frameworks" and help proven "Vanilla" approaches to other developers over frameworks in use.
> The primitives in JS do not provide that control and are often very heavy in and of themselves.
Maybe I'm missing what primitives you are looking for. async/await is about the same primitive in JS and Rust and there are very similar higher-level tools on top of them. There's no concurrency/parallelism primitives today in JS because there is no allowed concurrency or parallelism. There are task scheduling primitives somewhat unique to JS for doing things like "fan out" akin to parallelism but relying on cooperative (single) threading. Examples include `requestAnimationFrame` and `requestIdleCallback` (for "this can wait until you next need to draw a frame, including if you need to drop frames" and "this can wait until things are idle" respectively).
> I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.
I think I'm saying that it is surprising to me that people who have developed skillsets for optimizing backend code in languages designed to be fast seem to struggle applying the same skills to a language with simpler/"slower" mechanics, but also on average much higher transparency into dependencies (fuller top-to-bottom stack traces and metrics in profiles).
To be fair, I get the impulse to want to leave it as someone else's problem. But as a full stack developer who has done performance work in at least a half dozen languages, I feel like if you can profile and performance tune Rust you should be able to profile and performance tune JS. But maybe I've seen "too much of the Matrix" and my "it's all the same" comes from a deep generalist background that is hard for a specialist to appreciate.
staticassertion 1 days ago [-]
> I still think that's a training/familiarity problem more than a language issue?
But that's fine. Even if we say it's a familiarity problem, that's fine. I'm only saying that it's not reasonable to expect my skills in optimizing backend code to somehow transfer. Obviously many things are the same - reducing allocation, improving algorithmic performance, etc. But that looks very different when you go from the backend to the frontend because the languages can look very different.
> You can just as easily start with `rg \bnew\b` as you can `rg \.clone`.
That's not true though. In Rust, if you're allocating on the heap you have a `clone` somewhere, or a call to one of the pointer-type constructors like `Box::new`. If I pass a struct around, it's either cheaply movable (i.e. Copy) or I have to `clone` it. Granted, many APIs will clone "invisibly" within them, but I can always grep to find the clone.
In Javascript, things seem to allocate by default. A new object allocates. A closure allocates. Things are very implicit, you sort of are in an "allocates by default" mode with js, it seems. In Rust I can just do `[u8; n]` or whatever if I want to, I can just do `let x = "foo"` for a static string, or `let y = 5;` etc. I don't really have to question the memory layout much.
Regardless, you can just learn those rules, of course, but you have to learn them. It seems much easier to "trip onto" an allocation, so to speak, in JS.
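To make that "allocates by default" point concrete, here's a hypothetical hot path written both ways (the function and field names are made up): the first version heap-allocates a temporary object per element, plus the result array `.map()` builds, while the second keeps intermediates as plain numbers.

```javascript
function distancesSlow(points, origin) {
  // The arrow closure and .map()'s result array are allocated per call,
  // and `delta` is a fresh heap object per point.
  return points.map(p => {
    const delta = { x: p.x - origin.x, y: p.y - origin.y };
    return Math.hypot(delta.x, delta.y);
  });
}

function distancesFast(points, origin) {
  const out = new Array(points.length); // one allocation up front
  for (let i = 0; i < points.length; i++) {
    const dx = points[i].x - origin.x;  // plain numbers, no heap object
    const dy = points[i].y - origin.y;
    out[i] = Math.hypot(dx, dy);
  }
  return out;
}
```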
> Again, perhaps this is just a familiarity issue
I largely agree, though I think that JS does a lot more allocation in its natural syntax.
> Do you treat all your backend library dependencies as black boxes as well?
No, but I don't really use frameworks in backend languages much. The heaviest dependency I use is almost always the HTTP library, which is reliably quite optimized. Frameworks impose patterns on how code is structured, which, to me, makes it much harder to reason about performance. I now have to learn the details of the framework. Perhaps the only thing close to this in Rust would be tokio.
> I've used profilers to guide building my own "frameworks" and help proven "Vanilla" approaches to other developers over frameworks in use.
I suspect that this is merely an issue of my own biased experience where I have inherited codebases with javascript that are already using frameworks.
> Maybe I'm missing what primitives you are looking for. async/await is about the same primitive in JS and Rust and there are very similar higher-level tools on top of them.
I mean, stack allocation feels like a pretty obvious one, reasoning about mutability, control over locking, the ability to `join` two futures or manage their polling myself, access to operating system threads, access to atomics, access to mutexes, access to pointers, etc. These just aren't available in javascript. async/await in js is only superficially similar to Rust.
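That said, JS does ship one small piece of this surface: Atomics over a SharedArrayBuffer, shareable with Web/Node workers (though browsers gate SharedArrayBuffer behind cross-origin isolation headers). A minimal sketch of what exists, exercised here on a single thread:

```javascript
// Atomics over a SharedArrayBuffer: the closest JS gets to the atomics
// primitives available in systems languages.
const sab = new SharedArrayBuffer(4);   // 4 bytes = one Int32
const counter = new Int32Array(sab);

Atomics.store(counter, 0, 40);          // atomic write
Atomics.add(counter, 0, 2);             // atomic read-modify-write
const value = Atomics.load(counter, 0); // atomic read; value === 42
```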
I mean, a simple example is that I recently switched to CompactString and foldhash in Rust for a significant optimization. I used Arc to avoid expensive `.clone` calls. I preallocated vectors and reused them, I moved other work to threads, etc. I feel really comfy doing this in Rust where all of this is sort of just... first class? Like, it's not "weird" rust to do any of this. I don't have to really avoid much in the language, it's not like js where I'd have to be like "Okay, I can't write {a: 5} here because it would allocate" or something. I feel like that shouldn't be too contentious? Surely one must learn how to avoid much of javascript if they want to learn how to avoid allocations.
> To be fair, I get the impulse to want to leave it as someone else's problem.
I just reject that framing. People focus on what they focus on. Optimizing their website is not necessarily their interest.
> I feel like if you can profile and performance tune Rust you should be able to profile and performance tune JS.
I probably could but it's definitely not going to feel like second nature to me and I suspect I'd really feel like I'm fighting the language. I mean, seriously, I'd be curious, how do you deal with the fact that you can't stack allocate? I can spawn a thread in Rust and share a pointer back to the parent stack, that just seems very hard to do in javascript if not outright impossible?
> I think I'm saying that it is surprising to me that people who have developed skillsets for optimizing backend code in languages designed to be fast seem to struggle applying the same skills to a language with simpler/"slower" mechanics
Yeah I don't really see it tbh. I mean even if you say "I can do it", that's great, but how is it surprising?
WorldMaker 16 hours ago [-]
> I probably could but it's definitely not going to feel like second nature to me and I suspect I'd really feel like I'm fighting the language. I mean, seriously, I'd be curious, how do you deal with the fact that you can't stack allocate? I can spawn a thread in Rust and share a pointer back to the parent stack, that just seems very hard to do in javascript if not outright impossible?
I had alluded to it before, but this is maybe where some additional experience with other garbage collected backend languages like C# or Java could help build some "muscle memory" here.
The typical lens in a GC-based language is value types versus reference types. Value types are generally stack allocated and pass-by-value (copy-by-value; copied from stack frame to stack frame when passed). Reference types are usually heap allocated and pass-by-reference. A reference is generally a "fat pointer", with the qualification that you generally can't dereference one like a pointer without complex GC locks, because the GC reserves the right to move the objects pointed to by references (for instance due to compaction, but also due to things like promotion to another heap). References themselves generally follow the same pass-by-value rules (stack allocated and copied).
(The lines are often blurry hence "generally" and "usually": a GC language may choose to allocate particularly large value types on the heap and apply copy-on-write semantics in a way to meet the pass-by-value semantics. A GC language is also free to stack allocate small reference types that it believes won't escape a particular part of the stack. I bring up these edge cases not to suggest complexity but to remind that profile-guided optimization is often the best strategy in any language because any good compiler, even a JIT compiler, is trying to optimize what it can.)
In JS, the breakdown is generally that your value types are string, number, and boolean, and your reference types are object, array, and function. `const a = 12` is a static, stack allocated number. `const x = 'foo'` is a static, stack allocated string. It will get copied if you pass it anywhere. Though there's one more optimization here that most GC languages use (one that goes all the way back to early Lisp) called "string interning". Strings are always treated as immutable and essentially copy-on-write. Common strings, and strings passed to a large number of stack frames, get "interned" to shared memory (sometimes the heap; sometimes even just reusing the memory of their first compiled instance in the compiled binary). But because of the copy-on-write, how easy a copy is to trigger, and the fact that those copies often start stack allocated, strings are still considered value types, even though with "interning" they sometimes exhibit reference-like behavior; they are sort of the "border type".
One thing to look out for: `+` or `+=` where one of the sides is a string can be a huge source of allocation from copying string bytes alone, and it's easy to anticipate where that will happen.
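A quick sketch of the two common approaches (my example, not from the thread; note that modern engines often optimize `+=` with rope-like string representations, so profile before rewriting anything):

```javascript
// Repeated += may copy the accumulated string bytes each time (worst case
// quadratic); collecting pieces and joining once does a single final build.
function concatNaive(parts) {
  let out = '';
  for (const p of parts) out += p; // potential copy per iteration
  return out;
}

function concatJoin(parts) {
  return parts.join(''); // one pass over the final length
}

console.log(concatNaive(['foo', 'bar'])); // foobar
console.log(concatJoin(['foo', 'bar']));  // foobar
```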
On the reference type side `let x = {a: 5}; let y = x`, the `{a: 5}` part is an object and does allocate to the heap (probably, modulo again things like escape detection by the JIT compiler), but `x` and `y` themselves are stack allocated references. That `let y = x` is only a reference copy.
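In code, the value-type versus reference-type split looks like this (standalone sketch, values are illustrative):

```javascript
// Numbers are copied by value; objects are shared by reference.
let a = 5;
let b = a;        // number: copied by value
b = 6;            // a is unaffected

let x = { a: 5 }; // object: allocated (probably on the heap)
let y = x;        // only the reference is copied
y.a = 6;          // visible through x too: same underlying object

console.log(a, b, x.a); // 5 6 6
```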
> it's not like js where I'd have to be like "Okay, I can't write {a: 5} here because it would allocate" or something. I feel like that shouldn't be too contentious? Surely one must learn how to avoid much of javascript if they want to learn how to avoid allocations.
Generally, it's not about "avoiding" the easy language constructions because they allocate, it is balancing the trade-offs of when you want to allocate and how much.
Just like you might preallocate a vector before a tight loop, you might preallocate an array or an object, or even an object pool. (Build an array of objects, with a "free" counter, borrow them, mutate them, return them to the "free" section when done.)
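A minimal version of that pool pattern might look like this (the class name and item shape are my own; treat it as a sketch, not a recommended library):

```javascript
// Preallocate a fixed set of objects; acquire/release instead of allocating
// fresh objects inside a hot loop.
class Pool {
  constructor(size, make) {
    this.items = Array.from({ length: size }, make);
    this.free = size; // items[0..free) are available
  }
  acquire() {
    if (this.free === 0) throw new Error('pool exhausted');
    return this.items[--this.free];
  }
  release(item) {
    this.items[this.free++] = item; // back into the "free" section
  }
}

const points = new Pool(64, () => ({ x: 0, y: 0 }));
const p = points.acquire(); // borrow
p.x = 10;                   // mutate
points.release(p);          // return; the next acquire reuses this object
```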
But some of that is trade-offs: preallocation is sometimes harder to read and reason about. On the other side, the "over-allocation" you are worried about might be caught entirely by the JIT's escape analysis and compiled out. For almost all languages it is best to let a profile or real data guide what to try to optimize (premature optimization is rarely a good idea), but for a GC language it can be crucial. Not because the GC language is more complicated or "magic" or "mysterious", but simply because a GC language is tuned for a lot of auto-optimizations that a manually managed memory language doesn't necessarily get "for free".

The trade-off for references being much more opaque boxes than pointers is that a JIT compiler has more optimization options, because it can assume pointer math is off the table. It's between the JIT and the GC where an allocation lives, more times than not, and there are some simple optimization answers such as "the JIT stack allocated that because it doesn't escape this method". It shouldn't feel like a surprise when such things happen, when you get such benefits "for free". The JIT and GC are still maintaining the value-type or reference-type "semantics" at all times; those are just (intentionally) broad "traits" with a lot of useful middle ground and a lot of cross-implementation overlap.
> stack allocation feels like a pretty obvious one, reasoning about mutability, access to pointers
A lot of the above should be a decent starting place for learning those tools, with `let` versus `const` as maybe the one remaining JS piece not explicitly dived into.
References are generally "pointer enough" for most work. The JS GC doesn't have a way to manually lock a reference to dereference it for pointer math today, but that doesn't mean it never will. Parts of WASM GC are applicable here, but mostly restricted to shared array buffers (blocks of bytes).
In other GC languages, C# has been exploring a space for GC-safe, stack allocated pointers to blocks of memory that support (range-checked) pointer-like math, called Span<T> and Memory<T>. It's roughly equivalent to Rust's slice mechanics (`&[T]`/`&mut [T]`), but subtly different, as you would expect for something existing in a larger GC environment. As that approach has become very successful in C#, I am starting to expect variations of it in more GC languages in the next few years.
> control over locking, access to atomics, access to mutexes
For the most part JS is single threaded, stack data is copied (value types), and reference types can't be contended across threads in the first place. So locks aren't important for most JS work and there's not much to control.
If you start to share memory buffers from JS to a Service/Web Worker or to a WASM process you may need to do more manual locks. The big family of tools for that is the Atomics global object: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
But a lot of that is new and rare in JS today.
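The core of that Atomics API is small; here is a worker-free sketch just to show the operations (SharedArrayBuffer works in Node out of the box; browsers require cross-origin isolation):

```javascript
// A shared 32-bit counter updated with atomic read-modify-write operations.
// With no second thread here, this only demonstrates the API surface.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

Atomics.add(counter, 0, 5);  // counter[0] becomes 5, atomically
Atomics.sub(counter, 0, 2);  // counter[0] becomes 3, atomically
const value = Atomics.load(counter, 0); // atomic read

console.log(value); // 3
```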
> the ability to `join` two futures
`Promise.all` and `Promise.any` are the two most common "standard library" combinators. `Promise.all` is the most like Rust `join`.
There are also libraries with even higher-level combinators.
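For the Rust `join` comparison, a small sketch (my example, not from the thread):

```javascript
// Promise.all resolves once every input resolves (like joining futures);
// Promise.any resolves with the first fulfilled input.
async function demo() {
  const [a, b] = await Promise.all([
    Promise.resolve(1),
    Promise.resolve(2),
  ]);

  const first = await Promise.any([
    Promise.reject(new Error('nope')),
    Promise.resolve('ok'),
  ]);

  return { sum: a + b, first };
}

demo().then((r) => console.log(r)); // { sum: 3, first: 'ok' }
```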
> manage their polling myself
Promises don't poll. JS lives in a browser-owned event loop. Effectively, you are in a browser-provided "tokio"-like runtime at all times.
There are some "low-level" tricks you can pull, though in that the Promise abstraction is especially thin compared to Rust Futures. The entire "trait" that async/await syntax abstracts is just the "thenable pattern" in JS. All you need to make a new non-Promise Promise-like is create an object that supports `.then(callBack)` (optionally a second parameter for a catchCallback and/or a `.catch(callBack)`). Though the Promise constructor is also powerful enough you generally don't need to make your own thenable, just implement your logic in the closure you provide to the Promise constructor.
Similarly on the flipside if you need a more complex combinator than Promise.all, and the reason that some higher-level libraries also exist, you just have to build the right callbacks to `.then()` and coordinate what you need to.
It's generally recommended to stick with things like Promise.all, but low level tricks exist.
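To make the thenable pattern concrete, here's a minimal sketch (the object and values are mine):

```javascript
// Any object with a compatible .then(resolve, reject) can be awaited;
// no Promise subclass needed.
const lazyValue = {
  then(resolve, reject) {
    // stand-in for some deferred work
    setTimeout(() => resolve(42), 0);
  },
};

async function use() {
  return await lazyValue; // await invokes lazyValue.then(...)
}

use().then((v) => console.log(v)); // 42
```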
> I mean even if you say "I can do it", that's great, but how is it surprising?
I think what continues to surprise me is that it sometimes reads like a lack of curiosity for other languages and for the commonalities between languages. Any GC language is built on the same exact kind of building blocks as "lower level" languages. There is a learning curve involved in reasoning about a GC language, but I don't think it should seem like a steep one. The vocabulary has strong overlaps: value types and stack allocated; reference types and heap allocated; references and pointers. The intuitions of one often benefit the other ("this is a reference type, can I simplify what I need from it inside this loop to a value type or two to keep it stack allocated or would it make more sense to preallocate a pool of them?"). Just because you don't have access to the exact same kinds of low level tools doesn't mean that they don't exist or that you can't learn how to take what you would do with the low level tools and apply them in the higher level space. (Plus tools like C#'s Span<T> and Memory<T> work where the low level tools themselves are also starting to blur more together than ever before.)
It just takes a little bit of curiosity, I think, to ask that next question of "how does a GC language stack allocate?" and allowing that to lead you to more of the vocabulary. Hopefully, I've done an okay job in this post illustrating that.
staticassertion 8 hours ago [-]
Yeah I basically already know all of this tbh, I'm already very familiar with how GCs work, the JVM, C#, etc.
TheAlexLichter 2 days ago [-]
I personally met a lot of folks who care about both quite a bit.
But to be fair, besides the usual patterns like tree-shaking and DCE, "runtime performance" is really tricky to measure or optimize for.
pjmlp 2 days ago [-]
While using Electron in the process.
root_axis 2 days ago [-]
I'm surprised to see it's that much faster than SWC. Does anyone have any general details on how that performance is achieved?
grabshot_dev 2 days ago [-]
One thing worth noting: beyond raw parse speed, oxc's AST is designed to be allocation-friendly with arena allocation. SWC uses a more traditional approach. In practice this means oxc scales better when you're doing multiple passes (lint + transform + codegen) on the same file because you avoid a ton of intermediate allocations.
We switched a CI pipeline from babel to SWC last year and got roughly 8x improvement. Tried oxc's transformer more recently on the same codebase and it shaved off another 30-40% on top of SWC. The wins compound when you have thousands of files and the GC pressure from all those AST nodes starts to matter.
Yeah, but not how their implementation techniques differ from SWC's to produce those results.
snowhale 2 days ago [-]
[dead]
apatheticonion 2 days ago [-]
I wrote a simple multi threaded transpiler to transpile TypeScript to JavaScript using oxc in Rust. It could transpile 100k files in 3 seconds.
It's blisteringly fast
iberator 2 days ago [-]
Sounds impossible to even index and classify files so fast.
What hardware?
AYBABTME 2 days ago [-]
Let's say 100k files is 300k syscalls, at ~1-2µs per syscall. That's 300-600ms of syscalls. Then assume 10kB per file; that's 1GB of files, easily read in a fraction of a second when the cache is warm (it will be, from scanning the dir). That's like 600ms used up and plenty left to parse and analyze 100k things in 2s.
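Spelling out that arithmetic (all figures are the estimates above, not measurements):

```javascript
// Back-of-envelope: syscall overhead and total bytes for 100k small files.
const files = 100_000;
const syscallsPerFile = 3;            // roughly: open + read + close
const syscallCostSec = 1e-6;          // ~1 µs each (lower bound)
const syscallTime = files * syscallsPerFile * syscallCostSec; // ~0.3 s

const bytesPerFile = 10_000;          // ~10 kB
const totalBytes = files * bytesPerFile; // 1e9, about 1 GB

console.log(syscallTime, totalBytes);
```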
ido 2 days ago [-]
I’m assuming they meant 100kloc rather than 100,000 files of arbitrary size (how could we even tell how impressive that is without knowing how big the files are?)
2 days ago [-]
sankalpmukim 2 days ago [-]
I wonder why it took so long for someone to make something(s) this fast, when this much performance was always on the table.
Crazy accomplishment!
WD-42 2 days ago [-]
Because Rust makes developers excited in a way that C/C++ just doesn't.
pjmlp 2 days ago [-]
Yeah, it is as if there were never other compiled languages in which to rewrite JavaScript tooling.
dwattttt 2 days ago [-]
The word 'excited' in GP's post isn't decorative.
pjmlp 2 days ago [-]
I am fully aware of it, there have been many 'excited' posts in HN history about various programming languages, with related rewrite X in Y, the remark still stands.
WD-42 2 days ago [-]
Why do people get so mad that other people enjoy a language? If I’m more likely to rewrite some tooling because of the existence of a programming language and it’s more performant, isn’t that good for everyone?
We are programmers we are supposed to like programming. These rust haters are intolerable.
pjmlp 2 days ago [-]
Because it gets tiring to have all those Rewrite X in Y, as if X was the very first language where that is possible.
WD-42 2 days ago [-]
Is anyone forcing you to rewrite anything in rust?
grougnax 2 days ago [-]
C++ is pure trash
C is fine but old
phplovesong 2 days ago [-]
You don't need C(++) for building performant software.
phplovesong 2 days ago [-]
We've had many fast languages that are not C/C++.
Compare Go (esbuild) to webpack (JS): it's easily over 100x faster.
For a dev time matters, but is relative, waiting 50sec for a webpack build compared to 50ms with a Go toolchain is life changing.
But for a dev waiting 50ms or 20ms does not matter. At all.
So the conclusion is JavaScript devs like hype, flooded into Rust, and built tooling for JS in Rust. They could have used any other compiled language and gotten nearly the same performance in computer time, and exactly the same in human time.
wiseowise 2 days ago [-]
> But for a dev waiting 50ms or 20ms does not matter. At all.
To win benchmark games it does; in a world where people keep shipping Electron crap, not really.
phplovesong 24 hours ago [-]
Not sure if you missed the /s?
Anyway, you posted about speed, and then followed with a link to some Python-related thing. In Python, speed has never been a key tenet, at least when it comes to pure CPU-bound calculation. How much tooling is built in Python? All the modern Python tooling is mostly Rust-based too. So there's that.
I mean for a dev working in JS with JS built tooling the speed is not in milliseconds, but in seconds, even minutes.
I still think my point holds: having a build take in the 10s of seconds vs 50ms is very much good enough for development (the usual frontend save-and-refresh-browser cycle)
pjmlp 2 days ago [-]
No worries, when Zig hits 1.0, the RIZ projects from JavaScript, Python and Ruby tooling will start hitting HN frontpage.
chrysoprace 2 days ago [-]
I believe it goes back a few years to originally being just oxlint, and then recently Void Zero was created to fund the project. One of the big obstacles I can imagine is that it needs extensive plugin support to support all the modern flavours of TypeScript like React, Vue, Svelte, and backwards compatibility with old linting rules (in the case of oxlint, as opposed to oxc which I imagine was a by-product).
TheAlexLichter 2 days ago [-]
For a couple of reasons:
* You need to have a clean architecture, so you're starting "almost from scratch"
* Knowledge about performance (for Rust and for build tools in general) is necessary
* Enough reason to do so: lack of perf in the competition, and users feeling friction
* Time and money (still have to pay bills, right?)
throw567643u8 2 days ago [-]
Fractured ecosystem. Low barrier to entry, so loads of tooling.
nullsanity 2 days ago [-]
It takes a good programmer to write it, and most good programmers avoid JavaScript unless forced to use it for their day job. In that case, there is no incentive to speed up the part of the job that isn't writing JavaScript.
pjmlp 2 days ago [-]
Some of us already have all the speed we need with Java and .NET tooling, don't waste our time rewriting stuff, and don't need to bother with the borrow checker, even if it isn't a big deal to write affine-types-compliant code.
And we can always reach for Scala or F# if feeling creative and wanting to play with type systems.
wiseowise 2 days ago [-]
> It takes a good programmer to write it, and most good programmers avoid JavaScript, unless forced to use it for their day job.
Nonsense.
galaxyLogic 2 days ago [-]
Does oxc-parser make it easy to remove comments from JavaScript?
In other words, does it treat comments as syntactic units, or as something that can be ignored since they are not needed by the "next stage"?
The reason to find out what the comments are is of course to make it easy to remove them.
xixixao 2 days ago [-]
Should be easy with any standard parser. See astexplorer.net
galaxyLogic 6 hours ago [-]
I've been using Esprima and it's not trivial to get rid of, or collect all comments with it. The reason is that while it finds the ranges of all syntactic JavaScript elements, it does not consider comment to be a syntactic element, but just something between them.
vivzkestrel 2 days ago [-]
- seeing this oxlint and oxfmt a lot lately
- how does it compare to biome?
- also biome does all 3: linting, formatting and sorting; why do you want 3 libraries to do the job biome does alone?
latchkey 2 days ago [-]
I've played with all of these various formatters/linters in my workflow. I tend to save often and then have them format my code as I type.
I hate to say it, but biome just works better for me. I found the ox stuff to do weird things to my code when it was in weird edge case states as I was writing it. I'd move something around partially correct, hit save to format it and then it would make everything weird. biome isn't perfect, but has fewer of those issues. I suspect that it is hard to even test for this because it is mostly unintended side effects.
ultracite makes it easy to try these projects out and switch between them.
AbuAssar 2 days ago [-]
oxc formatter is still alpha, give it some time
latchkey 2 days ago [-]
sure, but biome just works today... ¯\_(ツ)_/¯... I don't understand why we need 10 (or even 2) different Rust-based formatters... people need to just work together a bit more imho.
wiseowise 2 days ago [-]
So uv for JavaScript? Nice.
silverwind 2 days ago [-]
No, that would probably be pnpm, even though it's not nearly as fast because it's written in JS.
They are talking about pnpm (which they said would be the uv equivalent for node, though I disagree given that what pnpm brings on top of npm is way less than the difference between uv and the status quo in Python).
hu3 2 days ago [-]
I expected a comparison to `bun build` in the transformer TS -> JS part.
But I guess it wouldn't be an apples-to-apples comparison because Bun can also run TypeScript directly.
Jarred 2 days ago [-]
You can find a comparison with `bun build` on Bun's homepage. It hasn't been updated in a little while, but I haven't heard that the relative difference between Bun and Rolldown has changed much in the time since (both have gotten faster).
Bundler    Version           Time
─────────────────────────────────────
Bun        v1.3.0              269.1 ms
Rolldown   v1.0.0-beta.42      494.9 ms
esbuild    v0.25.10            571.9 ms
Farm       v1.0.5            1,608.0 ms
Rspack     v1.5.8            2,137.0 ms
zdw 2 days ago [-]
This compiles to native binaries, as opposed to deno which is also in rust but is more an interpreter for sandboxed environments?
ameliaquining 2 days ago [-]
Oxc is not a JavaScript runtime environment; it's a collection of build tools for JavaScript. The tools output JavaScript code, not native binaries. You separately need a runtime environment like Deno (or a browser, depending on what kind of code it is) to actually run that code.
3836293648 2 days ago [-]
Deno is a native implementation of a standard library, it doesn't have language implementation of its own, it just bundles the one from Safari (javascriptcore).
This is a set of linting tools and a type stripper: a program that removes the type annotations from TypeScript to turn it into pure JavaScript (and turns JSX into document.whateverMakeElement calls). It still doesn't have anything to actually run the program.
ameliaquining 2 days ago [-]
Deno uses V8, which is from Chrome. Bun uses JavaScriptCore.
3836293648 2 days ago [-]
Ah, yeah. Easy mistake
lioeters 2 days ago [-]
I'm going to call it: a Rust implementation of JavaScript runtime (and TypeScript compiler) will eventually overtake the official TypeScript compiler now being rewritten in Go.
madeofpalk 2 days ago [-]
? Most JavaScript runtimes are already C++ and are already very fast. What would rewriting in Rust get us?
lioeters 2 days ago [-]
Nothing, but it will happen anyway. Maybe improved memory safety and security, at least as a plausible excuse to get funding for it. Perhaps also improved enthusiasm of developers, since they seem to enjoy the newness of Rust over working with an existing C++ codebase. Well there are probably many actual advantages to "rewrite it in Rust". I'm not in support or against it, just making an observation that the cultural trend seems to be moving that way.
3836293648 2 days ago [-]
In popularity or actually take over control of the language?
lioeters 2 days ago [-]
Eventually I imagine a JS/TS runtime written in Rust will be mainstream and what everyone uses.
jeswin 2 days ago [-]
If you want native binaries from typescript, check my project: https://tsonic.org/
Currently it uses .Net and NativeAOT, but adding support for the Rust backend/ecosystem over the next couple of months. TypeScript for GPU kernels, soon. :)
nine_k 2 days ago [-]
No, it is a suite of tools to handle TypeScript (and JavaScript as its subset). So far it's a parser, a tool to strip TypeScript declarations and produce JS (like SWC), a linter, and a set of code transformation tools / interfaces, as far as I can tell.
sneak 2 days ago [-]
Thought this was something related to Oxide Computer - they might want to be careful with that branding.
swiftcoder 2 days ago [-]
There are like 50 rust projects named by oxidation puns. This is hardly the first
lerp-io 2 days ago [-]
what's the point of writing memory-safe Rust for JS if JS is already memory safe, can't u just write it in js???
throw567643u8 2 days ago [-]
Too slow. Different people implemented linter, bundler, ts compiler in JS. That means three different parsers and ASTs, which is inefficient. These guys want a grand unified compiler to rule them all.
RealityVoid 2 days ago [-]
For the love of god, please stop naming Rust projects with "corrosion" and "oxidation" and the cute word puns related to Rust, because they are currently overplayed.
hiuioejfjkf 2 days ago [-]
[dead]
RealityVoid 2 days ago [-]
I said nothing about the rs prefix. But making oxide, ferrous, Fe2O3 or whatever your whole shtick tells me nothing about your package, and the pun space is so so so very crowded at this point it just makes for a bad naming scheme.
monster_truck 2 days ago [-]
And what do you name your packages
wangzhongwang 2 days ago [-]
[dead]
VPenkov 2 days ago [-]
Oxc is not the first Rust-based product on the market that handles JS, there is also SWC which is now reasonably mature. I maintain a reasonably large frontend project (in the 10s of thousands of components) and SWC has been our default for years. SWC has made sure that there is actually a very decent support for JS in the Rust ecosystem.
I'd say my biggest concern is that the same engineers who use JS as their main language are usually not as adept with Rust and may experience difficulties maintaining and extending their toolchain, e.g. writing custom linting rules. But most engineers seem to be interested in learning so I haven't seen my concern materialize.
saghm 2 days ago [-]
It's not like JS isn't already implemented in a language that's a lot more similar to Rust anyhow though. When the browser or Node or whatever other runtime you're using is already in a different language out of necessity, is it really that weird for the tooling to also optimize for the out-of-the-box experience rather than people hacking on them?
Even as someone who writes Rust professionally, I also wouldn't necessarily expect every Rust engineer to be super comfortable jumping into the codebase of the compiler or linter or whatever to be able to hack on it easily because there's a lot of domain knowledge in compilers and interpreters and language tooling, and most people won't end up needing experience with implementing them. Honestly, I'd be pretty strongly against a project I work on switching to a custom fork of a linting tool because a teammate decided they wanted to add extra rules for it or something, so I don't see it as a huge loss that it might end up being something people will need to spend personal time on if they want to explore.
chronicom 2 days ago [-]
The goal is for Vite to transition to tooling built on Oxc. They’ve been experimenting with Rolldown for a while now (also by voidzero and uses oxc) - https://vite.dev/guide/rolldown
silverwind 2 days ago [-]
Depends on how conservative their minifier is. The more aggressive, the more likely bugs are. esbuild still hits minifier bugs regularly.
leptons 2 days ago [-]
Over-minifying is kind of pointless, just do a basic minify and then gzip and call it a day.
Ecko123 2 days ago [-]
[dead]
zenon_paradox 2 days ago [-]
[dead]
robofanatic 2 days ago [-]
oxidation is a chemical process where a substance loses electrons, often by reacting with oxygen, causing it to change. What does it have to do with JavaScript?
nine_k 2 days ago [-]
Oxidation of iron produces rust. Rust is the language of implementation of that compiler, and of the entire Oxc suite.
phplovesong 2 days ago [-]
But rust is named after a mushroom?
nine_k 2 days ago [-]
Rust is the layer in immediate contact with the metal :) That's what the official version says, at least.
The paradox gains another layer when you consider that their whole mission is to build tools for the JavaScript ecosystem, yet by moving to Rust they are betting that JS-the-language is so broken that it cannot even host its own tools. And because JS is still a stronger language for building UIs in than Rust, their business strategy now makes them hard-committed to their bet that JS tools in JS are a dead end.
You say this like this is the basic requirement for a language. But languages make tradeoffs that make them more appropriate for some domains and not others. There's no shade if a language isn't ideal for developer tools, just like there's no shade if a language isn't perfect for web frontends, web backends, embedded development, safety critical code (think pacemakers), mobile development, neural networks and on and on.
Seriously, go to https://astral.sh and scroll down to "Linting the CPython code base from scratch". It would be easy to look at that and conclude that Python's best days are behind it because it's so slow. In reality Python is an even better language at its core domains now that its developer tools have been rewritten in Rust. It's the same excellent language, but now developers can iterate faster.
It's the same with JavaScript. Just because it's not the best language for linters and formatters doesn't mean it's broken.
Evan Wallace proved it by building esbuild. This is no longer a bet.
> If their open platform were as powerful as it should be, it would be easy to use it to recreate the kinds of experiences they propose to sell.
you would be surprised to know that tech companies may find it cheaper to pay money than spend developer bandwidth on stuff beyond their core competency.
dropbox was also considered to be trivially implementable, but end users rarely try to re-invent it.
Another example is the TypeScript compiler being rewritten in Go instead of self-hosting. It's an admission that the language is not performant enough, and more, that it can never be enough for building its own tooling. It might be that the tooling situation is the problem, not the language itself, though. I do see hopeful signs that the JavaScript ecosystem is continuing to evolve, like the recent release of MicroQuickJS by Bellard, or Bun, which is fast(er) and really fun to use.
I quite like Roc's philosophy here: https://www.roc-lang.org/faq#self-hosted-compiler. The developers of the language want to build a language that has a high performance compiler, but they don't want to build a language that one would use to build a high performance compiler (because that imposes a whole bunch of constraints when it comes to things like handling memory). In my head, JavaScript is very similar. If you need a high performance compiler, maybe look elsewhere? If you need the sort of fast development loop you can get by having a high performance compiler, then JS is just the right thing.
Off topic but I wonder if this applies to human languages, whether some are more suited for particular purposes - like German to express rigorous scientific thinking with compound words created just-in-time; Spanish for romantic lyrical situations; or Chinese for dense ideographs. People say languages can expand or limit not only what you can express but what you can think. That's certainly true of programming languages.
From what I understand, Vite+ seems like an all-in-one toolchain. Instead of maintaining multiple configurations with various degrees of intercompatibility, you maintain only one.
This has the added benefit that linters and such can share information about your dependency graph, and even ASTs, so your tools doesn't have to compute them individually. Which has a very decent potential of improving your overall pre-merge pipeline. Then, on top of that, caching.
The focus here is of course enterprise customers and looks like it is supposed to compete with the likes of Nx/Moonrepo/Turborepo/Rush. Nx and Rush are big beasts and can be somewhat unwieldy and quirky. Nx lost some trust with its community by retracting some open-source features and took a very long time to (partially) address the backlash.
Vite+ has a good chance to be a contender on the market with clearer positioning if it manages to nail monorepo support.
Also, the paradox is not really even there. The JS ecosystem largely gave up on JS tools a long time ago. Pretty much all major build tools are migrating to native or have already migrated, at least partially. This has been going on for the last 4 years or so.
But the key to all of this is that most of these tools are still supporting JS plugins. Rolldown/Vite is compatible with Rollup JS plugins and OXLint has ESLint compatible API (it's in preview atm). So it's not really even a bet at all.
Doesn’t look super interesting to me tbh.
Let this be a warning: running oxfmt without any arguments recursively scans the directory tree from the current directory for all *.js and *.ts files and silently reformats them.
Thanks to that, I got a few of my Allman-formatted JavaScript files I care about messed up with no option to format them back from K&R style.
I've got to say this is what I would have expected and wanted to happen. I'd say it is wise to not run tools designed to edit files on files you don't have a backup for (like Git) without doing a dry-run or a small scope experiment first.
How were you not expecting this? Did you not bother to read anything before installing and running this command on a sensitive codebase?
But most formatters I'm used to absolutely don't do this. For example, `rustfmt` will read input from stdin if no argument is given. It can traverse modules in a project, but it won't start modifying everything under your CWD.
Most unix tools will either wait for some stdin or dump some kind of help when no argument is given. Hell, according to this tool's docs, even `prettier` seems to expect an argument:
I'm not familiar with prettier, so I may be wrong, but from the above, I understand that prettier doesn't start rewriting files if no argument is given? Looking up prettier's docs, they have this to say:
So eslint also doesn't automatically overwrite everything? So yeah, I can't say this is expected behaviour, even if it's documented.
Try git reset --hard, that should work.
It is bad UX.
A power user can just pass the right params. Besides, it is not that hard to support a "--yolo" parameter for that use case
`oxfmt` should have done the same and `oxfmt .`, with the desired dir ".", should have been the required usage.
I don't expect `rm` with no argument to trash everything in my CWD. Which it doesn't, see sibling's comment.
new devs should not learn these things the hard way
So maybe we should call it JavaScript style? Modern JS style? Do we have a good name for it?
Also, does anyone know when and why “K&R style” [5] started being used to refer to Java style? Meaning K&R statement block style (“Egyptian braces” [6]) being used for all braces and single statement blocks getting treated the same as multi-statement blocks. Setting aside the eternal indentation question.
1: https://en.wikipedia.org/wiki/Indentation_style#Java
2: https://www.oracle.com/java/technologies/javase/codeconventi...
3: https://www.oracle.com/java/technologies/javase/codeconventi...
4: https://prettier.io/docs/options#tab-width
5: https://ia903407.us.archive.org/35/items/the-ansi-c-programm...
6: https://en.wikipedia.org/wiki/Indentation_style#Egyptian_bra...
[0] https://github.com/microsoft/vscode/issues/32405
Not to discredit OP's work of course.
It just takes someone with poor empathy toward their users to ship slow software that they don't use themselves.
Maybe one day we can use wasm or whatever and I can write fast code for the frontend, but not today, and it's a bit unsurprising that others face similar issues.
Also, if I'm building a CLI, maybe I think that 1ms matters. But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".
- for loops over map/filter
- maps over objects
- .sort() over .toSorted()
- mutable over immutable data
- inline over callbacks
- function over const = () => {}
Pretty much, as if you wrote in ES3 (instead of ES5/6)
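For illustration, the first preference on that list looks like this in practice (function names here are made up for the example) — a single indexed loop avoids the two intermediate arrays and per-element closure calls that a `filter().map()` chain incurs:

```javascript
// Chained version: allocates one intermediate array per step,
// plus a closure invocation per element.
function evenSquaresChained(nums) {
  return nums.filter((n) => n % 2 === 0).map((n) => n * n);
}

// Loop version: one output array, no intermediates, no callbacks.
function evenSquaresLoop(nums) {
  const out = [];
  for (let i = 0; i < nums.length; i++) {
    const n = nums[i];
    if (n % 2 === 0) out.push(n * n);
  }
  return out;
}
```

Whether the difference matters in your app is something only a profile can tell you.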
> care so little about the performance of the code they ship to browsers.
> but I'm curious to hear how do you know it for backend code but not frontend code.
Because I find backend languages extremely easy to reason about for performance. It seems to me that when I write in a language like rust I can largely "grep for allocations". I find that hard to see in javascript etc. This is doubly the case because frontend code seems to be extremely framework heavy and abstract, so it makes it very hard to reason about performance just by reading the code.
You think about allocations: JS is a garbage collected language and allocations are "cheap", so they're extremely common. GC is powerful and in most JS engines quite fast, but it's not omniscient and sometimes needs a hand. (Just like reasoning about any GC language.) Of course the easiest intervention is to remove allocations entirely; just because it is cheap to over-allocate, and the GC will mostly smooth out the flaws of such approaches, doesn't mean you can ignore the memory complexity of the chosen algorithms. Most browser dev tools today have allocation profilers equal to or better than their backend cousins.
You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers). On the flipside, JS is a little harder to reason about threading than many backend languages because it is extensively cooperatively threaded. Code has to yield to other code frequently and regularly. Shaving milliseconds off a routine yields more time to other things that need to happen (browser events, user input, etc). That starts to add up. JS encourages you to do things in short, tight "bursts" rather than long-running algorithms. Here again, most browser dev tools today have strong stack trace/flame chart profilers that equal or exceed backend cousins. Often in JS "tall" flames are fine but "wide" flames are things to avoid/try to improve. (That's a bit reversed from some backend languages where shallow is overall less overhead and long-running tasks are sometimes better amortized than lots of short ones.)
> But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".
The heavily event-driven architecture of the browser often means that just sitting on a webpage is "browsing in a hot loop". Browsers have gotten better and better at sleeping inactive tabs and multi-threading tabs to not interfere with each other, but things are still a bit of a "tragedy of the commons" that the average performance of a website still directly and indirectly drags everyone else down. It might not matter to you that your webpage is slow because you only expect a user to visit it once, but you also aren't taking into account that is probably not the only website that user is browsing at that moment. Smart users do directly and indirectly notice when the bad performance of one webpage impacts their experiences of other web pages or crashes their browser. Depending on your business model and what the purpose of that webpage is for, that can be a bad impression that leads to things like lost sales/customers.
> You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers).
My issue isn't with being able to write concurrent code that has no bugs, my issue is having access to primitives where I have tight control over concurrency and parallelism. The primitives in JS do not provide that control and are often very heavy in and of themselves.
I think it's perhaps worth noting that I am not saying "it's impossible to write fast code for the browser", I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.
I still think that's a training/familiarity problem more than a language issue? You can just as easily start with `rg \bnew\b` as you can `rg \.clone`. The `new` operator is a useful thing to start with as in both C++ and C#, too. (Even though JS new is technically a different operator than both C++ and C#'s.) After that the JSON syntax is a decent start. Something like `rg {\s*["\.']` and `rg [` are places to start. Curly brackets and square brackets in "data position" are useful in Python and now some of C#, too.
After that the next biggest culprits are common library things like `.filter()` and `.map()` which JS defaults to reified/eager versions for historic reasons. (There are now lazier versions, but migrating to them will take time.) That sort of library allocations knowledge is mostly just enough familiarity with standard library, a need that remains universal in any language.
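A sketch of that eager-vs-lazy distinction (a generator function stands in here for the newer iterator-helper methods, which aren't universally available yet):

```javascript
// Eager: nums.filter(..).map(..) materializes a full array at each step.
// Lazy: a generator yields one transformed value at a time, allocating
// no intermediate arrays at all.
function* lazyEvenSquares(nums) {
  for (const n of nums) {
    if (n % 2 === 0) yield n * n;
  }
}

// Consume only as much as you need; nothing past the first match is computed.
const firstEvenSquare = lazyEvenSquares([1, 2, 3, 4, 5, 6]).next().value; // 4
```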
> JS hardly feels like a language where it's smooth to then fix that
Again, perhaps this is just a familiarity issue, but having done plenty of both, at the end of the day I still see this process as the same: move allocations out of tight loops, use object pools if necessary, examine the O-Notation/Omega-Notation of an algorithm for its space requirements and evaluate alternatives with better mean or worst cases, etc. It mostly doesn't matter what language I'm working in the basics and fundamentals are the same. Everything is as "smooth" as you feel comfortable refactoring code or switching to alternate algorithm implementations.
> frameworks are so common that I doubt I'd be in a position to do so
Do you treat all your backend library dependencies as black boxes as well?
Even if that is the case and you want to avoid profiling your framework dependencies themselves and simply hope someone else is doing that, there's still so much in your control.
I find JS is one of the few languages where you can somewhat transparently profile even all of your dependencies. Most JS dependencies are distributed as JS source and you generally don't have missing symbol files or pre-compiled binary bricks that are inscrutable to inspection. (WASM is changing that, for the worse, but so far there are very few WASM-only frameworks and most of them have other debugging and profiling tools.)
I can choose which frameworks to use based on how their profiler results look. (I can tell you that I don't particularly like Angular and one of the reasons why is I've caught it with truly abysmal profiles more than once, where I could prove the allocations or the CPU clock time were entirely framework code and not my app's business logic.)
I've used profilers to guide building my own "frameworks" and help proven "Vanilla" approaches to other developers over frameworks in use.
> The primitives in JS do not provide that control and are often very heavy in and of themselves.
Maybe I'm missing what primitives you are looking for. async/await is about the same primitive in JS and Rust and there are very similar higher-level tools on top of them. There's no concurrency/parallelism primitives today in JS because there is no allowed concurrency or parallelism. There are task scheduling primitives somewhat unique to JS for doing things like "fan out" akin to parallelism but relying on cooperative (single) threading. Examples include `requestAnimationFrame` and `requestIdleCallback` (for "this can wait until you next need to draw a frame, including if you need to drop frames" and "this can wait until things are idle" respectively).
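That "fan out via cooperative yielding" idea might be sketched like this. It uses `setTimeout(0)` as a portable stand-in so it runs anywhere; in a browser you would typically yield with `requestIdleCallback` or `requestAnimationFrame` instead:

```javascript
// Yield control back to the event loop so queued tasks (input handlers,
// rendering, other callbacks) get a chance to run.
const yieldToEventLoop = () => new Promise((resolve) => setTimeout(resolve, 0));

// Process a large array in short "bursts" rather than one long-running loop.
async function sumInBursts(nums, sliceSize = 1000) {
  let total = 0;
  for (let i = 0; i < nums.length; i += sliceSize) {
    const end = Math.min(i + sliceSize, nums.length);
    for (let j = i; j < end; j++) {
      total += nums[j];
    }
    await yieldToEventLoop(); // other work interleaves here
  }
  return total;
}
```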
> I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.
I think I'm saying that it is surprising to me that people who have developed skillsets for optimizing backend code in languages designed to be fast seem to struggle applying the same skills to a language with simpler/"slower" mechanics, but also on average much higher transparency into dependencies (fuller top-to-bottom stack traces and metrics in profiles).
To be fair, I get the impulse to want to leave it as someone else's problem. But as a full stack developer who has done performance work in at least a half dozen languages, I feel like if you can profile and performance tune Rust you should be able to profile and performance tune JS. But maybe I've seen "too much of the Matrix" and my "it's all the same" comes from a deep generalist background that is hard for a specialist to appreciate.
But that's fine. Even if we say it's a familiarity problem, that's fine. I'm only saying that it's not reasonable to expect my skills in optimizing backend code to somehow transfer. Obviously many things are the same - reducing allocation, improving algorithmic performance, etc. But that looks very different when you go from the backend to the frontend because the languages can look very different.
> You can just as easily start with `rg \bnew\b` as you can `rg \.clone`.
That's not true though. In Rust you have to have a clone somewhere if you're allocating on the heap, or reach for one of the pointer types via something like `Box::new`. If I pass a struct around it's either cheaply moveable (i.e. Copy) or I have to `clone` it. Granted, many APIs will clone "invisibly" within them, but I can always grep to find the clone.
In Javascript, things seem to allocate by default. A new object allocates. A closure allocates. Things are very implicit, you sort of are in an "allocates by default" mode with js, it seems. In Rust I can just do `[u8; n]` or whatever if I want to, I can just do `let x = "foo"` for a static string, or `let y = 5;` etc. I don't really have to question the memory layout much.
Regardless, you can just learn those rules, of course, but you have to learn them. It seems much easier to "trip onto" an allocation, so to speak, in js.
> Again, perhaps this is just a familiarity issue
I largely agree, though I think that js does a lot more allocation in its natural syntax.
> Do you treat all your backend library dependencies as black boxes as well?
No, but I don't really use frameworks in backend languages much. The heaviest dependency I use is almost always the HTTP library, which is reliably quite optimized. Frameworks impose patterns on how code is structured, which, to me, makes it much harder to reason about performance. I now have to learn the details of the framework. Perhaps the only thing close to this in Rust would be tokio.
> I've used profilers to guide building my own "frameworks" and help proven "Vanilla" approaches to other developers over frameworks in use.
I suspect that this is merely an issue of my own biased experience where I have inherited codebases with javascript that are already using frameworks.
> Maybe I'm missing what primitives you are looking for. async/await is about the same primitive in JS and Rust and there are very similar higher-level tools on top of them.
I mean, stack allocation feels like a pretty obvious one, reasoning about mutability, control over locking, the ability to `join` two futures or manage their polling myself, access to operating system threads, access to atomics, access to mutexes, access to pointers, etc. These just aren't available in javascript. async/await in js is only superficially similar to Rust.
I mean, a simple example is that I recently switched to CompactString and foldhash in Rust for a significant optimization. I used Arc to avoid expensive `.clone` calls. I preallocated vectors and reused them, I moved other work to threads, etc. I feel really comfy doing this in Rust where all of this is sort of just... first class? Like, it's not "weird" rust to do any of this. I don't have to really avoid much in the language, it's not like js where I'd have to be like "Okay, I can't write {a: 5} here because it would allocate" or something. I feel like that shouldn't be too contentious? Surely one must learn how to avoid much of javascript if they want to learn how to avoid allocations.
> To be fair, I get the impulse to want to leave it as someone else's problem.
I just reject that framing. People focus on what they focus on. Optimizing their website is not necessarily their interest.
> I feel like if you can profile and performance tune Rust you should be able to profile and performance tune JS.
I probably could but it's definitely not going to feel like second nature to me and I suspect I'd really feel like I'm fighting the language. I mean, seriously, I'd be curious, how do you deal with the fact that you can't stack allocate? I can spawn a thread in Rust and share a pointer back to the parent stack, that just seems very hard to do in javascript if not outright impossible?
> I think I'm saying that it is surprising to me that people who have developed skillsets for optimizing backend code in languages designed to be fast seem to struggle applying the same skills to a language with simpler/"slower" mechanics
Yeah I don't really see it tbh. I mean even if you say "I can do it", that's great, but how is it surprising?
I had alluded to it before, but this is maybe where some additional experience with other garbage collected backend languages like C# or Java could help build some "muscle memory" here.
The typical lens in a GC-based language is value types versus reference types. Value types are generally stack allocated and pass-by-value (copy-by-value; copied from stack frame to stack frame when passed). Reference types are usually heap allocated and pass-by-reference. A reference is generally a "fat pointer", with the qualification that you generally can't dereference one like a pointer without complex GC locks because the GC reserves the right to move the objects pointed to by references (for instance, due to compaction, but can also due to things like promotion to another heap). References themselves follow the same pass-by-value rules generally (stack allocated and copied).
(The lines are often blurry hence "generally" and "usually": a GC language may choose to allocate particularly large value types on the heap and apply copy-on-write semantics in a way to meet the pass-by-value semantics. A GC language is also free to stack allocate small reference types that it believes won't escape a particular part of the stack. I bring up these edge cases not to suggest complexity but to remind that profile-guided optimization is often the best strategy in any language because any good compiler, even a JIT compiler, is trying to optimize what it can.)
In JS, the breakdown is generally that your value types are string, number, boolean, and your reference types are object, array, and function. `const a = 12` is a static, stack allocated number. `const x = 'foo'` is a static, stack allocated string. It will get copied if you pass it anywhere. Though there's one more optimization here that most GC languages use (and goes all the way back to early Lisp) called "string interning". Strings are always treated as immutable and essentially copy-on-write. Common strings and strings passed to a large number of stack frames get "interned" to shared memory (sometimes the heap; sometimes even just reusing the memory of their first compiled instance in the compiled binary). But because of the copy-on-write and how easy it is to trigger, and often those copies start stack allocated, strings are still considered value types, even though with "interning" they sometimes exhibit reference-like behavior and are sort of the "border type".
Of things to look out for: `+` or `+=` where one of the sides is a string can be a huge memory allocator due to copying string bytes alone, and it's easy to anticipate where that happens.
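The classic mitigation is to collect pieces in an array and join once at the end, rather than growing a string with `+=` (though engines do optimize repeated concatenation with internal rope representations, so profile before assuming the naive version is actually your problem):

```javascript
// Repeated +=: each step conceptually copies the whole accumulated string.
function joinSlow(parts) {
  let s = '';
  for (const p of parts) s += p + ',';
  return s.slice(0, -1);
}

// push + join: one final allocation for the result string.
function joinFast(parts) {
  const buf = [];
  for (const p of parts) buf.push(p);
  return buf.join(',');
}
```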
On the reference type side `let x = {a: 5}; let y = x`, the `{a: 5}` part is an object and does allocate to the heap (probably, modulo again things like escape detection by the JIT compiler), but `x` and `y` themselves are stack allocated references. That `let y = x` is only a reference copy.
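The value-versus-reference distinction in one small sketch (variable names are just for illustration):

```javascript
// Value type: the number itself is copied on assignment.
let a = 12;
let b = a;
b += 1; // a is unaffected; it is still 12

// Reference type: only the reference is copied on assignment;
// both names point at the same heap object.
let x = { a: 5 };
let y = x;
y.a = 6; // x.a is now 6 as well
```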
> it's not like js where I'd have to be like "Okay, I can't write {a: 5} here because it would allocate" or something. I feel like that shouldn't be too contentious? Surely one must learn how to avoid much of javascript if they want to learn how to avoid allocations.
Generally, it's not about "avoiding" the easy language constructions because they allocate, it is balancing the trade-offs of when you want to allocate and how much.
Just like you might preallocate a vector before a tight loop, you might preallocate an array or an object, or even an object pool. (Build an array of objects, with a "free" counter, borrow them, mutate them, return them to the "free" section when done.)
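The pool described above might be sketched like this (a deliberately minimal free-list pool; a real one would handle growth, validation, and double-release):

```javascript
// A tiny fixed-size object pool: preallocate, borrow, mutate, return.
function makePool(size, factory) {
  const items = Array.from({ length: size }, factory);
  let free = size; // items[0..free) are available to borrow
  return {
    borrow() {
      if (free === 0) throw new Error('pool exhausted');
      return items[--free];
    },
    release(item) {
      items[free++] = item;
    },
  };
}

// Usage: reuse point objects in a tight loop instead of allocating new ones.
const pointPool = makePool(4, () => ({ x: 0, y: 0 }));
const p = pointPool.borrow();
p.x = 10;
pointPool.release(p);
```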
But some of that is trade-offs: preallocation is sometimes harder to read/reason with. On the other side, the "over-allocation" you are worried about might be caught entirely by the JIT's escape analysis and compiled out. For almost all languages it is best to let a profile or real data guide what to try to optimize (premature optimization is rarely a good idea), but especially for a GC language it can be crucial. Not because the GC language is more complicated or "magic" or "mysterious", but simply because a GC language is tuned for a lot of auto-optimizations that a manually-managed-memory language doesn't necessarily get "for free". The trade-off for references being much more opaque boxes than pointers is that a JIT compiler has more optimization options because it can just assume pointer math is off the table. It's between the JIT and the GC where an allocation lives, more times than not, and there are some simple optimization answers such as "the JIT stack allocated that because it doesn't escape this method". It shouldn't feel like a surprise when such things happen, when you get such benefits "for free". The JIT and GC are still maintaining the value-type or reference-type "semantics" at all times; those are just (intentionally) broad, easy "traits" with a lot of useful middle ground and a lot of cross-implementation commonality.
> stack allocation feels like a pretty obvious one, reasoning about mutability, access to pointers
A lot of the above should be a decent starting place for learning those tools. `let` versus `const` as maybe a remaining JS piece not explicitly dived into.
References are generally "pointer enough" for most work. The JS GC doesn't have a way to manually lock a reference to dereference it for pointer math today, but that doesn't mean it never will. Parts of WASM GC are applicable here, but mostly restricted to shared array buffers (blocks of bytes).
In other GC languages, C# has been exploring a space for GC-safe stack allocated pointers to blocks of memory that support (range checked) pointer-like math called Span&lt;T&gt; and Memory&lt;T&gt;. It's roughly analogous to Rust's borrowed slice mechanics (`&[T]`/`&mut [T]`), but subtly different as you would expect for existing in a larger GC environment. As that approach has become very successful in C# I am starting to expect variations of it in more GC languages in the next few years.
> control over locking, access to atomics, access to mutexes
For the most part JS is single threaded, stack data is copied (value types), and reference types simply can't be shared with another thread in the first place. So locks aren't important for most JS work and there's not much to control.
If you start to share memory buffers from JS to a Service/Web Worker or to a WASM process you may need to do more manual locks. The big family of tools for that is the Atomics global object: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
But a lot of that is new and rare in JS today.
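A minimal sketch of the `Atomics` API (this runs in Node as-is; in browsers `SharedArrayBuffer` requires a cross-origin-isolated context, and the buffer would normally be posted to a Worker so both sides share it):

```javascript
// One shared 32-bit slot. A SharedArrayBuffer can be transferred to a
// Worker; both sides then use Atomics for race-free access.
const shared = new SharedArrayBuffer(4);
const counter = new Int32Array(shared);

Atomics.store(counter, 0, 0); // atomic write
Atomics.add(counter, 0, 5);   // atomic read-modify-write
const seen = Atomics.load(counter, 0); // atomic read
```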
> the ability to `join` two futures
`Promise.all` and `Promise.any` are the two most common "standard library" combinators. `Promise.all` is the most like Rust's `join!`, though it short-circuits on the first rejection, which makes it closer to `try_join!`.
There are also libraries with even higher-level combinators.
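Both combinators in a small sketch (the `after` delay helper is made up for the example):

```javascript
// Hypothetical helper: fulfill with `value` after `ms` milliseconds.
const after = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function demo() {
  // Promise.all: wait for every input; rejects on the first rejection.
  const both = await Promise.all([after(10, 'a'), after(20, 'b')]);
  // Promise.any: the first fulfillment wins.
  const first = await Promise.any([after(30, 'slow'), after(5, 'fast')]);
  return { both, first };
}
```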
> manage their polling myself
Promises don't poll. JS lives in a browser-owned event loop. Superficially you are in a browser-provided "tokio"-like runtime at all times.
There are some "low-level" tricks you can pull, though in that the Promise abstraction is especially thin compared to Rust Futures. The entire "trait" that async/await syntax abstracts is just the "thenable pattern" in JS. All you need to make a new non-Promise Promise-like is create an object that supports `.then(callBack)` (optionally a second parameter for a catchCallback and/or a `.catch(callBack)`). Though the Promise constructor is also powerful enough you generally don't need to make your own thenable, just implement your logic in the closure you provide to the Promise constructor.
Similarly on the flipside if you need a more complex combinator than Promise.all, and the reason that some higher-level libraries also exist, you just have to build the right callbacks to `.then()` and coordinate what you need to.
It's generally recommended to stick with things like Promise.all, but low level tricks exist.
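The thenable pattern mentioned above, in its entirety (the object and names here are illustrative):

```javascript
// A minimal hand-rolled thenable: any object with a conforming .then()
// can be awaited, no Promise subclassing required.
const fortyTwoLater = {
  then(onFulfilled /*, onRejected */) {
    setTimeout(() => onFulfilled(42), 0);
  },
};

async function useThenable() {
  return await fortyTwoLater; // await "adopts" the thenable
}
```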
> I mean even if you say "I can do it", that's great, but how is it surprising?
I think what continues to surprise me is that it sometimes reads like a lack of curiosity for other languages and for the commonalities between languages. Any GC language is built on the same exact kind of building blocks as "lower level" languages. There is a learning curve involved in reasoning about a GC language, but I don't think it should seem like a steep one. The vocabulary has strong overlaps: value types and stack allocated; reference types and heap allocated; references and pointers. The intuitions of one often benefit the other ("this is a reference type, can I simplify what I need from it inside this loop to a value type or two to keep it stack allocated or would it make more sense to preallocate a pool of them?"). Just because you don't have access to the exact same kinds of low level tools doesn't mean that they don't exist or that you can't learn how to take what you would do with the low level tools and apply them in the higher level space. (Plus tools like C#'s Span<T> and Memory<T> work where the low level tools themselves are also starting to blur more together than ever before.)
It just takes a little bit of curiosity, I think, to ask that next question of "how does a GC language stack allocate?" and allowing that to lead you to more of the vocabulary. Hopefully, I've done an okay job in this post illustrating that.
But to be fair, besides the usual patterns like tree-shaking and DCE, "runtime performance" is really tricky to measure or optimize for.
We switched a CI pipeline from babel to SWC last year and got roughly 8x improvement. Tried oxc's transformer more recently on the same codebase and it shaved off another 30-40% on top of SWC. The wins compound when you have thousands of files and the GC pressure from all those AST nodes starts to matter.
It's blisteringly fast
We are programmers; we are supposed to like programming. These rust haters are intolerable.
C is fine but old
Compare Go (esbuild) to webpack (JS): it's easily over 100x faster.
For a dev, time matters, but it's relative: waiting 50 sec for a webpack build compared to 50 ms with a Go toolchain is life changing.
But for a dev waiting 50ms or 20ms does not matter. At all.
So the conclusion is JavaScript devs like hype, flooded into Rust, and built tooling for JS in Rust. They could have used any other compiled language and gotten near the same performance computer-time-wise, or exactly the same human-time-wise.
It absolutely does:
https://mail.python.org/pipermail/python-dev/2018-May/153296...
https://news.ycombinator.com/item?id=16978932
Anyway, you posted about speed, and then followed with a link to some Python-related thing. In Python, speed has never been a key tenet, at least when it comes to pure CPU-bound calculations. How much tooling is built in Python? All the modern Python tooling is mostly Rust-based too. So there's that.
I mean for a dev working in JS with JS built tooling the speed is not in milliseconds, but in seconds, even minutes.
I still think my point holds: having a build take in the 10s of seconds vs 50ms is very much good enough for development (the usual frontend save-and-refresh-browser cycle).
- You need to have a clean architecture, so starting "almost from scratch"
- Knowledge about performance (for Rust and for build tools in general) is necessary
- Enough reason to do so: lack of perf in the competition and users feeling friction
- Time and money (still have to pay bills, right?)
And we can always reach for Scala or F# if feeling creative and wanting to play with type systems.
Nonsense.
In other words, does it treat comments as syntactic units, or as something that can be ignored since they are not needed by the "next stage"?
The reason to find out what the comments are is of course to make it easy to remove them.
- how does it compare to biome?
- also, biome does all 3 (linting, formatting, and sorting), so why do you want 3 libraries to do the job biome does alone?
I hate to say it, but biome just works better for me. I found the ox stuff to do weird things to my code when it was in weird edge case states as I was writing it. I'd move something around partially correct, hit save to format it and then it would make everything weird. biome isn't perfect, but has fewer of those issues. I suspect that it is hard to even test for this because it is mostly unintended side effects.
ultracite makes it easy to try these projects out and switch between them.
But I guess it wouldn't be an apples-to-apples comparison because Bun can also run TypeScript directly.
In text form:
Bundling 10,000 React components (Linux x64, Hetzner)
This is a set of linting tools and a type stripper, a program that removes the type annotations from TypeScript to turn it into pure JavaScript (and turns JSX into document.whateverMakeElement calls). It still doesn't have anything to actually run the program.
Currently it uses .NET and NativeAOT, but support for the Rust backend/ecosystem is being added over the next couple of months. TypeScript for GPU kernels, soon. :)
I'd say my biggest concern is that the same engineers who use JS as their main language are usually not as adept with Rust and may experience difficulties maintaining and extending their toolchain, e.g. writing custom linting rules. But most engineers seem to be interested in learning so I haven't seen my concern materialize.
Even as someone who writes Rust professionally, I also wouldn't necessarily expect every Rust engineer to be super comfortable jumping into the codebase of the compiler or linter or whatever to be able to hack on it easily because there's a lot of domain knowledge in compilers and interpreters and language tooling, and most people won't end up needing experience with implementing them. Honestly, I'd be pretty strongly against a project I work on switching to a custom fork of a linting tool because a teammate decided they wanted to add extra rules for it or something, so I don't see it as a huge loss that it might end up being something people will need to spend personal time on if they want to explore.