Hacker News | syrusakbary's comments

Yeah, the strategy is literally the same

Wasmer (YC S19) | https://wasmer.io/ | Multiple Roles | Remote (EU) or Office (US) | Full-time

We are building the next generation of infrastructure for AI: not with Docker containers, but with a better container technology based on WebAssembly!

We are hiring for:

  * Rust Engineer (Remote, EU timezone)
  * Rust Distributed Engineer (Remote, EU timezone)
  * Developer Education Engineer (Office, San Francisco)
https://www.workatastartup.com/companies/wasmer


Thanks Yuri. Keep up the good work


Yes it can :)


Yes, this should be fully possible.

We actually believe Edge.js will be a great use case for LLM-generated code.


Yes, it could run on iOS (using JavaScriptCore, V8 in jitless mode, or QuickJS), although we don't have a prototype app yet.

It should probably take a few hours with AI to get a demo for it :)


Awesome! Are you planning on setting a license soon? I might have missed it but I don't see it on the GitHub repo.


Just set it to MIT :)


It’s not a dumb question at all.

And yes, it will allow running Node.js apps fully in the browser, in a way that's more compatible than any other alternative!

Stay tuned!


Do you have any specific test case that you would consider "very challenging" on the compatibility side? I'd be curious to check if BrowserPod can support that already.


>in a way that’s more compatible than any other alternative

I can see where that's going.

Awesome. I want to message you on LinkedIn but can't.


We are so deep in the weeds that sometimes it's hard for us to realize we may not be explaining things in the best terms.

What was the most confusing thing in the blog post? I'd like to polish it a bit more to make it clearer! Thanks a lot!


Hi Syrusakbary, I have to admit I still do not fully understand what this is.

First, I could not find usage examples on the edgejs.org page and the docs link points to the node docs, why?

If I head to github, there are some usage examples, but they confuse me more.

The first example, `$ edge server.js`, led me to think that this is a Node replacement that runs in a WebAssembly sandbox, so completely isolated. But why the need for `--safe` then? What's the difference between using it and not using it?

But the next examples confuse me more: `$ edge node myfile.js`, `$ edge npm install`, `$ edge pnpm run dev`

What is this doing? I thought edge was a Node replacement, interpreting and running JavaScript files, but now it's running executables (node, npm)... what is that? What happens when I run npm install... where does it install files? What's the difference between running `edge node myfile.js` and `edge myfile.js`?

Hope this helps.


> I could not find usage examples on the edgejs.org page and the docs link points to the node docs, why?

This was intentional, as a demonstration that Edge and Node should not diverge at all. You should be able to replace `node` with `edge` in your terminal and have things running, so that's why we point to the Node.js docs.

> But why the need of --safe then? What's the difference between using it and not using it?

Edge.js currently runs without a sandbox by default. The reason for this is twofold: native currently performs a bit better than the Wasm sandbox (about 10-20% better), and we wanted to polish the Wasm integration more before offering it as the default.

> $ edge pnpm run dev

> What is this doing?

This makes the `node` alias available for anything that you put after `edge`. It allows pnpm to use the edge `node` alias instead of your platform's node.

Things will be installed as usual, in your `node_modules` directory.


Hi HN!

I'm Syrus, from Wasmer. We built Edge.js in a few weeks, after several attempts at bringing Node.js to the Edge. We used AI and Codex heavily for this project; otherwise the timeline would have spanned a year or more.

The summary of this announcement is that Edge.js:

  * Runs using WebAssembly when in `--safe` mode
  * It's fully compatible with Node.js (passing all their spec tests for non-VM modules)
  * It has a pluggable JS engine architecture: it can work with V8, JavaScriptCore, SpiderMonkey, QuickJS, Hermes, etc.

Super happy to answer any questions you may have!


> * Runs using WebAssembly when in `--safe` mode

Why is safe mode opt-in?


noob question, but how can you open a port like localhost:3000, when ported to Wasm, in the browser?

I think this would be a cool demo for you to show; at least in my mind it might be a little mind-blowing. Maybe add a DB too?

I know there are very lightweight Wasm DBs available, so maybe that's a plus to consider.


Node does not run in a browser?


Yet... stay tuned!


Just wanted to chime in to say this is really cool. I dreamed of building something like this for the Extism ecosystem but it was a huge lift to unlock all the pieces. This looks like lots of innovation all the way down the stack. Kudos!


Thanks Ben! Took us a bit to figure out the best architecture for it, but once it became clear then it was just a matter of implementing the missing bits.

I think the fact that WASIX is much more mature now has helped speed up development quite a bit!


Maybe I’m just dense, but it says the fs module is fully supported, so what happens when I try to read a file from disk if the app is fully sandboxed?


Only the current working directory will be exposed/mounted to the runtime (we do this to facilitate the DX when running local files without requiring the user to add extra flags).

As a fun exercise, you can try reading `process.cwd()` from edge in `--safe` mode and without it.


but what if I want to expose / mount more files in the sandbox?

need docs


Actually agree with you here. It would be a good idea to add docs for the CLI and the WebAssembly sandboxing.


What's the Next.js compatibility like?


Edge.js is fully compatible with Next.js


Fully disagree with this take. Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.

As a side note, the OpenJS executive director mentioned it's OK to use AI assistance on Node.js contributions:

  I checked with legal and the foundation is fine with the DCO on AI-assisted contributions. We’ll work on getting this documented.

[1]: https://github.com/nodejs/node/pull/61478#issuecomment-40772...


I appreciate hearing your point of view on this. In my opinion the future of Open Source and AI assisted coding is a much bigger issue, and different people have different levels of confidence in both positive and negative outcomes of LLM impact on our industry.

It is great to have a legal perspective on the compliance of LLM-generated code with DCO terms, and I feel safer knowing that at least it doesn't expose Node.js to legal risk. However, it doesn't address the well-known unresolved ethical concerns over the sourcing of the code produced by LLM tooling.


AI coding is great, but iteration speed is absolutely not a desirable trait for a runtime. Stability is everything.

Speed-code all your SaaS apps, but slow iteration speeds are better for a runtime because once you add something, you can basically never remove it. You can't iterate. You get literally one shot, and if you add an awkward or trappy API, everyone is now stuck with it forever. And what if this "must have" feature turns out to be kind of a dud, because everyone converged on a much more elegant solution a few years later? Congratulations, we now have to maintain this legacy feature forever and everyone has to migrate their codebase to some new solution.

Much better to let dependencies and competing platforms like bun or deno do all the innovating. Once everyone has tried and refined all the different ways of solving this particular problem, and all the kinks have been worked out, and all the different ways to structure the API have been tried, you can take just the best of the best ideas and add it into the runtime. It was late, but because of that it will be stable and not a train wreck.

But I know what you're thinking. "You can't do that. Just look at what happens to platforms that iterate slowly, like C or C++ or Java. They're toast." Oh wait, never mind, they're among the most popular platforms out there.


Since when did we accept that we can't go fast and offer stability at the same time?

Time is highly correlated with expertise. When you don't have expertise, you may go fast at the expense of stability, because you lack the experience to make the good decisions that actually save time. This doesn't hold true for projects where you rely on experts, good processes, and tight timelines (e.g. the Apollo program).


IME there's a reason it's "move fast and break things" and not "move fast and don't break anything": if the second were generally possible, we wouldn't even need this little aphorism.

And again, I'm not making a claim that the slow and steady tradeoff is best for all situations. Just that it is a great tradeoff for foundational platforms like a runtime. On a platform like postgresql or the JVM, the time from initial proposal to being released as a stable feature is generally years, and this pace I think has served those platforms well.

But I'm open to updating my priors. Do you think there are foundational platforms out there that iterate quickly and do a good job of it?


it’s a well-known truism that you can have it cheap, correct, or fast.

but you can only have two of them at the same time.

and we’re talking about FOSS here, so cheap kinda has to be one of them.


Well, with the help of AI now you can have Fast, Affordable, and Correct.


"Correct" usually means "exactly matches the specification", and these systems do not do that, based on what's been indicated in a bunch of articles, HN discussions, etc. documenting people's collective experiences. They often don't one-shot tasks; they require hand-holding, oftentimes to a significant degree, especially if you're not working on a CRUD webapp with a lot of boilerplate, which heavily skews the training data. LLMs return the most likely sequence of individual elements of code, not the most correct code.

So, it's fast if you get lucky and are able to one-shot it, affordable-ish if you get lucky and are able to one-shot it and correct if you get lucky.

And that tends to happen if you're working on a specific set of codebases which are close to the most commonly occurring codebases in the training data set -- i.e. CRUD webapps / heavy boilerplate usage. Most FOSS projects probably don't fit in that camp (I have no data to support this, it's just my gut experience).


Allowing AI contributions results in lower-quality contributions and lets wild things come in and disrupt the project, making it an unreliable dependency. We have seen big tech experience constant outages due to AI contributions as is...


Your comment is why advertisers say that you should repeat your core call to action at least a few times to make it stick.

You’ve read people saying the same thing hundreds of times and have somehow taken that as meaning that it’s credible.

Neither you nor I nor anyone else here knows what the “effects” are, because this is brand new tech, and it’s constantly changing. Yet you’re speaking with absolute confidence.

“Big tech” has downtime all the time, and LLMs did not change that fact. The only difference is that the peanut gallery that is already worked up about AI for philosophical / cultural reasons is suddenly ready to blame AI for every issue under the sun.

You think that you’re making a technical argument, but you’re just repeating the same talking points I see teenagers regurgitating on TikTok. There’s nothing intelligent or credible about it.


My dude, you're making the classic mistake of assuming that because you don't have any first-hand knowledge of problems, other people are equally ignorant.

Don't slap someone else down because you don't know something.


> Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.

It's not an AI issue. Node.js itself is a lot of legacy code, and many projects depend on that code. When Deno and Bun were in early development, AI wasn't involved.

Yes, you can speed up the development a bit but it will never reach the quality of newer runtimes.

It's like comparing C to C++. Those languages are from different eras (relatively to each other).


> Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.

If and when there is evidence that AI is actually increasing the speed of improvement (and not just churn), it would make sense to permit it. Unless and until such evidence emerges, the risks greatly outweigh the benefits, at least for a foundational codebase like this.


> Not allowing AI assistance on PRs will likely decimate the project in the future,

I can't help but wonder if this matter could result in an io.js-like fork, splitting Node into two safe-but-slow-moving and AI-all-the-things worlds. It would be historically interesting as the GP poster was, I seem to recall, the initial creator of the io.js fork.


> Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.

That sort of statement might also be sarcasm in another context. I personally use AI a lot, but I also recognize that a lot of projects out there are suffering from low-quality slop pull requests, from devs who kind of sign out and don't care much about the actual code as long as it appears to be running, and from most LLMs struggling a lot with longer-term maintenance if not carefully managed. So I guess it depends a lot on how AI is used and how much ideological opposition to that there is. In a really testable codebase it could actually work out pretty well, though.

