In a world of AI coding it seems like we can create or copy almost anything. So after some denial I’m thinking let’s embrace that and bring back “view source”.
qip lets you write tiny WebAssembly modules in Zig or C and compose them together. The modules have a simple input -> output interface and cannot access anything else: no file system, no network, no env vars, not even the time. You chain modules together so the output of one becomes the input of another, e.g. there’s a CommonMark module that converts markdown to HTML. There’s a file-based router that lets you serve a website with these same modules.
I want these modules to be open and shared, so you can decide to have a `/view-source` page that lists all the wasm modules and all the source content (markdown, images, etc) and source code (zig, c). So you can choose to fork the ingredients of the qip website if you like: https://qip.dev/view-source
I chose wasm because it’s fast, runs anywhere (browser/server/native), and has a strong yet lightweight sandbox. I’m working on collaborative web hosting that I hope will bring back web 1.0 vibes.
Previously Anthropic subscribers got access to the latest AI, but it seems like there’s a League of Software forming that has special privileges. To make or maintain critical software, will you have to be inside the circle?
Who gates access to the circle? Anthropic or existing circle members or some other governance? If you are outside the circle will you be certain to die from software diseases?
Having been impressed by LLMs but not believing the AGI hype, I now see how having access to an information generator could be so powerful. With the right information you can hack other information systems. Without access to the best information you may not be able to protect your own system.
I think we have found the moat for AI. The question is are you inside or outside the castle walls?
They’ve been trying their hardest to find a moat for 5 years, and nothing seems to stick. At first it seemed like access to the model could be a moat, but then Llama and DeepSeek came out. Then it seemed like the hardware requirements could be a moat, but small local AI just kept getting more efficient. Now they’re trying to gatekeep access to the models again under the guise of security, but we’re probably T-minus two weeks before an equivalent model is released by someone else.
American AI desperately wants AI to intensify the wealth disparity, and thereby justify the wealth grab the rich have made over the last three decades, and AI is just not cooperating.
The allocation of each object still has overhead though, even if they all live side by side: you pay memory overhead for each value. A Uint8Array is tailor-made for an array of bytes, with only a constant overhead. Plus the garbage collector doesn’t even have to peer inside a Uint8Array instance.
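One visible consequence of that tailor-made representation: a Uint8Array element really is a single raw byte, so writes wrap modulo 256, whereas a plain array holds arbitrary JS values. A quick sketch:

```javascript
// A plain array stores each element as a full JS value,
// while a Uint8Array is one flat buffer of raw bytes plus a constant header.
const plain = [1, 2, 3];
plain[0] = 300;            // any JS value fits; nothing is clamped
console.log(plain[0]);     // 300

const bytes = new Uint8Array([1, 2, 3]);
bytes[0] = 300;            // stored as a raw byte: 300 mod 256
console.log(bytes[0]);     // 44

// The whole typed array is backed by a single ArrayBuffer:
console.log(bytes.buffer.byteLength); // 3
```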
The engine can optimize all those allocations out of existence so they never happen at all, so it's not a problem we'll be stuck with forever, just a temporary inconvenience.
If a generator is yielding values it doesn't expose step objects to its inner code. If a `for of` loop is consuming yielded values from that generator, step objects are not exposed directly to the looping code either.
So now when you have a `for of` loop consuming a generator, you have step objects that only the engine can ever see, and so the engine is free to optimize the allocations away.
The simplest way the engine could do it is to reuse the same step object over and over again, mutating `step.value` between each invocation of `next()`.
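The reuse trick can be sketched with a hand-written iterator (illustrative only; an engine would do this internally and invisibly, and the names here are made up):

```javascript
// An iterator that allocates ONE step object and mutates it on every
// next() call, instead of allocating a fresh { value, done } each time.
function countTo(n) {
  const step = { value: undefined, done: false }; // single reused object
  let i = 0;
  return {
    [Symbol.iterator]() { return this; },
    next() {
      if (i < n) {
        step.value = ++i;
        step.done = false;
      } else {
        step.value = undefined;
        step.done = true;
      }
      return step; // same object every time
    },
  };
}

let sum = 0;
for (const v of countTo(3)) sum += v; // for-of never keeps the step object
console.log(sum); // 6
```

This is only safe because the consumer never holds onto the step object, which is exactly the property an engine can rely on when neither side of a `for of` can observe it.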
WebAssembly is particularly attractive for agentic coding because prompting an LLM to write Zig or C is no harder than prompting it to write JavaScript. So you can get the authoring speed of a scripting language via LLMs, but performance close to native via wasm.
This is the approach I’m using for my open source project qip that lets you pipeline wasm modules together to process text, images & data: https://github.com/royalicing/qip
qip modules follow a really simple contract: there’s some input provided to the WebAssembly module, and there’s some output it produces. They can’t access fs/net/time. You can pipe in from your other CLIs though, e.g. from curl.
I have example modules for markdown-to-html, bmp-to-ico (great for favicons), ical events, a basic svg rasterizer, and a static site builder. You compose them together and then can run them on the command line, in the browser, or in the provided dev server. Because the module contract is so simple they’ll work on native too.
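The contract can be sketched in plain functions (this is only the shape of the idea, not qip’s actual API; the module names are hypothetical stand-ins):

```javascript
// Each "module" is a pure function from input to output, with no access to
// files, network, clocks, or environment. A pipeline is just composition.
const pipeline = (...modules) => (input) =>
  modules.reduce((data, mod) => mod(data), input);

// Two toy stand-ins for real wasm modules:
const trim = (text) => text.trim();
const emphasize = (text) => `<em>${text}</em>`;

const render = pipeline(trim, emphasize);
console.log(render("  hello  ")); // <em>hello</em>
```

Because each stage is a pure input -> output function, the same chain can run anywhere a wasm runtime exists.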
An advantage of running a coding agent in a VM is that to answer your question, it can install arbitrary software into the VM. (For example, running apt-get or using curl to install a specialized tool.) WebAssembly seems suitable for more specialized agents where you already know what software it will need?
> We said the runtime asks the OS for large chunks of memory. Those chunks are called arenas, and on most 64-bit systems each one is 64MB (4MB on Windows and 32-bit systems, 512KB on WebAssembly).
Incorrect. You ask the OS for pages. (Go does internally appear to manage its heap in “arenas”.) On WebAssembly the page size is 64KiB. On Windows 64-bit it’s 4KiB, Apple Silicon 16KiB, Linux x86_64 4KiB.
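The WebAssembly page size is easy to confirm from JavaScript, where a Memory is sized and grown in fixed 64KiB units:

```javascript
// WebAssembly.Memory is allocated in fixed 64 KiB (65536-byte) pages.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page
console.log(memory.buffer.byteLength); // 65536

memory.grow(2); // grow by two more pages
console.log(memory.buffer.byteLength); // 196608
```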
"Page" is OS terminology. "Arena" is Go terminology. An arena is made up of sequential pages. Go asks the OS for 64MB of sequential memory, and calls that 64MB chunk an arena; this is consistent with the text you quoted. It is not incorrect.
If people who wore Google Glass without respect for others were Glassholes, perhaps people who unleash their OpenClaw instance onto the internet without respect are Clawholes?
We have LLMs that generate code but that code should be untrusted: perhaps it overflows or tries to read ssh keys. If we aren’t reviewing code closely a major security hole could be on any line.
And since LLMs can generate code in whatever language, it makes sense for them to write fast imperative code like C or Zig. We don’t have to pick our favorite scripting language for the ergonomics any more.
So qip tries to solve both problems by running .wasm modules in a sandbox. You can pipe from other cli tools and you can chain multiple modules together. It has conventions for text, raw bytes, and image shaders, with more to come.
I am excited by the capabilities of probabilistic coding agents, but I want to combine them with deterministic code, and that’s what these qip modules are. They are pure functions with imperative guts.
WebAssembly Text Format (wat) is fine to use. You declare functions that run imperative code over primitive i32/i64/f32/f64 values, and write to a block of memory. Many algorithms are easy enough to port, and LLMs are pretty great at generating wat now.
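As a tiny taste of the format, here is the classic add function in wat (shown in the comment), hand-assembled to its binary encoding and instantiated from JavaScript:

```javascript
// wat source:
//   (module
//     (func (export "add") (param i32 i32) (result i32)
//       local.get 0
//       local.get 1
//       i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(exports.add(2, 3)); // 5
```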
I made Orb as a DSL over raw WebAssembly in Elixir. This gives you extra niceties like `|>` piping, macros so you can add language features like arenas or tuples, and reuse of code across modules (you can even publish to the Hex package manager). Because it manipulates the raw WebAssembly instructions, it lets you compile to kilobytes instead of megabytes.
I’m tinkering on the project over at: https://github.com/RoyalIcing/Orb
> Which brands do people trust?

Which people do people of power trust?
These are often at odds with each other. So many times engineers (people) prefer the tool that actually does the job, but the PMs (people of power) prefer shiny tools that are the "best practice" in the industry.
Example: Claude Code is great and I use it with Codex models, but people of power would rather use "Codex with ChatGPT Pro subscription" or "CC with Claude subscription" because those are what their colleagues have chosen.