Our exe.dev web UI still runs on AWS. We also have a few users left on our VM hosts there, from when we launched in December and were still considering building on AWS. Now almost all customer VMs are on other bare-metal providers or on machines we are racking ourselves. We built our own GLB with the help of another vendor's anycast network; you can see it in action if you try any of the exe.xyz names generated for user VMs.
We would move exe.dev too, but a few compliance-sensitive customers go through it, so we need to get the compliance story right on our own hardware before we can. It is a little annoying being tied to AWS just for that, but very little of our traffic goes through them, so in practice it works.
I need to fix our transfer pricing. (In fact I'm going to go look at it now.) I set that number when we launched in December, when we were still considering building on top of AWS, so it was a conservative limit based on what wouldn't break the bank there. Now that we are doing our own thing, we can be far more reasonable.
Almost every VC rejected us when we went to raise seed funding for Tailscale; we knew none of them. Friends of friends of acquaintances got us meetings. Fundraising is very possible for you if you are committed to building a business. The most important thing is to not treat fundraising as the goal; it is just a tool for building a business. (And some businesses don't need VC funding to work. Some do.)
The biggest challenge is personal: do you want to build a business or do you want to work with cool tech? Sometimes those goals are aligned, but usually they are not. Threading the needle and doing both is difficult, and you always have to prioritize the business because you have to make payroll.
Author here. Most of our infra is custom, the VMM is based on cloud-hypervisor (a project spiritually similar to Firecracker). We have a lot of work to do, including on the VMM, but right now there is more value for users if we spend our time on the VM management layer and GLB.
Your piece really resonated with me. I've found that once the friction of writing code went down, the time was immediately replaced with “setting up cloud abstractions”.
I’ve been using things like onrender.com for hosting projects, and once the initial pain of setting up was complete, I found shared infra like logs/metrics somewhat useful. Do you imagine building these in exe.dev? Or does doing so move into “more confusing abstractions” territory?
Based on this and recent product releases, Anthropic seems keen on building a closed ecosystem around their excellent model. That is their business choice, I suspect it will work well. But I cannot say I am particularly excited to have my entire development stack owned by one company.
As a non-American, I love what Chinese companies are doing. The progress they are showing and the fact that they are sharing the weights of their models is great. I can't wait for the day when companies that "have no moat", like A., Cursor, or even OpenAI, are left with a bunch of float matrices and hardware.
I understand people from the US will have an anti-Chinese reaction, but for those of us in the "third world" who can use both techs, the openness is always good.
We are not running out of IPv4 space because NAT works. The price of IPv4 addresses has been dropping for the last year.
I know this because I just bought another /22 for exe.dev for the exact thing described in this blog post: to get our business customers another 1012 VMs.
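The raw arithmetic on a /22 is easy to check with the standard library. (The 1012 figure above suggests a few addresses are held back per subnet; the carve-up into /24s below is my assumption, and the prefix is a placeholder, not a real allocation.)

```python
import ipaddress

# Placeholder prefix for illustration only.
block = ipaddress.ip_network("198.51.100.0/22")
print(block.num_addresses)  # 1024 raw addresses in a /22

# Hypothetical carve-up: four /24s, each reserving three
# addresses (network, gateway, broadcast).
subnets = list(block.subnets(new_prefix=24))
usable = sum(s.num_addresses - 3 for s in subnets)
print(usable)  # 1024 - 4*3 = 1012
```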
Your NAT traversal article is amazing, but sadly the long tail (ha) means any production quality solution has to have relays, which is a huge complexity jump for people who just want to run some p2p app on their laptop.
And it's not clear it will ever be better than it is now with CGNAT on the rise.
IPv6 does not work on the only ISP in my neighborhood that provides gigabit links. I will not build a product I cannot use.
Even where IPv6 is rolled out, it is only really exercised on consumer links via Happy Eyeballs. Links between DCs are entirely IPv4 even when dual-stacked. We just discovered 20 of our machines in an LAX DC have broken IPv6 (we noticed because we tried to use Tailscale to move data to them, and it defaults to Happy Eyeballs). Apparently the upstream switch configuration has been broken for months for hundreds of machines, and we are the first to notice.
I am a big believer in: first make it work. On the internet today, you first make it work with IPv4. Then you have the luxury of playing with IPv6.
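For context, the Happy Eyeballs behavior mentioned above amounts to racing both address families so that one broken family only costs a single attempt's timeout. One piece of that, the RFC 8305-style interleaving of candidate addresses by family, can be sketched like this (illustrative only, not a full connection racer):

```python
from itertools import zip_longest

def interleave(v6: list[str], v4: list[str]) -> list[str]:
    """Alternate IPv6 and IPv4 candidates, IPv6 first."""
    ordered = []
    for a, b in zip_longest(v6, v4):
        if a is not None:
            ordered.append(a)
        if b is not None:
            ordered.append(b)
    return ordered

print(interleave(["2001:db8::1", "2001:db8::2"], ["192.0.2.1"]))
# ['2001:db8::1', '192.0.2.1', '2001:db8::2']
```

A client then attempts connections in this order with a short stagger, so a dead IPv6 path (like the LAX machines above) degrades to one extra delay rather than a hang.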
Whenever I see a comment that says "if you don't do the thing in the most efficient way possible, someone else will steal your lunch", I think that people vastly overestimate the likelihood that this will actually happen.
It's similar to "open source is the most secure because it has the most eyeballs on it": in reality, security bugs exist for years with no one noticing, because people vastly overestimate how many developers will actually spend their time analyzing any given piece of open source software.
Sure, bugs are more likely to be caught in open source and it's more likely someone will take your market share with a more efficient and competitively priced product, but you're overblowing the likelihood of both by a large margin.
> "if you don't do the thing in the most efficient way possible, someone else will steal your lunch"
Well, you're leaving behind a serious pain point for your users AND leaving a clearly more compute- and money-efficient way to achieve the objective on the table.
It’s literally giving your eventual competitors (because there will be competitors, eventually) a competitive advantage.
Then sure, the market is very wide, but… just look at Stack Overflow vs ChatGPT. As soon as a better alternative came on the market, Stack Overflow faded into irrelevance within months.
Have you looked at running each service through a Cloudflare tunnel? (HE offers something similar too.)
(PS: I use exe.dev quite a lot whenever I want to work on a project where basic scripting doesn't cut it and I want a full environment. Thanks for making this product; I've been using it since day one and have recommended it warmly to people. :>)
You can get this effect today by installing Tailscale on your exe.dev VM. :)
The reason we put so much effort into exposing these publicly is to enable sharing with a heterogeneous team without imposing a client-agent requirement. The web interface should be easy to make public and easy to share with friends via a Google Docs-style link, and SSH access should be easy to share with teammates.
That said, nothing wrong with installing tunneling software on the VM, I do it!
Nice to see this work! I experimented with this for exe.dev before we launched. The VM itself worked really well, but it took a lot of setup to get the networking functioning. And in the end, our target use cases don't mind a ~1-second startup time, which meant doing a clean systemd start each time was easier.
That said, I have seen several use cases where people want a VM for something minimal, like a Python interpreter, and this is absolutely the sort of approach they should be using. Lots of promise here, excited to see how far you can push it!
I’ve been a big fan of “what’s the thinnest this could be” interpretations of sandboxes. This is a great example of that. On the other end of the spectrum there’s just-bash from the Vercel folks.
Wouldn't you need to restart a process anyways if there's a security update? Sounds like you'd just need to kill all the VMs, start up the base again, and fork (but what do I know).
That is very true. We use copy on write for exe.dev base images right now, and are accumulating a lot of storage because of version drift.
We believe the fix here is to mount the base image as a read-only block device, then mount a read-write block device overlay. We have not rolled it out yet because there are some edge cases we are working through, and we convinced ourselves we could rework images after the fact onto a base image.
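A toy model of those overlay semantics, with disk blocks as dict entries (purely illustrative; not exe.dev's actual storage code):

```python
# Read-only base + read-write overlay: reads fall through to the
# shared base unless the VM has written that block; writes never
# touch the base, so many VMs can share one pristine image.

class OverlayDisk:
    def __init__(self, base: dict[int, bytes]):
        self.base = base    # shared, never written
        self.overlay = {}   # this VM's writes land here

    def read(self, idx: int) -> bytes:
        # Overlay wins; otherwise fall through to the base image.
        return self.overlay.get(idx, self.base.get(idx, b"\x00"))

    def write(self, idx: int, data: bytes) -> None:
        self.overlay[idx] = data  # base stays pristine

base = {0: b"boot", 1: b"rootfs"}
vm_a = OverlayDisk(base)
vm_b = OverlayDisk(base)      # a fresh VM is just a new empty overlay
vm_a.write(1, b"patched")
print(vm_a.read(1))  # b'patched'
print(vm_b.read(1))  # b'rootfs'  (unaffected by vm_a's write)
```

This is also why a security update to the base is cheap in principle: swap the read-only device underneath, and only blocks the VM actually rewrote diverge from it.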
Right now our big win from copy-on-write is cloning VMs. You can `ssh exe.dev cp curvm newvm` in about a second to split your computer into a new one. It enables a lot of great workflows.