For all the other comments, parent is probably talking about forward proxies and to their point many forward/enterprise proxies have configurations which cause websockets to break and it is a pain to debug this if you have many enterprise customers.
Echoing this. At $DAYJOB some 5-10% of customers will fail to initiate a websocket connection, even over wss:// despite plain HTTPS requests working fine. This is a client-side issue with whatever outdated HTTP CONNECT implementation the enterprise has.
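To make that failure mode concrete: a successful WebSocket upgrade hinges on a few headers that broken CONNECT/proxy implementations tend to strip or rewrite, even when ordinary HTTPS passes through fine. A minimal sketch (Python, stdlib only; the helper names are mine, not from any library) of validating the server side of the RFC 6455 handshake:

```python
import base64
import hashlib

# RFC 6455 magic GUID appended to the client's Sec-WebSocket-Key.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def expected_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must return."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

def handshake_ok(client_key: str, response_headers: dict) -> bool:
    """Check whether a 101 response actually completes the upgrade.

    A proxy that drops or rewrites the Upgrade/Connection headers, or
    mangles Sec-WebSocket-Accept, fails this check even though plain
    HTTPS requests through the same proxy work fine.
    """
    headers = {k.lower(): v for k, v in response_headers.items()}
    return (
        headers.get("upgrade", "").lower() == "websocket"
        and "upgrade" in headers.get("connection", "").lower()
        and headers.get("sec-websocket-accept") == expected_accept(client_key)
    )
```

Logging the result of a check like this on connection failure is one way to prove to an enterprise customer that the breakage is in their proxy, not your service.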
Feel like there is a larger potential customer base there, but it also seems like they would lose the edge they built by owning the full rack. (I.e. integrating with customer ToR switches and network fabric is a nightmare.)
There needs to be a dev kit of sorts. I’d be happy to recommend Oxide to some of our customers but not before I try it first. And I’m not buying a whole data centre just to play around?
It's still early days for us obviously, but we have some of our equipment in a cage in a regular colocation facility, on the Internet. We're generally able to provide access to systems there so that folks can kick the tyres as part of a pre-sales engagement. If you or your customers are interested, you're always welcome to reach out to our sales folks and have a chat!
Maybe they could rent a rack with a somewhat direct access on a per-month basis or something so we can POC around, but that could turn them into a cloud company which is probably not what they want.
Yeah definitely. I used to work for an AI hardware company that only sold $150k systems to "POA" customers. I think part of the reason they didn't do very well is it was completely inaccessible to normal people.
And to cleanse the palate, DSRH's epic "Sun Deskset == Roy Lichtenstein Painting on your Bedroom Wall" flame (David SH Rosenthal was one of the original authors of NeWS, with James Gosling, and also wrote the X11 ICCCM):
I don't really know. I was just trying to come up with some obscure references. Solaris on desktop wasn't crazy enough.
I think a modern take on a NeWS-style system would use a WebAssembly-style runtime with capability-based access, rather than PostScript with no security. Basically a modern browser with Canvas/WebGL and without a lot of the other stuff a browser does.
Would have been an interesting alternative to the Wayland approach.
Kind of sad that Scott McNealy didn't have the balls to open it up. Having some real competition to X in the 90s would have been cool. Especially if Sun had pushed it at least somewhat.
A 6U product kind of like the blade server enclosures could be interesting too. That said, I haven't worked in a datacenter for 14 years, so don't listen to me too seriously...
I feel like the next generation of this type of company is smaller consultancies that have awesome developers who build custom tooling on the side. But the main revenue driver is consultancy.
Also it really feels like all the air has been let out of the docker/kubernetes/cloud-native balloon that was so popular in the late 2010s.
> smaller consultancies that have awesome developers who build custom tooling on the side
I've worked at a couple consultancies and they were always chasing after that recurring product revenue. I grew to believe that it isn't possible under that business model.
When you are running a consultancy you are in the business of marking up developer hours: find a client to sign a contract for $150 / hour and hire a consultant who will do the job for $100k/year salary. Then convince them to work as many hours as possible for their fixed salary, plus the carrot of a bonus payout every once in a while if everyone bills lots of hours.
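The arithmetic behind that markup model is worth spelling out. A back-of-envelope sketch (billable hours and the overhead percentage are my illustrative assumptions, not figures from the comment):

```python
# Back-of-envelope math for the hour-markup consultancy model.
# BILLABLE_HOURS and OVERHEAD_PCT are illustrative assumptions.
BILL_RATE = 150            # $/hour billed to the client
SALARY = 100_000           # $/year paid to the consultant
BILLABLE_HOURS = 1_800     # assumed hours billed per year
OVERHEAD_PCT = 40          # assumed benefits/taxes/overhead on top of salary

revenue = BILL_RATE * BILLABLE_HOURS             # $270,000
cost = SALARY + SALARY * OVERHEAD_PCT // 100     # $140,000 fully burdened
margin = revenue - cost                          # $130,000 gross per head
print(margin)  # every unbilled hour cuts straight into this number
```

Under these assumptions each consultant throws off roughly $130k/year of gross margin, which is exactly why every hour spent on an internal product reads as lost revenue.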
Having that developer spend any time working on the company's product causes all sorts of problems. The most immediate is the loss of revenue. But now this employee might also see working on the product as cutting into their bonus, since they are billing fewer hours. Everyone wants some of the upside if the side product generates revenue, but how do you split it between people who worked directly on the product and people who worked on paying client jobs to generate the revenue that let the others work on the product? It ends up causing a rift.
The other thing I've seen while working at small and even medium sized consultancies is that they end up dependent on one large customer who calls all the shots and takes up all the available time, or all "extra" time not being billed is used working on sales for the next contract. Either way there doesn't end up being much capacity to work on cool tooling.
> I've worked at a couple consultancies and they were always chasing after that recurring product revenue. I grew to believe that it isn't possible under that business model.
The only time I've seen this work (and I have seen it work multiple times) is with managed hosting.
So if you are an expert at developing web applications or solutions using <product X> you can offer a managed hosting solution to your clients where you host the web app or the <product X> solution. Not all of them will take you up on it, but some will. Those that do will pay you a monthly fee. This isn't free (you now have to carry a pager) but is recurring.
Building your own SaaS/other unrelated product? That's a rock I've seen several consulting ships crash into (to pick a metaphor). Here's one that some of my friends tried to build in the late 2000s that I wrote about: https://www.mooreds.com/wordpress/archives/506
And vice versa, I know a lot of product companies that have a really shitty professional services arm that cares more about shoveling more product down a customer's throat than actually helping them from a neutral PoV.
Since they raised $61 million (source https://techcrunch.com/2024/02/05/cloud-native-container-man...) they probably had a much larger team than $10 million could support. And having raised the VC money, I guess downsizing to a team the business can sustain is not an option.
According to LinkedIn they were 50-200 employees. After excluding all the other costs of running a company, that is definitely not enough to cover the fully-burdened payroll of even a 50-person organization that's probably predominantly Engineering types, assuming most people are US-based (or that they don't do "location-based" pay and indexed off a more expensive location).
"Air has been let out" in the sense that it's moved past the trough of disillusionment and is on its way to the plateau of productivity[1].
There are still a few people in the trough of disillusionment yelling about how a $5/month VPS is all you'll ever need, or a $50/month bare metal colocated server is all you'll ever need. But for the most part the people who benefit from cloud services & containerization will use it when they need to, avoid it when they don't. It'll continue to be a productive tool when used properly, with vendors supporting mature products using it that solve people's problems.
Like with big data, I think most orgs (>80%) just don't need it due to the extra complexity. They might as well just use something managed like Fargate and go back to building their own products.
IMO that's an overly reductive take, which is very common for tools in this space, but (I think) needs to be addressed so people stop repeating it.
It reduces what "most orgs" are (as if 80% of businesses are the same, or solve the same problems, or have the same challenges, or use the same approaches, or have the same customers, have the same staff or expertise, budgets, timelines, etc, etc, etc.). Clearly there is no such thing as "most orgs", as there are many different kinds of businesses and how they approach solving problems varies from business to business. Their use of technology to solve problems also can't be easily reduced; the way the business chooses to solve problems doesn't necessarily dictate what technology they should use.
It correlates the need for complexity with whether an organization is in some 20% minority of organizations, as if only a minority of orgs should or shouldn't use a complex tool.
It reduces a given tool down to "complex or not", as if complexity is the only consideration of whether to use a tool or not. There may be many different reasons to use a tool regardless of whether it's complex.
It assumes that a given tool has some inherent complexity that isn't comparable to other tools. Other tools might have less inherent complexity, but their lack of complexity may then create new problems that have to be solved, which just moves the complexity from the tool to a bunch of other places.
Overall, it correlates the way you solve problems, with how complex a tool is, with whether your organization is of one of two large generic groups. This is such a sweeping conclusion that it would be impossible to prove or demonstrate.
Based on your comment about them "using Fargate" instead, I'm assuming what you're actually saying is you think people should be using a managed product which uses containerization [and possibly k8s], rather than managing a complex technology themselves. I agree. But that doesn't mean we can generalize about who should be using what and when.
I wasn't writing an essay to be critiqued. It's perfectly reasonable to assume 80% of orgs don't have the scale, core competencies, or justifiable need to be managing container clusters themselves.
Also, no need to assume. I specifically said "use something managed".
IMO "use something managed" gets reduced to "we shan't run Kubernetes on-premises" which ends up meaning "we won't learn anything about failure modes until it's too late to think about mitigating them"
Which might be in line with what you said about
> 80% of orgs don't have the scale, core competencies or justifiable need to be managing container clusters themselves.
But it would also at least have some potential to be solved, much more cost-effectively, or at least grown past, if they would just spend some energy deploying Kubernetes internally; even if they can't or won't afford an entire team dedicated to doing only that (and even if they commit to using only managed services for production anywhere and everywhere).
In my experience, the way some places reflexively avoid it like a trap to be stayed out of winds up being a bit of a self-fulfilling prophecy: "we're not doing Kubernetes." I empathize with the person you triggered; even if we're now up to two walls of text from a simple comment, I feel triggered too.
This is basically a build vs buy discussion. Businesses have generally concluded they should only build things that give them a strategic advantage and buy other services to maintain focus on their sources of competitive advantage. Eg in the case of k8s, it's not just spinning it up, it's securing it, patching, monitoring, etc. It's for this reason the majority of orgs shouldn't run it themselves.
However some balance is needed. Orgs may want to do exploration since it may not be obvious where competitive advantage can come from, or like you say perhaps hybrid makes sense, using it only in non prod.
I agree that it makes more sense to buy your Kubernetes on an organizational basis because one should not reinvent the wheel, and taking advantage of a commoditized service is only possible if you work with a competent broker.
However, I am wary of vendors' capacity to take advantage once you come to depend on them, even when their intentions are good and all ideals are aligned. The sweet spot, to me, is being able to run a limited Kubernetes setup yourself in low-stakes contexts, where you can depend on it because you know how it works well enough to administer it in a pinch, while using the managed broker everywhere it matters, so that in a pinch you're also never the bottleneck to solving a problem.
I don't want to pay money to a broker every time I spin up a new experiment for the duration of the experiment =/= I don't want to perform experiments.
That's where I see the disconnect that "Leadership" may fail to understand. You can provide a service at low marginal cost to take some of the load off your people, and that might also have the effect of stopping any experiments that fall beneath a certain threshold as "not worth the cost" - all because we settled on getting something for cheap that should have been free.
Then again, dodging all those diversions might have been a part of the strategy...
> Also it really feels like all the air has been let out of the docker/kubernetes/cloud-native balloon that was so popular in the late 2010s.
Not really, the space has simply grown faster than these companies could keep up with and were left behind.
I can code up a CI/CD pipeline that does per-PR, namespace-isolated deploys of an app stack on EKS using GitHub Actions in well under a week. With docker compose for local testing. That wasn't the case 5 years ago, but it is now. Why would I want to be locked into Weaveworks?
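For what it's worth, a hypothetical sketch of what such a per-PR preview pipeline can look like (the workflow, cluster, and chart names here are placeholders, not a real setup):

```yaml
# Hypothetical per-PR preview deploy to EKS from GitHub Actions.
name: preview
on:
  pull_request:

jobs:
  deploy-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure cluster access
        run: aws eks update-kubeconfig --name my-cluster --region us-east-1
      - name: Deploy into a PR-scoped namespace
        run: |
          NS="pr-${{ github.event.number }}"
          kubectl create namespace "$NS" --dry-run=client -o yaml | kubectl apply -f -
          helm upgrade --install my-app ./chart \
            --namespace "$NS" \
            --set image.tag="${{ github.sha }}"
```

A matching job on `pull_request: [closed]` that deletes the namespace tears the whole preview down in one command, which is most of what the managed products were selling.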
Jenkins still does stuff that you can't do with GH Actions. Actions ate Travis / TeamCity / CircleCI, all the "more polished Jenkins for the 80% use case" products.
Based on the last time I looked: good handling of dependencies between builds (e.g. the ability to do an "edge build" where for any change in a given project, you check whether that will break your other projects when they upgrade to depend on that), advanced scheduling, plugins that integrate all sorts of random tools into your build views.
Your GitHub action can trigger a helm chart, or series thereof, or other infra tools. Declarative specifications, triggered procedurally with the context of the branch’s latest build. We use this pattern quite extensively for preview app workflows.
As of a year ago this is possible in a fully declarative way with Flux 2, but there’s a lot more moving parts and security footguns - and the idea that the maintenance of this project has lost one of its primary sponsors is worrying at best.
Yeah, this is nice if you have large teams and repeatable projects. Smaller companies have much more ad-hoc requests. I stood up an entirely new type of project end-to-end, from a docker compose file into our cluster. Re-used a lot of the code base, but it was still a bit of work. Much less than it used to be, though.
I have this installed but have never done more than a quick test of it, but it might be a good tool for you. It'll record all of your AWS console actions and output them as Terraform, CloudFormation, CDK, etc.
That'd give you a repeatable deployment for disaster recovery without the toil of writing that part yourself. Having to click through every checkbox in the console and IAM perms and blahblah under fire is rough.
We looked at weaveworks and its competitor both as a product and an investment (mid 6 figure usage). Our big issue was that we had a lot of smaller teams doing different things and not one or two featured items raking in the majority of our revenue.
These solutions work if you have a bunch of snowflake workloads by design (or bad design).
> a bunch of snowflake workloads by design (or bad design).
That's a really interesting characterization of WGE, and I can't say I disagree much (my personal opinion as an ex-Wyvern/OSS Engineer DX @ weaveworks)
> Also it really feels like all the air has been let out of the docker/kubernetes/cloud-native balloon that was so popular in the late 2010s.
Kubernetes is just boring now. It's stable, and the people who need to know it probably know it. I started working on and contributing to k8s in 2015, back in version 1.1. Seven years of the same technology. I haven't even used it in 2-3 years (1.18), and I know I can hop over to it and do exactly what I used to do, with some CRD flair.
All of the contributors should be proud of what they've built, that's the goal in the end, stability to where it's an afterthought.
"Smaller consultancies" are actually really hard. Especially if you want to deal with larger companies with the type of $$ to pay for consulting. Instead of doing the work you love you end up in procurement and payment hell.
Small consultancies also tend to fall into the trap of having one client (often their first client) that they utterly depend on the money from... but who doesn't depend on them to be able to survive. That customer almost always knows the relationship is unbalanced in their favor (sometimes they went into the relationship specifically because they knew it would be that way) and they will run you ragged with unreasonable requests, burning out your staff and ruining your relationships with other clients because you have to keep them happy so they keep writing checks (and then you're even more dependent on them, as a rancid little bonus).
The only way out is to gut your way through it until you grow enough to push back without risking your existence; or to detect that things are going that way early and fire them as a customer before it ever gets to that point... or to keep burning out staff till you can't find fresh faces, then close up shop.
Back in the day there was a smaller consultancy with awesome developers called LShift (mentioned here https://en.wikipedia.org/wiki/East_London_Tech_City). They worked out that an open source messaging thing would be useful, and created RabbitMQ (there were details about who, how it was funded internally, etc). That got sold to VMware, and a bunch of people went with it, but LShift went on as before, happy, but always looking for another Rabbit. Didn't find one, and was acquihired in the end.
Meanwhile, some of the Rabbit people formed Weave, looking for the killer business around the early container ecosystem (https://www.weave.works/oss/net/ was interesting, eksctl, flux, CNCF, lots of good things). But I guess they took a bite of the VC apple, and sustainable technical contributions were no longer the goal.
I've huge respect for everyone I knew from Weave. Great people all. Best wishes and I know you'll land on your feet.
Being VC-funded and looking for growth opportunities is far different from building a long-term sustainable business. The carrot in the VC model pulls business decisions in very different directions, and you set the company up differently to chase them.
Are there opportunities for VC funded with growth expectations in this kind of business?
This is what we are doing with my company. We build out tooling that benefits everyone and open source most of it, but our bread and butter is consulting. Our consulting leads us to build more tooling to make our jobs easier, which leads to more effectively delivering our consulting.
I think this is the model most of the big shops like IBM and Oracle also use. Infrastructure tooling is not usually a core competency for most companies, and they want others to do it for them.
> I feel like the next generation of this type of company is smaller consultancies that have awesome developers who build custom tooling on the side. But the main revenue driver is consultancy.
The issue here is the lack of appeal for investors, leading to a tenfold decrease in new companies. However, the startups that do launch are likely to be more sustainable.
Since the end of the free-money era, a lot of people are rightly questioning the microservice/k8s/cloud-native model that requires you to run an operations team of 20 people and has a baseline cost of $200k/year for a cluster that doesn't do anything yet.
I have worked in multiple orgs that standardized on Kubernetes as end users. Those numbers are coming from my real life experience. If we include all costs it is actually probably way higher than 200k per cluster.
> Also it really feels like all the air has been let out of the docker/kubernetes/cloud-native balloon that was so popular in the late 2010s.
As someone who works for a company that sells various k8s versions of products and services, we're only now seeing some of our bigger customers really starting to use k8s more exclusively. So at least my experience is the opposite of yours, it seems that at least in some industries k8s is only now seeing significant adoption.
I would recommend trying SmartOS or OmniOS instead, since the Oxide rack isn't filled with IBM compatible personal computers on sleds, and they have no BIOS or UEFI.
Sounds more like a Kubernetes problem than a DNS problem.
I hate CoreDNS. Everything running inside of a Kubernetes cluster should just be querying the Kubernetes Endpoints API for these IPs directly, and using the node's DNS servers for external hosts.
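A rough sketch of what that would look like in practice. The pure helper below works on the dict shape the Endpoints API returns; the commented-out client calls assume the official `kubernetes` Python package and an in-cluster service account, and the service/namespace names are placeholders:

```python
# Read a Service's backend IPs straight from the Kubernetes Endpoints API
# instead of resolving them through cluster DNS (CoreDNS).

def endpoint_addresses(endpoints: dict) -> list:
    """Flatten an Endpoints object into "ip:port" strings."""
    out = []
    for subset in endpoints.get("subsets", []):
        for addr in subset.get("addresses", []):
            for port in subset.get("ports", []):
                out.append(f"{addr['ip']}:{port['port']}")
    return out

# In-cluster usage would look roughly like (untested assumption):
#   from kubernetes import client, config
#   config.load_incluster_config()
#   eps = client.CoreV1Api().read_namespaced_endpoints("my-db", "default")
#   raw = client.ApiClient().sanitize_for_serialization(eps)
#   addrs = endpoint_addresses(raw)
```

Watching the Endpoints object (rather than polling) also means clients see backend churn immediately, instead of waiting out a DNS TTL.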
If I restart my DB, will the database service host env var also be updated? Will restarting a DB, or changing its IP, also imply a restart of all the services that need access to the DB?