
For the life of me I can’t figure out why I would recommend Kubernetes to any company that is already on AWS. Except for the custom stuff, you should probably use a managed equivalent, and for the custom parts where you need HA, scalability, etc., just use regular old ECS or Fargate for serverless Docker. Heck, even simpler sometimes is just to use a bunch of small VMs, bring them up or down based on a schedule, number of messages in a queue, health checks, etc., and throw them behind an autoscaling group.
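A minimal sketch of that last approach as a CloudFormation fragment, assuming a launch template and subnet are defined elsewhere (all names and IDs here are placeholders):

```yaml
# Hypothetical CloudFormation fragment: a small autoscaling group
# brought up and down on a schedule. Resource names, the launch
# template, and the subnet ID are placeholders.
Resources:
  WorkerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "0"
      MaxSize: "4"
      VPCZoneIdentifier:
        - subnet-PLACEHOLDER
      LaunchTemplate:
        LaunchTemplateId: !Ref WorkerLaunchTemplate   # assumed defined elsewhere
        Version: !GetAtt WorkerLaunchTemplate.LatestVersionNumber
  ScaleUpWeekdayMornings:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref WorkerGroup
      DesiredCapacity: 4
      Recurrence: "0 8 * * 1-5"    # cron, UTC
  ScaleDownNightly:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref WorkerGroup
      DesiredCapacity: 0
      Recurrence: "0 20 * * *"
```

Scaling on queue depth or health checks would swap the scheduled actions for scaling policies driven by CloudWatch alarms; the schedule-only version above is the simplest case.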

I’m not familiar with Azure or GCP, but they have to have more easily managed offerings than K8s.

If you’re on prem - that’s a different use case but my only experience being responsible for the complete architecture of an on prem solution was small enough that a combination of Consul/Nomad/Vault/Fabio made sense. It gave us the flexibility to mix Docker containers, raw executables, shell scripts, etc.

That being said, for both Resume Driven Development reasons and because we might be able to find people who know what they were doing, if I had to do another on prem implementation that called for it, I would probably lean toward K8s.



Yeah, the main reasons are social. Everyone wants that k8s feather in their cap just like everyone wants that AWS feather in their cap. Sure, there may be times when it's reasonable, but 95% of the hype is hot air.

There are two reasons you have to get a k8s cluster. First is that if you don't have one set up defensively, some slick sales guy will come in and scare your boss's boss's boss into thinking the company will implode without one. If you have one, you can say "don't worry, we already have that" and fend off the invasion. It's a necessary status symbol, like saying "we turned off our last physical server because we love paying Amazon 7x the cost of hardware for less control and less performance every year".

The second reason is that the pre-eminence of the fad will create a feedback loop where some tooling assumes that you have a k8s cluster because of course all of the cool kids have a k8s cluster and make it inordinately difficult to do something reasonable without one. It's thus handy for experimentation with presumptuous tools created by people who have drunk that kool-aid. We're already seeing this to a certain extent.

The harsh reality is that the vast majority of tech architectures are designed as fashion statements rather than systems to serve a functional purpose. With this reality in mind, we must look good on the runway or risk expulsion.


Would really love to see some before/after cost calculations for some cloud migrations. Techies, and especially the more senior ones at CTO level, can easily be scared into thinking that they need to use cloud, partly because they are so far removed from the tech, and partly because they just follow the trend because everyone else is doing it (similar to "no one got fired for buying IBM"). Cloud is great for many use cases, but a lot of companies just go all in even where it doesn't make sense. But perhaps everyone is getting 90% discounts and having the last laugh...


I've seen several large-scale cloud migrations and the bill has always been higher, usually egregiously so. In one case in particular, we would've been able to re-buy all the (perfectly adequate) hardware in our racks every 45 days if we had been pouring the cloud spend into hardware.

In another case, I've seen a company that spends tens of thousands of dollars a month on cloud infrastructure to run a site that services a max of 50 concurrent users. The truth of the matter is that the production site could run just fine on any developer's laptop, and a one-time spend on a pair of geographically-dispersed dedicated servers would free up huge amounts of cash without any measurable/actual impact, but the bosses won't feel very important if they acknowledge that. It boosts their self-image to have a big cloud bill and feel like a grown-up company because they're paying big invoices, and the CxOs can prance around and tell everyone how forward-looking they are because they're "in the cloud".

It seems like the most common pitch is "cloud is usage-based billing" and people operate under some vague theory that this will translate to savings somewhere, but despite popular belief, most workloads are reasonably static and you're just going to pay a lot more for that static workload.

The fantasies of huge on-demand load are mostly a delusion of circular self-flattery, aggressively pushed by rent-seekers and eaten up all too eagerly by people who are supposed to be reasonable stewards and sometimes even dare to call themselves "engineers".

By all means set up the cloud stuff and have the account ready to take true ad-hoc resource demands, but the number of cases where AWS and friends are an actual net savings over real hardware is infinitesimal. Most companies would be much better off if they invested in owning at least the baseline 24x7 infrastructure.

I guess the issue there is that since most companies don't really have the dynamic demand they imagine, if they actually used cloud providers for elasticity, they'd almost never use them and then they couldn't feel cool enough.

If you're a random guy, it's going to be cheaper and better to run on a Linode or small AWS instance than it will be to rent and stock a rack. If you have more than 5 employees, this is almost certainly not true.


> In another case, I've seen a company that spends tens of thousands of dollars a month on cloud infrastructure to run a site that services a max of 50 concurrent users. The truth of the matter is that the production site could run just fine on any developer's laptop, and a one-time spend on a pair of geographically-dispersed dedicated servers would free up huge amounts of cash without any measurable/actual impact,

I find it hard to believe that, even if I went out of my way to throw in every single bit of AWS technology I know, I could architect a system that serves only 50 concurrent users and make it cost that much. I could do it with AWS with a pair of EC2 servers, a hosted database, a load balancer, and an autoscaling group with a min/max of 2 for HA. That includes multi-AZ redundancy. Multi-region redundancy would double the price. That couldn’t possibly cost more than $500 a month.
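A back-of-envelope check of that estimate, sketched in Python. Every price below is an illustrative approximation (roughly us-east-1 on-demand rates, rounded), not a quoted AWS figure:

```python
# Rough monthly cost for the setup described above: 2 EC2 instances,
# a load balancer, and a multi-AZ hosted database.
# All hourly rates are illustrative approximations, not quoted prices.
HOURS = 730  # ~hours in a month

monthly = {
    "2x t3.medium EC2":       2 * 0.0416 * HOURS,
    "ALB (base + light LCU)": 0.0225 * HOURS + 10,
    "RDS t3.medium multi-AZ": 2 * 0.068 * HOURS,
    "EBS + misc":             30,
}

total = sum(monthly.values())
for item, cost in monthly.items():
    print(f"{item:26s} ${cost:7.2f}")
print(f"{'total':26s} ${total:7.2f}")
```

Even with generous rounding, the multi-AZ total lands in the low hundreds of dollars, comfortably under the $500/month ceiling claimed above.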


Here the software side of the fad rears its head: there are about four dozen microservices involved, each with its own RDS instance, load balancer, the works. 5-6 different implementation languages were used and a large number depend on the JVM or other memory-hungry runtimes. There are a couple of so-called "data analysts" who don't really know what they're doing, never produce anything, and spend lots of money on EMR et al. Buzzwords abound.

The workload is containerized and orchestrated (of course, since a company so self-conscious about its tech fashions would never not be) but one can only increase the density so far, and obviously optimizing the infrastructure spend on "sexy cloud stuff" hasn't been the top priority.

Even hinting that hardware may be appropriate for a certain use case will bring out the bean counters in force. At a third company, I almost gave the "Global VP of Cloud Computing" an aneurysm by suggesting that there may be a use for some of the tens of millions of dollars of hardware that they'd recently purchased. In shock and disbelief, he shouted "What, now you're talking hybrid cloud?!" I said "if that's what you want to call it" as the rest of the room jumped to inform me that the R&D departments at the cloud providers ensure customers will always be using the latest datacenter technology, hastening to add that Microsoft is building a datacenter underwater somewhere, and thus it's a lost cause for anyone to run their own hardware. Some of the shadier cronies in the room chimed in to add that the hardwareless course of action had been confirmed as the ideal by both IBM and Accenture in studies commissioned by the VP.

Cloud resources are a useful tool in the toolbox, but as an industry, we have gone way overboard and lost all reason. At some point, when cloud inevitably loses its shine, the bubble must pop. If you're in the market for server hardware, this is a great time to buy.


Crazy costs for simple stuff can easily happen with on-premise systems as well - I once had an in-house infrastructure team quote £70K for infrastructure to host a single static HTML page that would be accessed by about 10 people.

There was even a kind of daft logic to their costing - didn't make it any less crazy.


If you do your cloud migration as just a lift and shift, without changing your processes or people (retrain, reduce, and automate), it will always cost more. The problem is that too many AWS consultants are just old-school net ops people who watched one ACloudGuru training video, passed a multiple-choice certification, and can click around in a GUI and replicate an on-prem architecture.

I’ve never met any who come from a development or DevOps background and also know the netops side.


What could you do with your private server room if you were willing to spend that much time and money, though?


Well, seeing that there are only 24 hours in a day and that I refuse to work more than 40-45 hours a week....

There are two parts to any implementation: the parts that only you or your company can do (i.e., turn custom business requirements into code) and the parts that anyone can do ("the undifferentiated heavy lifting"), like maintaining standard servers. Why would I spend the energy doing the latter instead of focusing on the former?

If I have an idea, how fast can I stand up and configure the resources I need with a private server room as compared to running a CloudFormation Template? What about maintenance and upgrades?

How many people would our company have to hire to babysit our infrastructure? Should we also hire someone overseas to set up a colo for our developers there so they don’t have to deal with the latency?


We are talking about a situation where you already have a server room and employees.

Typically what I've seen is that the developers are being starved out for resources in the on-prem hardware, and no amount of complaining or yelling or saber-rattling seems to do anything about it. But along comes cloud and we are willing to spend many times more money. The devs are happy because they can spin up hardware and apologize later, which feels really good until you find out people are spinning up more hardware instead of fixing an n^2 problem or something equally dumb in their code (like slamming a server with requests that always return 404).


> We are talking about a situation where you already have a server room and employees.

And by “changing your processes” I guess I should also include “changing your people”. Automate the processes where you can, reduce headcount, and find ways to migrate to managed services where it makes sense.

> The devs are happy because they can spin up hardware and apologize later, which feels really good until you find out people are spinning up more hardware instead of fixing an n^2 problem or something equally dumb in their code (like slamming a server with requests that always return 404).

I hate to say it, but throwing hardware at a problem long enough to get customers, prove the viability of an implementation, and (in the startup world) get to the next round of funding or go public, and only then optimizing, is not always the wrong answer - see Twitter.

But if you have bad developers, they could also come up with less optimal solutions on prem and cause you to spend more.

With proper tagging, it’s easy to know where to point the finger when the bill arrives.
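For illustration, a sketch of what that looks like with the AWS Cost Explorer API; the tag key "team" is an invented example, and a cost-allocation tag has to be activated in the billing console before Cost Explorer will group on it:

```python
# Sketch: month-to-date spend grouped by a cost-allocation tag.
# The tag key "team" and the date range are illustrative assumptions.
params = {
    "TimePeriod": {"Start": "2019-06-01", "End": "2019-07-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "TAG", "Key": "team"}],
}

# With credentials configured, the actual call would be:
#   import boto3
#   ce = boto3.client("ce")
#   resp = ce.get_cost_and_usage(**params)
#   for group in resp["ResultsByTime"][0]["Groups"]:
#       print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
print(params["GroupBy"])
```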


My company (a large-ish pension plan, ~10 dev teams, most applications are internal) was on AWS Beanstalk and switched to Kubernetes (EKS). Once we moved in and cleared the initial hurdles, the overall impression is that K8s is very pleasant to work with: solid, stable, fast, no bad surprises. Everything just works. We probably spend 0.1 FTE taking care of Kubernetes now. Definitely was worth the cost.

All AWS tech I've tried before (ECS, Beanstalk, plain EC2, Cloud Formation) is slower, has random quirks, and needs an extra layer of duct tape.


I’m the last person that will defend Elastic Beanstalk, but what issues have you had with the other stuff?


You probably have not tried too hard? Misspelled CloudFormation hints at it. Btw, ECS is as good as k8s, more integrated with AWS and way simpler.


If you need anything more than just ECS and you don't choose Kubernetes, you may end up rolling your own, in-house, less reliable subset of Kubernetes. You'll need to train everyone how to use it, and how to understand how it works when something goes wrong.

Choosing Kubernetes is like choosing Java -- it's a standard, there is a huge ecosystem of supporting software, the major cloud providers officially support it, and you can hire people who have worked with it before. Whether or not it's overkill is less important than those other factors.


If you’re going to be in the cloud anyway, at least with AWS, you don’t need it and there are plenty of people who know AWS.


As a counter-point, for anyone who has used kubernetes, trying to go to anything else feels like "kubernetes-lite" and very, very often requires duct-taping the disparate parts together because they weren't designed to be one ecosystem.

If one's use-case fits within the 20% on offer of AWS's "20/80 featureset," and one enjoys the interactions with AWS, that's great. To each their own.

But I can assure you with the highest confidence that there are a group of folks who run kubernetes not because of resume driven development but because it is a lot easier to reason about and manage workloads upon it. I know about the pain of getting clusters _started_, but once they're up, I find their world easier to keep in my head.


How is K8s easier to reason about than AWS/Azure/GCP solutions?

I’m in no position to debate Azure/GCP. I’ll leave it to others to carry that torch.


This.

We seem to be forgetting KISS - at some point, yes, we need to use massively scalable architectures - but humans got on fine treating large numbers of animals as not-pets long before we went full-on intensive chicken farming.

There is a lot of space between pets and Google-scale


Some hosts even offer 100Gbps dedicated connections.


> Consul/Nomad/Vault/Fabio

This is not very much easier to implement than kubernetes, in my experience, and you end up with a less capable system at the end of it.


And this was on Windows - for reasons.

None of these can run as Windows services by themselves; I had to use NSSM. That being said:

- Consul: a three-line HCL configuration sets it up in a cluster. It’s a single standalone executable that you run in server mode or client mode.

- Once you install Consul on all of the clients and tell it about the cluster, the next step is easy.

- Run the Nomad executable as a server; if you already have Consul, there is no step 2. It automatically configures itself using Consul.

- Run Nomad in client mode on your app/web servers. If you already have the Consul client running, there is no step 2.

- Vault was a pain, and I only did it as a proof of concept. I ended up just using Consul for all of the configuration and an encryption class where I needed to store secrets.
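That Consul server config really is tiny. A minimal sketch (addresses and paths are placeholders):

```hcl
# Minimal Consul server configuration (HCL); addresses and the data
# directory are placeholders. Start with:
#   consul agent -config-file=server.hcl
server           = true
bootstrap_expect = 3
data_dir         = "C:\\consul\\data"
retry_join       = ["10.0.0.11", "10.0.0.12"]
```

Client nodes drop `server` and `bootstrap_expect` and just keep `retry_join` pointed at the servers.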

Did I mention that we had a lot of C# Framework code that we didn’t want to try to containerize? Nomad handles everything.
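A hypothetical Nomad job file for one of those non-containerized Windows executables might look like this (the job name and path are invented, and the `raw_exec` driver has to be enabled in the client config first):

```hcl
# Hypothetical Nomad job running a plain Windows executable via the
# raw_exec driver -- no container required. Names and paths are invented.
job "report-worker" {
  datacenters = ["dc1"]
  type        = "service"

  group "worker" {
    count = 2

    task "run-exe" {
      driver = "raw_exec"

      config {
        command = "C:\\apps\\ReportWorker\\ReportWorker.exe"
      }

      resources {
        cpu    = 500  # MHz
        memory = 512  # MB
      }
    }
  }
}
```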

That being said, I wouldn’t do it again. If we had been a pure Linux shop with the competencies to maintain Linux, I would have gone with K8s instead for an on-prem implementation.

But honestly, at the level I’m at now, no one would pay me the kind of money I ask for to do an on prem implementation from scratch. It’s not my area of expertise - AWS is.


Well you can spin up a working k8s cluster in 5 minutes with Kops, but that’s obviously not the end of the story.


How well would that work orchestrating a combination of Docker containers, C# (.NET Framework, not .NET Core) executables, PowerShell scripts, etc.?


And everyone you hire has to be trained on your particular in-house system.


That’s exactly what I said

> That being said, for both Resume Driven Development reasons and because we might be able to find people who know what they were doing, if I had to do another on prem implementation that called for it, I would probably lean toward K8s.


God, HN can be so cynical at times. (I'm not really directing this at just you, scarface74, but the overall tone of responses here). Docker and Kubernetes are not just about padding your resume. Why would I not want to use a solution for orchestration, availability, and elasticity of my services?


Why wouldn’t you? Easy: because you probably don’t have enough “services” to make the costs of kubernetes worthwhile.

If you do, then congratulations, you’re in the top 5% of dev teams, and you presumably also have a well-funded organization of people supporting your complicated deployment.

Otherwise, it’s the common case of bored devs overcomplicating systems that can be deployed more cheaply and safely with simpler technology.


I’m not saying you wouldn’t. I am saying that you get elasticity, orchestration, and availability by using AWS/Azure/GCP and managed services where appropriate, and it’s a lot simpler. I’m not saying the cloud is always the right answer, and if I were to do an on-prem/colo, I would probably go for K8s if it were appropriate.

As far as Docker, it is the basis of Fargate, ECS, and CodeBuild in AWS. I’m definitely not saying it isn’t useful.

But why am I cynical? I consider myself a realist, no job is permanent, salary compression is real and the best way to get a raise is via RDD and job hopping.


> Heck even simpler, sometimes is just to use a bunch of small VMs and bring them up or down based on a schedule, number of messages in a queue, health checks, etc and throw them behind an autoscaling group.

Kubernetes is the tool I would choose to do that.
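For illustration, the Kubernetes-native version of that pattern, assuming a Deployment named `worker` already exists (the name is invented); scaling on queue depth rather than CPU would additionally need an external-metrics adapter:

```yaml
# Sketch: autoscale a hypothetical "worker" Deployment between 2 and
# 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

Schedule-based scaling has no built-in primitive here; it's typically done with a CronJob or external tooling patching the replica count.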



