
AWS 2006: "Run your workloads on our EC2 instances in the cloud 24/7."

AWS 2014: "Run your workloads on serverless so you don't have to deal with those pesky EC2 instances 24/7 anymore."

AWS 2019: "Click a checkbox and you can have your serverless workloads get dedicated EC2 instances 24/7!"



That's pretty reductive. It's more like:

"Click a checkbox and we'll run your code for you, take care of OS security updates, compliance requirements, autoscaling, load balancing, AZ resiliency, getting logs off your box, restarting unhealthy processes, ..."


You wouldn’t use provisioned capacity for “workloads” where you don’t care about latency, like background event processing.

It would only be used for user-facing APIs where latency matters.

There are a few types of processes that I have had to create.

1. A Windows service that processed a queue, with 20x more messages at peak than off-peak. Since it was tied to Windows, Lambda wasn’t an option, so I had to create an Auto Scaling group scaled on queue length. That also involves CloudWatch alarms to trigger scaling, and we either keep one instance running all the time (production) or set a minimum of zero and only launch an instance when there is a message in the queue (non-prod). Not only is the process slower to scale, but because it’s Windows, AWS bills by the hour.

Of course, the deployment process and CloudFormation template are a lot more complicated than with Lambda.
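The scaling decision those CloudWatch alarms encode boils down to "backlog per instance" math. A minimal sketch (function name, thresholds, and capacities are hypothetical, not from the actual setup):

```javascript
// How many instances do we need for the current queue depth?
// backlogPerInstance: messages one Windows service instance can work
// through in a scaling interval (hypothetical tuning parameter).
function desiredInstanceCount(queueDepth, backlogPerInstance, minInstances, maxInstances) {
  const needed = Math.ceil(queueDepth / backlogPerInstance);
  // Clamp between the group's min (1 in prod, 0 in non-prod) and max.
  return Math.min(maxInstances, Math.max(minInstances, needed));
}

// Production keeps a floor of one instance; non-prod can scale to zero.
console.log(desiredInstanceCount(0, 100, 1, 10));    // prod idle -> 1
console.log(desiredInstanceCount(0, 100, 0, 10));    // non-prod idle -> 0
console.log(desiredInstanceCount(2000, 100, 0, 10)); // burst -> capped at 10
```

The min-of-zero case is what makes non-prod cheap, at the cost of a cold EC2 boot when the first message arrives.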

2. The same sort of process on Lambda. The CloudFormation template, using SAM, is much simpler, and the process scales in and out faster.

Also, you can configure everything in the web console and export the template.
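For comparison, a queue-triggered function in SAM is only a few lines. A minimal sketch (resource names, runtime, and batch size are assumptions, not the original template):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  QueueProcessor:            # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      Events:
        Messages:
          Type: SQS         # SAM wires up the event source mapping
          Properties:
            Queue: !GetAtt WorkQueue.Arn
            BatchSize: 10
  WorkQueue:                 # hypothetical queue name
    Type: AWS::SQS::Queue
```

SAM expands this into the full CloudFormation resources (function, queue, event source mapping, IAM role) that you'd otherwise write by hand.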

3. A Node/Express API using lambda proxy integration behind API Gateway.

Again, this was easy to set up, but cold-start times were killing us, and we knew we were going to have to move it off of Lambda eventually because of the 6 MB request/response limit.

4. The same API as above running in Fargate.

Since we knew in advance that this was the direction we wanted to go, I had opted to use Node/Express for the Lambda, so no code changes were required. But creating a registry, Docker containers, services, clusters, load balancers, Auto Scaling groups, etc. took a lot longer to get right, and then automating everything with CloudFormation was more complicated.
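The containerization step itself is the easy part for a Node/Express app. A hypothetical Dockerfile (entry point, port, and base image are assumptions):

```dockerfile
# Sketch of packaging the same Node/Express app for Fargate.
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so this layer caches across code changes.
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

The work described above is everything around this image: the ECR registry, the ECS cluster and service, task definitions, the load balancer, and the scaling policies, all expressed in CloudFormation.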



