I'm testing Istio at the moment and I find those comments very inaccurate:
"Traditional service meshes are an all-or-nothing proposition that add a significant layer of complexity to your stack. That’s not great."
Istio is like k8s: it's very modular, and you set up what you need.
"Traditional service meshes are designed to meet the needs of platform owners, and they dramatically underserve a more important audience: the service owners."
Not sure what that means; Istio is all about observability, etc.
Those quotes aren't from the article. I'll address them anyway, since they're from a different article, which I wrote:
>> "Traditional service meshes are an all-or-nothing proposition that add a significant layer of complexity to your stack. That’s not great."
> Istio is like k8s: it's very modular, and you set up what you need.
Istio is a very different beast from Linkerd. But out of curiosity, I just tried the latest Istio release on my laptop (1.0.2 on Docker for Mac K8s). The simplest configuration I found installs 50 (!) CRDs, 13 deployments, and is currently sitting at ~600mb of memory without any traffic. Perhaps by "modular" you mean you can add more modules on top of that?
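(For anyone curious to reproduce those footprint numbers, this is roughly how you can count them yourself; the commands assume kubectl is pointed at the cluster and that something like metrics-server is running for `kubectl top`:

  # how many CRDs the install registered (filtering on Istio's API groups)
  kubectl get crds | grep -c 'istio.io'

  # deployments in the control-plane namespace
  kubectl get deployments -n istio-system --no-headers | wc -l

  # per-pod memory for the control plane
  kubectl top pods -n istio-system
)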
(By contrast, Linkerd 2.0 installs 0 CRDs, 4 services, and is sitting at 250mb, including a full Grafana UI at 50mb. It also does a lot less... but that's the point.)
That's the control plane. Let's not get started on the data plane.
>> "Traditional service meshes are designed to meet the needs of platform owners, and they dramatically underserve a more important audience: the service owners."
> Not sure what that means; Istio is all about observability, etc.
This is all in the article. You can install Linkerd as a service owner on a single service. You'll get metrics, debugging, and more. It's lightweight and small enough for this installation to make sense.
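Concretely, the per-service flow is something like this (command names from the Linkerd 2.x CLI; exact flags may vary by release, and `my-service` is a stand-in for your own deployment):

  # one-time: install the small control plane
  linkerd install | kubectl apply -f -

  # per service: add the proxy sidecar to just this one deployment
  kubectl get deploy/my-service -o yaml | linkerd inject - | kubectl apply -f -

  # golden metrics for that one service
  linkerd stat deploy/my-service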
Buoyant invented the service mesh concept and added that term to our vernacular as a way to describe Linkerd 1.x and similar systems. Istio was built using the same pattern as Linkerd 1.0. Those are "traditional", albeit in internet years not real years.
Linkerd 2.0 takes a fundamentally different approach. It's tiny, fast, lightweight and designed to add value as a service sidecar (installed on a single service) without any "mesh". If multiple service owners install Linkerd 2.0, it self-morphs into a mesh configuration and provides all of the value of a service mesh. This creates an installation and deployment pattern that is very practical and bottoms-up, delivering value to the individual service (and service owner) but also supporting a higher-level abstraction of a mesh at the platform level. This is a fundamental innovation and, hence, a new model for service mesh patterns versus the original, er, traditional model.
Linkerd 1.0 and Istio are traditional service meshes; you install them in their totality across an entire platform. They are rich in features and bulky, completely overkill for a single service.
Linkerd 2.0 follows a new bottoms-up model that is, frankly, non-traditional. An individual service owner can install a lightweight package on a single service and derive immediate value. When multiple service owners on a project adopt Linkerd 2.0, their services will properly "mesh" without any platform-level installs. This makes adoption organic and immediately valuable for a single dev but also for an entire project as it gets installed into more services. Game changing IMHO.
Honestly, I'd much rather have some configurable YAML than some obscure app in the middle of your cluster that reads CRDs, when what you're doing is just adding a sidecar.
At least this way you can tell whether your local YAML is applied or not, without a complicated diffing algorithm that compensates for an in-cluster modifier.
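(Newer kubectl releases can also do that check for you; `kubectl diff` does a server-side dry run and exits non-zero when the live object has drifted from your local file:

  kubectl diff -f deployment.yml
)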
I would too, except: have you seen what your YAML looks like after all the Linkerd additions? There's a reason the docs don't even mention exactly what it changes.
Marco, CTO of Kong here. This is exactly what Kong has been doing for a while, and with the newly announced Kong 1.0 release [1] (2 days ago) we also support Service Mesh with a lightweight runtime that has been running in production for 3.5 years across multiple platforms, hybrid container orchestration platforms, and even hybrid bare-metal/cloud deployments.
GA for Kong 1.0 will include Service Mesh (we have RC1 now, and RC2 coming soon).
I get why I want metrics from an RPC service, but not why I want them behind a process boundary. How do custom metrics (items processed/filtered per request, experiment vs control SR) work?
The post is long, but it was really written from the POV of an infrastructure owner (DevOps) and not from the POV of a service owner/developer, which is actually the target for Linkerd 2.0. FWIW, the post says it takes 5 mins to install Istio. I find that very, very hard to believe.
For different definitions of "installed", I guess. I've been at an Istio workshop and it was less than 5 minutes to have it up and functional. Tweaking it obviously takes longer but that's true for every tool.
Exactly! We considered Go briefly but in the end, the magic intersection of native-code performance, plus guaranteed memory safety, trumped every other concern for the data plane.
"Traditional service meshes are an all-or-nothing proposition that add a significant layer of complexity to your stack. That’s not great."
Istio is like k8 it's very modular and you setup what you need.
"Traditional service meshes are designed to meet the needs of platform owners, and they dramatically underserve a more important audience: the service owners."
Not sure what it means, Istio is all about observability ect...