Advanced Helm Techniques
Rage bait for YAML templating haters
Which features I recommend using, or not using, in Istio
In Analyzing Go Build Times, I went over how to analyze and understand Go build times, and what factors impact them. A close cousin of build times is build sizes. Large binaries can lead to a variety of issues, such as: generally slower build times; increased storage costs; increased cost and time to distribute; and increased memory usage at runtime (more on this in another article, hopefully). So it's generally nice to keep them small. ...
Exploring an extreme service mesh architecture to maximize extensibility.
The OSI model attempts to build a model for network communications, where increasingly high-level layers are built upon lower layers. This is only slightly useful in practice, as the real world is not so simple. In service mesh, discussion is generally reduced to L4 and L7, or TCP and HTTP. This oversimplifies the problem, leading to some confusion. Thinking in terms of termination: simply saying "HTTP" is not really clear about what is going on. Instead, I think it's more useful to think about what layer we terminate. ...
When looking at service mesh (or even general networking) architectures, the basic idea is to send network traffic through some component, which handles various functionality. This could be authentication, authorization, encryption, observability, reliability, networking, etc. There are a few different classes of components that can do this, though:

Different types of proxy deployments
- Native application enhancement. The application itself is compiled in with functionality. This could be something like gRPC (or, even more "meshy", gRPC with xDS), Finagle, Hystrix, etc. Even simply instrumenting your application with metrics and traces could be classified here.
- "Sidecar", or running a proxy per application, is probably the most common service mesh deployment pattern today, used by Istio, Linkerd, and more.
- Per-node proxy; like sidecar, but the proxy runs per node instead of per application. Each node contains multiple unique workloads, so the proxy is multi-tenant.
- Remote proxy. A completely standalone proxy deployment we send some traffic through. This could be correlated to one or many services, one or many workloads, etc. -- the correlation between proxies and other infrastructure components is flexible here.

Within each of these, there are two actors: a client and a server. This gives us 8 points to insert functionality. Presumably, all 8 will not be used at once -- but it's possible. If we are willing to blur the lines a bit, even a traditional sidecar-based service mesh utilizes 6 of these! The richest "service mesh" functionality may live in the sidecar, but the application itself has some functionality (even if it's not terribly rich), and the node does as well (again, this may not be terribly rich -- kube-proxy, for example, has very minimal functionality). And the same is mirrored on the client and server side. ...
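The 4 × 2 arithmetic behind the "8 points" claim can be sketched as a quick enumeration (the labels are just shorthand for the classes above, not identifiers from any real system):

```go
package main

import "fmt"

// insertionPoints pairs every deployment class with every actor:
// 4 classes x 2 actors = 8 places to insert functionality.
func insertionPoints() []string {
	classes := []string{"native application", "sidecar", "per-node proxy", "remote proxy"}
	actors := []string{"client", "server"}
	var points []string
	for _, c := range classes {
		for _, a := range actors {
			points = append(points, fmt.Sprintf("%s (%s side)", c, a))
		}
	}
	return points
}

func main() {
	for i, p := range insertionPoints() {
		fmt.Printf("%d: %s\n", i+1, p)
	}
}
```

A traditional sidecar mesh covering 6 of these corresponds to the sidecar, native-application, and per-node rows on both the client and server sides.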
In-kernel networking solutions, such as WireGuard, are not always faster than their user-space counterparts.
Exploring an (unfortunately, hypothetical) CI/CD system for end-to-end tests on Kubernetes.
The common messaging around Istio Ambient Mesh is that it is a "node proxy." For example, from The New Stack: "... architecture that moves the proxy functionality from the pod-level to the node-level." While this is technically accurate, it is misleading and really misses the point and benefits of Ambient.

A brief history of service mesh architectures
This skips quite a bit of information, but is close enough. One of the earlier service meshes on the market was Linkerd 1 -- not to be confused with Linkerd 2, which most people just call "Linkerd" today. Linkerd 1 was a per-node proxy that did all the service mesh functionality we know and love, at the node level. ...
How to achieve an architecture similar to "Waypoint Proxies" without ambient mesh, or even Istio.