Testing a Kubernetes Networking Implementation Without Kubernetes
How Istio tests its networking proxy without Kubernetes, Docker, or root.
tl;dr: it just works
Like most other Kubernetes controllers, Istio is written in Go and relies on the client-go library. While this provides an excellent low-level building block, its use in Istio's higher-level code surfaced a variety of issues, which led us to develop our own higher-level, opinionated client for Istio. This post covers the issues we faced and how we incrementally solved them.

Background knowledge

At a high level, client-go provides a few layers for interactions with the API server: ...
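To make the layering concrete, here is a minimal sketch of the layers client-go exposes -- a typed clientset over the REST client, informers that keep a watch-driven local cache, and listers that read from that cache. This is my own illustration, not Istio's client; the kubeconfig location, namespace, and resync interval are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a rest.Config from the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// Layer 1: the typed clientset, which wraps the raw REST client with
	// per-resource methods. Every call here hits the API server directly.
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("pods via clientset:", len(pods.Items))

	// Layer 2: informers, which list+watch once and keep a local cache in
	// sync, invoking event handlers as objects change.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods()
	podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { fmt.Println("pod added") },
	})
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Layer 3: listers, which read from the informer's cache rather than
	// issuing a request to the API server.
	cached, err := podInformer.Lister().Pods("default").List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Println("pods via lister:", len(cached))
}
```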
Rage bait for YAML templating haters
Which features I recommend using, or not using, in Istio
In Analyzing Go Build Times, I went over how to analyze and understand Go build times, and what factors impact them. A close cousin to build times is build sizes. Large binaries can lead to a variety of issues, such as:

- Generally, slower build times
- Increased costs of storage
- Increased costs and time to distribute
- Increased memory usage at runtime (more on this in another article, hopefully)

So it's generally nice to keep them small. ...
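As a quick first look at where a binary's bytes come from, the standard library can report a binary's on-disk size and the module dependencies compiled into it. This sketch is my own illustration, not taken from the article:

```go
package main

import (
	"debug/buildinfo"
	"fmt"
	"os"
)

func main() {
	// Inspect a compiled Go binary: its on-disk size plus the module
	// dependencies embedded in it. Defaults to this program itself;
	// pass a path to inspect another binary.
	path, err := os.Executable()
	if err != nil {
		panic(err)
	}
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	st, err := os.Stat(path)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %.1f MB\n", path, float64(st.Size())/(1<<20))

	// debug/buildinfo reads the build metadata the Go toolchain embeds
	// into every binary, including the full dependency list.
	info, err := buildinfo.ReadFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Println("go version:", info.GoVersion)
	fmt.Println("dependencies:", len(info.Deps))
	for _, d := range info.Deps {
		fmt.Println("  ", d.Path, d.Version)
	}
}
```

A long dependency list is often the first hint of bloat; building with -ldflags="-s -w" (which strips the symbol table and DWARF debug info) is a common, if blunt, way to shrink the result.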
Exploring an extreme service mesh architecture to maximize extensibility.
The OSI model attempts to describe network communications as a stack of layers, where increasingly high-level layers are built upon lower ones. This is only slightly useful in practice, as the real world is not so simple. In service mesh, the discussion is generally reduced to L4 and L7, or TCP and HTTP. This oversimplifies the problem, leading to some confusion.

Thinking in terms of termination

Simply saying "HTTP" is not really clear about what is going on. Instead, I think it's more useful to think about what layer we terminate. ...
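To make the distinction concrete, here is a minimal Go sketch, my own illustration with placeholder addresses, contrasting a proxy that terminates at L4 (it only shuttles bytes, so it carries any protocol but can't make HTTP-level decisions) with one that terminates at L7 (it parses the full HTTP request and can therefore route, retry, or rewrite it):

```go
package main

import (
	"io"
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// l4Proxy terminates only TCP: accept a connection and blindly copy
// bytes in both directions. The payload is never parsed.
func l4Proxy(listen, backend string) error {
	ln, err := net.Listen("tcp", listen)
	if err != nil {
		return err
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		go func() {
			defer conn.Close()
			up, err := net.Dial("tcp", backend)
			if err != nil {
				return
			}
			defer up.Close()
			go io.Copy(up, conn) // client -> backend
			io.Copy(conn, up)    // backend -> client
		}()
	}
}

// l7Proxy terminates HTTP: each request is fully parsed before being
// forwarded, so HTTP-level policy can be applied here.
func l7Proxy(listen, backend string) error {
	target, err := url.Parse(backend)
	if err != nil {
		return err
	}
	return http.ListenAndServe(listen, httputil.NewSingleHostReverseProxy(target))
}

func main() {
	go func() { log.Fatal(l4Proxy(":15001", "127.0.0.1:8080")) }()
	log.Fatal(l7Proxy(":15002", "http://127.0.0.1:8080"))
}
```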
When looking at service mesh (or even general networking) architectures, the basic idea is to send network traffic through some component, which handles various functionality. This could be authentication, authorization, encryption, observability, reliability, networking, etc. There are a few different classes of components that can do this, though:

Different types of proxy deployments

- Native application enhancement. The application itself is compiled with the functionality. This could be something like gRPC (or, even more "meshy", gRPC with xDS), Finagle, Hystrix, etc. Even simply instrumenting your application with metrics and traces could be classified here.
- "Sidecar", or running a proxy per application, is probably the most common service mesh deployment pattern today, used by Istio, Linkerd, and more.
- Per-node proxy; like sidecar, but the proxy runs per-node rather than per-application. Each node contains multiple unique workloads, so the proxy is multi-tenant.
- Remote proxy. A completely standalone proxy deployment we send some traffic through. This could be correlated to one or many services, one or many workloads, etc. -- the correlation between proxies and other infrastructure components is flexible here.

Within each of these, there are two actors: a client and a server. This gives us 8 points at which to insert functionality, enumerated in the sketch below. Presumably, all 8 will not be used at once -- but it's possible. If we are willing to blur the lines a bit, even a traditional sidecar-based service mesh utilizes 6 of these! The richest "service mesh" functionality may live in the sidecar, but the application itself has some functionality (even if it's not terribly rich), and the node does as well (again, this may not be terribly rich -- kube-proxy, for example, has very minimal functionality). And the same is mirrored on the client and server side. ...
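The arithmetic behind the "8 points" is just the four deployment classes crossed with the two actors. A tiny sketch (my own illustration) makes the enumeration explicit, along with which points a traditional sidecar mesh touches:

```go
package main

import "fmt"

func main() {
	// Four classes of component, each of which can act on the client
	// or the server side: 4 x 2 = 8 insertion points.
	classes := []string{"native application", "sidecar", "per-node proxy", "remote proxy"}
	actors := []string{"client", "server"}
	n := 0
	for _, c := range classes {
		for _, a := range actors {
			n++
			fmt.Printf("%d. %s-side %s\n", n, a, c)
		}
	}
	// Prints 8 points. A traditional sidecar mesh touches 6 of them:
	// everything except the two remote-proxy points.
}
```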
In-kernel networking solutions, such as WireGuard, are not always faster than user space.