When looking at service mesh (or even general networking) architectures, the basic idea is to send network traffic through some component, which handles various functionality. This could be authentication, authorization, encryption, observability, reliability, networking, etc. There are a few different classes of components that can do this, though:

Different types of proxy deployments
  1. Native application enhancement. The functionality is compiled into the application itself. This could be something like gRPC (or, even more "meshy", gRPC with xDS), Finagle, Hystrix, etc. Even simply instrumenting your application with metrics and traces could be classified here.
  2. "Sidecar", or running a proxy-per-application, is probably the most common service mesh deployment pattern today, used by Istio, Linkerd, and more.
  3. Per-node proxy; like sidecar, but instead of per-application the proxy is per-node. Each node contains multiple unique workloads, so the proxy is multi-tenant.
  4. Remote proxy. A completely standalone proxy deployment we send some traffic through. This could be correlated to one or many service(s), one or many workload(s), etc -- the correlation between proxies and other infrastructure components is flexible here.

Within each of these, there are two actors: a client and a server. This gives us 8 points to insert functionality. Presumably, all 8 will not be used at once -- but it's possible. If we are willing to blur the lines a bit, even a traditional sidecar-based service mesh utilizes 6 of these! The richest "service mesh" functionality may live in the sidecar, but the application itself has some functionality (even if it's not terribly rich), and the node does as well (again, this may not be terribly rich -- kube-proxy, for example, has very minimal functionality). And the same is mirrored on the client and server side.

Each of these offers unique capabilities and tradeoffs.

Client modes

Client application

The client application can optionally resolve Service calls and apply service policies.

Resolving service calls means that when the application decides it wants to connect to example.ns.svc.cluster.local, it will pick a specific pod to send the request to. Inspecting the traffic at a later point in the network will not (reliably) be able to determine which service the application intended to reach; observers will only see traffic going to a specific pod. Typically, this also involves load balancing to intelligently pick an optimal pod backend, and possibly request-level load balancing, where each request is sent to a potentially different backend (rather than picking one per connection). This is a form of service policy.
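To make this concrete, here is a minimal Go sketch of client-side resolution with request-level load balancing. The endpoints map and its contents are hypothetical stand-ins for a real discovery mechanism (DNS, EDS, or watching the Kubernetes API):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// endpoints stands in for a discovery client (DNS, EDS, the Kubernetes
// API, ...) that maps a Service name to its current backend pods.
var endpoints = map[string][]string{
	"example.ns.svc.cluster.local": {"10.0.1.4:8080", "10.0.2.7:8080"},
}

var next uint64

// resolve picks a concrete pod for a Service name. Because this happens
// inside the client, anything later on the wire only sees the pod IP.
func resolve(service string) (string, error) {
	pods := endpoints[service]
	if len(pods) == 0 {
		return "", fmt.Errorf("no endpoints for %s", service)
	}
	// Request-level load balancing: calling resolve per request (rather
	// than per connection) lets each request land on a different backend.
	n := atomic.AddUint64(&next, 1)
	return pods[n%uint64(len(pods))], nil
}

func main() {
	pod, _ := resolve("example.ns.svc.cluster.local")
	fmt.Println("dialing", pod) // e.g. 10.0.1.4:8080
}
```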

Service policy is any policy that applies to calls to a service. This includes a lot of things, such as:

  • Retries
  • Load balancing
  • Timeouts
  • Routing decisions
  • Request rewrites (header modifications, URL rewrites, redirects)
  • Traffic mirroring

and much more.
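As a sketch of what applying service policy in the client looks like, here are two of those policies -- retries and timeouts -- wrapped around Go's standard HTTP client. The specific values are illustrative, not recommendations:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// get applies a per-try timeout and a bounded retry loop inside the
// client itself, before any proxy ever sees the request.
func get(url string) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second} // timeout policy
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ { // retry policy
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success (4xx is not retried)
		}
		if err == nil {
			resp.Body.Close() // drop the failed 5xx response and retry
			lastErr = fmt.Errorf("status %d", resp.StatusCode)
		} else {
			lastErr = err
		}
		time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond) // crude backoff
	}
	return nil, lastErr
}

func main() {
	if resp, err := get("http://example.ns.svc.cluster.local/"); err == nil {
		resp.Body.Close()
	}
}
```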

Client applications are unique in the hierarchy of proxy classes, as they have direct access to the user's intent. This differs from the others, which typically attempt to reverse engineer it based on request properties (IP, port, Host header, etc.). Often these are equivalent, but sometimes they aren't -- for instance, the application may dial example.ns.svc.cluster.local, but the DNS response is somehow mangled to return the IP of attacker.example.com; a proxy later in the stack may incorrectly assume the application intended to reach attacker.example.com.
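A small sketch of the reverse-engineering a mid-stream proxy is stuck with; inferTarget and its fallback order are hypothetical, not any particular mesh's logic:

```go
package main

import (
	"fmt"
	"net/http"
)

// inferTarget guesses where the user meant to go. The proxy never saw
// the name the application dialed, so it works backwards from request
// properties. If DNS was tampered with, the Host header and the actual
// destination IP can disagree -- and the proxy has no way to know which
// one reflects the user's intent.
func inferTarget(r *http.Request, origDstIP string) string {
	if r.Host != "" {
		return r.Host // guess 1: the Host header
	}
	return origDstIP // guess 2: wherever the connection was headed
}

func main() {
	r, _ := http.NewRequest("GET", "/", nil)
	r.Host = "example.ns.svc.cluster.local"
	fmt.Println(inferTarget(r, "203.0.113.9"))
}
```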

As a client-side operation, applying authorization rules is inappropriate here. If a backend wants to restrict access to itself, it ought to do that check itself, rather than politely asking clients to apply the check. In general, this applies to egress policies as well. While the client library could have surface-level protection against communicating with some backends, because the library and application are in the same trust boundary, these cannot be relied upon (see this post for more details).

Client sidecar

A client sidecar has many of the same properties as a Client application.

The major difference, as noted above, is that the sidecar needs to reverse engineer the user's intent to decide where to go next. In Istio, this is done through some questionable and inconsistent mechanisms.

In addition, the client sidecar can generally only apply service policies if the client application did not. If the client sidecar sees a request to pod-xyz, generally this would never have service-level operations done on it. While it is, in theory, possible to try to map this back to a service, this is certainly not reliable or broadly "correct".
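One plausible way a sidecar could make this decision is to check whether the destination is still a Service's virtual IP; the serviceCIDR value below is hypothetical:

```go
package main

import (
	"fmt"
	"net/netip"
)

// serviceCIDR is the cluster's ClusterIP range; a made-up value here.
var serviceCIDR = netip.MustParsePrefix("10.96.0.0/12")

// shouldApplyServicePolicy: if the application already resolved the
// Service itself, the sidecar only sees a pod IP and must not re-apply
// service-level policy; if the destination is still a ClusterIP, the
// sidecar gets to do the resolution (and the policy) itself.
func shouldApplyServicePolicy(dst netip.Addr) bool {
	return serviceCIDR.Contains(dst)
}

func main() {
	fmt.Println(shouldApplyServicePolicy(netip.MustParseAddr("10.96.0.10"))) // true: ClusterIP
	fmt.Println(shouldApplyServicePolicy(netip.MustParseAddr("10.0.1.4")))   // false: pod IP
}
```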

Client node

Again, a client node has many of the same properties as a Client application or Client sidecar.

One unique aspect is that by the time a packet leaves the node, the destination IP must not be a Service (ClusterIP) address. Services have virtual IPs, which are not real addresses we can send traffic to. Instead, the node is expected to translate the request to one of the backend pods for the Service. This is the role kube-proxy (or equivalents) plays. One reason why Istio doesn't fully replace kube-proxy is that, while in the vast majority of cases it will send requests directly to a pod (as it does the service resolution itself), it doesn't for 100% of cases.
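kube-proxy actually does this translation with iptables/IPVS rules in the kernel; this userspace Go sketch, with made-up addresses, only illustrates the mapping:

```go
package main

import (
	"fmt"
	"math/rand"
)

// backends maps a Service's ClusterIP to its pod endpoints; a real node
// component would populate this by watching EndpointSlices.
var backends = map[string][]string{
	"10.96.0.10:80": {"10.0.1.4:8080", "10.0.2.7:8080"},
}

// translate mimics the node's DNAT step: a connection to a virtual
// ClusterIP must be rewritten to a real pod before the packet can leave
// the node.
func translate(dst string) string {
	if pods, ok := backends[dst]; ok {
		return pods[rand.Intn(len(pods))]
	}
	return dst // already a real (pod or external) address
}

func main() {
	fmt.Println(translate("10.96.0.10:80"))
}
```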

Another cool thing about the node is its somewhat unique position: it can optimize node-local traffic. Co-locating pods is a nice optimization for high performance applications, but this is largely lost when using a service mesh. A node component, however, can enable fast paths where data is efficiently transferred without encryption, using splice() or other mechanisms; because it is in charge of identity management, it can send the request without encryption while still correctly enforcing authorization policies (why this is secure will need another blog post, or see this talk).
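As a toy illustration: Go's io.Copy between two *net.TCPConn values already uses splice(2) on Linux, so a node component relaying plaintext between co-located pods gets the in-kernel fast path for free. The addresses and the echo "pod" below are stand-ins:

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// relay shuttles bytes between two node-local connections without TLS.
// On Linux, io.Copy between two *net.TCPConn values uses splice(2) under
// the hood, moving data in-kernel with no extra userspace copies.
// Skipping encryption is only safe because the node component, not the
// pods, owns identity and has already enforced authorization policy.
func relay(a, b *net.TCPConn) {
	go func() {
		io.Copy(b, a) // client -> server, spliced in-kernel
		b.CloseWrite()
	}()
	io.Copy(a, b) // server -> client
	a.CloseWrite()
}

func main() {
	// Toy stand-ins for two co-located pods: an echo "server pod" behind
	// the node component, and a "client pod" dialing through it.
	srv, _ := net.Listen("tcp", "127.0.0.1:0")
	go func() {
		c, _ := srv.Accept()
		io.Copy(c, c) // echo everything back
		c.Close()
	}()
	node, _ := net.Listen("tcp", "127.0.0.1:0")
	go func() {
		client, _ := node.Accept()
		server, _ := net.Dial("tcp", srv.Addr().String())
		relay(client.(*net.TCPConn), server.(*net.TCPConn))
	}()

	conn, _ := net.Dial("tcp", node.Addr().String())
	conn.Write([]byte("ping"))
	conn.(*net.TCPConn).CloseWrite()
	reply, _ := io.ReadAll(conn)
	fmt.Printf("%s\n", reply) // ping
}
```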

Like a Client sidecar, the client node must reverse engineer the client's intent about the destination. However, it also serves multiple applications at once, so it must first determine who the client even is. This is critically important to get right, as a mistake could lead to attaching the wrong credentials to outgoing requests.
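A short sketch of that identification step; the podsByIP map and identity format are hypothetical:

```go
package main

import "fmt"

// podsByIP stands in for the node agent's view of which workload owns
// each local pod IP (learned from the kubelet or CNI, for example).
var podsByIP = map[string]string{
	"10.0.1.4": "spiffe://cluster.local/ns/ns/sa/frontend",
}

// identify resolves the source of an outgoing connection to a workload
// identity. Getting this wrong is worse than failing outright: attaching
// the wrong identity means sending another workload's credentials.
func identify(srcIP string) (string, error) {
	id, ok := podsByIP[srcIP]
	if !ok {
		return "", fmt.Errorf("unknown source %s; refusing to attach credentials", srcIP)
	}
	return id, nil
}

func main() {
	id, _ := identify("10.0.1.4")
	fmt.Println(id)
}
```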

The node has a unique property that neither the Client application nor the Client sidecar does: it runs in a different trust domain than user applications. This makes it possible to enforce egress policies, which is not possible in sidecars.

Client remote proxy

A Client remote proxy is very similar to a Client node.

Some differences:

  • Scaling is entirely decoupled from the node and cluster.
  • Client identification may still be required, but there may be less information available. While the node could hook directly into a veth to identify the traffic source, the remote proxy only sees an IP address. As such, an implementation here should probably rely on an identity on the wire (mTLS) rather than the IP address (see the sketch after this list). Most use cases of this do not have a per-client-pod identification requirement.
  • Unless clients explicitly address it, this deployment model needs to be coupled with one of the other client deployment modes to redirect traffic to it. This is recommended for most cases anyway, though, to add encryption.
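Here is what identifying the peer from the wire rather than the IP might look like. The SPIFFE URI SAN convention is how Istio and similar meshes encode identity in certificates, though the helper itself is hypothetical:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/url"
)

// peerIdentity pulls the caller's identity off the wire rather than from
// its source IP, which a remote proxy cannot reliably correlate to a
// workload. It assumes a SPIFFE URI SAN in the client certificate.
func peerIdentity(cs tls.ConnectionState) (string, error) {
	if len(cs.PeerCertificates) == 0 {
		return "", fmt.Errorf("no client certificate presented")
	}
	for _, uri := range cs.PeerCertificates[0].URIs {
		if uri.Scheme == "spiffe" {
			return uri.String(), nil
		}
	}
	return "", fmt.Errorf("no SPIFFE identity in client certificate")
}

func main() {
	// Fake a handshake result for demonstration.
	u, _ := url.Parse("spiffe://cluster.local/ns/ns/sa/frontend")
	cs := tls.ConnectionState{PeerCertificates: []*x509.Certificate{{URIs: []*url.URL{u}}}}
	id, _ := peerIdentity(cs)
	fmt.Println(id)
}
```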

Because it's a shared deployment, it can also be used to forward all egress traffic from a set of stable IPs. This is a very common egress NAT case.

Server modes

Server application

When a request arrives at a server application, it's a request intended for that specific workload. By definition, the request can no longer be addressed to a Service, and we generally cannot reliably tell whether it was originally sent to a Service or directly to the workload.

As such, all server actions are generally workload-oriented policies.

This includes things such as:

  • Terminating any encryption used
  • Applying authorization rules or rate limiting
  • Emitting telemetry
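A sketch of two of those actions -- authorization and telemetry -- as HTTP middleware. The X-Peer-Identity header is a stand-in for an identity a trusted layer already extracted from mTLS; the allowed map is a hypothetical policy:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// allowed stands in for the workload's authorization policy: which peer
// identities may call it.
var allowed = map[string]bool{
	"spiffe://cluster.local/ns/ns/sa/frontend": true,
}

// workloadPolicy wraps a handler with an authorization check and basic
// telemetry. A real implementation would read the peer identity from
// the TLS connection state rather than a header.
func workloadPolicy(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		if !allowed[r.Header.Get("X-Peer-Identity")] {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
		log.Printf("method=%s path=%s duration=%s", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	http.ListenAndServe(":8080", workloadPolicy(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})))
}
```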

Routing can be done at this phase, but it's fairly niche. For example, we could redirect traffic incoming on port 80 to port 90, or to a Unix Domain Socket, as sketched below.
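A minimal sketch of that second case: accept on a TCP port and hand each connection to the application over a Unix domain socket. The socket path is hypothetical:

```go
package main

import (
	"io"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			app, err := net.Dial("unix", "/var/run/app.sock")
			if err != nil {
				return
			}
			defer app.Close()
			go io.Copy(app, c) // request bytes -> application
			io.Copy(c, app)    // response bytes -> caller
		}(conn)
	}
}
```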

Load balancing is not done here -- it's already been done by the clients! It is plausible that the request could be forwarded on to a different backend, but I haven't seen this done in practice.

Server sidecar

Everything here is basically identical to Server application.

Server node

Most of Server application applies here.

Like a Client node, a server node runs in a different trust boundary than the application, so it can apply policies that are less likely to be bypassed by the application. However, bypassing your own ingress policies is an atypical attack vector anyways; usually attacks involve bypassing your own egress policies or someone else's ingress policies.

Server remote proxy

The server remote proxy, in my opinion, is the most unique and interesting deployment mode.

The other server deployment modes can only handle workload policies. A substantial portion of configuration is, or is desired to be, against services. This means we need to rely on pushing all of this configuration to clients, which has a few problems:

  • We cannot enforce that clients follow our policies; we can only recommend that they do. For most policies, we cannot actually verify they follow them, either.
  • Scalability is extremely challenging. In the worst case, all clients get configuration for how to communicate with all services, which is O(N^2).
  • There is a responsibility violation. The service producer configures the rules, but the client implements them. This means the client must trust the service producer to produce reasonable rules (for example, not a configuration that makes each request take 10s to process). Additionally, it's harder to understand, debug, and operate.

A server remote proxy flips this model, and allows the service producer to enforce service policies (and workload policies).

In essence, we are able to take the responsibilities of clients and servers, and sandwich them together into one component controlled by the service producer.

Note that the client still has some responsibility: it must actually send the traffic to this proxy. This can be done through various means (redirecting the request, changing DNS, etc.) and enforced through node/sidecar/application level policies.
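A sketch of the "sandwich" itself: one producer-controlled proxy that resolves the service and applies service policy (a canary routing decision here) alongside workload policy (an allowlist check standing in for real mTLS-based authorization) -- none of which had to be pushed to any client. The backend addresses and headers are made up:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// mustProxy builds a reverse proxy to one backend; addresses are made up.
func mustProxy(raw string) *httputil.ReverseProxy {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	stable := mustProxy("http://10.0.1.4:8080")
	canary := mustProxy("http://10.0.2.7:8080")

	http.ListenAndServe(":443", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Workload policy: authorize the caller. A real proxy would use
		// the mTLS peer identity, not a header.
		if r.Header.Get("X-Peer-Identity") == "" {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		// Service policy: a routing decision, enforced by the producer
		// rather than configured into every client.
		if r.Header.Get("X-Canary") == "true" {
			canary.ServeHTTP(w, r)
			return
		}
		stable.ServeHTTP(w, r)
	}))
}
```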