When looking at service mesh (or even general networking) architectures, the basic idea is to send network traffic through some component that handles various functionality. This could be authentication, authorization, encryption, observability, reliability, networking, etc. There are a few different classes of components that can do this:
Different types of proxy deployments:

- Native application enhancement. The application itself is compiled with the functionality. This could be something like gRPC (or, even more "meshy", gRPC with xDS), Finagle, Hystrix, etc. Even simply instrumenting your application with metrics and traces could be classified here (see the sketch after this list).
- "Sidecar", or running a proxy per application. This is probably the most common service mesh deployment pattern today, used by Istio, Linkerd, and more.
- Per-node proxy; like a sidecar, but the proxy is per-node rather than per-application. Each node contains multiple unique workloads, so the proxy is multi-tenant.
- Remote proxy. A completely standalone proxy deployment we send some traffic through. It could be correlated to one or many services, one or many workloads, etc. -- the correlation between proxies and other infrastructure components is flexible here.

Within each of these, there are two actors: a client and a server. This gives us eight points at which to insert functionality. Presumably, all eight will not be used at once -- but it's possible. If we are willing to blur the lines a bit, even a traditional sidecar-based service mesh uses six of these! The richest "service mesh" functionality may live in the sidecar, but the application itself has some functionality (even if it is not terribly rich), and the node does as well (again, this may not be terribly rich -- kube-proxy, for example, has very minimal functionality). And the same is mirrored on the client and server side.
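To make the "native application enhancement" option a bit more concrete, here is a minimal sketch (not from the original post) of a Go gRPC client using the xDS resolver: the mesh-like behavior (endpoint discovery, load balancing, routing) lives inside the application's own gRPC library rather than in any proxy. The service name `hello-service` and the plaintext credentials are placeholder assumptions for illustration.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	_ "google.golang.org/grpc/xds" // registers the "xds" resolver and balancers in the client
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// With the xds package imported, the gRPC client library itself talks to
	// the control plane (located via the bootstrap file referenced by the
	// GRPC_XDS_BOOTSTRAP environment variable) and handles routing and load
	// balancing -- no proxy sits in the data path.
	conn, err := grpc.DialContext(ctx, "xds:///hello-service", // hypothetical service name
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}
	defer conn.Close()
	// ... create service stubs on conn and make RPCs as usual.
}
```

The trade-off, as the rest of this section implies, is that the functionality is only as rich as what the application's library supports, and it must be adopted per-language and per-application.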
...