Installing Istio... how hard could it be? A simple istioctl install is all you need... right?

Sort of. However, due to a variety of past decisions, things can get a bit more complex under the hood. I will save the story of how things got into this state for another post, and just talk about how things are now.

The three installation methods

At a very high level, there are three installation methods:

  • Helm
  • Istioctl
  • The in-cluster operator

However, all three of these are inter-related, causing quite a bit of complexity.


Helm

Since Helm is the lowest level abstraction, we will start there. Istio offers a variety of Helm charts. These are split out into a few different charts, rather than one monolithic chart, to support more flexible installations (more details later).
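At the time of writing, the published charts include base (CRDs and other cluster-wide resources), istiod (the control plane), cni, ztunnel, and gateway. A minimal control plane install using the official chart repository looks something like this:

```shell
# Add the official Istio chart repository (one-time setup)
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

# Cluster-wide resources (CRDs and friends)
helm install istio-base istio/base -n istio-system --create-namespace

# The control plane itself
helm install istiod istio/istiod -n istio-system --wait
```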

Aside from that, they are mostly standard Helm charts (with the caveat of some tricks we employ). Configuration is done through --set <some setting> or --values my-config.yaml.

Advanced customization -- for things that Istio hasn't parameterized -- is a bit trickier, but can be done.

Installation is split between two methods:

  • helm install is a more managed Helm experience which does a variety of things outside the scope of this post. Read the docs!
  • helm template just renders plain-old Kubernetes YAML files that you can apply with kubectl apply. This is super flexible, since it breaks any dependency on Helm. However, you lose a lot of Helm functionality, such as waiting for resources to be deployed, pruning removed resources, etc. Notably (especially since Istio makes some use of this), you lose the ability to have charts behave differently based on the cluster they are running against.
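As a sketch, the helm template flow for the istiod chart (release and chart names as in a standard install) looks like:

```shell
# Render plain YAML locally, then apply it; Helm never tracks a release
helm template istiod istio/istiod -n istio-system > istiod.yaml
kubectl apply -f istiod.yaml
```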

One nice aspect of Helm is that it's generic and widespread. There is a ton of tooling around it, notably GitOps tools like ArgoCD and Flux.

Helm installs are an officially supported and recommended installation method.


Istioctl

istioctl install and istioctl manifest generate largely mirror helm install and helm template. The former offers a fully managed experience, while the latter simply spits out Kubernetes YAML files.

Again like Helm (not an accident!), configuration is done via --set <some setting> or --values my-config.yaml. Additionally, istioctl install uses the exact same Helm charts as its base (except Gateways; see below).

There are some key differences, though:

  • Istioctl can install all Istio components at once, rather than as multiple individual charts
  • Istioctl takes a different input: the IstioOperator. This offers both higher level APIs over the Helm configuration, as well as a passthrough directly to the same Helm values.

An example:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  # Higher level Operator configs
  revision: custom
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
  # Helm passthrough
  values:
    foo: bar

Istioctl installs are an officially supported and recommended installation method. However, for users already using Helm, especially with GitOps tooling, Helm is usually a better choice.

Istio In-Cluster Operator

The Istio In-Cluster Operator is yet another installation method, and the source of much confusion. The operator runs as a Deployment in the cluster.

When using the operator, you actually apply an IstioOperator resource to the cluster; the operator reads this and dynamically reconciles the state of the cluster. Internally, this builds on the same tooling as istioctl install (which, in turn, builds on the same Helm charts helm uses). It is a bit more complex in reality, but it is essentially just doing istioctl install in a loop.

One bit of confusion is that you need to actually install the operator itself! This can be done with istioctl operator init or with a Helm chart, which muddles things a bit further. It is also part of why I don't like operators.
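A minimal sketch of the flow (the IstioOperator filename here is a placeholder for your own configuration):

```shell
# Step 1: install the operator controller itself
istioctl operator init

# Step 2: hand it an IstioOperator resource to reconcile continuously
kubectl apply -f my-istio-operator.yaml
```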

Additionally, the IstioOperator API itself is often a source of confusion: it is also the input to istioctl install, even though there is no "operator" involved there.

The operator is formally not recommended and informally really not recommended. To be explicit, this applies to the usage of the in-cluster operator deployment only -- not the IstioOperator API shared with istioctl install.

Confused? Me too. It helps to pretend the name of IstioOperator is... anything else.

Mesh Config and Proxy Config

A large chunk of Istio functionality is configuring "Mesh Config" and "Proxy Config". These, too, can be somewhat confusing.

Mesh Config is one of the core configuration mechanisms for Istiod. This is exposed through --set meshConfig.<...> in both IstioOperator based installs and Helm installs. Ultimately, these configure the Istiod chart to emit a ConfigMap named istio.

Here is a snippet of an example install with access logs enabled:

$ kubectl get cm istio -oyaml -n istio-system
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |-
    accessLogFile: /dev/stdout

If the ConfigMap changes, Istiod will dynamically pick up the changes. This means you can kubectl edit the ConfigMap directly, but you probably shouldn't -- the next time you run an install, your changes would be reverted.

While Mesh Config is often referred to as MeshConfig, which looks like a Kubernetes kind name, there is no MeshConfig Kubernetes resource.

Proxy Config

Proxy Config is similar to Mesh Config, but holds per-proxy (sidecars, gateways, and waypoints) configuration. This includes things like environment variables and other tuning configuration.

This can be set in a few ways.

  1. As part of Mesh Config, there is a defaultConfig field. For example, setting some field like this in Helm:

        meshConfig:
          defaultConfig:
            # Everything here is a part of Proxy Config!
            discoveryAddress: istiod:15012

    As the name implies, this is just the default config to apply to all proxies. However, I mentioned Proxy Config is per-proxy...

  2. As part of an individual Pod, with an annotation on the Pod:

    metadata:
      annotations:
        proxy.istio.io/config: |
          # Everything here is a part of Proxy Config!
          discoveryAddress: istiod:15012

    Values set here will merge with the specified defaults. The schema here and in Mesh Config's defaultConfig is identical.

  3. Additionally, there is a ProxyConfig custom resource.

    apiVersion: networking.istio.io/v1beta1
    kind: ProxyConfig
    metadata:
      name: per-workload-proxyconfig
    spec:
      selector:
        matchLabels:
          app: ratings
      image:
        imageType: debug

    Unlike the previous two, this is not the exact same schema. The CRD has a limited subset of the full Proxy Config. This is primarily because the CRD is much newer, and was kept minimal to avoid carrying forward tech debt.

Install Overview

An overview of Istio install method relationships

Revisions, Tags, Profiles, Compatibility Versions, and Unicorns

If at this point you are screaming "I just want to install Istio, how many concepts do I need to understand!", feel free to stop here. However, we are just getting started.


Revisions

Istio revisions are a mechanism to allow multiple installations of Istio side-by-side. This is sometimes also referred to as "canary upgrades", which is one common use case for the feature.

In other applications, you might simply install the application twice (with a different name or namespace). However, Istio has a lot of interactions between different components, as well as a variety of shared resources (some cluster-scoped). Revisions are not entirely isolated, either - they (usually) use the same mTLS root of trust, allowing multiple revisions to communicate.

Essentially, any per-revision configuration gets the -<revision name> suffix added to its name.
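For example, with a revision named canary (name illustrative):

```shell
# Install a control plane under the revision "canary"
istioctl install --set revision=canary

# The per-revision resources pick up the suffix: expect to see an
# "istiod-canary" Deployment alongside any un-revisioned "istiod"
kubectl get deployments -n istio-system
```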

Things can be connected to revisions via label:

  • Namespaces or Pods can be labeled with istio.io/rev=<revision-name> to specify which revision should handle their injection.
  • Istio custom resources typically apply to all revisions, but can be scoped to a single one if needed.
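For example, pointing a namespace (name illustrative) at the canary revision's injector:

```shell
kubectl label namespace my-app istio.io/rev=canary
```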

Below shows what this looks like:

An overview of Istio revisions

One thing you will note is that webhooks exist in both the shared and per-revision state. This is intentional: the shared ones handle anything that doesn't specify a revision, while the per-revision ones handle things explicitly connected to a revision (by label). In order to pick which Istiod serves these default functions, one revision must be selected as the default with a "tag".


Tags

Tags act as an indirection to a revision. They are very similar to a Docker image tag. For instance, you can have a "stable" tag pointing to whatever revision the infrastructure operators deem "stable" at that time, and another "unstable" tag pointing to the latest-and-greatest version.

When an upgrade is desired, the infrastructure admins simply move the tag. Without tags, each application owner would need to change the revision in all of their resources, which can be a major hassle for many organizations.
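istioctl has built-in support for managing tags; a sketch, with illustrative revision and namespace names:

```shell
# Create (or move) the "stable" tag so it points at the 1-22-1 revision
istioctl tag set stable --revision 1-22-1

# Applications reference the tag, not the underlying revision
kubectl label namespace my-app istio.io/rev=stable
```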

Additionally, the special default tag is used to declare which revision handles the default, un-revisioned workloads and resources.

Compatibility Versions

Compatibility Versions are a newer concept in Istio. The idea is to decouple the actual version of Istio (as in, the literal binary that is running) from the behavior of that version.

For instance, I may deploy to the latest version (we will use Istio 1.22), but set compatibilityVersion=1.21. This would give me all the bug fixes, performance improvements, CVE fixes, etc that come with Istio 1.22 -- including the support of community/vendors -- but turn off any behavioral changes that may have been made in that version.

This, hopefully, removes the need to make a tradeoff between adapting to potentially breaking changes and getting on the latest version; upgrading to the latest version is always safe, and adaptation can be done later. It does actually have to be done eventually, though, as compatibility versions will only support a few previous releases of behavior.
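In install terms, a compatibility version is just another setting; using the versions from the example above:

```shell
# Run the Istio 1.22 binaries, but keep 1.21's behavioral defaults
istioctl install --set compatibilityVersion=1.21
```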


Profiles

Profiles are a very simple abstraction over IstioOperator and Helm values.yaml. Essentially, they are just a pre-configured set of options.

So, for instance, if I want to deploy Istio tuned for OpenShift, I can just run istioctl install --set profile=openshift.

Note that profiles are literally just IstioOperator/values.yaml files built into the install methods. Both istioctl and helm already support setting --values multiple times, so the following are equivalent:

  • istioctl install --set profile=openshift -f my-values.yaml
  • istioctl install -f openshift.yaml -f my-values.yaml

Where openshift.yaml could be your own custom configuration or the same one Istio embeds.
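If you want to see exactly what a profile expands to, istioctl can print it:

```shell
# Dump the full configuration behind the "openshift" profile
istioctl profile dump openshift
```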


Gateways

So far, I haven't talked much about Gateways. Unfortunately, I was not saving the simplest for last. Gateways, somehow, have even more ways to install than Istio core!

Before we get into how they can be deployed, it's important to understand how they actually work. Due to defaults and documentation, users often think Istio has two gateways:

  • istio-ingressgateway serving ingress traffic from istio-system
  • istio-egressgateway serving egress traffic from istio-system

This isn't really how Istio works; it is just a common convention for how to use Istio.

Istio does not have a concept of ingress and egress gateways. There is only "gateway". Whether it serves ingress traffic or egress traffic is a property of which traffic you program it to handle. You could even use one gateway for both (usually ingress has a public-facing address while egress does not, though, so it's not often the right approach). You may also want more than just two gateways!

Istioctl install

First up is installation in istioctl. Out of the box, a gateway is installed: istio-ingressgateway in istio-system. Others can be enabled explicitly; I showed one above in the Istioctl section.

While Istioctl can deploy arbitrary gateways, it's not very well suited to do so. The values configuration is global, so you cannot really have per-gateway settings. Additionally, if you declare everything in one configuration, the gateways cannot be upgraded independently. You can have multiple distinct istioctl install operations, but you need to be careful that each install doesn't accidentally attempt to own the same resources.

This install is based on the Helm charts at manifests/charts/gateways/istio-{ingress,egress}. Note these are not the published Helm charts; more on that next.

Helm install

Unlike the rest of Istioctl vs Helm, which are basically identical aside from a higher-level API, the published Helm chart for gateways is completely different from what istioctl uses.

This was an attempt to correct some of the past mistakes in the old charts: it avoids hardcoding assumptions around names and "ingress" vs "egress".

Additionally, it switches to a different runtime mechanism: the new chart creates only a minimal Deployment, with the rest of the configuration populated at runtime via a mutating webhook. This mirrors how sidecars work. This helps keep configuration of gateways sane -- rather than repeating all configuration between the gateway install and Istiod install (things like repeating mesh config, among others), it is all configured on Istiod and the gateway is dynamically configured.
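For example, the published gateway chart is generic over names and namespaces (both illustrative here):

```shell
helm install my-gateway istio/gateway -n istio-ingress --create-namespace
```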

Plain old YAMLs

Due to the runtime configuration mechanics discussed above, the installation is almost trivial. This actually makes it feasible to just hand-write a Deployment for the gateway. The Istio docs provide an example.
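A minimal sketch of such a hand-written Deployment, with illustrative names (see the official docs for a complete, current version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  namespace: istio-ingress
spec:
  selector:
    matchLabels:
      istio: my-gateway
  template:
    metadata:
      annotations:
        # Use the gateway injection template rather than the default sidecar one
        inject.istio.io/templates: gateway
      labels:
        istio: my-gateway
        # Opt in to injection; the webhook fills in the full proxy configuration
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: istio-proxy
        # Placeholder; the injection webhook swaps in the real proxy image
        image: auto
```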

Gateway API

The new Kubernetes Gateway API, not to be confused with the Istio Gateway resource of the same name, gives us yet another install mechanism.

Users can create a Gateway object, and Istio will automatically provision the underlying infrastructure (Deployment/Service/etc).

For instance, this config both configures and deploys a gateway:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-ingress
spec:
  gatewayClassName: istio
  listeners:
  - name: default
    hostname: "*"
    port: 80
    protocol: HTTP

With the other install mechanisms, deploying a gateway (with one of the methods above) and configuring it (with Istio's Gateway type) are separate.

"Isn't this just a operator, which you just bashed on?!". Yes.. but it is better.

Please just tell me what to use

For the base install:

  • If you are using Helm or tooling that integrates Helm, Helm is a great choice.
  • If istioctl install appeals to you, that is a fine option as well.

For gateways:

  • If you are willing to use newer features that are a bit more bleeding-edge and less feature-rich, use the Gateway API.
  • Otherwise, use the same method as the base install.

Use revisions and tags if you are willing to take on a bit of complexity in exchange for safer upgrades.

Use compatibility versions if you want to upgrade but are blocked by some breaking change.

The future and the past

A followup post will (hopefully) discuss why we got to where we are - but that is for another day.

More interesting is what is coming. Opinions here are my own view of what should and might happen.

  • Istio in-cluster operator will eventually be officially deprecated and removed in the 1-2 year timeframe
  • istioctl install will retain a similar experience but will (continue to be) rewritten internally to be closer to helm install. While it currently uses some helm libraries, there is still a large amount of divergence. A north star would be for istioctl to eventually literally call helm install (although probably the library equivalent).
  • Gateway installation will converge on Gateway API
  • (Maybe) Helm installation can be done in a single step
  • (Seems nice, but no clear path to this happening) istioctl install's API will be refreshed to not have a confusing name or legacy semantics ("pilot" is so 2020)