Go has very few runtime "knobs" to configure, but the ones it does have are fairly important:
GOMAXPROCS
configures the maximum number of OS threads that can execute Go code simultaneously. This defaults to the number of CPUs available.
GOMEMLIMIT
(since Go 1.19) hints at how much memory is available, which can help tune the GC.
Both of these have reasonable defaults... unless you set limits in containers.
For example, a pod with a limit of 1 CPU running on a 64-core machine will default to GOMAXPROCS=64 - not good!
There is considerable evidence that this mismatch causes major performance issues, such as CFS throttling when more threads are runnable than the CPU quota allows.
Configuration in Kubernetes
Most usages I have seen are either manually configuring this (tedious and error prone) or using automaxprocs (good, but relies on the application using it, and doesn't support GOMEMLIMIT).
Fortunately, there is a better way. Kubernetes allows us to dynamically set environment variables from resourceFieldRef, allowing us to do this:
env:
- name: GOMEMLIMIT
  valueFrom:
    resourceFieldRef:
      resource: limits.memory
- name: GOMAXPROCS
  valueFrom:
    resourceFieldRef:
      resource: limits.cpu
And things work perfectly.
A full example can be tested like so:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: tester
    image: golang:1.20
    args:
    - bash
    - -c
    - |
      cat <<EOF > /tmp/code.go
      package main

      import (
        "fmt"
        "runtime"
        "runtime/debug"
      )

      func main() {
        fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
        fmt.Printf("GOMEMLIMIT: %d\n", debug.SetMemoryLimit(-1))
      }
      EOF
      go run /tmp/code.go
    resources:
      limits:
        cpu: 1500m
        memory: 1500M
    env:
    - name: GOMEMLIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: GOMAXPROCS
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
Which results in:
GOMAXPROCS: 2
GOMEMLIMIT: 1500000000
This correctly handles rounding the CPU limit up (1500m becomes 2), and uses the correct units (bytes) for GOMEMLIMIT. Go also handles these being set to 0 the same as unset, so if we do not have limits things work as expected as well.