Caching is one of the fastest ways to reduce backend load and improve response latency in Kubernetes.
With NGINX Ingress Controller (NIC), you can define caching behavior as a first-class Policy resource and attach it to a VirtualServer or VirtualServerRoute. That keeps caching configuration explicit, reusable, and versioned with the rest of your traffic policy.
In this guide, we focus on:
- How the cache policy works in NGINX Ingress Controller.
- Where to attach it in `VirtualServer`.
- Why StatefulSet is important for persistent cache use cases.
Why Use a Cache Policy Resource?
Putting cache settings in a Policy resource gives platform teams a cleaner separation of concerns:
- Application routing stays in `VirtualServer`.
- Caching behavior lives in a dedicated, reusable policy.
- Updates can be rolled out without embedding raw snippets in every route.
This model also makes reviews easier because cache behavior is visible in one place.
How the Cache Policy Works in NGINX Ingress Controller
At a high level:
- Create a `Policy` with a `spec.cache` block.
- Reference that policy from `VirtualServer.spec.policies` (server-wide) or from route-level `policies`.
- NGINX Ingress Controller renders the corresponding NGINX cache directives and applies them during config reload.
Example Cache Policy:

```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: cache-policy
spec:
  cache:
    cacheZoneName: "testcache"               # Required
    cacheZoneSize: "15m"                     # Required
    allowedCodes: ["any"]                    # Optional: ["any"] or specific codes like [200, 301]; "any" cannot be combined with specific codes
    allowedMethods: ["GET", "HEAD", "POST"]  # Optional
    overrideUpstreamCache: true              # Optional, default false: ignore upstream cache headers (Cache-Control, Expires, Set-Cookie, Vary, X-Accel-Expires)
```
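As a rough sketch, the fields above map onto familiar NGINX cache directives (illustrative only; the actual rendered configuration is controller-internal, and the cache path and duration shown are assumptions):

```nginx
# Illustrative mapping only -- not the controller's literal output.
proxy_cache_path /var/cache/nginx keys_zone=testcache:15m;  # cacheZoneName + cacheZoneSize

server {
    location / {
        proxy_cache testcache;               # use the zone above
        proxy_cache_methods GET HEAD POST;   # allowedMethods
        proxy_cache_valid any 10m;           # allowedCodes: ["any"] (10m duration assumed)
        # overrideUpstreamCache: true -> ignore upstream cache headers:
        proxy_ignore_headers Cache-Control Expires Set-Cookie Vary X-Accel-Expires;
        proxy_pass http://upstream-backend;  # placeholder upstream
    }
}
```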
Example of a VirtualServer referencing the cache policy above:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  policies:
    - name: cache-policy
  host: cafe.example.com
  tls:
    secret: tls-secret
  upstreams:
    - name: tea
      service: tea-svc
      port: 80
    - name: coffee
      service: coffee-svc
      port: 80
  routes:
    - path: /tea
      action:
        pass: tea
    - path: /coffee
      action:
        pass: coffee
```
Important behavior to remember:
- Cache policy is designed for `VirtualServer` and `VirtualServerRoute` flows.
- If multiple cache policies are referenced for the same effective location, only one is applied (the first one wins).
- Route-level policies override same-type policies defined at `VirtualServer.spec`.
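For example, a route-level cache policy takes precedence over the server-wide one for that route. A minimal sketch (the `tea-cache-policy` name is hypothetical):

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  policies:
    - name: cache-policy          # server-wide default for all routes
  upstreams:
    - name: tea
      service: tea-svc
      port: 80
  routes:
    - path: /tea
      policies:
        - name: tea-cache-policy  # hypothetical; overrides cache-policy for /tea only
      action:
        pass: tea
```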
Key Fields Worth Tuning First
When you define spec.cache, start with these fields:
- `cacheZoneName` and `cacheZoneSize`: memory zone identity and capacity.
- `allowedMethods`: which methods are cacheable.
- `allowedCodes` plus `time`: which status codes are cached and for how long.
- `cacheKey`: request identity for cache lookup.
- `overrideUpstreamCache`: whether to override (ignore) upstream cache headers.
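A sketch of a policy tuned with these fields (the `time` and `cacheKey` value formats are assumptions, not verified syntax):

```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: tuned-cache-policy        # hypothetical name
spec:
  cache:
    cacheZoneName: "tunedcache"
    cacheZoneSize: "64m"
    allowedMethods: ["GET", "HEAD"]
    allowedCodes: [200, 301]
    time: "10m"                   # assumed: how long allowed codes stay cached
    cacheKey: "$scheme$proxy_host$request_uri"  # assumed: NGINX-style cache key variables
```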
Then tune advanced behavior only as needed:
- `cacheUseStale`, `cacheBackgroundUpdate`, and `cacheRevalidate` for resilience.
- `conditions.noCache` and `conditions.bypass` for selective caching.
- `cachePurgeAllow` (NGINX Plus) for controlled invalidation.
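A hedged sketch of the advanced fields (the value formats below are assumptions based on the underlying NGINX directives, not verified policy syntax):

```yaml
spec:
  cache:
    cacheZoneName: "testcache"
    cacheZoneSize: "15m"
    cacheUseStale: ["error", "timeout", "updating"]  # assumed: serve stale responses on upstream trouble
    cacheBackgroundUpdate: true                      # refresh expired entries in the background
    cacheRevalidate: true                            # revalidate expired entries with conditional requests
    conditions:
      noCache: "$cookie_session"                     # hypothetical: don't store responses for active sessions
      bypass: "$arg_nocache"                         # hypothetical: skip the cache when ?nocache= is set
    cachePurgeAllow: ["10.0.0.0/8"]                  # NGINX Plus: allow purge requests from this range
```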
Caching at Scale: Use StatefulSet for Persistent Cache Workloads
NGINX Ingress Controller supports running the controller as a StatefulSet, which is the better fit for disk-backed cache use cases. Each replica gets stable storage through a PersistentVolume, which improves cache warm-up behavior after restarts.
For Helm-based deployments, NGINX Ingress Controller explicitly supports this model with:
- `controller.kind: statefulset`
- StatefulSet-specific `nginxCachePVC` configuration under `controller.statefulset`
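A minimal Helm values sketch (the `nginxCachePVC` subfields shown are assumptions; check the chart's values reference for the exact keys):

```yaml
# values.yaml
controller:
  kind: statefulset     # run the controller as a StatefulSet instead of a Deployment
  statefulset:
    nginxCachePVC:      # PVC backing the on-disk cache; subfield names assumed
      name: nginx-cache
      storageClass: standard
      size: 10Gi
```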
You can find a complete working example on GitHub and more documentation in our docs.