Hey folks! Today we’re going to dive into a topic that might already be part of your infrastructure day-to-day, but there’s always more to explore: Nginx with Kubernetes. This duo is more than a simple ingress configuration — it offers potential for fine-grained traffic control, scalability, and optimizations that ensure a robust, reliable architecture.
Nginx as an Ingress Controller on Kubernetes
When we talk about Kubernetes, we know it’s phenomenal for orchestrating and managing microservices, but incoming cluster traffic is its own chapter. The Nginx Ingress Controller is in charge of that entry point. It manages how external data arrives and gets routed to internal services, letting us configure rules for HTTP and HTTPS request routing, load balancing, and even header manipulation.
In Kubernetes, you can deploy the Nginx Ingress Controller as a Deployment or even as a DaemonSet, depending on your needs. It runs as a pod inside the cluster and reacts to the Ingress resource that we define in YAML. This approach brings two upsides: fine-grained control over traffic and the ability to route users to the right service based on very specific rules.
Configuration and customization with annotations
One of the great advantages of Nginx on Kubernetes is customization flexibility. Through annotations in the Ingress manifest, you can adjust the behavior of each service without modifying the global Nginx configuration. Want to cap the upload size? Add a specific annotation. Need to tune the response timeout? Another annotation handles it. Here’s an example:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: meu-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
spec:
  rules:
  - host: meusite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: meu-servico
            port:
              number: 80
```

Ingress Nginx annotations: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md
These annotations are small lines of code, but they directly impact how Nginx responds to requests. They can also be used to tighten security (configuring SSL with Let's Encrypt, for example) and to enable URL rewrites when needed.
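To make the Let's Encrypt idea concrete, here is a minimal sketch of a TLS-enabled Ingress. It assumes cert-manager is installed in the cluster and that a ClusterIssuer named `letsencrypt-prod` already exists — both are assumptions, and the names `meu-ingress-tls` and `meusite-tls` are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: meu-ingress-tls
  annotations:
    # Assumes cert-manager is installed; it watches this annotation
    # and provisions a certificate from the referenced ClusterIssuer.
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # Redirect plain HTTP to HTTPS once the certificate is in place.
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - meusite.com
    secretName: meusite-tls  # cert-manager stores the issued certificate here
  rules:
  - host: meusite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: meu-servico
            port:
              number: 80
```

With this in place, certificate issuance and renewal happen automatically; the Ingress itself never embeds key material, only a reference to the Secret.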
ConfigMap and global configuration
In production environments, where the Nginx Ingress Controller has to support high request volumes, the ConfigMap comes into play. It enables more robust and complex tuning that affects all of Nginx’s behavior. Common settings include the maximum number of simultaneous connections and the buffer size.
Here’s an example of a ConfigMap you can apply to Nginx:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: 64m
  proxy-connect-timeout: "30"
  worker-processes: "4"
```

Ingress Nginx configmap keys: https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/configmap-resource/#summary-of-configmap-keys
These parameters are especially useful in high-demand scenarios, where small optimizations can significantly cut latency and improve the user experience.
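One detail that is easy to miss: the controller only reads this ConfigMap if it is explicitly pointed at it. In the community ingress-nginx deployment that wiring is done with the `--configmap` argument on the controller container. A sketch of the relevant excerpt, using the names from the ConfigMap above (the image tag shown is illustrative; use whatever version your deployment pins):

```yaml
# Excerpt from the ingress-nginx controller Deployment spec
containers:
- name: controller
  image: registry.k8s.io/ingress-nginx/controller:v1.10.0  # illustrative tag
  args:
  - /nginx-ingress-controller
  # Must match the namespace/name of the ConfigMap defined above
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
```

If the names don't match, the controller silently falls back to its defaults, which is a common source of "my ConfigMap changes do nothing" confusion.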

Monitoring and metrics: Prometheus and Grafana
Another vital aspect of an efficient Nginx-on-Kubernetes deployment is monitoring. Integrating Nginx with Prometheus and Grafana not only helps you understand how traffic is behaving, but also lets you anticipate bottlenecks and adjust configurations as needed.
The Nginx Ingress Controller exposes detailed metrics — from service response times to requests per second — letting you tune autoscaling based on real metrics and prioritize services during failures or traffic spikes.
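As a sketch of how scraping can be wired up without the Prometheus Operator, the de facto `prometheus.io/*` pod annotations can be placed on the controller's pod template. This assumes your Prometheus instance is configured to discover pods via these annotations (not every setup is):

```yaml
# Annotations on the controller's pod template
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"  # default metrics port of ingress-nginx
```

If you run the Prometheus Operator instead, a ServiceMonitor targeting the controller's metrics Service achieves the same result.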
Scalability and high availability
Finally, running Nginx as an Ingress Controller on Kubernetes means scalability is native. With the Horizontal Pod Autoscaler (HPA), you can define policies so Nginx increases or decreases the number of pods based on demand. For critical environments, it’s worth considering multiple ingress controllers for greater fault tolerance, along with external Load Balancers on public clouds.
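As a minimal sketch of that HPA policy, assuming the controller runs as a Deployment named `ingress-nginx-controller` (the name in the community manifests; adjust to yours) and that the metrics server is available in the cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingress-nginx-hpa
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller  # assumed name; match your deployment
  minReplicas: 2    # keep at least two replicas for availability
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out when average CPU exceeds 70%
```

CPU is the simplest signal to start with; once Prometheus metrics are flowing, a custom metric such as requests per second tends to track ingress load more faithfully.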
Combining Nginx with Kubernetes lets you get the most out of both tools’ control and flexibility, delivering a custom, highly configurable, and secure network layer. Whether you’re tuning response times, optimizing security settings, or scaling based on metrics, Nginx adapts to Kubernetes and makes traffic management more efficient.
If you already use these technologies, take the opportunity to try out some of these settings and see how they can positively impact your environment. I hope this post brought practical insights for your implementation.
And you know the drill — if you liked it, make sure to follow us.
See you next time!
References:
NGINX Documentation. Available at: https://nginx.org/en/docs/.
Kubernetes Documentation. Available at: https://kubernetes.io/pt-br/docs/home/.