Let’s continue our journey into Kubernetes networking and the F5 NGINX Ingress Controller.
In a previous post, we detailed some of the reasons for migrating from the community ingress-nginx controller to the robust and actively developed open source F5 NGINX Ingress Controller. Migrating a core infrastructure component like an Ingress controller requires careful planning and execution to ensure a pain-free transition. In this blog, we will suggest a phased roadmap to guide your migration and help you build a stronger foundation for your Kubernetes networking.
You can also check out the NGINX documentation on Kubernetes Ingress migration.

Phase 1: Pre-Migration Planning
Moving core infrastructure components involves architectural implications, security, and ensuring zero downtime. A plan is crucial to navigating these complexities.
Document the Current Setup: The initial and most critical step is to document your existing ingress-nginx setup. This isn’t just about listing your Ingress objects; it’s about noting how they are configured.
- Ingress Objects: Inventory all Ingress resources, noting whether they use ingressClassName: nginx or the older kubernetes.io/ingress.class: "nginx" annotation.
- Annotations: List every ingress-nginx-specific annotation in use, such as nginx.ingress.kubernetes.io/rewrite-target, nginx.ingress.kubernetes.io/canary-by-header, or nginx.ingress.kubernetes.io/auth-url.
Also note any custom server-snippet or location-snippet directives.
- Global Configurations: Document any global settings applied via a ConfigMap, including how TCP/UDP services are exposed.
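As a concrete illustration, here is the kind of resource this inventory should flag — an Ingress that still uses the deprecated class annotation plus ingress-nginx-specific annotations (all names and hosts below are hypothetical):

```yaml
# Illustrative example of what to capture during the inventory:
# a legacy Ingress using the deprecated class annotation instead of
# spec.ingressClassName, plus an ingress-nginx-specific annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-app                                   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"             # deprecated form to note
    nginx.ingress.kubernetes.io/rewrite-target: /    # controller-specific; needs conversion
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
```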
The Annotation Problem: This inventory is essential because ingress-nginx relies heavily on annotations for extending its capabilities. While annotations offer flexibility, they are at best unstructured key-value strings. So, while simple for basic use cases, this approach leads to “annotation overload” in more complex setups. The ease of adding annotations can accumulate into significant long-term technical debt, turning what began as a simple configuration into one that is complex, manual, and error-prone to change. Therefore, capturing existing annotations during planning is crucial.
The migration process requires converting these ingress-nginx annotations to either the NGINX Ingress Controller’s native CRDs or its own set of annotations. This conversion is not always a straightforward one-to-one mapping, particularly for advanced features. Support for additional ingress-nginx annotations may arrive in the NGINX project over time, but plan your migration around what is supported today.
Define Goals and Success Criteria: Beyond the technical inventory, define your goals. What do you aim to achieve?
- Enhanced Security: Do you need a Web Application Firewall (WAF), mTLS, or robust authentication?
- Improved Performance: Are you looking to improve latency or handle higher traffic volumes?
- Simplified Management: Do you want a more structured, auditable configuration?
- Commercial Support: Is 24/7 support and a stable product a business requirement?
Define your success metrics, such as zero downtime, specific performance improvements (e.g., a 15% reduction in p99 latency), or successful WAF policy enforcement.
Set Up a Parallel Environment: The safest way to migrate is to deploy the new NGINX Ingress Controller alongside your existing ingress-nginx controller. This creates a safe testing ground, allowing you to thoroughly validate the new setup without impact on production traffic. You can achieve this by deploying the new controller with a distinct ingressClassName (e.g., nginx2) and potentially in a separate Kubernetes namespace.
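A minimal sketch of Helm values for such a parallel deployment might look like the following. The key names are taken from the nginx-ingress Helm chart and may differ between chart versions, so check your chart’s values reference before applying:

```yaml
# values-parallel.yaml — sketch for running NGINX Ingress Controller
# alongside an existing ingress-nginx deployment. Key names may vary
# by chart version; verify against your chart's documented values.
controller:
  kind: deployment
  ingressClass:
    name: nginx2               # distinct class so the two controllers don't clash
    setAsDefaultIngress: false # leave ingress-nginx as the default for now
  service:
    type: LoadBalancer         # gets its own external IP for parallel testing
```

You would then install it into its own namespace (for example, `helm install nginx-ingress -n nginx-ingress --create-namespace -f values-parallel.yaml …` with the chart source recommended in the NGINX installation docs), leaving the existing controller untouched.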
Phase 2: Installation & Configuration
With a comprehensive plan in hand, the next step is deploying and configuring the controller.
Installation and CRD-Based Configuration: The NGINX Ingress Controller offers flexible installation options, including its official Helm chart. Ensure version consistency across the controller image, installation manifests, and documentation to prevent compatibility issues.
The new controller supports standard Kubernetes Ingress resources, but its power lies in its CRDs. This represents a significant difference in configuration model compared to ingress-nginx.
| Capability | Community ingress-nginx | F5 NGINX Ingress Controller | Comparison Details |
|---|---|---|---|
| Configuration Model | Relies heavily on annotations (nginx.ingress.kubernetes.io/rewrite-target). | Leverages Custom Resource Definitions (CRDs) like VirtualServer and Policy. | CRDs provide a structured, type-safe, and Kubernetes-native approach to configuration. |
| Advanced Routing | Uses annotations for features like canary deployments (nginx.ingress.kubernetes.io/canary-by-header). | Uses VirtualServer and VirtualServerRoute CRDs with native splits fields for weighted or header-based routing. | The CRD approach integrates advanced routing directly into the API, making it more declarative and GitOps-friendly. |
| Security Features | Limited by annotations; requires manual snippets for features like rate limiting. | Uses a dedicated Policy CRD for security features like WAF (with NGINX App Protect), mTLS, JWT/OIDC authentication, and fine-grained rate limiting. | The NGINX Ingress Controller offers a comprehensive, integrated security suite through the Policy CRD. |
| TCP/UDP Support | Managed via a ConfigMap, which can be less intuitive. | Uses a dedicated TransportServer CRD for TCP/UDP load balancing and TLS passthrough. | TransportServer offers a more structured and declarative way to manage non-HTTP traffic, aligning with the Kubernetes API model. |
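For example, a TCP service that ingress-nginx would expose through its tcp-services ConfigMap can instead be described declaratively with a TransportServer. This is only a sketch with hypothetical names: the referenced listener must also be declared in a GlobalConfiguration resource, and the apiVersion shown here (v1alpha1) may differ in newer controller releases:

```yaml
# Sketch: exposing a TCP service via the TransportServer CRD instead of
# a tcp-services ConfigMap. Names are illustrative; the listener must be
# declared separately in a GlobalConfiguration resource.
apiVersion: k8s.nginx.org/v1alpha1   # may be promoted in newer releases
kind: TransportServer
metadata:
  name: postgres-ts
spec:
  listener:
    name: postgres-tcp     # listener defined in GlobalConfiguration
    protocol: TCP
  upstreams:
    - name: postgres
      service: postgres-svc
      port: 5432
  action:
    pass: postgres         # forward matched TCP connections to the upstream
```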
Converting Annotations: A critical step in this phase is converting your existing ingress-nginx annotations to their equivalents. Many common annotations have direct mappings, but for others, you might need to use NGINX snippets to replicate the desired behavior. The NGINX documentation can help guide you through this conversion process.
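As one example of such a conversion, the common rewrite-target annotation maps to the rewritePath field of a VirtualServer proxy action. The host, service, and resource names below are illustrative:

```yaml
# Sketch: converting nginx.ingress.kubernetes.io/rewrite-target: /
# into a VirtualServer route. Names are hypothetical.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: api-vs
spec:
  host: app.example.com
  upstreams:
    - name: api
      service: api-svc
      port: 80
  routes:
    - path: /api
      action:
        proxy:
          upstream: api
          rewritePath: /   # replaces the rewrite-target annotation
```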
Phase 3: Testing & Validation
This is where your parallel deployment strategy pays off!
Functional and Integration Testing:
- Verification: Confirm that all migrated Ingress rules, routes, and services function as intended. Every application exposed through the new controller should be tested for correct routing, proper SSL termination, and backend connectivity.
- Security Testing: If you are using advanced security features like WAF or mTLS via the Policy CRD, test these to ensure they are correctly enforced and do not cause unexpected behaviors.
Performance Benchmarking: Don’t just assume the new controller is better; prove it.
- Load Testing: Conduct performance benchmarking and load testing against desired traffic volumes and latency requirements. The F5 NGINX Ingress Controller is recognized for its high performance, but you should validate this in your environment.
- Connection Stability: Past experiences with Ingress changes have revealed issues such as connection drops and terminated long-lived connections. Include long-duration tests to verify that the new controller keeps those connections stable.
Observability:
- Monitoring: Implement robust monitoring and alerting systems to detect performance degradations. The new controller’s metrics and logging capabilities (with Prometheus and Grafana, for example) provide crucial insights into its health and performance in real-time.
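As a sketch, the controller’s Prometheus metrics endpoint can typically be enabled through Helm values like the following (key names are from the nginx-ingress Helm chart and may vary by chart version):

```yaml
# Sketch: enabling Prometheus metrics for the NGINX Ingress Controller
# via Helm values. Verify key names against your chart version.
prometheus:
  create: true   # expose a metrics endpoint for Prometheus to scrape
  port: 9113     # conventional metrics port for the controller
controller:
  enableLatencyMetrics: true  # adds per-upstream latency histograms
```

With this in place, you can compare latency and error-rate dashboards for both controllers side by side during the migration.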
- Comparison: Compare metrics (e.g., latency, error rates, resource utilization) from both controllers during operation to identify any discrepancies.
Phase 4: Traffic Cutover
The core principle of this phase is to run both controllers in parallel and gradually shift traffic. Common strategies include:
| Strategy | Description | Advantages | Disadvantages |
|---|---|---|---|
| DNS-based | Lower the DNS TTL and update A records to point to the new controller’s IP. | Simple; requires minimal tooling. | High risk due to unpredictable DNS propagation delays; users might experience stale DNS caches and downtime. |
| Blue/Green | Maintain two identical production environments (blue for old, green for new). Switch traffic instantly or gradually at the load balancer or DNS level. | Provides a full environment for testing; fast switch. | Resource-intensive, as it requires two full environments; can be a “big bang” switch. |
| Canary Release | Gradually redirect a small percentage of live traffic to the new controller while monitoring performance. | Low risk; allows “testing in production” with minimal blast radius. | Can be complex to set up manually; requires robust monitoring to be effective. |
Use NGINX Ingress Controller’s CRDs for Canary Releases: The NGINX Ingress Controller’s dedicated CRDs make canary releases simple. You can use the splits field of a VirtualServer route to distribute traffic by weight (e.g., 90% to the existing backend, 10% to the new one) or use header-based routing for internal testing. This capability is a significant advantage over controllers that rely on annotations or external tools, as it integrates traffic management directly into the Kubernetes API.
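Note that splits distribute traffic between upstreams (backend services) within a route. The sketch below shows a 90/10 weighted split between two hypothetical service versions; all names are illustrative:

```yaml
# Sketch: a 90/10 weighted canary split using a VirtualServer route.
# Service and host names are hypothetical.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: checkout-vs
spec:
  host: checkout.example.com
  upstreams:
    - name: stable
      service: checkout-v1
      port: 80
    - name: canary
      service: checkout-v2
      port: 80
  routes:
    - path: /
      splits:
        - weight: 90
          action:
            pass: stable   # 90% of requests stay on the stable backend
        - weight: 10
          action:
            pass: canary   # 10% exercise the canary backend
```

Adjusting the weights is a one-line, declarative change, which makes the ramp-up easy to review and roll back in a GitOps workflow.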
Throughout the traffic shift, continuous monitoring is critical. Use your observability tools to detect any issues immediately. Only drop the old controller once you are confident it is no longer receiving any new traffic.
Phase 5: Post-Migration & Cleanup
Continue Monitoring: Once traffic is fully shifted, monitoring remains essential to long-term stability.
Decommission the Old Controller: After a period of stable operation (typically a few days to a week), you can decommission the old ingress-nginx controller. This involves deleting its Deployment, Service, and any associated configurations. Make sure the old controller is no longer receiving any traffic before its removal.
This cleanup is not just best practice; it’s a necessary step to realize the benefits of the new system. Failure to remove it can lead to wasted resources, operational confusion, and security risks from leaving a potentially vulnerable component in your cluster.
Rollback Plan: A robust rollback plan is crucial. Kubernetes provides a built-in rollback mechanism for Deployments via kubectl rollout undo. While you are unlikely to need them, backups of your old ingress-nginx configurations provide a useful safety net.
Some Common Challenges
1. Annotation Translation:
- Challenge: Ingress-nginx’s reliance on unstructured annotations makes direct translation to the NGINX Ingress Controller’s structured CRDs difficult and manual.
- Solution: Make sure you have an inventory and map every annotation. Use the NGINX Ingress Controller documentation, which provides specific guides on converting annotations. For annotations without direct CRD fields, use custom NGINX snippets.
2. Ensure Consistent Behavior:
- Challenge: Even two NGINX-based controllers can differ subtly in routing or load-balancing behavior.
- Solution: Testing can help here. Conduct functional and performance tests in parallel. Use canary releases to expose the new controller to a small percentage of real traffic and compare metrics before a full cutover.
3. Manage Long-Lived Connections:
- Challenge: Changes to Ingress controllers can sometimes terminate long-lived connections (e.g., WebSockets, gRPC).
- Solution: Implement graceful shutdown procedures for the old controller pods by configuring a preStop hook to allow NGINX to drain existing connections before the pod terminates. Continuously monitor active connections during traffic shifts and consider increasing terminationGracePeriodSeconds.
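A sketch of such a graceful-shutdown configuration on the old controller’s pod template follows; the sleep duration and grace period are illustrative starting points, not recommended values:

```yaml
# Pod spec fragment (illustrative): give NGINX time to drain long-lived
# connections before SIGTERM takes effect. Tune the durations to match
# your longest expected connections.
spec:
  terminationGracePeriodSeconds: 120   # raise from the 30s default
  containers:
    - name: controller
      lifecycle:
        preStop:
          exec:
            # Delay shutdown so in-flight and long-lived connections can
            # drain while the endpoint is removed from load balancing.
            command: ["/bin/sh", "-c", "sleep 30"]
```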
4. Optimization and Performance Tuning:
- Challenge: Inefficient resource allocation can lead to inflated cloud costs or performance bottlenecks.
- Solution: Define appropriate resources.limits and resources.requests for the new controller.
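A hedged starting point might look like the fragment below; treat the numbers as placeholders to be tuned against your own load-test results rather than recommendations:

```yaml
# Container spec fragment (illustrative): resource requests and limits
# for the controller. The values are placeholders — size them from the
# Phase 3 benchmarking data, not from this sketch.
resources:
  requests:
    cpu: 500m      # guaranteed CPU for scheduling
    memory: 512Mi  # guaranteed memory
  limits:
    cpu: "1"       # cap to contain cost and noisy-neighbor impact
    memory: 1Gi    # cap to trigger OOM instead of node pressure
```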
Conclusion: A Stronger Kubernetes Foundation
The journey from ingress-nginx to the F5 NGINX Ingress Controller is a strategic investment in the future resilience, security, and scalability of your Kubernetes networking layer.
By transitioning, you gain access to a suite of advanced features and benefits:
- Enhanced Security: Integrated WAF (NGINX App Protect), mTLS, and JWT/OIDC authentication and authorization.
- Advanced Traffic Management: Native support for blue/green, canary releases, and sophisticated traffic splitting via CRDs.
- Broader Protocol Support: Extends support beyond HTTP/S to include TCP, UDP, and gRPC.
- Superior Observability: Rich metrics and seamless integrations with tools like Prometheus and Grafana.
- Professional Assurance: 24×7 commercial support and the peace of mind that comes with a stable, extensively tested product.
- Reduced Technical Debt: The shift to a CRD-based configuration model provides a cleaner, more type-safe, and structured way to manage complex ingress rules, eliminating the “annotation sprawl” that plagues large deployments.
While the Kubernetes Ingress API is now feature-frozen in favor of the newer Gateway API, migrating to the NGINX Ingress Controller establishes a stronger foundation for future transitions. This move signifies a strategic commitment to a more robust, scalable, and secure application delivery platform. The initial cost of a commercial solution is often justified by reduced management complexity, accelerated time to market, and the confidence derived from comprehensive support and integrated security features.
The path to a stronger Kubernetes networking layer requires careful planning, thorough testing, and a strategic approach to traffic cutover. By embracing the capabilities of the NGINX Ingress Controller, organizations can build a more resilient and performant foundation for their applications, ready to meet the demands of tomorrow’s cloud-native world.
So get started today. Download NGINX Ingress Controller from GitHub or from Docker Hub. Or learn even more about Kubernetes Ingress and Security with NGINX with this free e-book.
And please join us in the NGINX Community Forum to provide feedback, ask questions, or share your insights.
