Live AMA: Transitioning to the Open Source NGINX Ingress Controller from ingress-nginx - Dec 10, 11

Hello dxiri!
Correct. The TransportServer CRD supports both TCP and UDP Layer 4 functionality.
Here is the doc that covers that resource:

You can define the load-balancing method you want to use with the TransportServer resource.

You can use algorithms like hash, least_conn and random for your UDP resource.
Let us know if that helps and if you have any more questions.
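
As an illustration, here is a minimal sketch of a UDP TransportServer with an explicit load-balancing method. The listener, service name, and ports are placeholders, and the API version may vary by NIC release, so check the docs for your version:

```yaml
# GlobalConfiguration defines the UDP listener the TransportServer binds to
apiVersion: k8s.nginx.org/v1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: dns-udp
    port: 5353
    protocol: UDP
---
apiVersion: k8s.nginx.org/v1
kind: TransportServer
metadata:
  name: dns-udp
spec:
  listener:
    name: dns-udp
    protocol: UDP
  upstreams:
  - name: dns-app
    service: my-dns-svc        # placeholder backend Service
    port: 5353
    loadBalancingMethod: least_conn   # or hash, random, etc.
  action:
    pass: dns-app
```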

2 Likes

Very relevant question today! Gateway API is a newer approach to Kubernetes traffic management and is seen as the next step beyond Ingress. We’ve implemented Gateway API with NGINX Gateway Fabric and work with the SIG Network group to make sure it meets conformance standards. It’s a solid option to use today, and we’re adding new features quickly.

That said, Ingress has been around for a while (our NGINX Ingress Controller since 2016) and is packed with features. It’s stable and widely used, so it’s not going away anytime soon. If you’re thinking about switching from Ingress to Gateway API, it’s important to plan the migration carefully. Even though Gateway API is GA, many features (CORS comes to mind) are still considered experimental and could change down the line.

The best approach? Take it step by step and phase the migration to make things as smooth as possible, but yes Gateway API is the future for sure.

1 Like

The way NGINX Ingress Controller is built is that there are validations for most configuration options.

If it’s something from a CRD, those values have a type and format, and, if it’s an integer, a minimum, so Kubernetes will reject an invalid update.

If it’s an annotation that’s not in the list, Kubernetes will add it to the resource, but NIC won’t pick up the configuration. If it’s an invalid value for an annotation (annotation values are always strings), NIC will sometimes do validation, depending on the annotation itself, but if it’s freeform, it’s passed on to the NGINX configuration.

For snippets (location, http, server, etc.), NIC doesn’t do any validation and passes the field directly into the configuration file.
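
For instance, a location snippet supplied via annotation lands verbatim in the generated config. This is an illustrative sketch (the host, service, and header are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    # Inserted verbatim into the generated location block. NIC does not
    # validate it; a typo here surfaces as an NGINX error at reload time.
    nginx.org/location-snippets: |
      add_header X-Frame-Options "SAMEORIGIN";
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc
            port:
              number: 80
```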

There are a few configuration options that NIC does extra validation on (the proxy buffer settings, for example). Beyond that, NGINX itself will throw an error if the generated configuration file is invalid, and will refuse to start.

NIC does not terminate the already running pods until the new ones are healthy, so a good rule of thumb is that if the new ones are unhealthy, look at the logs first, and see what issues NGINX is having.

There’s also a wider project for configuration safety that we’re working on to make it hard for folks to end up in an invalid state.

NIC currently doesn’t have a hardening guide itself, but that is an excellent resource. It comes up fairly often for NGINX core, so we’ll be looking into creating one for NIC as well.

The CIS benchmark recommendations for the community ingress would also apply to NIC.

4 Likes

Question from the wild:

The open source version of NGINX Ingress Controller is maintained by F5, but do you actually encourage and accept contributions to it as well? How much influence does the NGINX community have on project direction and roadmap?

1 Like

We absolutely accept contributions for our open source projects, they are all on GitHub. The community has real influence on features and prioritization (especially via issues, upvotes, and PRs). Check out the repos:

The projects are Apache 2.0 licensed and genuinely open source with a commitment to keep it OSS long‑term. We also have engineers dedicated to community queries and issues. Hope this helps!

3 Likes

As an engineer on the F5 Ingress Controller project who’s been on the community rota very recently, I can answer it from my point of view:

  • we have a community rota: each sprint (2 week blocks of time) one dedicated team member is tasked with keeping an eye on this forum, GitHub issues / discussions / pull requests, and internal commercial questions that get escalated to us
  • we also have a community call over Zoom every other Monday at 4pm GMT that everyone in the community is welcome to join: GitHub - nginx/kubernetes-ingress: NGINX and NGINX Plus Ingress Controllers for Kubernetes. You can bring your questions or just listen in; if you have an issue / PR / discussion topic you want to talk about, that’s the forum to do it
  • and we’re also genuinely happy to take contributions, either code, or documentation over GitHub. They don’t go ignored, see the first bullet point :slight_smile:
4 Likes

Hi, thanks for doing the AMA.
I would like to join. Where do I find the meeting link?
Thanks!

4 Likes

Hi @MaxNginx ! You’ve joined, you made it! The AMA is happening right here on this thread, right now. Text-based instead of video :slight_smile:

1 Like

Question from the wild:

What’s on the roadmap? Are any of those features particularly relevant to migration?

One challenge we have observed for application teams migrating Ingress resources from the community ingress-nginx controller to the F5 NGINX Ingress Controller is their reliance on the automatic merging of multiple Ingress objects that share the same hostname and namespace but have different paths and annotations [1].

Am I correct that we would now need to transition to using mergeable ingress types [2]?

Additionally, do you have any recommendations for how to approach this migration? This is especially challenging with application teams that have not updated their Ingress configurations in years.

Would it be a viable approach to check the NGINX configuration inside the controller container before and after implementing Mergeable Ingress Types and check for the diff?

Or should we move to CRDs instead of annotations anyway?

[1] How it works - Ingress-Nginx Controller

[2] https://github.com/nginx/kubernetes-ingress/tree/v5.3.0/examples/ingress-resources/mergeable-ingress-types

Certainly! We are adding a bunch of features to both the NGINX Ingress Controller and NGINX Gateway Fabric over the next year. I mention both because I know folks are evaluating both options today. Without going into too much detail (as roadmaps are subject to change), we are listening very closely to user feedback following this ingress-nginx announcement.

For the Ingress Controller, we know that the community needs more annotations to mirror ingress-nginx, sticky sessions and authentication (in OSS!) are important, and our OSS metrics could also be improved. That’s all I will say for now regarding the NIC.

As for the NGINX Gateway Fabric, we are moving quickly alongside the Gateway API itself and aim to conform to the latest spec (1.4.1). A lot of exciting things are happening with the Gateway API that we plan on adopting, including TCP and UDP routes, CORS, ListenerSets, authentication, mTLS, rate limiting, session persistence, and many more features.

If there’s anything specific you’re interested in learning about, please feel free to raise it in the repositories.

1 Like

Yes, you are correct that you would need to transition to using Mergeable Ingress Types to replicate the automatic merging behavior that community ingress-nginx provides.

This is done using the nginx.org/mergeable-ingress-type annotation with a value of master or minion, as you can see in the example.

If you create multiple Ingresses with the same hostname in the F5 NGINX Ingress Controller, you’ll run into an issue where only one Ingress is operational and all the others error with “Hostname Collision”.
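
A minimal sketch of the master/minion pattern (host, path, and service names are placeholders):

```yaml
# Master: owns the host (and TLS, if any); no paths of its own
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-master
  annotations:
    nginx.org/mergeable-ingress-type: master
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
---
# Minion: owns one set of paths and can carry its own annotations
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: coffee-minion
  annotations:
    nginx.org/mergeable-ingress-type: minion
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
```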

CRDs (VirtualServer, VirtualServerRoute, Policy, GlobalConfiguration, and TransportServer) vs. Ingress is a subjective decision, but I personally would prefer CRDs:

  • More structured and type-safe
  • Better validation
  • More advanced features (traffic splitting, canary deployments)
  • Future-proof

Ingresses only allow customisation via annotations, which can spiral into complex-looking configs fairly quickly.

Recommendation: Gradual Transition to CRDs

  1. For new deployments: Use VirtualServer/VirtualServerRoute CRDs
  2. For existing ones: Convert to Mergeable Ingress Types with annotations
  3. Then gradually migrate to VirtualServer CRDs for new and advanced features

Whenever new functionality is exposed, it generally comes to both Ingress (via annotations) and CRDs.

For example:

  • Rate limiting using Ingress: https://github.com/nginx/kubernetes-ingress/tree/main/examples/ingress-resources/rate-limit
  • Rate limiting using CRDs: https://github.com/nginx/kubernetes-ingress/tree/main/examples/custom-resources/rate-limit
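
To give a feel for the structured CRD side, here’s a sketch of a VirtualServer using traffic splitting, one of the advanced features mentioned above. The host, upstream names, and services are placeholders:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee-v1
    service: coffee-v1-svc
    port: 80
  - name: coffee-v2
    service: coffee-v2-svc
    port: 80
  routes:
  - path: /coffee
    # Canary-style split: 90% of traffic to v1, 10% to v2
    splits:
    - weight: 90
      action:
        pass: coffee-v1
    - weight: 10
      action:
        pass: coffee-v2
```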

Docs:

  1. Migrate from Ingress-NGINX Controller to NGINX Ingress Controller | NGINX Documentation
  2. VirtualServer and VirtualServerRoute resources | NGINX Documentation
  3. Advanced configuration with Annotations | NGINX Documentation
5 Likes

Hi @dxiri in terms of using NGINX Gateway Fabric for managing these configurations, right now we don’t yet have native support for these settings, aside from the cert-manager.io/cluster-issuer annotation.

We have a document here on Cert-manager integration with NGINX Gateway Fabric: Secure traffic using Let's Encrypt and cert-manager | NGINX Documentation

If you want to get something working today, look to the SnippetsFilter resource. This is a more robust way to add configuration using snippets with NGINX Gateway Fabric.

We also have examples here for testing: https://github.com/nginx/nginx-gateway-fabric/tree/main/examples/snippets-filter
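
As a rough sketch of the shape of a SnippetsFilter and how an HTTPRoute references it (the names and header are placeholders, and the API version may differ by release, so check the examples above):

```yaml
apiVersion: gateway.nginx.org/v1alpha1
kind: SnippetsFilter
metadata:
  name: cache-headers
spec:
  snippets:
  - context: http.server.location   # where the snippet is injected
    value: add_header Cache-Control "no-store";
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: gateway
  hostnames:
  - app.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    # Attach the SnippetsFilter to this rule via ExtensionRef
    filters:
    - type: ExtensionRef
      extensionRef:
        group: gateway.nginx.org
        kind: SnippetsFilter
        name: cache-headers
    backendRefs:
    - name: app-svc
      port: 80
```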

As more Ingress annotations become native features of NGINX Gateway Fabric, we do our best to keep the ingress2gateway tool up to date as these capabilities come out. If you’ve not seen it before, this tool helps translate Ingress resources to their Gateway API equivalents based on the provider you specify (for us, that’s nginx).

For our v2.4.0 release in January, we will be adding a new Policy resource called ProxySettingsPolicy, which is designed for configuring settings like proxy-read-timeout and proxy-connection-timeout.

The first iteration of this resource won’t include these specific directives, but we’re aiming to add them very soon after, so stay tuned!

NGINX Ingress Controller is the better choice since your setup depends on annotations, oauth2-proxy integration, and config snippets.
NGINX Gateway Fabric doesn’t yet support these features without significant rework.

1 Like

Thanks @JSON_Williams. My bad, I wasn’t referring to the LB algorithm but to the ServiceType (ClusterIP vs NodePort vs LoadBalancer). Using a Service of type LoadBalancer doesn’t work for me because my provider doesn’t allow UDP-based LoadBalancers, so I was looking at alternative options. Any ideas? The end result must be something listening on a static IP on a UDP port I define.

1 Like

Thanks @ve.patel for this update. If we check the documentation on Mergeable Ingress (kubernetes-ingress/examples/ingress-resources/mergeable-ingress-types at v5.3.0 · nginx/kubernetes-ingress · GitHub), there are annotations currently not supported in the Master/Minion model.

In our case there are Ingresses that use the annotations below:

nginx.ingress.kubernetes.io/backend-protocol: GRPC —>Maps to nginx.org/grpc-services

nginx.ingress.kubernetes.io/backend-protocol: HTTPS —>Maps to nginx.org/ssl-services

Will this work with Mergeable Ingress?

1 Like

Hi @Tony_V ,

I believe these annotations should work in the minion Ingresses, but not in the master. They should still work with Mergeable Ingress.
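
For example, a minion carrying the HTTPS backend mapping might look like this sketch (host, path, and service names are placeholders; verify the annotation placement against the Mergeable Ingress docs for your version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-minion
  annotations:
    nginx.org/mergeable-ingress-type: minion
    # Comma-separated list of Services NIC should proxy to over HTTPS
    # (the NIC counterpart of backend-protocol: HTTPS)
    nginx.org/ssl-services: "secure-svc"
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /secure
        pathType: Prefix
        backend:
          service:
            name: secure-svc
            port:
              number: 443
```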

1 Like