Hi Benjamin, we currently do not have a native field or annotation for auth_request. We are considering adding the annotation early next year. Another option to consider is using custom annotations by modifying the ingress template to include auth_request. More info about this can be found in our documentation – Custom annotations | NGINX Documentation
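As a rough sketch of that custom-annotations approach (the annotation name `custom.nginx.org/auth-url` here is invented for illustration, and the exact template structure depends on your controller version), you could add something like this to the Ingress template so the annotation's value is emitted as an `auth_request` directive:

```nginx
# Inside the location block of the Ingress template (e.g. nginx.ingress.tmpl).
# "custom.nginx.org/auth-url" is a hypothetical annotation name.
{{- if index $.Ingress.Annotations "custom.nginx.org/auth-url" }}
auth_request {{ index $.Ingress.Annotations "custom.nginx.org/auth-url" }};
{{- end }}
```

The custom annotations doc linked above covers how to mount and reference a modified template.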
What load balancing algorithms are supported? How do I configure them?
Are sticky sessions available in the open-source F5 Ingress Controller?
Question from Reddit:
If I migrate from ingress-nginx to the open source NGINX Ingress Controller, can I run both controllers simultaneously during the migration? What about my custom configurations and snippets?
Excellent question!
The configuration format is most likely not going to change. That’s a decision the NGINX core team made a long time ago, and I can’t see this changing any time soon.
I agree it isn’t the easiest to read, even for someone who works with it every day, but there’s a certain logic to it.
Everything is root → http → server → location, plus a few others like stream. Evaluation order runs from the most specific match for the request to the least specific.
That said, one of the upsides of using an ingress controller, either F5’s or, until recently, the community one, was that users could describe what they want in Kubernetes YAML files.
The downside is that the complexity now lives in YAML files spread across multiple interconnected objects.
In Gateway Fabric, TCP/UDP routes are not yet supported.
I see this PR was closed recently: https://github.com/nginx/nginx-gateway-fabric/pull/3688
Is there a plan to implement this?
They can coexist if at least one of them is installed with a unique ingressClass. You can specify this easily enough in the Helm chart values.
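For example, a sketch of the Helm values for the F5 controller (the exact value paths vary between chart versions, and the class name `nginx-f5` is a placeholder, so check your chart's values file):

```yaml
# values.yaml for the NGINX (F5) Ingress Controller Helm chart.
# Use an ingress class distinct from the community controller's "nginx"
# so the two controllers never fight over the same Ingress objects.
controller:
  ingressClass:
    name: nginx-f5   # placeholder; must not collide with the other controller
    create: true
```

Individual Ingress resources then pick their controller via `spec.ingressClassName`, which lets you migrate them one at a time.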
Hi @bmv126, for NGINX Ingress Controller, if you are using Ingress resources, you can use the annotation nginx.org/lb-method, like nginx.org/lb-method: "round_robin".
If you use VirtualServer, you can use the lb-method field
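A minimal sketch of both styles (host names and service names are placeholders):

```yaml
# Ingress: choose the method via annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.org/lb-method: "least_conn"
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
---
# VirtualServer: choose the method per upstream
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
    lb-method: least_conn
  routes:
  - path: /
    action:
      pass: coffee
```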
NGINX Open Source supports four load balancing methods: Round Robin, Least Connections, IP Hash, and Generic Hash. NGINX Plus supports six load balancing methods: the four above, Least Time, and Random.
The default load balancing method in NGINX Ingress Controller is Round Robin; see the page in our docs for all supported methods.
Hey there, NGINX Gateway Fabric has TCP/UDP routes slated for our 2.4 release, which should come out around January or February of next year.
If you would like to track the progress, you can watch the related GitHub Epics:
Are there any stats on the performance of the open source (F5, free version) NGINX Ingress Controller vs the community version? Let’s say we have thousands of Ingresses, some with RegEx pathing and what not.
What’s the recommended setup for being production ready?
On top of this: we currently have our ingress controller integrated with Datadog. What's the migration path there? Do you know of any differences we may need to consider between the APM/metrics reported to Datadog from the community ingress controller vs. the F5 open-sourced one?
Hi, thanks for the question! Here are the load balancing algorithms (see lb-method):
You can do hash-based persistence with open source (use the hash-based lb method). We have taken note of this feedback, though, as cookie-based persistence is something our open-source users often ask for.
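A sketch of hash-based persistence on a VirtualServer upstream, keying on the client address (the host and service names are placeholders):

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app
spec:
  host: app.example.com
  upstreams:
  - name: backend
    service: backend-svc
    port: 80
    # Requests from the same client address hash to the same pod,
    # approximating session persistence without cookies.
    lb-method: "hash $binary_remote_addr"
  routes:
  - path: /
    action:
      pass: backend
```

Note the trade-off: hash-based persistence redistributes some sessions whenever the set of pods changes, unlike sticky cookies.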
How does the F5 open-source ingress controller create the nginx.conf?
Does the upstream section have a list of IPs, or is it the backend service name?
Does it support deployment in an Istio-enabled namespace?
Hi @B-L-R ,
We’ve had a detailed look into the performance of the community ingress. There are known issues, and we have a document detailing our findings currently in review. I’ll reply to this once that document is publicly available.
For Datadog, their document details how to connect to F5’s Ingress Controller by using Datadog’s Prometheus integration and the NGINX Prometheus Exporter.
On getting production ready: it depends on what your criteria are. Generally we recommend having a dev/sandbox environment to iron out configuration issues and nuances, as it will behave differently from the community ingress controller, and then looking at the performance metrics Datadog gives you.
I have a potential use case where I need to listen on UDP for SNMP traffic, and my cloud-hosted load balancer doesn’t support UDP. Is this what the TransportServer is for? If not, is there any other CRD or regular ingress config I can use to achieve this?
For use cases like this MetalLB seems to be the goto solution, but I wonder if this controller would be able to handle something like this. If this controller can achieve this, what LB type should I configure? I had a look at kubernetes-ingress/examples/custom-resources/basic-tcp-udp at v5.2.1 · nginx/kubernetes-ingress · GitHub but that doesn’t explain the load balancing part which is what I am looking for.
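For reference, a UDP TransportServer paired with a GlobalConfiguration listener looks roughly like the sketch below. All names, the namespace, and the backing Service are placeholders, and the apiVersion may differ by controller release:

```yaml
# GlobalConfiguration: declares which ports/protocols NGINX listens on.
apiVersion: k8s.nginx.org/v1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: snmp-udp        # placeholder listener name
    port: 162             # standard SNMP trap port
    protocol: UDP
---
# TransportServer: routes traffic from that listener to a Service's pods.
apiVersion: k8s.nginx.org/v1
kind: TransportServer
metadata:
  name: snmp-traps
spec:
  listener:
    name: snmp-udp
    protocol: UDP
  upstreams:
  - name: snmp-app
    service: snmp-service  # placeholder Service name
    port: 162
  action:
    pass: snmp-app
```

The load balancing across the upstream pods is then handled by NGINX's stream module inside the controller; you'd still need some way (e.g. a Service of type LoadBalancer or NodePort that permits UDP, or MetalLB) to get the UDP packets to the controller pods in the first place.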
I have a question of my own for the NGINX Ingress Controller and NGINX Gateway Fabric teams!
What is your favorite part of working on these projects, or what gets you excited about the technology? Do you have a favorite feature or capability?
When CRDs are disabled, which features will not be available?
From the docs, I understood that TCP/UDP support depends on these CRDs. Or is there another way to achieve this?
Looking at the source code, the following will not work without custom resources:
- TLS Passthrough / TransportServers
- cert-manager
- external-dns controller
- GlobalConfiguration
- anything to do with Policies
- VirtualServers
- and VirtualServerRoutes
The CRDs are there to make it easier to listen to and interpret connected data rather than relying on annotations for a lot of things.
The NGINX Ingress Controller monitors the Kubernetes resources a user creates or changes, such as Ingress and ConfigMap, as well as custom resources like VirtualServer, and transforms their specs into NGINX configuration using Go templates.
If you’re interested in more technical details, you can look at this page in our docs
If you look at the generated conf, you will see a list of pod IPs for the upstream backend in the upstream block.
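To illustrate (the upstream name, ports, and IPs are invented), a generated upstream block typically looks something like:

```nginx
# One server entry per ready pod endpoint, not the Service's cluster IP,
# so NGINX balances across pods directly.
upstream default-cafe-ingress-cafe.example.com-coffee-svc-80 {
    zone default-cafe-ingress-cafe.example.com-coffee-svc-80 256k;
    server 10.244.0.5:8080 max_fails=1 fail_timeout=10s;
    server 10.244.0.6:8080 max_fails=1 fail_timeout=10s;
}
```

Because the controller watches Endpoints, this list is rewritten as pods come and go.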
I haven’t used Istio myself, but we also have a tutorial on our website on integrating the NGINX Ingress Controller with Istio
What happens if an invalid annotation is used? With the community version, certain annotations aren’t properly validated, which can prevent future Ingress resources from being picked up.
Is it the same for the F5 open source ingress controller?
Also, the community ingress controller has these pages on annotation security / hardening guidelines: Hardening guide - Ingress-Nginx Controller and Annotations Risks - Ingress-Nginx Controller.
Is there a similar hardening/annotation guideline for the F5 ingress controller?
Question from the wild that I’ve seen asked many times:
When should I consider migrating to Gateway API? What factors are most important to consider?