Migrate from ingress-nginx/oauth2-proxy to NGINX Ingress Controller/oauth2-proxy

My issue: Following the deprecation announcement of ingress-nginx, I want to move to something else now and give the Gateway API time to mature so I can move to that later. NGINX Ingress Controller > NGINX Gateway Fabric seems like the perfect path forward for me. However, after deploying the controller I am having issues with my SSO setup: I never get redirected to the SSO login page and instead get the login page of my app. This works with the ingress-nginx controller but not with the NGINX Ingress Controller.

Here is the config I am moving away from:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opennms
  namespace: opennms
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/app-root: /opennms/
    nginx.ingress.kubernetes.io/auth-signin: "https://oauthproxy.myprovider.com/oauth2/start"
    nginx.ingress.kubernetes.io/auth-url: "https://oauthproxy.myprovider.com/oauth2/auth"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "240"
    nginx.ingress.kubernetes.io/auth-response-headers: "x-auth-request-email, x-auth-request-user"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_clear_input_headers "x-auth-request-preferred-username";
      more_clear_input_headers "x-auth-request-user";
      more_clear_input_headers "x-remote-roles";
spec:
  ingressClassName: "nginx"
  tls:
    - hosts:
        - opennms.mydomain.network
      secretName: opennms-tls
  rules:
    - host: opennms.mydomain.network
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opennms
                port:
                  number: 8980

I migrated the above to:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opennms
  namespace: opennms
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.org/app-root: "/opennms/"
    nginx.org/auth-signin: "https://oauthproxy.myprovider.com/oauth2/start"
    nginx.org/auth-url: "https://oauthproxy.myprovider.com/oauth2/auth"
    nginx.org/proxy-read-timeout: "240s"
    nginx.org/proxy-connect-timeout: "240s"
    nginx.org/proxy-send-timeout: "240s"
    nginx.org/auth-response-headers: "x-auth-request-email, x-auth-request-user"
    nginx.org/config-snippet: |
      more_clear_input_headers "x-auth-request-preferred-username";
      more_clear_input_headers "x-auth-request-user";
      more_clear_input_headers "x-remote-roles";
spec:
  ingressClassName: "nginx-oss"
  tls:
    - hosts:
        - opennms.mydomain.network
      secretName: opennms-tls
  rules:
    - host: opennms.mydomain.network
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opennms
                port:
                  number: 8980

And my values.yaml for the NGINX Ingress Controller:

controller:
  replicaCount: 1
  service:
    annotations:
      service.beta.kubernetes.io/linode-loadbalancer-preserve: "true"
      service.beta.kubernetes.io/linode-loadbalancer-throttle: "20"
  enableCustomResources: true
  enableSnippets: true
  proxyBufferSize: "32k"
  largeClientHeaderBuffers: "4 32k"

How I encountered the problem: Requests that don't use SSO work fine, but anything that requires SSO doesn't. I use oauth2-proxy in my K8s cluster, which is working fine. I changed the ingressClassName on its ingress accordingly as well, and also added a specific path for /oauth2/ just in case:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: oauth2-proxy
    helm.sh/chart: oauth2-proxy-7.12.16
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: authentication-proxy
    app.kubernetes.io/part-of: oauth2-proxy
    app.kubernetes.io/name: oauth2-proxy
    app.kubernetes.io/instance: oauth2-proxy
    app.kubernetes.io/version: "7.9.0"
  name: oauth2-proxy
  namespace: oauth2-proxy
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx-oss
  rules:
    - host: oauthproxy.myprovider.com
      http:
        paths:
          - path: /oauth2/
            pathType: Prefix
            backend:
              service:
                name: oauth2-proxy
                port:
                  number: 80
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: oauth2-proxy
                port:
                  number: 80
  tls:
    - hosts:
        - oauthproxy.myprovider.com
      secretName: oauth2-proxy-tls

Solutions I’ve tried: There are no errors in the logs; it just doesn't redirect to the SSO login page, as if it were ignoring all the SSO settings. I have looked through the documentation but can't find anything resembling this kind of integration. At most, what I could find is how to do the OIDC integration directly in the controller (which, I have to say, was a little confusing, since I didn't find all of the flags I needed and it also seems to reference NGINX Plus). That would be overkill anyway, since I already have a working oauth2-proxy setup.

Version of NIC and/or NGINX: Helm Chart nginx-ingress-2.3.1, APP VERSION 5.2.1

Deployment environment: Kubernetes 1.33

Can you please provide some guidance on how to proceed here?

Or, if what I want to do is already supported in NGINX Gateway Fabric, I would happily give that a shot as well with some guidance. I also need mTLS authentication to work for another ingress I have, and NGF didn't seem to support that yet.

PS: I have been following The Ingress NGINX Alternative: Open Source NGINX Ingress Controller for the Long Term – NGINX Community Blog, and Migrate from Ingress-NGINX Controller to NGINX Ingress Controller | NGINX Documentation.

Thank you!

A little more info here: my oauth2-proxy is configured as specified in Integration | OAuth2 Proxy.

My guess is the NGINX controller would have a similar way of doing the same thing, but I am not sure what I should use to emulate that same behavior.
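
For context, the relevant part of my oauth2-proxy Helm values looks roughly like this (a trimmed sketch; the flag names are real oauth2-proxy options, the values are illustrative):

extraArgs:
  set-xauthrequest: true        # return X-Auth-Request-User/-Email on /oauth2/auth responses
  reverse-proxy: true           # trust X-Forwarded-* headers set by the ingress controller
  whitelist-domain: .mydomain.network   # allow rd= redirects back to my app hosts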

Hi @dxiri,

Thank you for posting this!

We do appreciate the migration over to our NGINX Ingress Controller from ingress-nginx. We understand how difficult this could be, so posting this could help others in the future.

I will come back to you soon after some discussion with the team.

Hi @dxiri .

These two controllers have different configuration methods for these use-cases (especially auth), but we will help as best we can. While ingress-nginx relies heavily on specific annotations (like auth-url), nginx-ingress lets you inject raw NGINX configuration via snippets or use Custom Resources (CRDs) like VirtualServer and Policy.

nginx-ingress does not have direct annotations for auth-url, auth-signin, auth-response-headers, auth-snippet, or app-root. However, we went through this (and your other post from the AMA) to see how we can help.

Below is the converted configuration using standard Ingress annotations.

Certificate Manager (Unchanged, it should work)
cert-manager.io/cluster-issuer: "letsencrypt-prod"

Timeouts

Thankfully, we have direct mappings for these:

nginx.org/proxy-read-timeout: "240s"
nginx.org/proxy-connect-timeout: "240s"
nginx.org/proxy-send-timeout: "240s"
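
Under the hood these map straight onto the standard NGINX proxy timeout directives, so the generated config should contain roughly:

proxy_connect_timeout 240s;
proxy_read_timeout 240s;
proxy_send_timeout 240s;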

Client Certificate Auth (Requires Snippets)
nginx-ingress does not have annotations for mTLS (we have added them to the long list of annotations we need to look at after the ingress-nginx announcement; it might be worth raising this as a request in our GH repo). Today this can be configured via native NGINX directives in the server block, so it may be possible with Snippets.

nginx.org/server-snippets: |
  ssl_verify_client on;
  ssl_verify_depth 1;

  # You must mount the 'opennms/client-ca' secret to the pod
  ssl_client_certificate /etc/nginx/secrets/opennms-client-ca;

Mounting the CA file is not ideal. Would you consider using our VirtualServer CRD for this? We have native mTLS as a Policy: Policy resources | NGINX Documentation

Here is a full example of mTLS with our CRDs. We encourage the use of the CRDs for advanced use-cases like this: kubernetes-ingress/examples/custom-resources/ingress-mtls at main · nginx/kubernetes-ingress · GitHub
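
In short, the Policy is along these lines (a sketch; the names are illustrative, and the referenced secret must be of type nginx.org/ca):

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: ingress-mtls-policy
spec:
  ingressMTLS:
    clientCertSecret: ingress-mtls-secret   # secret of type nginx.org/ca
    verifyClient: "on"
    verifyDepth: 1

You then attach it to a VirtualServer via its policies list.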

The tricky one (Auth)

Even though we don't have these auth annotations, we can do our best to mimic what ingress-nginx did behind the scenes using raw NGINX config snippets (I also added a location block at the end for app-root). This wasn't tested; it's just a guideline:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opennms
  namespace: opennms
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.org/server-snippets: |
      location = /_oauth2_validate {
        internal;
        proxy_pass https://oauthproxy.myprovider.com/oauth2/auth;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
      }
    nginx.org/location-snippets: |
      auth_request /_oauth2_validate;
      auth_request_set $auth_user $upstream_http_x_auth_request_user;
      auth_request_set $auth_email $upstream_http_x_auth_request_email;
      proxy_set_header x-auth-request-user $auth_user;
      proxy_set_header x-auth-request-email $auth_email;
      error_page 401 =302 https://oauthproxy.myprovider.com/oauth2/start?rd=$scheme://$http_host$request_uri;

      # App root redirect
      location = / {
        return 302 /opennms/;
      }
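
One caveat with the snippet above: the proxy_pass target is an external HTTPS hostname, so NGINX may also need SNI enabled inside that internal location (untested, same guideline spirit):

proxy_ssl_server_name on;
proxy_ssl_name oauthproxy.myprovider.com;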

Our VirtualServer CRD also supports snippets, so if you wanted to do mTLS natively with our CRDs, you could use snippets for auth.

We have taken note of these auth annotations. This is good feedback.

Thanks!

Micheal

@Micheal_Kingston thanks a lot for your thoughtful reply! Looking at the custom CRD for mTLS you pointed out, I see it expects a secret of type "nginx.org/ca".

Is this a hard requirement, or can I use the "Opaque" type as well (which is what I am using now)? If it supports Opaque, I can probably convert both ingresses to use the VirtualServer CRDs for consistency. Looking at the secret in your linked example, it looks just like what I have now, the only difference being the type.

That's right, we did explore it in the past. The main reason NGINX Ingress Controller only accepts "typed" secrets (like kubernetes.io/tls) is that Opaque secrets are unstructured blobs with no guaranteed schema. Because of this, NIC can't reliably extract certificate keys or validate them. We use a LocalSecretStore to work with known secret formats and keep the boundary between app secrets and infra secrets clean. Letting in arbitrary Opaque data would break that validation and risk mixing in secrets meant for other apps. I also believe it caused a lot of logging issues. Hopefully converting them is possible in this scenario!

Thanks again @Micheal_Kingston, indeed converting them won't be an issue. One final thing: my initial post also cleared some headers in a configuration-snippet (the more_clear_input_headers lines).

This is a security measure to prevent people/bots/whatever from manually setting those headers to try to fool the webserver into thinking they are a different user. After doing some research, it seems NGINX Ingress Controller doesn't ship the headers-more module. I would really like to avoid messing with the stock image or anything like that, so if there is any other solution for this bit I would appreciate it.

I found some guidance here: Using NGINX Ingress Controller with NGINX Dynamic Modules | NGINX Documentation, but that mentions you need NGINX Plus, not the Open Source one.

OK, in the spirit of helping others, I will share my old and converted configs now. All of these are working. I only had to create one new secret (type nginx.org/ca) for the mTLS cert; for the others, I just re-used the secrets I already had.

To create the secret (just replace the namespace with the proper one):

kubectl create secret generic mtls-client-ca --from-file=ca.crt=./yourCAcert.pem --type=nginx.org/ca -n opennms

yourCAcert.pem is a local file on your computer containing the CA cert you would like your mTLS to authenticate against.
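
Equivalently, the same secret as a manifest (the base64 payload is deliberately elided here):

apiVersion: v1
kind: Secret
metadata:
  name: mtls-client-ca
  namespace: opennms
type: nginx.org/ca
data:
  ca.crt: <base64-encoded CA certificate PEM>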

Everything is inside its own namespace, keep this in mind if you plan to copy/paste and adapt to your environment.

The only open question remaining is the use of the headers-more module on the old config, which doesn’t seem to be available in nginx-ingress out of the box.

OLD - ingress config using ingress-nginx

I converted 3 different ingresses. One without mTLS but with external auth (using oauth2 proxy) and two with mTLS. The two with mTLS use the same policy.

old-values.yaml

controller:
  replicaCount: 1
  containerPort:
    http: 80
    https: 443
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - ingress-nginx
          topologyKey: kubernetes.io/hostname
  config:
    allow-snippet-annotations: "true"
    annotations-risk-level: "Critical"
    proxy-buffer-size: "32k"
    large-client-header-buffers: "4 32k"
    strict-validate-path-type: "false"

old-ingress.yaml

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opennms
  namespace: opennms
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/app-root: /opennms/
    nginx.ingress.kubernetes.io/auth-signin: "https://oauthproxy.myprovider.com/oauth2/start"
    nginx.ingress.kubernetes.io/auth-url: "https://oauthproxy.myprovider.com/oauth2/auth"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "240"
    nginx.ingress.kubernetes.io/auth-response-headers: "x-auth-request-email, x-auth-request-user"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_clear_input_headers "x-auth-request-preferred-username";
      more_clear_input_headers "x-auth-request-user";
      more_clear_input_headers "x-remote-roles";
spec:
  ingressClassName: "nginx"
  tls:
    - hosts:
        - opennms.mydomain.com
      secretName: opennms-tls
  rules:
    - host: opennms.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opennms
                port:
                  number: 8980
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opennms-api
  namespace: opennms
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "240"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "opennms/client-ca"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_clear_input_headers "x-auth-request-preferred-username";
      more_clear_input_headers "x-remote-roles";
spec:
  ingressClassName: "nginx"
  tls:
    - hosts:
        - opennms-api.mydomain.com
      secretName: opennms-api-tls
  rules:
    - host: opennms-api.mydomain.com
      http:
        paths:
          - path: /opennms/rest
            pathType: Prefix
            backend:
              service:
                name: opennms
                port:
                  number: 8980
          - path: /opennms/api/v2
            pathType: Prefix
            backend:
              service:
                name: opennms
                port:
                  number: 8980
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vmselect
  namespace: opennms
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "240"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "opennms/client-ca"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"

spec:
  ingressClassName: "nginx"
  tls:
    - hosts:
        - vmselect.mydomain.com
      secretName: vmselect-tls
  rules:
    - host: vmselect.mydomain.com
      http:
        paths:
          - path: /select/1/prometheus
            pathType: Prefix
            backend:
              service:
                name: vmselect-opennms-vmcluster
                port:
                  number: 8481
---

NEW - VirtualServer config using NGINX Ingress Controller

Install

I changed the ingressClass to nginx-oss so that I can distinguish old from new configs.

No helm repo add is needed; the chart is pulled directly from the OCI registry:

helm install nginx-ingress-controller oci://ghcr.io/nginx/charts/nginx-ingress --version 2.3.1 --namespace nginx-ingress-controller --create-namespace --set controller.ingressClassResource.name=nginx-oss --set controller.ingressClass.name=nginx-oss -f values.yaml
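
To sanity-check the install before applying anything (the namespace matches the helm install above):

kubectl get pods -n nginx-ingress-controller
kubectl get ingressclass nginx-oss

The controller pod should be Running and the nginx-oss IngressClass should exist.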

new-values.yaml

controller:
  replicaCount: 1
  enableCustomResources: true
  enableSnippets: true
  enableCertManager: true
  proxyBufferSize: "32k"
  largeClientHeaderBuffers: "4 32k"

new-virtualservers.yaml

---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: opennms
  namespace: opennms
spec:
  ingressClassName: nginx-oss
  host: opennms.mydomain.com
  tls:
    cert-manager:
      cluster-issuer: "letsencrypt-prod"
    secret: opennms-tls
    redirect:
      enable: true
  upstreams:
    - name: opennms
      service: opennms
      port: 8980
  routes:
    - path: /
      action:
        pass: opennms
      location-snippets: |
        auth_request /_oauth2_validate;
        auth_request_set $auth_user $upstream_http_x_auth_request_user;
        auth_request_set $auth_email $upstream_http_x_auth_request_email;
        proxy_set_header x-auth-request-user $auth_user;
        proxy_set_header x-auth-request-email $auth_email;
        error_page 401 =302 https://oauthproxy.myprovider.com/oauth2/start?rd=$scheme://$http_host$request_uri;

        # App root redirect
        location = / {
          return 302 /opennms/;
        }
  server-snippets: |
    location = /_oauth2_validate {
      internal;
      proxy_pass https://oauthproxy.myprovider.com/oauth2/auth;
      proxy_pass_request_body off;
      proxy_set_header Content-Length "";
      proxy_set_header X-Original-URI $request_uri;
      proxy_ssl_server_name on;
      proxy_ssl_name oauthproxy.myprovider.com;
    }
---
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: opennms-mtls-policy
  namespace: opennms
spec:
  ingressMTLS:
    clientCertSecret: mtls-client-ca
    verifyClient: "on"
    verifyDepth: 1
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: opennms-api
  namespace: opennms
spec:
  ingressClassName: nginx-oss
  host: opennms-api.mydomain.com
  tls:
    cert-manager:
      cluster-issuer: "letsencrypt-prod"
    secret: opennms-api-tls
    redirect:
      enable: true
  policies:
    - name: opennms-mtls-policy
  upstreams:
    - name: opennms
      service: opennms
      port: 8980
      connect-timeout: 240s
      read-timeout: 240s
      send-timeout: 240s
  routes:
    - path: /opennms/rest
      action:
        pass: opennms
    - path: /opennms/api/v2
      action:
        pass: opennms
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: vmselect
  namespace: opennms
spec:
  ingressClassName: nginx-oss
  host: vmselect.mydomain.com
  tls:
    cert-manager:
      cluster-issuer: "letsencrypt-prod"
    secret: vmselect-tls
    redirect:
      enable: true
  policies:
    - name: opennms-mtls-policy
  upstreams:
    - name: vmselect
      service: vmselect-opennms-vmcluster
      port: 8481
      connect-timeout: 240s
      read-timeout: 240s
      send-timeout: 240s
  routes:
    - path: /select/1/prometheus
      action:
        pass: vmselect
---

Validation

kubectl get virtualserver -n opennms

STATE should be valid, and you should have IP/ports listed. If you get a "Warning" state, then:

kubectl describe virtualserver -n opennms <virtualservername>

should give you the exact reason. The most common problem is that you already have a host with that name (which will be your old ingress config); you would need to delete the old ingress to fix this. Alternatively, use a different host from your current one.
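
To see which resource already claims the host, list both resource types and search for the hostname:

kubectl get ingress,virtualserver --all-namespaces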

Another common one:

Message: IngressMTLS policy “opennms/opennms-mtls-policy” references an invalid secret opennms/akamai-mtls-client-ca: secret doesn’t exist or of an unsupported type

This means your secret is either in the wrong namespace, has a typo in its name, or is of an unsupported type.

Hope this helps, and thanks to everyone who helped me figure this out!

Really appreciate you putting this together. Glad to hear you're using the VirtualServer CRDs too! They allow for many advanced use-cases compared to annotations. I have been checking the headers-more part. Unfortunately that module is not included in our base image. We do have Dockerfiles for building your own image, and you could add that module or njs (which should also be able to do this). But you may be able to clear these headers with VirtualServer alone, with a few caveats.

This VirtualServer configuration will effectively remove headers from the request before it reaches your upstream:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: myapp
spec:
  host: myapp.example.com
  upstreams:
  - name: backend
    service: backend-svc
    port: 80
  routes:
  - path: /
    action:
      proxy:
        upstream: backend
        requestHeaders:
          pass: false  # This blocks ALL headers including your three problem headers
          set:
            - name: Host
              value: $http_host
            - name: X-Forwarded-For
              value: $proxy_add_x_forwarded_for
            - name: X-Forwarded-Proto
              value: $scheme

This blocks the auth headers by default and only passes through safe headers. You may need to re-add the headers you want though, so it may require some tuning to fit your use-case, but it’s worth a shot!

Not allowing conflicting hosts was another design choice we made. I believe there was logic in ingress-nginx to allow for it (though I'm not fully sure about that).
