Apple devices cannot access HTTPS websites

I’m trying to set up Nginx as a reverse proxy for my web pages. Unfortunately, Apple devices (iPad, iPhone, macOS) have serious connectivity and performance issues.

  • On iPad, access is completely blocked: requests time out after several minutes with an NSURLErrorDomain error

  • On iPhone and Mac, pages load but with very poor performance.

  • The same pages load quickly and reliably when:

    • Connected via VPN

    • Using a cellular network

    • Connected directly to the local network

What I’ve tried so far

  • Forced HTTP/1.1 (since Apple devices previously had issues with newer protocols) — no improvement

  • Deployed a fresh Nginx instance on a bare-metal server

  • Tested different ports

  • Checked Nginx error logs — they are empty

Despite these changes, the issue persists. Apple devices still behave differently depending on the network path, while non-Apple devices do not show these problems.
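For context, the "forced HTTP/1.1" step can be done in two places. The sketch below shows both; the `proxy_pass` part is an assumption, since the vhost posted further down serves static files and has no proxy block:

```nginx
# Client side: omit "http2" on the listen directive so browsers
# negotiate HTTP/1.1 over TLS (this matches the posted config).
listen 443 ssl;

# Upstream side: only relevant when proxy_pass is in use
# (hypothetical backend address for illustration).
location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://127.0.0.1:8080;
}
```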

michal@debian:/etc/nginx/sites-enabled$ cat default-ssl
server {
    listen 443 ssl;
    server_name _;

    # SSL Certificates
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    # Recommended SSL settings
    #ssl_protocols TLSv1.2 TLSv1.3;
    #ssl_prefer_server_ciphers on;

    root /var/www/html;
#    index index.html;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
michal@debian:/etc/nginx$ cat nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;
include /etc/nginx/modules-enabled/*.conf;

events {
	worker_connections 768;
	# multi_accept on;
}

http {

	##
	# Basic Settings
	##

	sendfile on;
	tcp_nopush on;
	types_hash_max_size 2048;
	# server_tokens off;

	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	##
	# SSL Settings
	##

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;

	##
	# Logging Settings
	##

	access_log /var/log/nginx/access.log;

	##
	# Gzip Settings
	##

	gzip on;

	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	##
	# Virtual Host Configs
	##

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}

Hi @michal123456747 ,

From the description, this looks much more like an issue below the HTTP layer (DNS/TCP/TLS) on a specific network path, rather than a problem with nginx itself. As a first step, I would try disabling IPv6, or at least making sure there is no AAAA record for your FQDN.
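A quick way to check the AAAA side, assuming `dig` is available (`example.com` stands in for the real FQDN):

```shell
# Query only the AAAA record; an empty answer means no IPv6 address
# is published for the name (example.com is a placeholder).
AAAA=$(dig +short AAAA example.com 2>/dev/null || true)
if [ -z "$AAAA" ]; then
    echo "no AAAA record published"
else
    echo "AAAA record found: $AAAA"
fi
```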

Do you have the ability to use wireshark/tcpdump on the nginx host or on the client side?

Thank you for your reply. I don’t use IPv6 and I don’t have an AAAA record set up. I will try to capture some data from my client device, the nginx server, and the router, on both the WAN and LAN ports.

For now, here is a capture from a quick test, filtered by port 3443.
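A capture along these lines can be taken on the nginx host; the interface, the output file name, and the need for sudo are assumptions:

```shell
# Capture full packets on TCP port 3443 into a pcap file for analysis.
FILTER='tcp port 3443'
OUTFILE='nginx-3443.pcap'
echo "would run: tcpdump -i any -s 0 -w $OUTFILE '$FILTER'"
# sudo tcpdump -i any -s 0 -w "$OUTFILE" "$FILTER"
```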


Cool, your dump helped a lot. There are significant delays, but I’d investigate the intermediate devices or the server OS, since the issue is clearly network-related.

In your capture, there are repeated delivery failures of TCP segments in the direction xxx.xxx.xxx.101 → 192.168.1.36, which force TCP to wait for retransmissions and result in ‘hangs’ lasting 15–25 seconds.

For example:
Session 192.168.1.36:63019 → xxx.xxx.xxx.101:3443

Loss #1 (1440 bytes) occurs immediately after nginx starts sending data.
Nginx sends data and the client receives part of it, but not the first segment, so the ACK does not advance.

This is visible from the client SACK:

  • Frame 251 (~0s): client ACK with SACK indicating that data beyond the ACK point was received, but there is a hole of 1440 bytes
  • This ‘hole’ is only filled after ~16s:
    • Frame 496 (~16s): nginx sends a segment with old SEQ number (effectively a retransmit of the missing 1440 bytes).
    • Frame 497 (~16s): the client finally advances the ACK.
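The pattern described above (a SACK hole followed by a ~16 s retransmit) can be surfaced with tshark display filters; `capture.pcap` is a placeholder file name:

```shell
PCAP='capture.pcap'
# Frames Wireshark's TCP analysis flags as retransmitted or lost:
echo "tshark -r $PCAP -Y 'tcp.analysis.retransmission || tcp.analysis.lost_segment'"
# Frames carrying SACK blocks (the client advertising the hole):
echo "tshark -r $PCAP -Y 'tcp.options.sack_le'"
```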

Thank you. In that case, I will make a full capture on the devices I have access to. I realised that these retransmissions also occur with other IPs that don’t belong to me.

The delays may be caused by my ISP on the server side (20 Mb/s download, 2 Mb/s upload from the server’s perspective).

Do you have any idea why this problem doesn’t occur when I am using a VPN? Maybe because it uses UDP instead of TCP?

Yeah, I didn’t analyze the entire capture in detail, but the #1 suspect is the Sagemcom broadband device. It might be easier to temporarily replace it to eliminate that factor, since I’m not sure you’ll be able to capture a dump on it easily.
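One cheap check before swapping the device out: probe whether full-size packets survive the path with the don’t-fragment bit set. This assumes a standard 1500-byte MTU; host 203.0.113.10 is a placeholder:

```shell
# A 1500-byte Ethernet MTU minus 20 bytes of IP header and 8 bytes of
# ICMP header leaves a 1472-byte ping payload.
PAYLOAD=$((1500 - 20 - 8))
echo "probe payload: $PAYLOAD bytes"
# ping -M do -s "$PAYLOAD" 203.0.113.10   # Linux syntax, DF bit set
# ping -D -s "$PAYLOAD" 203.0.113.10      # macOS syntax
```

If pings of this size fail while smaller ones pass, a device on the path is dropping or fragmenting large segments, which would fit the 1440-byte losses in the capture.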

I have OpenWrt on my router, so tcpdumping from it is not very challenging. I will run a test as soon as possible. For now, I can provide my OpenWrt config.
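Capturing on both sides of the router at once makes it easy to see which leg drops the segment; the interface names below are common OpenWrt defaults and may differ on this device:

```shell
# One capture per leg; compare the two files frame by frame afterwards.
WAN_IF='eth0'    # assumed WAN interface
LAN_IF='br-lan'  # assumed LAN bridge
echo "wan: tcpdump -i $WAN_IF -w wan.pcap 'tcp port 3443'"
echo "lan: tcpdump -i $LAN_IF -w lan.pcap 'tcp port 3443'"
```

A segment present in the LAN capture but missing from the WAN capture (or vice versa) localizes the loss to the router or its uplink.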