Split incoming files

Please use this template for troubleshooting questions.

My issue:

A long-running stream creates very large files. I need to split the recordings, either by time, by size, or with a script.

How I encountered the problem:

Huge result files after hours of streaming.

Solutions I’ve tried:

1. Restarting the server: not nice, and the stream just continued in the next file.

2. Looking through the documentation: no clue so far.

Version of NGINX or NGINX adjacent software (e.g. NGINX Gateway Fabric):

Latest

Deployment environment:

Debian

Minimal NGINX config to reproduce your issue (preferably running on https://tech-playground.com/playgrounds/nginx for ease of debugging, and if not as a code block): (Tip → Run nginx -T to print your entire NGINX config to your terminal.)

NGINX access/error log: (Tip → You can usually find the logs in the /var/log/nginx directory.)

Hey @Wim! There isn’t enough information in your post to be able to help you out. Can you please answer the following?:

  • What type of files are being created?
  • What information is in the files?
  • What exactly do you mean by a long stream? It has a few potential meanings in NGINX, depending on the context.
  • What version of NGINX are you running? Due to various differences depending on where it is installed, "latest" can actually mean a version that was released 4 years ago. Please run nginx -V and paste the output 🙂
  • Finally, can you share your sanitized config? Like the template says, running nginx -T should print it out to your terminal.

Thanks!

Hi, thanks for your reply.

Version number:

nginx version: nginx/1.22.1
built with OpenSSL 3.0.15 3 Sep 2024 (running with OpenSSL 3.0.17 1 Jul 2025)
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -ffile-prefix-map=/build/reproducible-path/nginx-1.22.1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=stderr --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-compat --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_secure_link_module --with-http_sub_module --with-mail_ssl_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-stream_realip_module --with-http_geoip_module=dynamic --with-http_image_filter_module=dynamic --with-http_perl_module=dynamic --with-http_xslt_module=dynamic --with-mail=dynamic --with-stream=dynamic --with-stream_geoip_module=dynamic

Part of nginx.conf

rtmp {
    server {
        listen 1935;

        # Incoming stream 1
        application stream1 {
            live on;
            record all;
            record_path /var/www/html/opnames/stream1;
            record_unique on;
            record_suffix -%H_%M_%S.flv;
        }

The result is that it creates a file such as -678967-gdggd.flv.

With continuous use this file grows very fast, to more than 500 MB/h, which is a problem. Splitting the recording into smaller files, by size and/or duration (or some other criterion), is the solution I am looking for. I cannot find it in the docs.
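For scale: 500 MB per hour works out to roughly 500 × 8 / 3600 ≈ 1.1 Mbit/s, which seems about right for a single live stream, so the file is simply the stream bitrate accumulating for as long as the recording runs.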

Any help is welcome.

Solved it. The docs did provide the answer; I just got lost in all of them.

record_max_size 128K;
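For anyone finding this later, here is roughly how it fits into the application block from my config above. This is just a sketch: the 128K value matches what I tested with, and if I read the nginx-rtmp-module docs correctly there is also a record_interval directive if you prefer time-based splitting.

    application stream1 {
        live on;
        record all;
        record_path /var/www/html/opnames/stream1;
        record_unique on;
        record_suffix -%H_%M_%S.flv;

        # start a new recording file once the current one reaches this size;
        # 128K is only for testing, a value like 128M is more realistic
        record_max_size 128K;
    }

At the roughly 500 MB/h I was seeing, a 128M cap would start a new file about every 15 minutes, while 128K rolls over every second or so.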
