User documentation

Scheduling and data transfer protocols

The Perfect Streamer program is designed to transmit MPEG-TS streams over UDP across the public Internet, which is subject to packet loss and delays.

For each MPEG-TS stream (Stream), a transmitter server (Sender) and one or several receivers (Receiver) are configured; hereinafter this bundle is referred to as a Peer.

Configuring the transmitter and receiver comes down to entering a list of streams (Stream) and, for each Stream in the list, its input and output settings. Multiple inputs in the list provide source redundancy. Multiple outputs in the list allow a stream to be sent to several destinations at once.

For a transmitter, input means the sources of MPEG-TS streams and output means the transmission of streams to receivers. For a receiver, input means receiving streams from transmitters. Four Peer protocols are available for transferring streams between transmitter and receiver:

  • Perfect Stream Protocol (PS1).

  • SRT.

  • Pro-MPEG / RTP+FEC.

  • RIST.

PS1 protocol

The PS1 protocol works on the principle of Automatic Repeat reQuest (ARQ). It has low resource consumption and can carry streams with a high bitrate.

On the transmitter, PS1 is configured in output. Only one PS1 output is available per stream. Logins for receivers must be registered in Peer. A UDP listen port is set; it must be unique for each stream.

On the receiver, PS1 is configured in input. The host and port of the transmitter are specified, as well as the login and password.

Encryption of streams is available (Crypto protection), AES-128 is used. To enable encryption on both sides, enter Crypt Passphrase - a shared key.

During operation, the receiver (client) transmits its statistics data to the transmitter (server). This can be viewed in the Peers section by selecting a client from the list.

Stream latency and the ability to correct losses depend on the receiver (client) settings:

Round Trip Time - RTT, ms, default 300. Estimated delay (ping) in the channel. After the stream starts, the real RTT can be seen in the statistics (PS1 recovery delay).

Client Latency (RTT multiplier) - multiplier applied to RTT, 10 by default, which determines the stream delay in the sender’s buffer. That is, the default buffer delay is 3000 ms.

On the sender’s side there is a delay setting (buffer length) - Latency (ms). It must be greater than the delays specified by the clients.

The protocol’s ability to compensate for losses is determined by the number of retransmission requests and depends on Client Latency (RTT multiplier). Heavy losses lead to additional network traffic. To reduce latency, fine-tune these parameters.
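To make the arithmetic concrete, a minimal sketch of these relationships (function and variable names are illustrative, not part of the product):

# Illustrative arithmetic for PS1 latency tuning (not a product API).
def ps1_buffer_delay_ms(rtt_ms=300, rtt_multiplier=10):
    """Client-side stream delay: estimated RTT times the RTT multiplier."""
    return rtt_ms * rtt_multiplier

def sender_latency_ok(sender_latency_ms, client_delays_ms):
    """The sender buffer (Latency) must exceed every client's delay."""
    return all(sender_latency_ms > d for d in client_delays_ms)

# Defaults: 300 ms RTT x 10 = 3000 ms buffer delay on the client.
print(ps1_buffer_delay_ms())                             # 3000
print(sender_latency_ok(4000, [ps1_buffer_delay_ms()]))  # True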

Check the client statistics to verify that the protocol is working correctly. PS1 recovery counters: if Not found grows, increase the sender’s buffer; if Duplicates grows, increase RTT.

Since the connection is initiated from the receiver side, the transmitter requires authentication: receivers are registered in the Peers section with a login and password. IP-based authorization is also possible; for this, specify only the IP of the receiver in the peer settings.

SRT protocol

An open protocol designed by Haivision, based on the UDT protocol. It is widely adopted and handles packet loss well.

Use cases:

  • Peer between two Perfect Streamer instances. On the transmitter, the endpoint is configured as output in listen mode (default). Only one such output can be set per stream; in this mode several receivers can connect. For authorization, logins for receivers must be set in Peer. On the receiver, the endpoint is configured as input; the host and port of the transmitter, login and password are specified. To pass the login and password, the SRT streamer uses streamid in the format “login|password”.

  • Peer between Perfect Streamer and third-party SRT streamers. On the transmitter you can set up SRT client mode by switching listen off. The SRT stream ID is entered in the login field, if needed. In listen mode, IP-address authorization is available: the IP is entered in the login field in Peer. On the receiver, listen mode can be enabled; enter the SRT stream ID in the login field, and you can also specify the host from which connections are allowed.

Operation in listener mode: a TV channel stream is received or transmitted, with the receiving port specified.

Ports for listen mode must be unique.

Encryption of streams is available (Crypto protection), AES-128 is used. To enable encryption on both sides, enter Crypt Passphrase - a shared key.

If the transmitter uses listen mode (the default), the connection is initiated by the receiver and the transmitter requires authentication: receivers are registered in the Peers section, and a login and password are required.

SRT protocol options match the description - https://github.com/Haivision/srt/blob/master/docs/API/API-socket-options.md

Reorder (SRTO_LOSSMAXTTL) - The value to which the reorder tolerance can increase. Reorder tolerance is the number of packets that must follow a detected ‘gap’ in the sequence numbers of incoming packets before a loss report is sent (in the hope that the gap is caused by packet reordering rather than loss). The reorder tolerance value starts at 0 and increases when packet reordering is detected. This occurs when a ‘late’ packet is received with a sequence number higher than the last received one, but without the retransmission flag. Upon such detection, the reorder tolerance is set to the value of the interval between the last number and the sequence number of that packet, but not more than the value set by the SRTO_LOSSMAXTTL parameter. By default, this value is 0, which means that this mechanism is disabled. https://github.com/Haivision/srt/blob/master/docs/API/API-socket-options.md#SRTO_LOSSMAXTTL

Overhead (SRTO_OHEADBW, %) - Overhead for bandwidth recovery beyond the input rate (see SRTO_INPUTBW), as a percentage of the input rate. Effective only if SRTO_MAXBW is set to 0. Sender: user-configurable, default: 25%.

Recommendations: Overhead is intended to provide additional bandwidth capacity in case a packet has taken up some bandwidth but is then lost and needs to be retransmitted. Therefore, the effective maximum bandwidth should be sufficiently higher than the bitrate of your stream to leave room for retransmissions, while being limited so that retransmitted packets do not lead to a sharp increase in bandwidth usage when large groups of packets are lost. Do not set too low a value and avoid 0 if the SRTO_INPUTBW parameter is set to 0 (automatic). Otherwise, your stream will quickly interrupt with any increase in packet loss.

Max Band (SRTO_INPUTBW, bps) - This option takes effect only when SRTO_MAXBW is set to 0 (relative mode). It controls the maximum bandwidth together with the SRTO_OHEADBW parameter according to the formula: MAXBW = INPUTBW * (100 + OHEADBW) / 100. If this option is set to 0 (automatic), the actual INPUTBW value is estimated from the input stream (the cases where the application calls the srt_send* functions) during transmission. The minimum allowable estimate is limited by the SRTO_MININPUTBW parameter, i.e. INPUTBW = MAX(INPUTBW_ESTIMATE, MININPUTBW).

Recommendations: Set this parameter to the expected bitrate of your broadcast and leave the default value of 25% for SRTO_OHEADBW. https://github.com/Haivision/srt/blob/master/docs/API/API-socket-options.md#srto_inputbw
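A worked example of the formula above, using the default 25% overhead:

# Effective SRT bandwidth cap in relative mode (SRTO_MAXBW = 0):
# MAXBW = INPUTBW * (100 + OHEADBW) / 100
def srt_effective_maxbw(inputbw_bps, oheadbw_pct=25):
    return inputbw_bps * (100 + oheadbw_pct) / 100

# An 8 Mbps stream with the default 25% overhead may use up to 10 Mbps.
print(srt_effective_maxbw(8_000_000))  # 10000000.0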

Timeout (SRTO_CONNTIMEO, ms) - Connection timeout value in milliseconds. This is the time during which the connecting object will attempt to connect and wait for a response from the remote endpoint before terminating the connection with an error code. https://github.com/Haivision/srt/blob/master/docs/API/API-socket-options.md#SRTO_CONNTIMEO

Pro-MPEG / RTP+FEC Protocol (COP3 / SMPTE 2022-1/2)

Delivery of MPEG-TS over RTP with forward error correction (FEC). The same protocol appears in literature and equipment under different names:

  • Pro-MPEG / Pro-MPEG COP3 — Code of Practice #3 of the Pro-MPEG forum, described in the IEEE standard (https://ieeexplore.ieee.org/document/6738329);

  • RTP + FEC — functional name (RTP stream plus FEC channels);

  • SMPTE 2022-1 — Column FEC (same scheme, published as an SMPTE standard);

  • SMPTE 2022-2 — Row + Column FEC (two-dimensional matrix, implemented in PSS).

Its advantage is low latency. Its drawback is high additional traffic (overhead), and it performs poorly under heavy packet loss (above 0.2%).

This protocol is based on RTP with the addition of two channels for FEC (error-correction code). The two FEC channels use ports port+2 and port+4, which must be taken into account when adding several streams to the same host or multicast group.

At the sender, the stream of RTP packets is grouped into a matrix with Cols columns and Rows rows. Example for cols=8 and rows=4 (default):

RTP01  RTP02  RTP03  RTP04  RTP05  RTP06  RTP07  RTP08  | R1
RTP11  RTP12  RTP13  RTP14  RTP15  RTP16  RTP17  RTP18  | R2
RTP21  RTP22  RTP23  RTP24  RTP25  RTP26  RTP27  RTP28  | R3
RTP31  RTP32  RTP33  RTP34  RTP35  RTP36  RTP37  RTP38  | R4
------------------------------------------------------
C1     C2     C3     C4     C5     C6     C7     C8

The Rx and Cx packets carry the FEC data for rows and columns. The smaller the matrix, the better the loss-correction ability, but the more additional traffic. In this example, there are 12 FEC packets for 32 RTP packets of the stream.
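The overhead for a given matrix follows directly from these counts; a small calculation assuming one FEC packet per row and one per column:

# FEC overhead of a cols x rows Pro-MPEG matrix:
# one row-FEC packet per row plus one column-FEC packet per column.
def fec_overhead(cols=8, rows=4):
    rtp_packets = cols * rows
    fec_packets = rows + cols          # R1..R<rows> and C1..C<cols>
    return fec_packets / rtp_packets   # extra traffic as a fraction

print(fec_overhead(8, 4))   # 0.375 -> 12 FEC packets per 32 RTP packets
print(fec_overhead(4, 4))   # 0.5   -> smaller matrix, more overhead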

Encryption of streams is available (Crypto protection); AES-128 is used. This is not part of the standard, so compatibility with third-party software or hardware is not guaranteed.

There are non-standard protocol extensions:

  • Multiplexing — multiplexing of RTP channels through a single UDP port. Can simplify network configuration.

  • Header XOR — obfuscation of the RTP header. Makes it harder to identify the traffic type on the network.

RIST protocol

RIST is a new open protocol based on RTP/RTCP. It works as Automatic Repeat reQuest (ARQ) without ACK, using only NACK, which provides high efficiency.

It supports unicast and multicast.

Simple and Main profiles are implemented. Simple uses two consecutive UDP ports; the specified port must be even. Main uses a single RTP port with data multiplexing.

On the transmitter, the endpoint is configured as output. The receiver address and port are configured for unicast mode. For multicast mode, you need to set the network interface through which the data will be transmitted. Also for multicast mode, you can set up receiver authorization by IP address in Peer.

On the receiver, the endpoint is configured as input. For unicast mode, the receiving (listen) port is configured and the network interface is required. For multicast mode, only the multicast group and port are set.

RIST supports multiple individual peers (addresses). You can set a weight (greater than 1) to activate load-balancing mode across peers depending on weight.
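As a rough illustration of weight-proportional balancing (the actual scheduling algorithm inside RIST is not specified here):

# Weight-proportional peer selection (illustrative sketch only).
import random

peers = {"peer-a": 3, "peer-b": 1}   # weights; values > 1 enable balancing

def pick_peer():
    names = list(peers)
    return random.choices(names, weights=[peers[n] for n in names], k=1)[0]

# About 75% of traffic goes to peer-a and 25% to peer-b.
sample = [pick_peer() for _ in range(10000)]
print(sample.count("peer-a") / len(sample))   # ~0.75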

If the transmitter uses multicast, there can be many receivers. In this case, it is possible to authenticate receivers by IP address. To do this, enable authentication in the transmitter settings (disabled by default), add the client to the peer list, and enter the IP address in the login field.

Other protocols

In addition to peer protocols for receiving and transmitting streams, others are available:

Protocol | Input | Output
UDP      | Yes   | Yes
RTP      | Yes   | Yes
TCP      | Yes   | No
HLS      | Yes   | Yes

UDP (Unicast or Multicast) - reception and transmission of MPEG-TS in UDP packets, up to 7 TS packets per UDP packet (see the MTU arithmetic after this list).

RTP (Unicast or Multicast) - a standard RFC-based protocol. Recovery of reordered packets is supported.

TCP - receiving MPEG-TS over a TCP connection, in TCP client mode.

HLS - receiving and transmitting MPEG-TS over HTTP or Apple’s HLS protocol. When receiving an adaptive playlist, the stream with the highest bitrate is selected.
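As referenced in the UDP entry above, the 7-packet limit follows from Ethernet MTU arithmetic; a quick check:

# Why up to 7 TS packets per UDP datagram: MTU arithmetic.
TS_PACKET = 188
payload = 7 * TS_PACKET        # 1316 bytes of MPEG-TS
datagram = payload + 8 + 20    # + UDP header + IPv4 header
print(payload, datagram)       # 1316 1344 -> fits a 1500-byte MTU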

Working with files and devices

The file/device protocol is available for both input and output, for working with files and devices.

output file/device - writing to a file or output to a device. Writing to a ts-file may be needed for subsequent analysis with other analyzers. Output to a device - any device (including SDI) that is registered in /dev.

input file/device - cyclic playback of video from a ts-file.

When working with files, the full path to the file is specified in the field File Path:

/catalog/stream.ts.

When working with devices, the Is Device flag is additionally activated.

Stream access list and Peer limitations

The Peer’s stream access list allows you to configure access restrictions for clients in SRT Listen, PS1, HLS and HTTP modes. The stream access list is configured on the Sender side in the Peer settings. Simply add the allowed streams to the Stream Access block. It is empty by default, in which case all streams are allowed.

A time limit and a connection count limit are also available for the Peer.

Anonymous peer

By default, the peer login is anonymous. An anonymous peer allows streams to be distributed without binding to an IP or a login and password. Restrictions on the number of streams served by transport protocols, date restrictions and the allowed streams list still apply.

It is possible to create an individual peer by login (name) and password.

To authorize peer by IP, you should enable the “Login Is IP” option.

Authorization options:

  • By single IP

  • By IP range, for example: “192.168.1.10-192.168.1.20”

  • Combined option; IP list syntax: ip[-ip2][,…]
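A minimal checker for this syntax, as a sketch (the product’s own validation logic may differ):

# Sketch of a checker for the peer IP-authorization syntax ip[-ip2][,...]
import ipaddress

def ip_allowed(client_ip, acl):
    client = ipaddress.ip_address(client_ip)
    for item in acl.split(","):
        lo, _, hi = item.strip().partition("-")
        first = ipaddress.ip_address(lo)
        last = ipaddress.ip_address(hi) if hi else first
        if first <= client <= last:
            return True
    return False

print(ip_allowed("192.168.1.15", "192.168.1.10-192.168.1.20"))       # True
print(ip_allowed("10.0.0.5", "192.168.1.10-192.168.1.20,10.0.0.5"))  # True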

Connection of third-party applications

To support protocols not covered by the built-in tools, a stream can be received or transmitted through third-party console applications. A separate std protocol exists for input and output for this purpose. The MPEG-TS stream is received and transmitted through the operating system’s standard I/O streams.

The settings specify the console application (absolute path) and its command line. You can also set environment variables.

For input, the configured external application must not print messages to standard output; diagnostics must go to stderr only.

For output, packing of up to 7 MPEG-TS packets per write can be set.
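A minimal sketch of an external input application for the std protocol, illustrating the stdout/stderr rule above (the source path is hypothetical):

#!/usr/bin/python3
# Sketch: external input application for the std protocol.
# MPEG-TS goes to stdout; all diagnostics go to stderr only.
import sys

CHUNK = 7 * 188   # up to 7 TS packets per write

def main():
    print("relay started", file=sys.stderr)        # diagnostics -> stderr
    with open("/path/to/source.ts", "rb") as src:  # hypothetical source
        while data := src.read(CHUNK):
            sys.stdout.buffer.write(data)          # MPEG-TS -> stdout
    sys.stdout.buffer.flush()

if __name__ == "__main__":
    main()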

Input stream requirements

The input must comply with ISO 13818-1, as a Single Program (SPTS) or Multi Program Transport Stream (MPTS). MPTS specifics are described below; the settings that follow apply to Single Program streams.

At least one audio track is required.

Streams without video are supported; enable Radio mode.

Scrambled (encrypted) streams are supported; enable Scrambled Stream.

For synchronization, the stream must have valid PCR marks.

Stream settings

Set a unique stream name. Use Latin letters, digits and the characters «_» and «-». You can also set a stream display name; national characters are supported there.

Stream Timeout - total stream timeout. If during this time there is no valid input stream, then a complete restart is performed.

Pause - puts the stream, as well as each input and output, into an inactive state. By default, a newly added stream and its inputs and outputs are paused and inactive.

The program checks each input stream for validity. If the check fails, the input is considered faulty.

Check Interval - Interval to re-check the stream.

MPEG-TS filter options:

Remove All Unnecessary Data - deletes all data except PAT/PMT, video and audio, and except the data covered by the separate filters below.

Remove SDT - Deletes SDT data (channel name, provider, etc.).

Remove EIT/EPG - Deletes EPG data.

Remove Teletext - Deletes teletext data.

Remove Subtitles - Deletes subtitles data.

Stream bitrate management:

Bitrate mode - bitrate management mode.

  1. Origin (default) - stream is transmitted without changes.

  2. VBR - removes NULL packets to minimize bitrate. Enable this if streams are used only for OTT broadcasting.

  3. CBR auto - enables bitrate alignment by inserting NULL packets (stuffing). The resulting bitrate is set to the maximum bitrate of the input stream.

  4. CBR set stuffing bitrate - the desired bitrate is set explicitly. If it is set lower than the input stream bitrate, the stream is aligned as in CBR auto mode (see the stuffing-share sketch below).

When CBR mode is enabled, PCR accuracy will conform to TR 101 290. The PCR interval will remain as in the original stream.
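For the CBR modes above, the NULL-packet share follows from simple arithmetic; a minimal sketch (the function name is illustrative):

# NULL-packet (stuffing) share when padding a stream to a CBR target.
def stuffing_share(input_bps, target_bps):
    if target_bps <= input_bps:
        return 0.0   # effectively CBR auto: aligned to the input maximum
    return (target_bps - input_bps) / target_bps

# Padding an 8 Mbps stream to 10 Mbps CBR means 20% NULL packets.
print(stuffing_share(8_000_000, 10_000_000))   # 0.2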

Streams correction:

Fix PAT/PMT interval - corrects the interval to correspond to TR 101 290, inserting additional PAT/PMT.

Source reservation

Several inputs can be specified as a list, but only one is active. If the active input fails, an attempt is made to use the next one in the list, and so on in a circle.

If the stream has Fallback Check enabled, then while a backup input (not the first in the list) is running, the inputs higher in the list are re-checked at the Check Interval. If a re-checked stream is valid, the stream switches to it.

Since the order of inputs matters, it can be changed. A paused input is not taken into account during operation.

Filtering and modifying MPEG-TS

By default, the MPEG-TS stream is transmitted as is.

For each input the following MPEG-TS stream filtering options are available:

PID Accept - list of allowed PIDs. If empty, all are allowed except those in PID Reject.

PID Reject - list of prohibited PIDs. Takes precedence over PID Accept.

It is possible to remap PIDs. For this, the PID Old and PID New lists are provided.

Mapping PID and Languages — remap of audio track languages.

Default Language - assigns the default language if an audio track has no language set.

For stream you can assign new MPEG-TS data (SDT table):

  • MPEG-TS Network ID

  • Service Name

  • Provider Name

  • Language

MPTS streams

An MPTS stream is an MPEG-TS stream with several services (programs), each with a unique number (PNR). It is used for DVB broadcasting.

Filtering options are not available for MPTS streams. Streams are passed through as is.

The mosaic feature is disabled by default. Enabling it on weak CPUs is not recommended, as it can add jitter.

Stream diagnostics displays data for all programs separately, as well as summary statistics.

Demultiplexor

Extracts an SPTS stream from an MPTS source: add an input of the demux type, select a source and a service PNR. If the source MPTS is active, a list will be available when selecting the PNR; otherwise you need to set the PNR manually.

Multiplexor

Assembles an MPTS stream from separate SPTS streams. To configure:

  • Create MPTS stream.

  • Add an input of the muxer type. Set the bitrate for CBR stream alignment (stuffing, TR 101 290 and T-STD compliance).

    If you set 0 (the default), alignment is not performed and the bitrate corresponds to the input streams. You can also enter some MPEG-TS stream parameters here; for most applications the defaults are suitable.

  • Add an output of the muxer type to each source stream. Set the service name and provider name if needed. Select a language in the MPEG-TS settings if you use a non-Latin alphabet.

  • Repeat for all sources.

The multiplexor generates SDT, NIT and TDT/TOT (time marks) for the stream. EIT (EPG) is taken from the source streams. New PIDs are assigned.

Test streams

Test Stream generator — a test signal (test card). It lets you build generated video streams as fillers for the air or as fallbacks during failures of the main streams. The picture type, audio, overlaid text and time can all be configured.

It is configured by enabling the corresponding input type for the stream. You can set the video format, resolution, bitrate, sound volume and frequency, etc.

A list of available Test Streams is available in the left side menu of the program.

OTT service

Outputs streams over HTTP-based protocols — HLS, MPEG-DASH (since version 1.12) and MPEG-TS over HTTP. HTTPS (SSL) is supported. The output is enabled on the OTT tab of the Stream settings.

URLs for connections have the following format:

host and port - set in http server settings.

stream - stream ID. Not to be confused with the sequence number in the streams list. The ID is shown in the stream stats page header and in the ID column of the streams list; it is set at stream creation and never changes.

Likewise for HLS and DASH:

Output URL templates and working status are shown on stream stats page. Unauthorized access is denied, all clients should be registered in Peers.

Additional HLS parameters are available in URL (optional):

[URL]?a=1&s=40&m=40&v=5

  • a: 1 - use absolute paths in playlist (default), 0 - relative paths.

  • s - dynamic playlist duration (sec), default 40 sec.

  • m: dynamic playlist minimum duration (sec), 40 sec by default. The maximum playlist duration is 60 sec. If the current chunk buffer size is less than the requested minimum, error 404 is returned. This ensures that HLS starts with a full chunk buffer on the server.

  • v: HLS protocol version for the playlist, 5 by default. A version change may be required for some players.

The file name index.m3u8 can be added to the URL to support some players, for example: http://host:port/hls/stream/login/password/index.m3u8.
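A small helper that assembles such a URL from the parameters described above (the base-URL layout follows the example in this section):

# Building an HLS request URL with the optional parameters above.
from urllib.parse import urlencode

def hls_url(host, port, stream, login, password, a=1, s=40, m=40, v=5):
    base = f"http://{host}:{port}/hls/{stream}/{login}/{password}/index.m3u8"
    return base + "?" + urlencode({"a": a, "s": s, "m": m, "v": v})

print(hls_url("example.com", 8080, "news", "user", "pass"))
# http://example.com:8080/hls/news/user/pass/index.m3u8?a=1&s=40&m=40&v=5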

Two modes are supported for the HLS server - Peer mode and OTT mode.

Peer mode - simple chunk segmentation mode. Recommended for stream peering (stream distribution).

OTT mode - chunk segmentation optimized for fast player start. CPU load is higher in this case; recommended for broadcasting.

SSL (HTTPS) can be enabled for HTTP server, this is done in the server settings.

Chunk Min Interval and Chunk Max Interval

In OTT mode, the stream is analyzed for PAT/PMT/SPS/PPS/IFrame and chunks are cut based on the fast-player-start criterion. The analysis starts at the min interval; if for some reason the needed data is not found, the chunk is forcibly cut at the max interval.

HLS Adaptive Multistream

HLS Adaptive Multistream support is available starting from version 1.10.

An HLS playlist should be configured for each adaptive stream. To do this:

  • Enable HLS with OTT mode for each stream you are going to use in adaptive streams.

  • The Streams Adaptive menu item will become available in the main menu. There you add an adaptive stream and select all the streams you need for its playlist.

  • The Bitrate parameter can be configured for each stream. The default is 0, which means the measured stream bitrate is used. Otherwise you can configure it manually.

The adaptive stream URL differs:

A peer (client) can have an access list in which adaptive streams are also available. Permission for an adaptive stream includes permissions for all the streams included in it.

Caching model for OTT HLS and DASH

The server emits responses of three categories that differ in content lifetime and suitability for caching by intermediate nodes (reverse proxy, CDN, client cache).

1. Caching model

1.1. Resources and HTTP headers

Resource     | URL                                              | Content-Type                        | Cache-Control
TS segment   | /h<sess>/<keyID>.ts, /h<sess>/<subID>/<keyID>.ts | video/mp2t                          | public, max-age=60, immutable
DASH MPD     | /h<sess>/index.mpd                               | application/dash+xml; charset=utf-8 | public, max-age=1
HLS master   | /hls/<stream>/<login>/<pass>/index.m3u8          | application/vnd.apple.mpegurl       | public, max-age=1
HLS media    | /h<sess>/index.m3u8, /h<sess>/<subID>/index.m3u8 | application/vnd.apple.mpegurl       | public, max-age=1
302 Redirect | /dash/<stream>/<login>/<pass>/index.mpd          | —                                   | no-cache, no-store
Raw TS       | /http/<stream>/<login>/<pass>                    | video/mp2t                          | not set; not cached

1.2. TS segment characteristics

The keyID identifier is computed as CRC64(startTime || streamID) and is globally unique. A segment URL addresses immutable content — repeat requests for the same URL return an identical byte stream (as long as the segment stays within the sliding window).

The immutable directive suppresses conditional revalidation by the client (If-None-Match, If-Modified-Since). The max-age=60 value is compatible with a typical timeShiftBufferDepth=40s.
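As an illustration of this content addressing, a sketch in Python. The CRC-64 variant below (ECMA polynomial) is an assumption for illustration only; the server’s actual CRC-64 variant is not specified here.

# Illustration of keyID = CRC64(startTime || streamID) segment naming.
# The CRC-64/ECMA polynomial here is an assumption, not the server's spec.
def crc64(data, poly=0x42F0E1EBA9EA3693):
    crc = 0
    for byte in data:
        crc ^= byte << 56
        for _ in range(8):
            if crc & (1 << 63):
                crc = ((crc << 1) ^ poly) & 0xFFFFFFFFFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFFFFFFFFFF
    return crc

def segment_url(sess, start_time, stream_id):
    key = crc64(f"{start_time}{stream_id}".encode())
    return f"/h{sess}/{key:016x}.ts"   # same key -> same immutable bytes

print(segment_url("0123456789abcdef", 1700000000, 10000))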

1.3. Manifest characteristics

max-age=1 caps the upper bound of cached-content staleness at one second. Combined with proxy_cache_lock on (nginx) it collapses bursts of manifest requests into a single origin request per second.

1.4. Content variability

With absPath=0 (default, no a URL parameter) HLS media and DASH MPD manifests do not embed a session identifier. Manifest content is identical across sessions belonging to the same (stream, param) combination, so the reverse-proxy cache can share a single entry across sessions when the cache key is normalised.

With absPath=1 (a=1 URL parameter) the manifest body contains absolute URLs that include scheme, host, and session id. Content becomes session-specific, eliminating cross-session cache reuse.

2. Client behaviour

Client          | Manifest refresh URL                           | Effect on session count
VLC 3.x HLS     | /h<sess>/index.m3u8                            | One session per playback
VLC 3.x DASH    | /dash/<stream>/.../index.mpd                   | Handled by session reuse (see 3.2)
ffmpeg 5.x HLS  | /h<sess>/index.m3u8                            | One session per playback
ffmpeg 5.x DASH | /dash/<stream>/.../index.mpd (re-request loop) | Handled by session reuse (see 3.2)
dash.js, hls.js | /h<sess>/... via <Location> / session URL      | One session per playback

3. Special mechanisms

3.1. HTTP 302 Redirect for DASH

A /dash/<stream>/<login>/<pass>/index.mpd request returns 302 Found with a Location: /h<sess>/index.mpd header. The body is empty. Authentication and session allocation happen during the redirect.

Clients that cache the redirect address the session URL directly in subsequent requests. Clients that do not, re-issue the redirect request. The cost of repeat redirect handling is limited to auth check and session-reuse operations.

3.2. Session reuse for DASH

While processing a /dash/.../index.mpd request executed under login-id L for stream-id S and adaptive=A, if _ottClientList already holds a DASH session with the same (L, S, A) the existing sessID is returned. No new session is created, no maxConn slot is consumed.

Applies to DASH only. HLS does not need a separate reuse mechanism: HLS clients refresh the media playlist via the session URL and do not trigger applyNewOTTSess on each refresh.

3.3. Reusing segments between sessions

The /h<sess>/<keyID>.ts path is sess-independent when resolving keyID into content: keyID uniquely identifies a segment within the registered ChunkList (see _ottStreamList). Nginx with a normalised cache key (stripping the /h<sess>/ prefix) serves every request for the same keyID from a single cache entry.

4. Request parameters

Parameter | Default value | Effect
a         | 0             | 1 — absolute URLs in manifests; 0 — relative
s         | 40            | timeShiftBufferDepth in seconds
m         | 40            | Minimum window length for manifest emission
v         | 3             | #EXT-X-VERSION in HLS (ignored by DASH)

Changing a parameter via query string updates the values stored in the session on the next applyNewOTTSess call.

5. Load characteristics

Origin load scales with the number of distinct streams being watched concurrently. Increasing the number of clients watching the same stream does not increase origin requests when a reverse-proxy cache with a normalised cache key is in place.

Scenario                                      | Origin request rate (ref.)
1 client per stream X                         | MPD: 0.4 req/s, segment: 0.2 req/s
N clients on one stream X (cache enabled)     | MPD: 1 req/s, segment: 0.2 req/s
N ffmpeg clients in replay mode on one stream | MPD: 1 req/s (with proxy_cache_lock)
N clients on N distinct streams               | MPD: 0.4·N req/s, segment: 0.2·N req/s

6. Nginx as a caching reverse proxy

6.1. Basic configuration

proxy_cache_path /var/cache/nginx/pss_segments
    levels=1:2 keys_zone=pss_segments:100m
    max_size=20g inactive=30m use_temp_path=off;

proxy_cache_path /var/cache/nginx/pss_manifests
    levels=1:2 keys_zone=pss_manifests:10m
    max_size=256m inactive=5m use_temp_path=off;

upstream pss_backend {
    server 127.0.0.1:41972;
    keepalive 64;
}

map $uri $pss_cache_key {
    ~^/h[0-9a-f]{16}(?<tail>/.+\.(ts|m3u8))$  "stream:$tail";
    default                                     $uri;
}

server {
    listen 80;
    server_name stream.example.com;

    location ~* "^/h[0-9a-f]{16}(/[0-9]+)?/[0-9a-f]+\.ts$" {
        proxy_cache               pss_segments;
        proxy_cache_key           $pss_cache_key;
        proxy_cache_valid         200 60s;
        proxy_cache_valid         404 403 0s;
        proxy_cache_lock          on;
        proxy_cache_use_stale     updating error timeout;
        proxy_cache_revalidate    on;
        add_header                X-Cache-Status $upstream_cache_status;

        proxy_pass                http://pss_backend;
        proxy_http_version        1.1;
        proxy_set_header          Connection "";
        proxy_buffering           on;
    }

    location ~* "(^/h[0-9a-f]{16}(/[0-9]+)?/index\.(m3u8|mpd)$|^/(hls|dash)/.*\.(m3u8|mpd)$)" {
        proxy_cache               pss_manifests;
        proxy_cache_key           $pss_cache_key;
        proxy_cache_valid         200 1s;
        proxy_cache_valid         404 403 0s;
        proxy_cache_lock          on;
        proxy_cache_lock_timeout  2s;
        proxy_cache_use_stale     updating;
        add_header                X-Cache-Status $upstream_cache_status;

        proxy_pass                http://pss_backend;
        proxy_http_version        1.1;
        proxy_set_header          Connection "";
    }

    location / {
        proxy_pass                http://pss_backend;
        proxy_http_version        1.1;
        proxy_set_header          Connection "";
        proxy_set_header          X-Forwarded-Proto $scheme;
        proxy_set_header          X-Forwarded-Host  $host;
        proxy_buffering           off;
        proxy_read_timeout        3600s;
    }
}

6.2. Directive purposes

Directive                      | Purpose
proxy_cache_lock on            | Serialises upstream requests when concurrent cache misses target the same key
proxy_cache_use_stale updating | Returns the stale copy to concurrent requests while the cache is being refreshed
proxy_cache_revalidate on      | Uses If-Modified-Since on cache miss when a saved copy exists
proxy_cache_valid 404 403 0s   | Disables caching of authorisation errors and 404
keepalive 64 in upstream       | Maintains a pool of persistent connections to origin
proxy_buffering on             | For segments; enables response buffering in nginx
proxy_buffering off            | For the / location; disables buffering (raw streaming)

6.3. Calculating segment cache max_size

Rough value: bitrate × timeShiftBufferDepth × distinct_streams × 2

Example: 10 streams × 8 Mbps × 40s × 2 ≈ 800 MB. A 10x headroom is recommended to absorb bitrate variance.
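The same calculation in code, with the recommended 10x headroom applied (the function name is illustrative):

# Rough segment-cache sizing: bitrate x timeShiftBufferDepth x streams x 2.
def cache_size_bytes(streams, bitrate_bps, depth_s=40, safety=2):
    return streams * (bitrate_bps // 8) * depth_s * safety

base = cache_size_bytes(10, 8_000_000)   # 800_000_000 -> ~800 MB
max_size = base * 10                     # 10x headroom for bitrate variance
print(base, max_size)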

6.4. TLS termination

The Perfect Streamer server accepts connections on HTTP and HTTPS ports. With TLS termination at nginx the upstream uses the HTTP port. Forwarding X-Forwarded-Proto and X-Forwarded-Host headers is required for correct absolute URL composition when absPath=1.

server {
    listen 443 ssl http2;
    server_name stream.example.com;

    ssl_certificate           /etc/letsencrypt/live/stream.example.com/fullchain.pem;
    ssl_certificate_key       /etc/letsencrypt/live/stream.example.com/privkey.pem;
    ssl_protocols             TLSv1.2 TLSv1.3;
    ssl_session_cache         shared:SSL:10m;
    ssl_session_timeout       1d;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location ... {
        proxy_pass                http://pss_backend;
        proxy_set_header          X-Forwarded-Proto https;
        proxy_set_header          X-Forwarded-Host  $host;
        proxy_set_header          Host              $host;
        # + caching directives from 6.1
    }
}

server {
    listen 80;
    server_name stream.example.com;
    return 301 https://$host$request_uri;
}

For HTTPS between nginx and origin, proxy_ssl_verify and proxy_ssl_trusted_certificate directives apply. Encryption is redundant for loopback connections.

6.5. Multi-host

When serving multiple server_name from a single nginx process, $host is added to the cache key to isolate content:

map $uri $pss_cache_key {
    ~^/h[0-9a-f]{16}(?<tail>/.+\.(ts|m3u8))$  "$host:stream:$tail";
    default                                     "$host:$uri";
}

keys_zone is sized at roughly 8000 keys per MB. For multi-host installations with thousands of streams, keys_zone=...:300m or higher is recommended.

7. Client-side caching

Cache-Control: immutable is honoured by Chrome/Firefox/Safari. The client cache returns the segment without a conditional request on re-access (including backward seek within the player buffer).

Service Workers can apply a cache-first strategy based on Cache-Control content. DASH players (dash.js, Shaka) use MSE through SourceBuffer; a segment placed in the buffer remains available without a repeat HTTP request until it slides out of the window.

For cross-domain requests the Access-Control-Allow-Origin: * header allows caching in shared caches without Vary: Origin. Switching ACAO to a specific Origin requires Vary: Origin, which reduces shared-cache efficiency.

8. Deployment via CDN

Perfect Streamer is compatible with pull-from-origin CDNs (Cloudflare, Akamai, Fastly, BunnyCDN, Amazon CloudFront).

Origin shield. Placing one or more shield nodes between CDN edge and origin is recommended to reduce origin request rate when clients are globally distributed.

Purge. Content-addressed segments require no purge. When stream metadata changes (codec, resolution), manifests refresh within max-age=1 without an explicit purge.

Cache warming. When a specific stream is expected to spike, the CDN may be warmed from several geographic points before broadcast start.

Geo-distribution. Segments (max-age=60) are well suited for geographically distributed caching. Manifests (max-age=1) tolerate up to one-second delivery delay — acceptable for non-low-latency live.

9. Monitoring

9.1. X-Cache-Status

Add add_header X-Cache-Status $upstream_cache_status; in every cached location. Values:

Value       | Description
HIT         | Response from cache
MISS        | Not in cache; fetched from origin and stored
EXPIRED     | Expired, refreshed
UPDATING    | Stale copy returned to a concurrent request during refresh
STALE       | use_stale returned the expired copy (origin unreachable)
REVALIDATED | Origin returned 304 Not Modified
BYPASS      | proxy_cache_bypass triggered

9.2. Access-log format

log_format pss_cache '$remote_addr $status $request_method "$request" '
                     '$body_bytes_sent rt=$request_time ut=$upstream_response_time '
                     'cache=$upstream_cache_status key=$pss_cache_key';

server {
    access_log /var/log/nginx/pss.log pss_cache;
}

9.3. Metrics

The nginx-vts module exports per-zone metrics in Prometheus format:

GET /status/format/prometheus

Recommended alert thresholds:

Metric                     | Threshold              | Possible cause
Segment HIT rate           | < 90% over 5 minutes   | Cache-key normalisation broken; max_size too small
Manifest MISS rate         | > 50% over 1 minute    | proxy_cache_lock is not serialising requests
Upstream response time p95 | > 500 ms over 1 minute | Origin overload
Cache zone fill            | > 90% over 10 minutes  | Approaching max_size; LRU eviction expected

10. Diagnostics

Symptom                                             | Likely cause                                                         | Resolution
Low segment HIT rate                                | Vary: Origin with high Origin variance; broken normalisation in map | Inspect headers and the regex in the map directive
404 on segments after they leave the sliding window | Cached 404 for a segment that fell out of the sliding window        | Add proxy_cache_valid 404 0s in the segments location
Playback start delay of 2–5 s                       | proxy_cache_lock_timeout exceeds target latency                     | Lower to 1–2 s; enable proxy_cache_use_stale updating
Manifest does not refresh                           | proxy_cache_valid overrides max-age                                 | Set proxy_cache_valid 200 1s explicitly
Growing TIME_WAIT on upstream                       | keepalive missing in upstream block                                 | Add keepalive 64, proxy_http_version 1.1, proxy_set_header Connection ""
403 on /dash/.../<segment>.ts from ffmpeg           | Client resolves relative URLs against the pre-redirect URL          | Server emits <BaseURL>/h<sess>/</BaseURL> (absolute path); compatible in the current build

11. Security

11.1. Session URL

A URL of the form /h<sess>/... acts as the session token — no repeat authentication is required. Lifetime is bounded by an idle timeout (30 s). On inactivity the session is removed by the cleaner task.

Requirements:

  • HTTPS on every OTT path (/hls/, /dash/, /h<sess>/) in production

  • Session ID in the Location header of 302 is not cached (no-cache, no-store)

11.2. Rate limiting

limit_req_zone $binary_remote_addr zone=dash_top:10m rate=5r/s;
limit_req_zone $binary_remote_addr zone=hls_top:10m  rate=5r/s;

server {
    location /dash/ {
        limit_req zone=dash_top burst=20 nodelay;
        proxy_pass http://pss_backend;
    }
    location /hls/ {
        limit_req zone=hls_top burst=20 nodelay;
        proxy_pass http://pss_backend;
    }
}

Session URLs (/h<sess>/) do not require rate limiting — handling is cheap and responses are cached.

11.3. Caching error responses

proxy_cache_valid 200 60s;
proxy_cache_valid 301 302 0s;
proxy_cache_valid 404 403 0s;
proxy_cache_valid any 1s;

Disables caching of redirects (unique sess in Location) and of authorisation/missing-resource error responses.

11.4. Restricting network access to origin

Port 41972 (41982 for HTTPS) must be closed to external traffic. Acceptable configurations:

  1. Bind Perfect Streamer to 127.0.0.1 (when nginx is co-located)

  2. Firewall rule:

iptables -A INPUT -p tcp --dport 41972 ! -s 10.0.0.0/8 -j DROP

12. Middleware integration

12.1. The prefix-login model

Perfect Streamer supports delegating user identification to a middleware/billing system via the prefix-login mechanism. An external connector to the billing system is not included in the current release.

Embedded-user configuration:

{
  "id": 9,
  "login": "sub",
  "password": "xxx",
  "is-prefix": true,
  "max-conn-http-hls": 1,
  "accept-stream": [ ... ]
}

With is-prefix: true the server accepts URLs whose login follows the pattern <prefix><billing_user_id>:

/dash/test1/sub42/xxx/index.mpd
/hls/test1/sub43/xxx/index.m3u8
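A sketch of the matching rule (illustrative; the server’s internal logic is not published):

# Sketch of prefix-login matching: URL login = <prefix><billing_user_id>.
def match_prefix_login(url_login, configured_login, is_prefix):
    if not is_prefix:
        return url_login == configured_login, None
    if url_login.startswith(configured_login) and len(url_login) > len(configured_login):
        return True, url_login[len(configured_login):]   # billing user id
    return False, None

print(match_prefix_login("sub42", "sub", True))   # (True, '42')
print(match_prefix_login("user1", "sub", True))   # (False, None)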

12.2. Statistics format

<clients>
  <client login-id="-1974387287" login="sub" match-login="sub42"
          sess-id="11331..." ott-type="dash" stream-id="10000" .../>
  <client login-id="-2147031294" login="sub" match-login="sub43"
          sess-id="11132..." ott-type="dash" stream-id="10000" .../>
</clients>

The login-id field holds the hash of the URL login. The login field is the configured value. The match-login field is the URL login used by the client.

12.3. Limitations of prefix-login

  • Shared password. All subscribers of a prefix pool use a single password value. Compromising the password grants access to any <prefix><string>.

  • ACL granularity. accept-stream applies to the whole prefix pool. Per-subscriber ACL is not available without external billing.

  • Password rotation. Changing the password disconnects all active subscribers. Gradual rotation requires temporarily using two prefix logins.

13. WebVTT subtitles

The subtitle source is DVB Teletext / DVB Subtitling from the input MPEG-TS. Teletext subtitle tracks must be present in the Media Information or Original Media Information sections. The Analyzer section can also be used to verify that packets of the corresponding PIDs are active.

For OTT HLS/DASH, OTT mode must be enabled (WebVTT subtitles are not available in Peer mode). The OTT WebVTT buffer chunk count counter in the Output # OTT section must become non-zero.

To diagnose subtitles, enable Analyze and Trace on the stream. At stream start the stream log should contain:

Start Teletext subtitle decoder
[ttxsubdec] ttx: pid=331 magazine=8 page=0x88 lang=***

Subsequent log entries contain the decoded subtitle text.

13.1. URL of VTT segments

Scheme                 | URL                             | Content
HLS master             | /hls/.../index.m3u8             | #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",...,URI="/h<sess>/sub/<pid>/index.m3u8"
HLS subtitle playlist  | /h<sess>/sub/<pid>/index.m3u8   | list of <keyHex>.vtt with #EXTINF
HLS VTT segment        | /h<sess>/sub/<pid>/<keyHex>.vtt | VTT with HLS-flavoured X-TIMESTAMP-MAP
DASH MPD AdaptationSet | inside index.mpd                | contentType="text" mimeType="text/vtt" + <SegmentTemplate media="$Number$.vtt">
DASH VTT segment       | /h<sess>/sub/<pid>/<seq>.vtt    | VTT with DASH-flavoured X-TIMESTAMP-MAP

<keyHex> is the 16-character hex of CRC64(startTime, streamID, pid). <seq> is the decimal subtitle-storage chunk number (a counter separate from the video storage).

DVR / Archive

Since version 1.13, Perfect Streamer provides a built-in DVR — a persistent on-disk stream archive that operates in parallel with regular OTT delivery (HLS / DASH). The archive is written automatically, without a separate process, and is played back through the same OTT URLs as live — the only difference is the query parameter.

Features:

  • Recording of each OTT stream to the archive on the selected storage.

  • HLS and DASH archive playback (VOD) on the same URLs as live.

  • Subtitle support (WebVTT) — written alongside TS chunks.

  • Multiple storages — a stream is bound to one; different streams may write to different disks.

  • Automatic cleanup by retention time and by disk usage.

  • EPG-aligned VOD — archive delivery by EPG-event reference.

  • Adaptive VOD — supported for adaptive groups.

DVR requires no separate license. It is enabled per stream by adding a storage binding.

DVR does not replace live broadcasting. If a stream has an archive, the client receives a live playlist with the same behavior as without DVR. The archive starts playing only when the client explicitly requests VOD mode via a URL query parameter (see below).

Storage configuration

A storage is a record in the Configuration / DVR Storage section. Each record describes one on-disk directory into which PSS writes archive files. A stream uses one storage.

When adding a storage, the following are configured:

Name — display name.

Dir Path — path to the on-disk directory. After record creation, the path cannot be changed — to move the archive to another disk, delete the record and add a new one with the new path. Existing files are not touched on disk when the record is deleted.

Max Usage, % — disk usage threshold (default 90 %). When exceeded, size-based cleanup activates (see below). Minimum 1 %, maximum 100 %.

Cleanup Interval, sec — cleanup task period (default 10 sec). On each tick, everything older than the stream retention depth is trimmed first; then, if Max Usage is exceeded, old chunks are removed.

Disk Pressure Grace, sec — how many seconds Used % must continuously exceed Max Usage before Size-based cleanup starts (default 60 sec). Filters out short spikes.

Disk Pressure Cut, sec — upper bound for one cleanup tick: how many seconds of video per stream may be deleted at once (default 300 sec). The remainder is carried over to the next tick.

Disk Emergency Bytes — free-space threshold below which the storage transitions to Error state and recording stops (default 2 GiB). Automatic recovery occurs when free space ≥ 2 × this value.

Alarm Disk-Full Hysteresis, % — hysteresis band width when exiting the DiskFullDegraded state (default 2 %).

Most default values suit typical installations; adjustments are typically required only for Max Usage and Dir Path.

One storage per disk is the recommended layout. Creating several records on the same disk with different subdirectories causes them to compete for free space — statvfs gives a shared view, but cleanup is per-record.

Binding a stream to a storage

In the Stream / OTT settings, a DVR section appears:

Storage — drop-down list of available storages; 0 means “archive disabled for this stream”.

Storage Hours — archive depth for this stream, in hours. Chunks older than this are deleted on each cleanup tick. 0 disables Rolling cleanup (only Size-based cleanup by Max Usage runs).

Storage Min Hour — lower protection threshold (hours). The cleanup task never deletes chunks younger than this, even under Max Usage pressure. Use it when business logic requires a guaranteed fresh recording, e.g. “the last 2 hours are always present”.

Changing Storage on the fly:

  • setting 0 — disables archiving; on-disk chunks are preserved, new ones are not written. VOD sessions on this stream begin returning 404;

  • selecting a different storage — the stream is detached from the old one and starts writing to the new one. Files from the old disk are not migrated.

Changes to Storage Hours and Storage Min Hour apply immediately — the cleanup task picks up the new value on the next tick.

After configuration, the stream begins writing the archive automatically as soon as it enters Running state.

VOD: archive playback

The archive is played through the same URLs as live HLS / DASH broadcasting (see the OTT service section) — only the query parameter differs.

URLs and parameters

URLs for HLS and DASH:

Without query parameters — regular live with a sliding window. When the t parameter is present — VOD mode.

Parameter   | Purpose
t=<epoch>   | VOD start time (Unix epoch, sec). t=0 — from archive start. The presence of t (even t=0) enables VOD mode.
d=<sec>     | VOD window duration, sec. d=0 or parameter absent — “up to the current moment”. Meaningful only together with t.
epg=<epoch> | EPG-aligned VOD: the server itself locates the EPG event active at the given moment and takes its start and duration as window bounds. Incompatible with t and d (server-side override). See below.
a, s, m, v  | Standard live parameters (see OTT service); s and m are ignored in VOD mode.

Behavior by t and d:

t   | d        | Window                | 404 condition
no  | —        | live (sliding window) | —
0   | none / 0 | [archive start, now]  | DVR not bound
0   | > 0      | [archive start, +d]   | DVR not bound
> 0 | none / 0 | [t, now]              | DVR not bound
> 0 | > 0      | [t, t+d]              | DVR not bound
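Putting the parameters together, a small URL-building sketch (the base URL is illustrative):

# Building VOD request URLs from a live URL (parameters as in the tables above).
from urllib.parse import urlencode

def vod_url(base, t=None, d=None, epg=None):
    params = {}
    if epg is not None:
        params["epg"] = epg        # server overrides t and d
    else:
        if t is not None:
            params["t"] = t        # presence of t (even t=0) enables VOD
        if d:
            params["d"] = d
    return base + ("?" + urlencode(params) if params else "")

live = "http://host:port/hls/news/login/pass/index.m3u8"
print(vod_url(live, t=0))                    # whole archive up to now
print(vod_url(live, t=1700000000, d=3600))   # one hour starting at t
print(vod_url(live, epg=1700000000))         # EPG-aligned window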

Boundary normalization:

  • t earlier than archive start — start is automatically aligned to the first available chunk (cleanup may have already trimmed). This is not 404 — what remains is served.

  • t in the future or window entirely earlier than the archive — empty but valid playlist (HLS: header only + EXT-X-ENDLIST; DASH: @type="static", mediaPresentationDuration="PT0S").

  • Chunks are selected strictly by startTime ∈ [t, t+d) — there are no partial segments.
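The selection rule from the last bullet, as a sketch (Chunk here is a hypothetical record type):

# Strict chunk selection for a VOD window: startTime in [t, t+d).
from dataclasses import dataclass

@dataclass
class Chunk:                 # hypothetical record type
    start_time: float        # epoch seconds
    duration: float

def select_window(chunks, t, d=None):
    end = t + d if d else float("inf")   # d absent/0 -> up to now
    window = [c for c in chunks if t <= c.start_time < end]
    return window   # empty list -> empty but valid playlist, not a 404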

VOD on a stream without an archive (or whose storage is in Error) — 404 immediately on the master playlist / MPD. No session is created.

Error text in 404 body (visible in server logs and HTTP body):

  • VOD: stream not running — the stream is in the config but not in Running.

  • VOD: no DVR archive — no storage is set on the stream, or the storage is in Error.

  • VOD: DVR detached — the stream was detached from the storage between requests.

  • VOD: EPG event not found — no event found for ?epg=.

VOD HLS playlist — closed (the player sees the duration and can seek):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5.000,
...
#EXT-X-ENDLIST

These markers are absent in the live playlist — that is the only difference.

VOD DASH MPD — static: @type="static", fixed mediaPresentationDuration, explicit <SegmentURL>. Live DASH remains @type="dynamic".

If the selected interval has gaps in the archive (e.g. a recording restart or cleanup trimmed a middle portion), the DASH MPD is automatically split into multiple <Period> — one per contiguous track. Players (VLC, dashjs, Shaka) seek across period boundaries without special configuration.

EPG-aligned VOD

If the stream is bound to an EPG source (fields EPG Source and EPG Channel in the stream settings), the client may request the archive by a moment that falls within an EPG event:

The server locates the EPG event active at the given epoch and substitutes its start and duration as the VOD window bounds. Useful for program catalogs: the UI knows the event time but is not required to compute exact sec/ms boundaries.

If the stream is not bound to EPG or no event is active at that moment — 404 “VOD: EPG event not found”. Parameters t and d are ignored when epg is present.

Adaptive streams

Adaptive groups (see HLS Adaptive Multistream) support the same VOD parameters:

Only variants that have a DVR storage configured are included in the master playlist (HLS). Variants without DVR are skipped (VOD: variant N has no DVR appears in the log); assembly from the remaining variants proceeds.

In the DASH variant, each quality becomes a separate <Representation> inside a common <Period>; the player may switch between qualities without reopening the manifest.

Player behavior

VOD HLS / DASH from the archive plays in standard players: VLC, hls.js, dashjs, Shaka, ffmpeg. Timeline seeking works.

If the player requests a segment that cleanup has already removed by that moment, the server returns 404 for that segment (not 500). VLC, hls.js, dashjs skip such a segment and continue playback from the next one.

Additional capabilities:

  • Active session protection: while a VOD session is open, cleanup does not delete chunks within its window (see below).

  • Live-edge bridge: if a client in a VOD session requests a segment beyond vodEnd — e.g. moves forward along the timeline upon reaching the archive end — the server automatically serves the segment from live memory. No redirects or re-authentication.

  • Playlist cache: on repeated requests for the same VOD index.m3u8 / index.mpd, the server returns a byte-for-byte identical response — without rebuilding. Suitable for a CDN in front of PSS.

Subtitles in the archive

If the stream carries subtitles (DVB Subtitling, Teletext, or WebVTT) and the OTT WebVTT option is enabled in the stream settings, subtitles are archived in parallel with the TS chunks — as .vtt files next to .ts.

In live mode, the master playlist contains EXT-X-MEDIA:TYPE=SUBTITLES; in VOD mode the server returns a VOD subtitle playlist (with ENDLIST) and .vtt segments at the same URLs.

Notes:

  • HLS: the X-TIMESTAMP-MAP header is preserved at the start of every .vtt (required by the HLS specification).

  • DASH: the X-TIMESTAMP-MAP header is stripped on the fly (it is bound to the absolute PCR and conflicts with the DASH anchor; otherwise VLC shows empty subtitles).

  • In an adaptive group: if the active sub-stream does not write subtitles, VTT segments return 404. On the next variant, the player may receive subtitles again.

Subtitles can be disabled either via the OTT WebVTT option on the stream, or by not enabling HLS in OTT mode at all.

Cleanup and retention

PSS uses two cleanup strategies that run on every cleanup task tick (by default every 10 sec), plus an emergency cut and an hourly orphan scan:

  1. Rolling cleanup (per-stream): for each stream, chunks older than Storage Hours are deleted. Runs always, even with a half-empty disk.

  2. Size-based cleanup (storage-wide): when Used % on disk continuously exceeds Max Usage for Disk Pressure Grace seconds, the oldest chunks are trimmed proportionally across all streams bound to the storage. Cleanup proceeds in batches of Disk Pressure Cut sec of video per stream per tick. Never deletes chunks younger than Storage Min Hour.

  3. Emergency disk-full cut: when free space drops below Disk Emergency Bytes, cleanup runs aggressively and may delete even session-protected chunks. Recording stops until free space recovers with a hysteresis of × 2.

  4. Orphan scan (hourly): files not accounted for in the index may remain on disk (after an abrupt PSS shutdown). Once per hour, the scanner walks the stream subdirectories and removes such “forgotten” files. As race protection against the writer, files younger than 60 sec are skipped.

Note. Cleanup tasks run in the background; the user normally does not have to act. If the archive grows faster than it is trimmed, lower Storage Hours on the streams or raise Disk Pressure Cut.
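A condensed sketch of strategies 1 and 2 above (illustrative logic only; the names are not the product’s internals):

# Sketch of rolling and size-based cleanup on one tick (illustrative).
# `chunks`: oldest-first list of objects with .start_time and .duration.
import time

def cleanup_tick(chunks, storage_hours, min_hours, used_pct,
                 max_usage_pct, pressure_since, grace_s, cut_s):
    now = time.time()
    # 1. Rolling cleanup: drop chunks older than the per-stream retention.
    if storage_hours:
        chunks = [c for c in chunks if now - c.start_time <= storage_hours * 3600]
    # 2. Size-based cleanup: only after Max Usage has been exceeded
    #    continuously for Disk Pressure Grace seconds.
    if used_pct > max_usage_pct and pressure_since and now - pressure_since >= grace_s:
        removed = 0.0
        while chunks and removed < cut_s:          # at most Cut sec per tick
            oldest = chunks[0]
            if now - oldest.start_time < min_hours * 3600:
                break                              # Storage Min Hour protected
            removed += oldest.duration
            chunks.pop(0)
    return chunks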

Active VOD session protection

When a client opens a VOD session, a “protection slot” with the window start time is registered on every storage involved in its window. Rolling and size-based cleanup do not touch chunks within the open session’s window. The slot is released automatically when the session closes (FIN, timeout).

This means:

  • If a client holds a VOD session for a long time, it is guaranteed to be able to seek to any moment within its window — chunks will not “vanish from under it”.

  • Cleanup by Max Usage may temporarily fail to bring Used % to the desired threshold while a session is active; once the client leaves, cleanup catches up.

  • Emergency disk-full cut and Storage Min Hour bypass the protection: if the disk is near zero free space, chunks are deleted and the client receives 404 for the affected segments (the player skips them).

  • After a PSS restart, protection slots disappear — cleanup resumes immediately.

Multiple storages

Any number of DVR Storage records may be created — one per directory / disk. Streams are bound to different storages independently. Cleanup and thresholds (Max Usage, Disk Pressure) operate per-storage.

Use cases:

  • Tiering by value: a fast-SSD storage for premium channels with deep retention, a high-capacity HDD storage for the rest.

  • Dedicated disk for the archive: so that DVR recording does not compete with the system disk or temporary files.

  • Project separation: one disk for channel set #1, another for #2 — simplifies migration and audit.

Changing the stream binding (the Storage field in Stream / OTT) switches the recording on the fly. Old files remain untouched on the prior disk — they may either be deleted manually, or recording may be reattached by switching the stream back.

Storage state and monitoring

The Data / DVR Storage List section (and the API GET /data/dvr-storage-list) reports per storage:

  • State — Ready / DiskFullDegraded / Error.

  • Total / Free / Used Bytes — disk statistics (statvfs).

  • Used % — current usage percentage.

  • Archived Bytes — total size of chunks accounted for in the index of all bound streams (excluding orphan files).

  • Attached Streams — number of streams bound to this storage.

  • Pressure Since Sec — moment (epoch) when Used % first exceeded Max Usage in the current pressure episode; 0 means “no pressure now”.

States:

  • Ready — normal operation.

  • DiskFullDegraded — free space < (Disk Emergency Bytes × 2); recording still continues but may transition to Error at any moment.

  • Error — free space < Disk Emergency Bytes; recording stopped; restart by hysteresis.

If a storage has gone into Error, check free space on the disk; PSS does not exit this state on its own until more space physically appears.

Protection against accidental data loss

Several admin-interface operations lead to loss of the recorded archive or cessation of VOD delivery. Before execution, the admin UI displays a modal confirmation:

  • Deleting DVR Storage — all bound streams lose VOD access; on-disk files remain but are unreachable through PSS without the storage record.

  • Switching the stream to another storage — VOD over the old archive stops working.

  • Detaching the stream from storage (Storage = 0) — same effect.

  • Reducing Max Usage — may trigger size-based cleanup and delete old chunks.

  • Reducing Storage Hours / Storage Min Hour — may push part of the archive out of the rolling window.

This is a deliberate product decision: the operation is permitted but requires confirmation, and deleted files remain on disk (can be restored from backup). Placing the archive on a separate file server / RAID substantially lowers the risk of irreversible loss.

Current-version limitations

  • A VOD session opened by a client does not track the live edge — if new chunks have been added to the archive during the session, the client must re-request the playlist (standard behavior of all HLS / DASH players).

  • Assigning one storage per disk is the recommended layout — multiple records with different subdirectories on the same disk compete for free space.

  • Segments are addressed by hash (HLS) or by the $Number$.ts template (live DASH) / explicit <SegmentURL> (VOD DASH). Changing chunk sizes between live and VOD does not require reopening the URL.

  • The orphan scanner runs hourly; to trigger a scan sooner, restart PSS.

Streams manipulations

Delete stream. To delete a stream, go to the stream settings and press the Delete stream button.

Stream cloning. To clone a stream, go to the source stream’s settings and press the Clone stream button.

Sorting. To set up the sorting of streams, click the Sort button in the stream list window. Then specify the desired order by dragging the streams up and down the list. To save the specified sort order, click the Save order button, or Cancel to discard the changes.

Filtering the list. To filter the list of streams, click the button with the search icon and enter the filter string. To cancel filtering, press the back-arrow button.

Group operations. In the stream list filtering mode, it is possible to select multiple streams by turning on the checkboxes in the left column. If any streams are selected, the Delete and Clone buttons for the selected streams become available.

Streams export and import using python script

Configuration export and import are implemented via an .m3u playlist, using a Python script.

Python 3 is required at /usr/bin/python3.

Streams export and import using web-interface

In the list of streams, clicking on the Playlist button opens a dialog for exporting streams to an m3u8 playlist. Only those streams that are currently displayed based on the applied list filters are exported.

Settings:

  • Host / IP - the name of the server or the address that will be used to generate the URL of the streams.

  • Protocols - types of protocols for export.

  • Login - account used for URLs with access credentials.

  • Use Display Names - use the stream's Display name in the m3u8 file instead of the Stream name.

The playlist is downloaded as an m3u8 file when you click on the Download button.

When you click on the Import Playlist button, the dialog switches to the playlist import mode. When importing streams, existing streams are not deleted. For all imported streams, the import time is recorded in the Note field.

Settings:

  • Playlist - playlist-file for import.

  • Create Outputs - output protocol type for imported streams.

  • Output Ports From - start output port number.

  • Output IP - interface for binding outputs.

  • Tags - mark imported streams with tags. Useful for stream selection and group operations, for example to delete all imported streams.

Once a file is specified in the Playlist field, the Load Playlist button becomes active. Clicking it preloads the playlist and displays the parsing result in the table. After parsing, the Import Streams button becomes active; clicking it imports the streams and generates outputs for them.
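For reference, a minimal playlist of the kind the importer consumes might look as follows (a sketch: the names and URLs are illustrative, and the exact set of #EXTINF attributes PSS reads is not specified here):

#EXTM3U
#EXTINF:-1,News Channel
udp://239.1.1.1:1234
#EXTINF:-1,Movie Channel
http://source.example.com/movies/index.m3u8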

Reports and diagnostics

Streams - displays data about all streams in the form of a table. Pause - available to the admin and restricted admin roles.

Detailed statistics and reports are available for each stream.

Peers - list of active recipients (clients). Separate statistics are available for each.

Stream analysis

The stream analyzer measures and analyzes various MPEG-TS stream parameters, allowing you to evaluate stream quality.

Input speed — stream rate (bitrate) in kbps, measured after filtering by PCR marks. Shown as a graph that also displays the output bitrate after the synchronizer.

Raw data speed - input data rate as delivered by the specified protocol. Overhead - the protocol service-data overhead in %.

CC errors - lost packets (continuity counter discontinuities). Periodic counters and a summary counter over the stream uptime are shown, along with a value history chart.

Scrambled - counter of scrambled MPEG-TS ES packets. A value above 0 indicates decoding issues in protected channels.

Sync by - source of synchronization, PCR by default. If PCR is invalid, video PTS/DTS can be used instead (see the input setting Force Sync by PTS).

PCR interval - interval between PCR marks. No more than 50 ms is recommended.

PCR jitter - accuracy characteristic of output stream synchronization. Measured as the difference between PCR and real time.

Analyze PCR PMT Gap - the spread between PCR and PTS/DTS is analyzed for each ES, with a corresponding value history chart. Too large a spread can cause problems for players with small sync buffers. Enabled by the Analyze PAT/PMT/KF setting.

PAT interval and PMT interval - measured interval between PAT (PMT) tables. No more than 500 ms is recommended. Enabled by the Analyze PAT/PMT/KF setting.

Key Frame interval - measured interval between key frames. No more than 1 sec is recommended for players with random time access. Enabled by the Analyze PAT/PMT/KF setting.

PAT/KF interval - measures the average interval between the beginning and end of the PAT/PMT/SPS/PPS/KF sequence. Player startup speed depends on this. It is measured to the beginning of the KF in the stream, so the real player start time will be longer. Enabled by the Analyze PAT/PMT/KF setting.

Enabling Analyze PAT/PMT/KF turns the stream analyzer on permanently, which increases CPU load.

Jitter control

There are several settings in the stream for jitter control:

  • Jitter Compensation Delay (ms) - network Jitter compensation function, buffer size is set. Extended function description: https://forum.pstreamer.tv/viewtopic.php?t=25

  • Jitter Auto sync - sets the value to 2000 ms for TCP-based protocols (HTTP, HLS); for UDP protocols the value remains 500 ms.

  • Limit PCR gap (ms) - sets how far the PCR may jump; if the limit is exceeded, resynchronization occurs.

If PCR Accuracy errors appear after the transcoder, set an even bitrate using stuffing for the encoded stream along the following path: “input - transcoder - Align Total Bitrate”. The rate must be guaranteed to be higher than the combined video and audio bitrate.

System Monitor

Monitoring of the main operating-system parameters.

Mosaic

Takes snapshots from the input stream. Enabled separately for each stream.

If not required, it can be completely disabled in the Settings/Server Settings section to save resources.

Administration

In the Configuration/Administration/Administrators List section, users are added to access the web interface - the built-in HTTP-server.

Users are assigned roles:

admin - full access.

restricted admin - settings are unavailable; the pause state can be changed.

viewer - only view mode access.

Configuration backup

To make a backup copy of the settings, save the contents of the /opt/pss/config folder.

To restore the settings, stop the service and replace the contents of the folder /opt/pss/config.
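A minimal sketch of a backup/restore cycle, assuming the service runs under systemd as a unit named pss (adjust the unit name to your installation):

# backup
tar -czf /root/pss-config-$(date +%F).tar.gz -C /opt/pss config

# restore (remove or move the old contents first for a clean replace)
systemctl stop pss
tar -xzf /root/pss-config-2025-01-01.tar.gz -C /opt/pss
systemctl start pss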

Connection of external monitoring systems

pss-metrics — universal metrics exporter for Perfect Streamer

A single Python 3 CLI script that pulls statistics from the PSS web-server HTTP API and formats output for the most common monitoring systems:

  • Zabbix (UserParameter, Low-Level Discovery, zabbix_sender trapper)

  • Prometheus (text exposition format for the textfile collector)

  • InfluxDB / Telegraf (line protocol or JSON for the exec input)

  • Universal JSON for arbitrary scripts and Nagios-style health checks

The exporter is a single self-contained file with no third-party dependencies, using only the Python 3.6+ standard library (urllib, xml.etree, json, argparse).

Files

pss-metrics.py                    main CLI (executable)
userparameter_pss.conf.example    UserParameter template for Zabbix

Installation

The exporter ships at /opt/pss/monitoring/pss-metrics.py. Make sure Python 3.6 or newer is installed:

# RHEL / Rocky / AlmaLinux
yum install -y python3

# Debian / Ubuntu
apt-get install -y python3

No additional packages are required.

Configuration

By default, pss-metrics connects to http://127.0.0.1:43971 and auto-detects the port from /opt/pss/config/pss.json (or /opt/pss/config/pss_default.json). Parameters can be overridden via environment variables, the /etc/pss-metrics.conf file (format key=value), or command-line flags. Priority: CLI > env > file > defaults.

Supported variables:

PSS_URL          full URL, e.g. http://10.0.0.1:43971         (default: auto)
PSS_USER         web-server login (if authentication is enabled)
PSS_PASS         web-server password
PSS_TIMEOUT      HTTP timeout, seconds                        (default 5)
PSS_CACHE_DIR    cache directory                              (default /run/pss-metrics)
PSS_CACHE_TTL    cache TTL, seconds                           (default 10)
PSS_CA_BUNDLE    path to a CA bundle for HTTPS
PSS_INSECURE     1 — disable TLS certificate verification
PSS_VERBOSE      1 — log requests to stderr
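For example, /etc/pss-metrics.conf for a password-protected remote instance might look as follows (a sketch; it assumes the file accepts the same variable names as the environment):

PSS_URL=http://10.0.0.1:43971
PSS_USER=monitor
PSS_PASS=secret
PSS_TIMEOUT=5
PSS_CACHE_TTL=10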

Cache: each run makes at most one HTTP GET per endpoint within the TTL window. At TTL=10 s even hundreds of UserParameter checks per minute result in ~6 HTTP requests per minute to PSS.

Quick start

Health check (Nagios-style exit codes):

pss-metrics.py health
# OK: total=42 running=15 stopped=27 unhealthy=0 version=1.12.2.430d

Zabbix LLD discovery:

pss-metrics.py discover streams
pss-metrics.py discover inputs --running-only
pss-metrics.py discover outputs

Fetching a single metric (use in Zabbix UserParameter):

pss-metrics.py get summary.running
pss-metrics.py get stream.10031.bitrate
pss-metrics.py get input.10031.1.speed1
pss-metrics.py get output.10031.1.speed
pss-metrics.py get sysmon.cpu.self-usage
pss-metrics.py get server.server-version

Full export:

pss-metrics.py dump --format=json
pss-metrics.py dump --format=prometheus
pss-metrics.py dump --format=influx
pss-metrics.py dump --format=zabbix-trapper --zabbix-host=streamer-01

Metric paths

pss-metrics get accepts a dot-separated path. Empty output means “value absent” (for example, the metric exists only for running streams).

server.<attr>                e.g. server.server-version, server.uptime
summary.<key>                total | running | stopped | unhealthy |
                             input_bitrate_kbps | output_bitrate_kbps
sysmon.cpu.<attr>            self-usage | total-usage | cores
sysmon.memory.<attr>         self-usage-kb | available-kb | total-kb
sysmon.netbw.<iface>.<attr>  rx-bw | tx-bw  (interface name as in XML)
stream.<id>.<attr>           any <stream> attribute
input.<sid>.<iid>.<attr>     any <input> attribute
output.<sid>.<oid>.<attr>    any <output> attribute

Useful per-stream attributes (from /data/stream/detail):

stream:  state, state-str, bitrate, thread-usage, mpts
input:   speed1, recv-bytes, recv-packets, recv-err,
         stat-disc, stat-disc1, stat-scrambled, stat-scrambled1,
         health-state-good, health-status, check-status
output:  speed, sent-bytes, sent-packets, sent-err, uri, type

Zabbix integration

Two scenarios are supported — pick the one that fits your environment.

  1. Static UserParameter + LLD (Zabbix agent v1 / v2)

    Copy userparameter_pss.conf.example to /etc/zabbix/zabbix_agentd.d/pss.conf, restart zabbix-agent, and import a template with LLD prototypes using the pss.discover keys on the server. Example bindings:

    UserParameter=pss.discover[*],/opt/pss/monitoring/pss-metrics.py discover $1
    UserParameter=pss.get[*],/opt/pss/monitoring/pss-metrics.py get $1
    UserParameter=pss.health,/opt/pss/monitoring/pss-metrics.py health
    

    On the Zabbix server:

    Discovery rule key:    pss.discover[streams]
    Item prototype keys:   pss.get[input.{#STREAM_ID}.1.speed1]
                           pss.get[stream.{#STREAM_ID}.bitrate]
                           pss.get[summary.unhealthy]
    
  2. Trapper (push) via zabbix_sender

    Run on a timer (cron / systemd) and pipe the output:

    /opt/pss/monitoring/pss-metrics.py dump --format=zabbix-trapper \
        --zabbix-host="$(hostname)" \
      | zabbix_sender -z zabbix.example.com -i -
    

Prometheus integration

Two options.

  1. Textfile collector (recommended for one-shot environments).

    Run a periodic export via systemd-timer or cron:

    */1 * * * * /opt/pss/monitoring/pss-metrics.py dump --format=prometheus \
        > /var/lib/node_exporter/textfile_collector/pss.prom.$$ \
        && mv /var/lib/node_exporter/textfile_collector/pss.prom.$$ \
              /var/lib/node_exporter/textfile_collector/pss.prom
    

    node_exporter will serve the file through --collector.textfile.directory.
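
    As an alternative to cron, a systemd timer can drive the same export. A minimal sketch with hypothetical unit names pss-metrics-prom.service / pss-metrics-prom.timer:

    # /etc/systemd/system/pss-metrics-prom.service
    [Unit]
    Description=Export PSS metrics for the node_exporter textfile collector

    [Service]
    Type=oneshot
    ExecStart=/bin/sh -c '/opt/pss/monitoring/pss-metrics.py dump --format=prometheus > /var/lib/node_exporter/textfile_collector/pss.prom.tmp && mv /var/lib/node_exporter/textfile_collector/pss.prom.tmp /var/lib/node_exporter/textfile_collector/pss.prom'

    # /etc/systemd/system/pss-metrics-prom.timer
    [Unit]
    Description=Run the PSS metrics export every minute

    [Timer]
    OnCalendar=minutely
    Persistent=true

    [Install]
    WantedBy=timers.target

    Enable with: systemctl enable --now pss-metrics-prom.timer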

  2. Direct scrape through a small wrapper (e.g. socat + pss-metrics dump) or any third-party HTTP proxy of your choice.

Telegraf / InfluxDB integration

Telegraf inputs.exec:

[[inputs.exec]]
  commands = ["/opt/pss/monitoring/pss-metrics.py dump --format=influx"]
  interval = "10s"
  timeout  = "5s"
  data_format = "influx"

For the JSON parser, use --format=json and configure data_format = "json" with field paths.

HTTPS and authentication

If the PSS web-server runs behind HTTPS or is password-protected:

PSS_URL=https://streamer.example.com:8443 \
PSS_USER=monitor PSS_PASS=secret \
pss-metrics.py health

Self-signed certificates: set PSS_INSECURE=1 (not recommended) or provide PSS_CA_BUNDLE=/path/to/ca.pem.

Exit codes

pss-metrics follows the Nagios convention:

0  OK
1  WARNING       (e.g. running streams are unhealthy)
2  CRITICAL      (PSS unreachable)
3  UNKNOWN       (invalid arguments / internal error)

get and dump output an empty string and exit with code 0 when the requested entity is absent — this matches Zabbix’s expectation that an empty value is treated as “NOT_SUPPORTED” rather than an agent failure.
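For example, a cron-friendly wrapper that forwards a degraded health check to syslog (a sketch; the logger tag is arbitrary):

#!/bin/sh
# Run the health check and capture its one-line summary.
out=$(/opt/pss/monitoring/pss-metrics.py health)
rc=$?
# Non-zero exit = WARNING / CRITICAL / UNKNOWN: log it.
if [ "$rc" -ne 0 ]; then
    logger -t pss-health "exit=$rc $out"
fi
exit "$rc"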

Diagnostics

pss-metrics.py -v health        # log every HTTP request to stderr
pss-metrics.py --cache-ttl=0   # bypass cache while debugging
rm -rf /run/pss-metrics         # clear the cache

Let’s Encrypt and certbot for HTTPS

Starting from version 1.9.2.340, Perfect Streamer supports automatic renewal of Let’s Encrypt certificates for HTTPS in Web Server, HTTP Server, EPG Server.

RHEL certbot configuration.

Restrictions:

  • A public (white) IP address with a public domain name must be assigned, and TCP port 80 must be free.

  • A single host name is used for all HTTPS servers (Web Server, HTTP Server, EPG Server).

Installing certbot (https://certbot.eff.org/instructions?ws=other&os=snap):

sudo yum install snapd
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap
sudo snap install certbot --classic

certbot configuration:

sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot certonly --standalone

certbot check:

sudo certbot renew --dry-run

certbot timer check:

systemctl list-timers | grep certbot

In the Perfect Streamer admin panel, enable HTTPS on servers (Web Server, HTTP Server, EPG Server).

Create a hook script (https://pstreamer.tv/distrib/scripts/cert_update.zip). Place it in the following path:

/opt/pss/scripts/cert_update.sh

Specify the host domain name in the script, by default it is taken from /etc/hostname.

Make the file executable.

chmod +x /opt/pss/scripts/cert_update.sh

Run the script and check that there are no errors:

/opt/pss/scripts/cert_update.sh

The HTTPS settings will be applied; the status change will be displayed in the logs.

Add hook file to certbot:

certbot renew --deploy-hook "/opt/pss/scripts/cert_update.sh"

Check certbot again:

sudo certbot renew --dry-run

certbot configuration for Debian/Ubuntu.

Configuration for Debian is similar to RHEL; a brief installation description is given below for Ubuntu 24.04.2 LTS.

certbot installation:

apt install certbot
certbot certonly

Make the script executable:

chmod +x /opt/pss/scripts/cert_update.sh

Run the script:

/opt/pss/scripts/cert_update.sh

Output:

Select domain name (your domain name)

Check if the certificates are updated:

ls -lat /opt/pss/config/cert/
total 44
-rw------- 1 root root  241 May 26 07:52 epgserver.key
-rw------- 1 root root  241 May 26 07:52 httpserver.key
-rw------- 1 root root  241 May 26 07:52 webserver.key
-rw-r--r-- 1 root root 1338 May 26 07:52 epgserver.crt
-rw-r--r-- 1 root root 1338 May 26 07:52 httpserver.crt
-rw-r--r-- 1 root root 1338 May 26 07:52 webserver.crt

The date should be current.

DVB adapters

Perfect Streamer supports any DVB adapter installed in the system. Supported standards: DVB-S, DVB-S2, DVB-T, DVB-T2, DVB-C, ATSC. Additionally implemented: T2-MI decapsulation (ETSI TS 102 773) and BISS-1 / BISS-E descrambling.

The main requirement is a correctly installed and working adapter driver in the system.

The DVB section appears only if the system has valid DVB adapters present. On-the-fly reconfiguration is not supported; a restart of Perfect Streamer is required.

Adapter connection

To add a new DVB adapter, go to the corresponding section and add the adapter:

  • Set the adapter name.

  • Select an adapter from the list of those available in the system.

  • Select Stream mode.

  • Specify the delivery system type: DVB-S, DVB-S2, DVB-T, DVB-T2, DVB-C.

  • Specify reception parameters: Carrier Frequency, Polarization, Symbol Rate, FEC, Modulation, DVB Stream ID, Heterodyne Frequency, High Band Heterodyne, High Band Range, DiSEqC 1.0 Mode and other parameters depending on the type.

Optionally, recording EIT into the EPG database can be enabled (Write EIT to EPG database).

Repeat the addition for each adapter present in the system.

DVB scanning

To avoid entering reception parameters (frequency, polarization, symbol rate, FEC, modulation) manually, Perfect Streamer includes a built-in transponder scanner. The scanner iterates over the transponders of the selected satellite (DVB-S/S2) or regional band (DVB-C, DVB-T/T2), tunes and locks each one, collects PSI/SI tables (PAT, PMT, SDT), and produces a final list of multiplexes with their programs. Any discovered multiplex can be added to the DVB adapter list with a single click.

The scanner uses the entire physical adapter; to start a scan, an adapter is required that is in use neither by the operating-system kernel nor by any active DVB adapter entry in Perfect Streamer.

Free adapters

When the scanning screen is opened in the admin panel, a list of the system’s physical DVB adapters is shown together with their occupancy status:

  • free — the adapter is available for scanning.

  • kernel — the device is held by another operating-system process.

  • pss-id-N — the adapter is already used by a DVB adapter entry in Perfect Streamer with the indicated identifier. The scanner cannot be started on it while that entry is active. To free the adapter temporarily, the existing DVB adapter entry must be paused (the Pause flag in its settings).

Transponder reference data

The scanner relies on Enigma2-format reference data: the satellite list satellites.xml and the regional lists cables.xml / terrestrial.xml. Each file contains a set of transponders for a known orbital position or a regional DVB-T/C band (for details, see the project site oe-alliance-tuxbox-common).

The files are located in the sat/ directory relative to pss.json (by default /etc/pss/sat/). They are shipped with the Perfect Streamer distribution and are loaded when the scanning screen is opened. They can be updated when needed by replacing the corresponding XML file.

In the admin panel the reference data is presented as three lists:

  • Satellite — orbital positions (for example, Hot Bird 13.0°E, Astra 19.2°E).

  • Cable region — country or DVB-C provider.

  • Terrestrial region — DVB-T/T2 region.

If the required satellite or region is not present in the reference data, the XML can be updated or blind scan can be used instead (see below).

Starting a scan

In the scan dialog the following are specified in order:

  • A free physical adapter.

  • Delivery system type: DVB-S, DVB-S2, DVB-T, DVB-T2, or DVB-C.

  • Source:

    • for DVB-S/S2 — an orbital position from the satellite list and LNB parameters (local-oscillator frequencies LO1 and LO2, upper-band threshold, DiSEqC port);

    • for DVB-C — a cable region;

    • for DVB-T/T2 — a terrestrial region.

After Start is pressed, the scan runs in the background. Progress is displayed in the admin panel:

  • the completion percentage based on the number of transponders processed;

  • the current frequency and polarization;

  • the Multiplexes found and Programs found counters;

  • a tree of the multiplexes found so far, expandable down to the programs.

The scanner is a resource shared across the entire streamer: no more than one scan runs at a time. If a new scan is started while another is in progress, the previous one is automatically cancelled. The Cancel button aborts the scan and clears the accumulated list.

The scan duration depends on the number of transponders in the selected reference data (typically a few seconds per transponder: up to 2 seconds to lock the signal and up to 5 seconds to collect the PSI). Typical values:

  • DVB-S Hot Bird 13.0°E — about 2 minutes (44 transponders).

  • DVB-S Astra 19.2°E — about 1.5 minutes.

  • DVB-T European region — less than a minute.

Result

The scan result is displayed as a multiplex → programs tree.

Multiplex parameters:

  • frequency, polarization, symbol rate;

  • FEC, modulation, delivery system;

  • transport-stream identifier (TSID);

  • the frontend readings at the moment PSI collection finishes — SNR, Signal, BER;

  • the pmt-total / pmt-recv counters — how many PMT tables were announced by the PAT and how many were actually collected within the timeout.

Program parameters:

  • PNR (program_number) — the service identifier within the multiplex;

  • Name and Provider — taken from the SDT table, in UTF-8;

  • scrambled — the scrambling indicator. Its source is determined in the following order: the free_CA_mode bit from SDT (the broadcaster’s declaration) → the transport-scrambling flags from PMT. If neither SDT nor PMT arrives within the timeout, the default value is 0 (“not scrambled”); the actual state is determined when reception is attempted;

  • video-pid, audio-pid, pcr-pid — the main elementary streams of the service.

Applying the result

The multiplex selected in the tree is added to the DVB adapter list with a single button. A new entry is created with parameters taken from the scan result (frequency, polarization, symbol rate, FEC, modulation, delivery system) along with the LNB parameters and the adapter/device pair specified at the start of the scan. The adapter name is set by the user; additional parameters (BISS keys for scrambled channels, T2-MI, shared LNB, etc.) are filled in once the entry is created, via its settings.

Programs from the discovered multiplexes are applied separately — by creating a stream (Stream) with a demuxer input source addressed by PNR (see the section Connecting an SPTS stream to a DVB multiplex service).

Blind scan

Blind mode is used when:

  • the required satellite is not in the reference data (a non-standard orbital position, a local uplink);

  • the regional DVB-T/C reference data is insufficient or is outdated for a specific location;

  • a band segment needs to be rechecked independently of the list of known transponders.

In this mode the scanner does not consult the reference data; it synthesises a list of transponders from a frequency grid. By default the following typical ranges are used:

  • DVB-S/S2 Ku — 10700..12750 MHz, 4 MHz step, both polarizations (H and V), typical symbol rates 22000 / 27500 / 30000 ksym/s.

  • DVB-C — 47000..862000 kHz, 8000 kHz step, QAM-64 and QAM-256, typical symbol rates 6875 / 6900 / 6952 ksym/s.

  • DVB-T/T2 — 174000..862000 kHz, 8000 kHz step.

A full blind sweep of the Ku band takes on the order of 100 minutes (several thousand tuning points). In practice the range is narrowed manually in the admin panel — the minimum/maximum frequency and the grid step. For example, a sweep of 11700..11800 MHz with a 4 MHz step for a single LNB band takes about 5 minutes.

The result format of a blind scan is identical to that of a normal one. Specifics:

  • the FEC and Modulation fields of the discovered multiplexes are fixed at the value AUTO — the scanner does not determine their exact values;

  • the delivery system equals the one requested (DVB-S, DVB-S2, …). For mixed networks it is recommended to perform two passes — DVB-S and DVB-S2 separately.

Applying a multiplex from a blind scan is performed in the same way as from a normal one — via the button that adds it to the DVB adapter list. The FEC and Modulation fields are usually left at AUTO and, if necessary, refined once a stable signal lock is obtained on the specific transponder.

Kernel reception buffer size

The buffer-size parameter (integer, default 512) sets the size of the kernel DVB demux ring buffer in 65536-byte blocks.

  • 512 (32 MB) — recommended default. Covers full-transponder DVB-S/S2 scenarios (>= 33 Mbit/s) with one or several MPTS consumers. Chosen based on bench testing with a TBS adapter under full transponder load.

  • 8...64 (512 KB … 4 MB) — acceptable for embedded systems with limited RAM or for adapters in Scanner / Femon modes where traffic is low.

  • 0 — keep the driver default (usually 8…32 KB). Suitable only for very lightly loaded scenarios. On streams above 10 Mbit/s losses will occur.

When a message of the following form appears in the log:

DVB adapter X/Y dvr buffer overflow (NN so far, KK pids);
raise 'buffer-size' or reduce pid filter

increase buffer-size or reduce the number of PIDs passing through the filter (e.g. drop the MPTS output if not needed).

Memory cost: value N consumes N × 64 KB of kernel memory per adapter. With many adapters (8 or more) this is worth considering (8 × 32 MB = 256 MB).

Connecting an SPTS stream to a DVB multiplex service

When adding a new SPTS channel to the stream’s input, select:

  • Type: demuxer.

  • Source: DVB adapter.

  • Multiplex created on the DVB adapter, by adapter name.

  • PNR — selected from the popup list of services detected in the multiplex, or entered manually.

DVB device permissions

If DVB adapters are not displayed in Perfect Streamer, create a udev rule:

sudo nano /etc/udev/rules.d/99-dvb-permissions.rules

with the following content:

SUBSYSTEM=="dvb", GROUP="video", MODE="0660"

Then add the pss user to the video group, fix the device ownership, and reboot:

sudo usermod -aG video pss
sudo chown -R root:video /dev/dvb/*
sudo reboot

T2-MI decapsulation

T2-MI (T2-Modulator Interface, ETSI TS 102 773) is a format for transporting DVB-T2 streams over DVB-S2 multistream. The outer DVB-S/S2 transponder carries one or more T2-MI carrier-PIDs, each encapsulating BBFRAMEs with one or more PLPs (Physical Layer Pipe). After decapsulation, the inner MPEG-TS containing programs and PSI/SI tables is extracted from the BBFRAME.

The Perfect Streamer implementation works in multi-carrier mode: a single physical adapter simultaneously serves the outer DVB-S/S2 multiplex and all decapsulated T2-MI carriers (one per carrier-PID found in the outer stream’s PMT).

Configuration parameters

t2mi-mode (Int, 0..2, default 0) — decapsulation mode:

  • 0 — Disabled. The outer MPEG-TS is passed through unchanged. If a T2-MI descriptor (tag 0x51) is detected in the PMT, a one-time hint is logged.

  • 1 — Manual. Decapsulation is always on. If t2mi-pid is non-zero, a carrier is pre-created on that PID at startup. Additional carriers continue to be discovered from the PMT automatically.

  • 2 — Auto. Carriers are discovered automatically from the outer multiplex PMT for all ES that look like T2-MI (descriptor 0x51, or a single ES with stream_type=0x06 on a service with no other A/V ES). If no carriers are found, the adapter works as a regular DVB multiplex.

t2mi-pid (Int, 0..8191, default 0) — PID for pre-creating a carrier at startup, before PMT arrives:

  • 0 — no pre-creation. Carriers are discovered from the PMT (recommended for auto mode 2).

  • 1..8191 — pre-create a carrier on this PID. Additional T2-MI ES found in the PMT still get their own carriers.

In multi-carrier mode the t2mi-pid parameter is not a single-carrier selector — each detected T2-MI ES gets its own carrier with its own decapsulator. The parameter provides early initialization for a known PID.

t2mi-plp (Int, 0..255, default 0) — identifier of the PLP extracted from each T2-MI carrier on the adapter. Applied to all carriers — per-carrier override is not supported in the current version. If in production different carriers carry different PLPs, you should:

  • specify a PLP common to all carriers, or

  • configure separate adapters for different PLPs using lnb-sharing.

This is the identifier of the BBFRAME plp_id field, not the DVB-S2 multistream ISI (which is set by the dvb-stream-id parameter). These are different identifiers at different layers.

PLP selection diagnostics:

  • Five seconds after a carrier starts, if no BBFrame has been received for the configured PLP but other PLPs are seen, a warning is logged with the list of observed plp_id values.

t2mi-tsid (Int, -1..255, default -1) — reserved for future use. Selector of the T2-MI stream identifier when several T2-MI streams share one carrier-PID. Ignored in the current version.

Composite PNR — connecting SPTS from T2-MI

One adapter can expose several logical multiplexes:

  • carrier-id = 0 — outer DVB-S/S2 multiplex (regular A/V services).

  • carrier-id = 1..N — decapsulated T2-MI carriers (one per outer T2-MI ES).

BISS descrambling

Descrambling of encrypted DVB streams using BISS-1 (mode E1) and BISS-E (mode E2) is supported. Applicable to delivery systems DVB-S, DVB-S2, DVB-T, DVB-T2.

The implementation allows several descramblers to be active on a single adapter simultaneously:

  • By PNR in the outer multiplex (regular service).

  • By plp_id for decrypting T2-MI carrier-PID before decapsulation (required for encrypted multistream streams — otherwise the decapsulator drops every scrambled packet, see counter <t2mi scrambledDropped>).

EPG

EPG/XMLTV import

EPG data is collected into the EPG Database from various sources:

  • EIT from received streams (SPTS and MPTS). Enabled by setting Stream: Extract EIT to EPG Database.

  • Import in XMLTV format from various external sources. Configured in Configuration/EPG/EPG Sources. An EPG source can be either a link to a web resource or a local file specified by its full path.

The storage time of EPG events is configured by the EPG storage period (days) setting.

Auto-clean database - Deletes programs for which there are no events.

The EPG section displays EPG sources and related data. For each channel (EPG Channels List), you can set:

  • Channel Name - the name that will be used in the export on the XMLTV server.

  • Time Zone - you can adjust the time zone if it was not tied to UTC when importing.

  • EPG Channel Sets - bind the channel to the Channel Set (see below).

  • Icon - channel icon URL (http://example.com/mychannel/myicon.png).

EIT generator

EIT data from the EPG Database can be generated in an SPTS Stream. To do this, set EPG Source ID and select EPG Channel ID in the Stream setup. SDT will also be generated, even if it is absent in the source, so set the SDT Data correctly.

If this Stream is used in a multiplexer, then the Service Name can be redefined separately in the output/mixer setting.

EPG server (XMLTV)

A separate built-in HTTP server in Perfect Streamer serves the full XMLTV for a given set of channels. The endpoint is intended for middleware and players that store the programme guide locally and refresh it infrequently, as a whole file — typically once or twice a day.

The server and its clients are configured under Configuration/EPG/EPG Server.

URL and authentication

The service is served by a separate HTTP server epg-server (not the one for /data/*). By default it listens on ports 10444 (HTTP) and 10445 (HTTPS); the ports and SSL are configured under /config/epg-server.

Routes:

GET /xmltv (Content-Type: text/xml) — with Accept-Encoding: gzip the response is compressed on the fly (Content-Encoding: gzip); otherwise it is returned as plain XML.

GET /xmltv.gz (Content-Type: application/octet-stream) — always returns a gzip stream with Content-Disposition: inline; filename="xmltv.xml.gz", convenient for saving as a file.

Three authentication methods are supported:

  • HTTP Digest (preferred) — account from /config/epg-server/login.

  • URL parameters ?l=<login>&p=<password> (synonyms: login=…, password=…).

  • Loopback — a request from 127.0.0.1 is handled anonymously. Convenient for scripts deployed on the same machine.

Warning

A login and password in the URL end up in the reverse-proxy access logs and in the browser history. For public XMLTV distribution use HTTP Digest or accept requests only from private addresses.
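For example, fetching the gzipped guide with a Digest-authenticated account (a sketch; the account name and host are illustrative):

curl --digest -u epg-client:secret -o xmltv.xml.gz 'http://pss.example.com:10444/xmltv.gz'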

Access by channel-set

Each epg-server/login account is bound to one channel-set from /config/epg-channel-set. The EPG in the response contains only the channels that belong to the specified set. This lets a single PSS deployment serve different XMLTV to different operators/middleware.

Basic setup in the UI:

  1. Under Configuration/EPG/EPG Channel Sets, create a channel group and assign the desired channels to it at the EPG sources.

  2. Under Configuration/EPG/EPG Server Clients, create an account and bind the channel group to it. Without a channel-set binding the client gets an empty XMLTV.

Additional restrictions for a login:

  • ip-addr — if set and not a wildcard, a request from a different IP gets 403 Forbidden.

  • limit-day — Unix epoch sec after which the account stops being served (403 Forbidden). Convenient for a subscription model.

  • pause — temporarily disable a login without deleting it.

Response format

Response body is an XMLTV document rooted at <tv>. The structure follows the commonly used XMLTV DTD schema:

<?xml version="1.0" encoding="utf-8"?>
<tv source-info-name="..." source-info-url="...">
  <channel id="ch.one">
    <display-name lang="en">Channel One</display-name>
    <icon src="https://.../channel-one.png"/>
  </channel>
  ...
  <programme start="20260504060000 +0300"
             stop ="20260504070000 +0300"
             channel="ch.one">
    <title lang="en">Morning News</title>
    <desc  lang="en">Daily news roundup</desc>
    <rating system="RU"><value>12+</value></rating>
  </programme>
  ...
</tv>

Field notes:

  • The source-info-name and source-info-url attributes of the <tv> root are filled from the EPG Source Name and EPG Source URL fields under Configuration/EPG/EPG Server.

  • The start and stop attributes use the YYYYMMDDhhmmss ±zone format (the time zone comes from the channel’s time_zone field).

  • A <programme> may contain several <title>/<desc> for different languages. The lang attribute is empty when the source EPG language id could not be mapped to the dictionary (the entry still appears in the output).

  • Channels with a conflicting channel_id (if the same id came from multiple sources) are listed once, the remaining sources are skipped with a warning in the server log.

  • Only events with stop_time >= now are included in the output.

HTTP headers

The server always sends:

Cache-Control: no-cache, no-store, must-revalidate
Pragma:        no-cache
Expires:       0
Connection:    close

For /xmltv additionally — Content-Encoding: gzip when Accept-Encoding: gzip is present in the request. For /xmltv.gz — Content-Disposition: inline; filename="xmltv.xml.gz".

The client-side caching ban is intentional: XMLTV changes on every EIT import or external XMLTV-source refresh, and the player must not hold stale data indefinitely. An edge cache (nginx) is fully acceptable, however — see the performance section below.

Server cache and how to flush it

The ready XMLTV is cached in the PSS process memory:

  • One entry per channel-set; both body variants (raw and gzip) are stored — repeat requests with Accept-Encoding: gzip or /xmltv.gz do not re-compress the data.

  • Each entry is tagged with an update-time counter. Any EPG update (EIT import, XMLTV source refresh) increments the counter, and the cache is rebuilt on the next request.

Force cache flush:

POST /xmltv/reset-cache

The route is served by the admin server (port 43971/43981), not by epg-server. Empty request body; the response is 200 OK with a JSON envelope.
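A sketch of a forced flush from the local machine (loopback requests to the admin server are anonymous; from a remote host add Digest credentials):

curl -X POST 'http://127.0.0.1:43971/xmltv/reset-cache'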

HTTP response codes

200 OK — request processed. Body is an XMLTV document (possibly an empty <tv></tv> on a transient database failure).

401 Unauthorized — neither Digest nor the l/p parameters passed the check (for non-loopback requests).

403 Forbidden — the login exists but the request is not from an allowed IP, or limit-day has expired.

404 Not Found — any URL other than /xmltv and /xmltv.gz.

405 Method Not Allowed — the method is not GET.

Error body is a fixed-format JSON envelope:

{"status": 401, "message": "Unauthorized"}

Performance and scaling

Server cache

The server cache serves repeated requests to a single channel-set without touching SQLite — by copying the ready body.

Building XMLTV from scratch (cache miss) is more expensive: a separate SELECT on channel_name is issued per channel, and on event_text and event_rating per event. Approximate build times:

Output size              Cache hit       Cache miss (build)
100 channels / day       tens of ms      ~0.5–1 s
500 channels / day       ~50 ms          2–5 s
1000+ channels / week    ~100–300 ms     5–15 s

For most middleware it is acceptable to fetch XMLTV every few hours or once a day.

When an external reverse-proxy (nginx) is needed

Unlike /data/epg/channel (short JSON responses), XMLTV is a single large document per channel-set, ideally suited for an edge cache:

  • Tens to hundreds of clients per channel-set — the internal PSS cache is usually enough if they poll XMLTV every hour-to-day.

  • Thousands of concurrent clients — a caching reverse-proxy is recommended. Serving an XMLTV file to hundreds/thousands of requests is, first of all, a network load (hundreds of KB – a few MB per response) that is best taken off PSS.

  • Geographically distributed delivery — a CDN/edge cache is unavoidable regardless of client count.

PSS sends Cache-Control: no-cache, so nginx must be told explicitly to ignore the upstream header and keep its own TTL.

Sample nginx configuration

# /etc/nginx/conf.d/pss-xmltv.conf

proxy_cache_path /var/cache/nginx/pss-xmltv
                 levels=1:2
                 keys_zone=pss_xmltv:8m
                 max_size=4g
                 inactive=2h
                 use_temp_path=off;

upstream pss_epg {
    server 127.0.0.1:10444;
    keepalive 16;
}

server {
    listen 80;
    # listen 443 ssl http2;     # recommended: terminate SSL here
    server_name epg-files.example.com;

    # Do not enable gzip on: /xmltv.gz is already compressed, /xmltv will be
    # gzip-encoded by PSS itself, re-compression is pointless.

    location ~ ^/xmltv(\.gz)?$ {
        proxy_pass http://pss_epg;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # PSS sends no-cache; we cache on the edge regardless.
        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        proxy_hide_header   Cache-Control;
        proxy_hide_header   Pragma;
        proxy_hide_header   Expires;

        # Cache key = full URL including query (login/password in /xmltv?l=...&p=...)
        # so different accounts with different channel-sets get different keys.
        proxy_cache_key     "$scheme$host$request_uri";

        proxy_cache         pss_xmltv;
        proxy_cache_valid   200      30m;       # XMLTV does not change often
        proxy_cache_valid   401 403  1m;
        proxy_cache_lock           on;          # coalesce cache misses
        proxy_cache_lock_timeout   60s;         # building XMLTV can take seconds
        proxy_cache_use_stale      error timeout updating
                                   http_500 http_502 http_503;

        # Large buffer: XMLTV for a big channel-set may
        # reach several megabytes.
        proxy_buffering      on;
        proxy_buffers        16 256k;
        proxy_buffer_size    256k;
        proxy_busy_buffers_size 1m;

        # Allow the client to cache the response briefly.
        add_header Cache-Control "public, max-age=600";
        add_header X-Cache-Status $upstream_cache_status always;
    }
}

Notes and recommendations

  • TTL ``proxy_cache_valid 200 30m`` — XMLTV rarely changes more often than every half hour. If source sync runs hourly or less often, this can be raised to 1 hour or more; if freshness after POST /xmltv/reset-cache matters, lower it.

  • ``proxy_cache_lock_timeout 60s`` — increased compared to /data/epg/channel (where 5 seconds is typical), because building the XMLTV for a large channel-set takes longer.

  • A dedicated ``keys_zone`` — even on a large deployment, unique XMLTV keys are few (login count × channel-set); 8 MB is plenty. max_size is sized by XMLTV volume, not by key count.

  • gzip on the nginx side is unnecessary: for /xmltv.gz the response is already compressed, and for /xmltv PSS itself replies gzip-encoded when Accept-Encoding: gzip is present.

  • HTTPS termination on nginx gives better performance under many concurrent TLS handshakes.

Related endpoints

  • POST /xmltv/reset-cache — force-flush the server-side XMLTV cache (on the admin server 43971/43981).

  • POST /data/epg/update?s=<src_id> — force-refresh an external XMLTV source; after a successful refresh the server-side XMLTV cache is flushed automatically.

  • GET /data/epg/channel?… — JSON EPG output for one channel for a day; see the separate section.

The complete list and detailed description of the HTTP API are given in manual/http_data_api.txt.

EPG for OTT middleware

The server returns electronic programme guide (EPG) data for the selected day for a single channel in JSON format. The endpoint is intended for OTT middleware back-ends that aggregate the schedule from Perfect Streamer to build the programme guide for the end client.

URL and authentication

The endpoint is served by the built-in admin server. By default it is available on ports 43971 (HTTP) and 43981 (HTTPS); the ports are configured under Settings/Server Settings.

GET /data/epg/channel?src=<src_id>&ch=<channel_id>&lang=<lang>&t=<time>

Authentication is HTTP Digest, like for the rest of /data/*. For middleware a viewer (read-only) account is sufficient.

Note

Requests from the loopback address (127.0.0.1) bypass HTTP Digest verification — the server treats them as anonymous. This is convenient for local scripts and health checks of middleware deployed on the same machine as Perfect Streamer; for remote access credentials are mandatory.

Request parameters

src (unsigned integer; optional; default 0) — EPG source. 0 — data imported from MPEG-TS EIT of input streams; 1, 2, … — entry id from /config/epg/epg-source (an external XMLTV source).

ch (string; required) — channel identifier from the EPG database: the value of the channel_id field in the channel table. The list of available identifiers can be obtained via an SQL query through POST /data/epg/sql.

lang (integer or ISO 639 code; optional; default: system default language) — 0 — system default language; an integer > 0 — internal language id (see GET /schema/lang); a string — a two- or three-letter ISO 639 code, e.g. eng, rus, fra.

t (unsigned integer, Unix epoch sec; optional; default: current server time) — any point inside the day of interest. The server returns events for the UTC day that contains t: the interval [⌊t/86400⌋·86400, (⌊t/86400⌋+1)·86400). To request data for the next day, simply add 86400 to the current time.

Note

The day is taken in UTC, not in the local time zone. If the middleware builds the schedule by the local calendar day, the UTC-day boundary may not coincide with local midnight; in that case make two requests (for adjacent UTC days) and stitch the results by the start field.
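A sketch of such stitching with curl and jq (the credentials and channel id follow the example later in this section; the two t values land in adjacent UTC days):

T1=$(date -u +%s)        # some moment inside today's UTC day
T2=$((T1 + 86400))       # the same moment, next UTC day
URL='http://pss.example.com:43971/data/epg/channel'
for T in "$T1" "$T2"; do
  curl -s --digest -u middleware:secret "$URL?src=0&ch=12.0.1&t=$T"
done | jq -s '{event: (map(.event) | add | sort_by(.start))}'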

Response format

  • Content-Type: application/json.

  • HTTP headers forbid caching on the client side and on intermediate proxies:

    Cache-Control: no-cache, no-store, must-revalidate
    Pragma: no-cache
    Expires: 0
    
  • Response body is a JSON with the list of events for the day:

{
  "event": [
    {"start": 1715000000, "end": 1715003400, "title": "Morning News", "desc": "Daily news roundup"},
    {"start": 1715003400, "end": 1715007000, "title": "Weather",      "desc": ""}
  ]
}

Event fields:

  • start, end — programme start and end in Unix epoch (sec), UTC.

  • title — programme title in the selected language. If a title in the requested language is missing for the event, the server falls back to the entry in the system default language, then to any other available one.

  • desc — extended description. May be an empty string if the database had no separate description for the event.

Output specifics:

  • Events with an empty title and no description, as well as events with invalid timestamps, are excluded from the output.

  • Duplicates by start are dropped: for the same start moment one record is returned with the best language priority (requested → system default → others).

  • The order of events in the array is not guaranteed — if needed, the middleware sorts the list by the start field on its own.

  • If the channel is not found, the source does not exist, or there are no events for the chosen day, the server returns 200 OK with an empty array {"event":[]} or, in the rare case of a missing source, with an empty body. The middleware must handle both variants correctly.

Caching on the server

The ready JSON is cached by the server under the key (channel_id, UTC date, language), so repeated requests for the same day and channel are served without touching the database. The middleware does not need to manage the cache.

The cache is flushed automatically:

  • when new events arrive from MPEG-TS EIT of input streams (src=0);

  • on a successful refresh of an external XMLTV source (src > 0 — scheduled or forced via POST /data/epg/update?s=<src_id>);

  • when a day falls out of the EPG retention window.

A separate endpoint for forced cache flushing is neither provided nor needed.

HTTP response codes

200 OK — request processed. Body is a JSON with the list of events (possibly empty).

400 Bad Request — the ch parameter is missing or empty; or src, t cannot be parsed as unsigned integers; or a numeric lang points to a non-existent language id.

401 Unauthorized — HTTP Digest authentication is missing or invalid (for requests not from a loopback address).

Other situations (unknown src, missing channel, no events for the day) do not produce 4xx — the middleware gets 200 OK with the empty array {"event":[]}.

On error, the response body is a fixed-format JSON envelope:

{"status": 400, "message": "Bad Request"}

The status field duplicates the HTTP code; the message field carries a refined error reason in English (or the standard HTTP-status text if no additional information is available). The Content-Type of the error response is application/json.

Example

curl -u middleware:secret --digest \
  'http://pss.example.com:43971/data/epg/channel?src=0&ch=12.0.1&lang=rus&t=1715000000'

Sample response:

{
  "event": [
    {"start": 1715000000, "end": 1715003400, "title": "Morning News", "desc": "Daily news roundup"},
    {"start": 1715003400, "end": 1715007000, "title": "Weather",      "desc": ""}
  ]
}

Performance and scaling

Perfect Streamer server cache

Inside the PSS process there is an in-memory LRU response cache keyed by (channel_id, UTC date, language) with a hard cap of 1024 entries per EPG source. Under typical load (tens–hundreds of channels × 1–3 languages × keep-day days) all current entries fit into the cache; repeat requests are served without touching SQLite.

Order of magnitude (debug build, local loopback, no HTTPS):

Scenario                  Latency (single request)   Throughput (P=8)
Cache hit                 ~0.3 ms                    ~1100 req/s
Cache miss (SQL + JSON)   ~1.0–1.5 ms                ~1000 req/s

In a release build with debug logging off, the numbers are roughly 2–3× better. Bandwidth — about 14 KB per response for a typical channel-day.

When an external reverse-proxy (nginx) is needed

The server cache speeds up repeat requests for the same (channel, day, language), but every request still passes through the built-in PSS HTTP server and consumes a thread from its pool. With many clients it makes sense to move caching to the edge:

  • up to ~1,000 online clients — the internal cache is usually enough, a reverse-proxy is not required.

  • tens of thousands and more — a caching reverse-proxy (for example, nginx) is recommended. An edge cache handles 99 % of requests without PSS, smooths out peaks (middleware startup, mass player refresh) and allows SSL termination to be moved to a separate node.

  • geographically distributed delivery — an external CDN/proxy is needed even before counting clients.

PSS sends its own header Cache-Control: no-cache, no-store, must-revalidate so that end clients do not cache the EPG for long. The reverse-proxy may (and should) cache the response itself — below it is shown how to tell nginx explicitly to ignore the upstream Cache-Control and keep its own TTL.

Sample nginx configuration

A minimal config for an EPG edge cache aimed at tens of thousands of clients polling every 1–5 minutes:

# /etc/nginx/conf.d/pss-epg.conf

proxy_cache_path /var/cache/nginx/pss-epg
                 levels=1:2
                 keys_zone=pss_epg:32m
                 max_size=2g
                 inactive=30m
                 use_temp_path=off;

upstream pss_admin {
    server 127.0.0.1:43971;
    keepalive 64;
}

server {
    listen 80;
    # listen 443 ssl http2;  # recommended: terminate SSL here
    server_name epg.example.com;

    # gzip helps: a typical EPG-JSON compresses ~5–8x.
    gzip on;
    gzip_types application/json;
    gzip_min_length 512;
    gzip_proxied any;

    location = /data/epg/channel {
        proxy_pass http://pss_admin;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # PSS sends no-cache; we cache on the edge regardless.
        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        proxy_hide_header   Cache-Control;
        proxy_hide_header   Pragma;
        proxy_hide_header   Expires;

        # Cache key = full URL with query string. The src/ch/lang/t parameters
        # already make each response uniquely keyed.
        proxy_cache_key     "$scheme$host$request_uri";

        proxy_cache         pss_epg;
        proxy_cache_valid   200 60s;          # freshness window
        proxy_cache_valid   400 404 10s;
        proxy_cache_lock           on;        # coalesce cache misses
        proxy_cache_lock_timeout   5s;
        proxy_cache_use_stale      error timeout updating
                                   http_500 http_502 http_503;

        # Hand the JSON to the client with its own TTL
        # (the player will not re-fetch EPG before that period).
        add_header Cache-Control "public, max-age=60";
        add_header X-Cache-Status $upstream_cache_status always;
    }
}

Notes and recommendations

  • TTL ``proxy_cache_valid 200 60s`` — a compromise between EPG freshness and load on the upstream. The programme guide does not change in real time, so 30–300 seconds is reasonable. After importing new events PSS flushes its own cache instantly, and the edge cache catches up at the next TTL.

  • ``proxy_cache_lock on`` is mandatory for many clients: on a cache miss it coalesces parallel requests for the same key into a single upstream request, shielding SQLite from peak BUSY under load.

  • ``keys_zone`` and ``max_size`` are sized by the count of (channel × day × language): 32 MB of keys_zone covers hundreds of thousands of keys; 2 GB of max_size covers a month of history for hundreds of channels with room to spare.

  • gzip significantly cuts traffic: responses compress well (repeated JSON keys, Cyrillic in UTF-8).

  • ``X-Cache-Status`` in the response lets the middleware see HIT/MISS/EXPIRED and gauge the cache effectiveness.

  • If nginx and PSS live on the same machine, the admin server does not require HTTP Digest for loopback, so the upstream block can be left without proxy_set_header Authorization. For a cross-network setup, create a dedicated viewer account and add Digest authentication to proxy_pass.

  • HTTPS is best terminated at nginx: PSS supports HTTPS directly, but an edge server is usually more efficient at handling TLS handshakes with thousands of concurrent clients.

Related endpoints

  • POST /data/epg/sql?s=<src_id> — arbitrary SQL query against the EPG database (in particular, to obtain a list of channel_id).

  • POST /data/epg/update?s=<src_id> — force-refresh an external XMLTV source.

The complete list and detailed description of the HTTP API are given in manual/http_data_api.txt.

Program optimization

If you encounter high CPU load or memory shortage in configurations with a large number of streams, you can optimize the settings.

You can disable the MPEG-TS filtering and processing features if you do not need them. By default, the stream has the Clean All Unnecessary Data function enabled; disable it if there is no unwanted data in the stream. Disabling these features completely removes the Original Media Information section from the Report.

You can completely disable Mosaic or change its settings. Complete disabling is done in the server settings. You can also disable it individually for each stream, or change the update interval with the Check Interval setting.

Queue overload errors for DBStat and DBEPG databases

Errors occur due to insufficient database performance: slow storage is used or the system is overloaded.

The database location is configured by the data-dir parameter in the pss.properties configuration file.

Possible solutions of the problem:

  1. Move the database files to /tmp. System memory will be used; this requires estimating free memory and setting up the statistics storage time (see server settings). When the system restarts, the data will be lost.

  2. Reduce the statistics detail level — see dbstat-detail. 5 sec by default; can be increased up to 20.

  3. Locate the database in memory — set the dbepg-memory option to true (see the sample below).
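For example, the relevant lines in pss.properties might look as follows (a sketch; the key=value syntax and the /tmp path are assumptions):

# move the database files to memory-backed storage
data-dir=/tmp/pss-db
# coarser statistics: one sample every 20 seconds instead of 5
dbstat-detail=20
# keep the EPG database in memory
dbepg-memory=true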

Transcoders

Transcoders are implemented as separate executable binaries that are run from pstreamer as separate processes.

Configurations of type 1-to-N are supported, so you can get several streams with different encoder settings from one decoder.

Video and audio must be present in the source stream. The options without video or without audio are not supported.

Codecs are implemented:

  • Video SW decoder: mpeg2, h.264, hevc (h.265)

  • Video NV decoder: mpeg2, h.264, hevc (h.265)

  • Video SW encoder: mpeg2, h.264, hevc (h.265)

  • Video NV encoder: h.264, hevc (h.265)

Interlaced stream is supported on input and output.

For the H.264 and HEVC decoders, the interlace alternate format (two separate fields in the stream) is supported. It is converted to interlace interleaved.

For the HEVC decoder, the Main10 profile with bt.709 (SDR) and bt.2020 (HDR) is supported. The HEVC encoder always uses the Main profile with bt.709.

For the H.264 and HEVC decoders, VFR (Variable Frame Rate) input is supported. It is converted to a constant frame rate.

  • Audio decoder - mpeg (layer 1,2,3), aac, ac3

  • Audio encoder - mpeg (layer 2), aac

Video Passthrough is a mode without video transcoding; only audio is processed. The SW transcoder is used.

Note

To configure a transcoder instance, you need to configure two or more streams, with an output (decoder) and an input (encoder).

To configure transcoder instance:

  • Source - add a stream output of type transcoder (decoder). Select the SW, NV, or Video Passthrough type option.

  • Destination - add an input of type transcoder (encoder). Select the corresponding decoder source in the options.

  • Repeat the last step to configure several transcoder outputs for one decoder.

Transcoder output (decoder) options

  • Convert colors to BT.709 - conversion of SD BT.470-2 (PAL) and SMPTE 170M (NTSC) to BT.709

  • Trace - enable detailed transcoder log for diagnostics.

For correct transcoder operation, the source stream must meet certain requirements; in some cases deviations can be corrected. These settings do not convert the stream; they work as hints for correct transcoder operation.

There are settings to correct the input stream data:

  • Fix PAR - fix Pixel Aspect Ratio. Set as a fractional number in N/D format. For example, for Wide SD it is 16/9.

  • Fix Framerate - explicitly specify the framerate. In some streams the framerate may be missing from the SPS, and a corresponding error will appear in the transcoder log. In these cases you need to specify the framerate explicitly. Set as a fractional number in N/D format.

Examples of framerate values:

  • PAL - 25/1

  • NTSC - 30/1 or 30000/1001

  • Cinema - 24/1 or 24000/1001

Transcoder input (encoder) options

  • Encoder Type - video codec.

  • Align Total Bitrate - stream stuffing bitrate (filling with null packets). It is important to set this if the stream will be used for DVB broadcasting. The bitrate must be guaranteed to be higher than the video bitrate and all audio tracks.

  • Video Profile - for H.264 you can select encoding profile.

  • Video Bitrate - video stream bitrate, kbps. Encoding always uses CBR mode. The total bitrate will be somewhat higher due to audio tracks.

  • Speed Preset - encoding options preset, values 1 - 7. A lower value gives better quality and consumes more resources. Default: 4.

  • GOP Interval - GOP interval in frames (related to Key Frame Interval). Default: 25 (1 sec at 25p), recommended for players with random start.

  • BFrame - select for better quality. Recommended value: 3.

  • Lookahead - configure for better quality. Recommended value is 20 - 50 frames.

  • Resize - picture resize.

  • Deinterlace - converts interlace to progressive.

Crop insert (empty padding around the picture) is not supported. Setting an arbitrary image size is not supported, as this may distort the proportions.

Resize options available:

  • Reduce the size by 2 and 4 times proportionally.

  • Make Wide SD 16:9 format with Aspect Ratio recalculation.

  • Upscale SD->HD. Applies to SD PAL/NTSC sources. Interlaced output is not supported, so use it together with the Deinterlace option.

  • Set the width. The height will be recalculated proportionally.

  • Set the height. The width will be recalculated proportionally.

Some parameters may be unsupported by the selected transcoder; errors will appear in the transcoder logs.

Audio processing

By default, all audio tracks are transmitted from input to output without processing. Unnecessary tracks can be removed by configuring the PID filters in the stream.

If you need to transcode audio, you can set up rules separately for each audio codec. The skip option removes the audio track with that codec.

If there are no audio tracks in the output stream, an error occurs; see the transcoder logs.

PCR generation and TR 101 290 compliance

The MPEG-TS multiplexer generates a new PCR. If Align Total Bitrate is set correctly (more than the sum of the video and audio bitrates), the PCR should pass the check against the TR 101 290 standard.

Transcoders processing status

If there are problems with the transcoder (there is no output stream from the encoder), look at the logs in the Transcoders section. A list of instances is displayed there (each line is a separate transcoder instance: a decoder plus N encoders); clicking on the desired instance opens the logs status dialog. The current log and the log from the previous launch are displayed. For a detailed log, enable Trace in the output (decoder) settings.