OTT service¶
Outputs streams over HTTP-based protocols: HLS, MPEG-DASH (since version 1.12), and MPEG-TS over HTTP. HTTPS (SSL) is supported. The output is enabled on the OTT tab of the Stream settings.
Connection URLs have the following format:
- `http://host:port/http/stream/login/password` - login/password authorization
- `http://host:port/http/stream/login` - login (token) authorization
- `http://host:port/http/stream/` - IP authorization

`host` and `port` are set in the HTTP server settings. `stream` is the stream ID, not to be confused with the sequence number in the streams list: the ID is shown in the stream stats page header and in the ID column of the streams list; it is set at stream creation and never changes.
The same URL scheme applies to HLS and DASH. Output URL templates and the working status are shown on the stream stats page. Unauthorized access is denied; all clients must be registered in Peers.
Additional optional HLS parameters are available in the URL:

`[URL]?a=1&s=40&m=40&v=5`

- `a` - 1: use absolute paths in the playlist (default); 0: relative paths.
- `s` - dynamic playlist duration in seconds; 40 by default.
- `m` - minimum dynamic playlist duration in seconds; 40 by default. The maximum playlist duration is 60 seconds. If the current chunk buffer is shorter than the requested minimum, error 404 is returned; this ensures HLS playback starts with a full chunk buffer on the server.
- `v` - HLS protocol version for the playlist; 5 by default. Some players may require a different version.
The file name `index.m3u8` can be appended to the URL to support some players, for example: `http://host:port/hls/stream/login/password/index.m3u8`.
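As an illustration, a playlist URL with its optional parameters can be assembled like this. The `hls_url` helper and its signature are ours, not part of the server API; only the path layout and the a/s/m/v defaults come from the documentation above:

```python
from urllib.parse import urlencode

def hls_url(host, port, stream, login, password, a=1, s=40, m=40, v=5):
    """Build an HLS playlist URL with the optional a/s/m/v parameters.

    Defaults mirror the documented values (a=1, s=40, m=40, v=5);
    the helper itself is illustrative.
    """
    base = f"http://{host}:{port}/hls/{stream}/{login}/{password}/index.m3u8"
    query = urlencode({"a": a, "s": s, "m": m, "v": v})
    return f"{base}?{query}"

print(hls_url("example.com", 41972, "test1", "user", "pass"))
# http://example.com:41972/hls/test1/user/pass/index.m3u8?a=1&s=40&m=40&v=5
```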
The HLS server supports two modes, Peer mode and OTT mode:
- Peer mode: simple chunk segmentation. Recommended for stream peering (stream distribution).
- OTT mode: chunk segmentation optimized for fast player start. CPU load is higher in this mode; recommended for broadcasting.
SSL (HTTPS) can be enabled for HTTP server, this is done in the server settings.
Chunk Min Interval and Chunk Max Interval
In OTT mode the stream is analyzed for PAT/PMT/SPS/PPS/IFrame, and chunks are cut according to the players' fast-start criterion. Analysis starts at the min interval; if for some reason the required data is not found, the chunk is forcibly cut at the max interval.
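The cutting rule described above can be sketched as follows. This is a simplified model, not the server's code; the interval values are illustrative stand-ins for the Chunk Min/Max Interval settings:

```python
def should_cut_chunk(elapsed_s, found_keyframe,
                     min_interval_s=2.0, max_interval_s=8.0):
    """Sketch of the OTT-mode segmentation rule.

    Interval defaults are illustrative; the real Chunk Min/Max Interval
    values come from the stream configuration.
    """
    if elapsed_s < min_interval_s:
        return False                      # keep filling the current chunk
    if found_keyframe:
        return True                       # clean cut at PAT/PMT/SPS/PPS/IFrame
    return elapsed_s >= max_interval_s    # forced cut when no keyframe appears
```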
HLS Adaptive Multistream
HLS Adaptive Multistream support is available starting from version 1.10.
An HLS playlist must be configured for each adaptive stream. To do this:
- Enable HLS with OTT mode for each stream you are going to use in adaptive streams.
- The Streams Adaptive item will become available in the main menu. There, add an adaptive stream and select all the streams needed for its playlist.
- A Bitrate parameter can be configured for each stream. The default is 0, which means the measured stream bitrate is used; otherwise you can set it manually.

The adaptive stream URL format differs.
A peer (client) can have an access list in which adaptive streams are also available. Permission for an adaptive stream includes permissions for all streams included in it.
Caching model for OTT HLS and DASH¶
The server emits responses of three categories that differ in content lifetime and suitability for caching by intermediate nodes (reverse proxy, CDN, client cache).
1. Caching model¶
1.1. Resources and HTTP headers¶
| Resource | URL | Content-Type | Cache-Control |
|---|---|---|---|
| TS segment | `/h<sess>/<keyID>.ts` | `video/mp2t` | `max-age=60, immutable` |
| DASH MPD | `/h<sess>/index.mpd` | `application/dash+xml` | `max-age=1` |
| HLS master | `/h<sess>/index.m3u8` | `application/vnd.apple.mpegurl` | `max-age=1` |
| HLS media | `/h<sess>/<n>/index.m3u8` | `application/vnd.apple.mpegurl` | `max-age=1` |
| 302 Redirect | `/dash/<stream>/<login>/<pass>/index.mpd` | — | `no-cache, no-store` |
| Raw TS | `/http/<stream>/<login>/<pass>` | `video/mp2t` | not set; not cached |
1.2. TS segment characteristics¶
The keyID identifier is computed as CRC64(startTime || streamID) and is globally unique. A segment URL addresses immutable content — repeat requests for the same URL return an identical byte stream (as long as the segment stays within the sliding window).
The immutable directive suppresses conditional revalidation by the client (If-None-Match, If-Modified-Since). The max-age=60 value is compatible with a typical timeShiftBufferDepth=40s.
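For illustration, a keyID-style identifier could be computed as below. The CRC64 polynomial (ECMA-182 here) and the byte layout of `startTime || streamID` are our assumptions for the sketch; the server's exact parameters are not documented:

```python
def crc64_ecma(data: bytes) -> int:
    """Bitwise CRC64 with the ECMA-182 polynomial (assumed parameters)."""
    crc = 0
    for byte in data:
        crc ^= byte << 56
        for _ in range(8):
            crc = ((crc << 1) ^ 0x42F0E1EBA9EA3693) if crc & (1 << 63) else crc << 1
            crc &= 0xFFFFFFFFFFFFFFFF
    return crc

def key_id(start_time_ms: int, stream_id: int) -> str:
    # Byte layout of startTime || streamID is an illustrative assumption.
    data = start_time_ms.to_bytes(8, "big") + stream_id.to_bytes(4, "big")
    return f"{crc64_ecma(data):016x}"
```

The point of the construction is that the identifier depends only on the segment's start time and the stream, so every session maps the same segment to the same URL.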
1.3. Manifest characteristics¶
max-age=1 caps the upper bound of cached-content staleness at one second. Combined with proxy_cache_lock on (nginx) it collapses bursts of manifest requests into a single origin request per second.
1.4. Content variance¶
With absPath=0 (default, no a URL parameter) HLS media and DASH MPD manifests do not embed a session identifier. Manifest content is identical across sessions belonging to the same (stream, param) combination, so the reverse-proxy cache can share a single entry across sessions when the cache key is normalised.
With absPath=1 (a=1 URL parameter) the manifest body contains absolute URLs that include scheme, host, and session id. Content becomes session-specific, eliminating cross-session cache reuse.
2. Client behaviour¶
| Client | Manifest refresh URL | Effect on session count |
|---|---|---|
| VLC 3.x HLS | session URL `/h<sess>/.../index.m3u8` | One session per playback |
| VLC 3.x DASH | original URL `/dash/.../index.mpd` | Handled by session reuse (see 3.2) |
| ffmpeg 5.x HLS | session URL `/h<sess>/.../index.m3u8` | One session per playback |
| ffmpeg 5.x DASH | original URL `/dash/.../index.mpd` | Handled by session reuse (see 3.2) |
| dash.js, hls.js | session URL (cached redirect target) | One session per playback |
3. Specialised mechanisms¶
3.1. HTTP 302 redirect for DASH¶
A /dash/<stream>/<login>/<pass>/index.mpd request returns 302 Found with a Location: /h<sess>/index.mpd header. The body is empty. Authentication and session allocation happen during the redirect.
Clients that cache the redirect address the session URL directly in subsequent requests; clients that do not cache it re-issue the redirect request. The cost of repeat redirect handling is limited to the auth check and the session-reuse lookup.
3.2. Session reuse for DASH¶
While processing a /dash/.../index.mpd request executed under login-id L for stream-id S and adaptive=A, if _ottClientList already holds a DASH session with the same (L, S, A) the existing sessID is returned. No new session is created, no maxConn slot is consumed.
Applies to DASH only. HLS does not need a separate reuse mechanism: HLS clients refresh the media playlist via the session URL and do not trigger applyNewOTTSess on each refresh.
3.3. Cross-session segment reuse¶
The /h<sess>/<keyID>.ts path is sess-independent when resolving keyID into content: keyID uniquely identifies a segment within the registered ChunkList (see _ottStreamList). Nginx with a normalised cache key (stripping the /h<sess>/ prefix) serves every request for the same keyID from a single cache entry.
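The normalisation can be reproduced outside nginx, for example to unit-test the cache key; this mirrors the `map` block from section 6.1:

```python
import re

# Strip the per-session /h<sess>/ prefix so all sessions share one
# cache entry per keyID (same regex as the nginx map in 6.1).
_SESS_RE = re.compile(r"^/h[0-9a-f]{16}(?P<tail>/.+\.(ts|m3u8))$")

def cache_key(uri: str) -> str:
    m = _SESS_RE.match(uri)
    return f"stream:{m.group('tail')}" if m else uri

# Two sessions requesting the same segment produce the same key:
assert cache_key("/h0123456789abcdef/00a1b2c3d4e5f607.ts") == "stream:/00a1b2c3d4e5f607.ts"
assert cache_key("/hfedcba9876543210/00a1b2c3d4e5f607.ts") == "stream:/00a1b2c3d4e5f607.ts"
```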
4. Request parameters¶
| Parameter | Default value | Effect |
|---|---|---|
| `a` | 1 | Absolute (1) or relative (0) paths in the playlist |
| `s` | 40 | Dynamic playlist duration, seconds |
| `m` | 40 | Minimum window length for manifest emission |
| `v` | 5 | HLS protocol version in the playlist |
Changing a parameter via query string updates the values stored in the session on the next applyNewOTTSess call.
5. Load characteristics¶
Origin load scales with the number of distinct streams being watched concurrently. Increasing the number of clients watching the same stream does not increase origin requests when a reverse-proxy cache with a normalised cache key is in place.
| Scenario | Origin request rate (ref.) |
|---|---|
| 1 client per stream X | MPD: 0.4 req/s, segment: 0.2 req/s |
| N clients on one stream X (cache enabled) | MPD: 1 req/s, segment: 0.2 req/s |
| N ffmpeg clients in replay mode on one stream | MPD: 1 req/s (with `proxy_cache_lock`) |
| N clients on N distinct streams | MPD: 0.4·N req/s, segment: 0.2·N req/s |
6. Nginx as a caching reverse proxy¶
6.1. Basic configuration¶
```nginx
proxy_cache_path /var/cache/nginx/pss_segments
                 levels=1:2 keys_zone=pss_segments:100m
                 max_size=20g inactive=30m use_temp_path=off;

proxy_cache_path /var/cache/nginx/pss_manifests
                 levels=1:2 keys_zone=pss_manifests:10m
                 max_size=256m inactive=5m use_temp_path=off;

upstream pss_backend {
    server 127.0.0.1:41972;
    keepalive 64;
}

map $uri $pss_cache_key {
    ~^/h[0-9a-f]{16}(?<tail>/.+\.(ts|m3u8))$ "stream:$tail";
    default $uri;
}

server {
    listen 80;
    server_name stream.example.com;

    # TS segments: immutable content, cached for 60 s
    location ~* "^/h[0-9a-f]{16}(/[0-9]+)?/[0-9a-f]+\.ts$" {
        proxy_cache pss_segments;
        proxy_cache_key $pss_cache_key;
        proxy_cache_valid 200 60s;
        proxy_cache_valid 404 403 0s;
        proxy_cache_lock on;
        proxy_cache_use_stale updating error timeout;
        proxy_cache_revalidate on;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://pss_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering on;
    }

    # Manifests: cached for 1 s
    location ~* "(^/h[0-9a-f]{16}(/[0-9]+)?/index\.(m3u8|mpd)$|^/(hls|dash)/.*\.(m3u8|mpd)$)" {
        proxy_cache pss_manifests;
        proxy_cache_key $pss_cache_key;
        proxy_cache_valid 200 1s;
        proxy_cache_valid 404 403 0s;
        proxy_cache_lock on;
        proxy_cache_lock_timeout 2s;
        proxy_cache_use_stale updating;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://pss_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    # Everything else (raw TS, 302 redirects): no caching
    location / {
        proxy_pass http://pss_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_buffering off;
        proxy_read_timeout 3600s;
    }
}
```
6.2. Directive purposes¶
| Directive | Purpose |
|---|---|
| `proxy_cache_lock on` | Serialises upstream requests when concurrent cache misses target the same key |
| `proxy_cache_use_stale updating` | Returns the stale copy to concurrent requests while the cache entry is being refreshed |
| `proxy_cache_revalidate on` | Uses conditional requests (`If-Modified-Since`/`If-None-Match`) for expired entries |
| `proxy_cache_valid 404 403 0s` | Disables caching of authorisation errors and 404 |
| `keepalive 64` | Maintains a pool of persistent connections to the origin |
| `proxy_buffering on` | For segments; enables response buffering in nginx |
| `proxy_buffering off` | For the default `location /`; disables buffering for long-lived connections (raw TS) |
6.3. Computing segment cache max_size¶
Rough value: bitrate × timeShiftBufferDepth × distinct_streams × 2.
Example: 10 streams × 8 Mbps (1 MB/s) × 40 s × 2 ≈ 800 MB. A 10× headroom is recommended to absorb bitrate variance.
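The arithmetic can be captured in a small helper (illustrative only; names and defaults are ours):

```python
def segment_cache_bytes(streams: int, bitrate_mbps: float,
                        window_s: int = 40, headroom: int = 2) -> float:
    """Rough max_size estimate: bitrate x window x streams x headroom."""
    bytes_per_s = bitrate_mbps * 1_000_000 / 8   # Mbps -> bytes/s
    return streams * bytes_per_s * window_s * headroom

size = segment_cache_bytes(10, 8)    # the worked example above
print(f"{size / 1e6:.0f} MB")        # prints "800 MB"
```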
6.4. TLS termination¶
The Perfect Streamer server accepts connections on HTTP and HTTPS ports. With TLS termination at nginx the upstream uses the HTTP port. Forwarding X-Forwarded-Proto and X-Forwarded-Host headers is required for correct absolute URL composition when absPath=1.
```nginx
server {
    listen 443 ssl http2;
    server_name stream.example.com;

    ssl_certificate     /etc/letsencrypt/live/stream.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/stream.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location ... {
        proxy_pass http://pss_backend;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host $host;
        # + caching directives from 6.1
    }
}

server {
    listen 80;
    server_name stream.example.com;
    return 301 https://$host$request_uri;
}
```
For HTTPS between nginx and origin, proxy_ssl_verify and proxy_ssl_trusted_certificate directives apply. Encryption is redundant for loopback connections.
6.5. Multi-host¶
When serving multiple server_name from a single nginx process, $host is added to the cache key to isolate content:
```nginx
map $uri $pss_cache_key {
    ~^/h[0-9a-f]{16}(?<tail>/.+\.(ts|m3u8))$ "$host:stream:$tail";
    default "$host:$uri";
}
```
The keys_zone size is calculated at roughly 8000 keys per MB. For multi-host installations with thousands of streams, `keys_zone=...:300m` or higher is recommended.
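A rough sizing helper based on the 8000-keys-per-MB figure; the `segments_per_stream` value (live keys per stream within `inactive`) is an illustrative assumption:

```python
import math

def keys_zone_mb(streams: int, hosts: int = 1, segments_per_stream: int = 30,
                 keys_per_mb: int = 8000) -> int:
    """Estimate keys_zone size in MB at ~8000 keys per MB."""
    keys = streams * hosts * segments_per_stream
    return max(1, math.ceil(keys / keys_per_mb))

print(keys_zone_mb(2000, hosts=4))   # 2000 streams x 4 hosts -> 30 (MB)
```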
7. Client-side caching¶
Cache-Control: immutable is honoured by Chrome/Firefox/Safari. The client cache returns the segment without a conditional request on re-access (including backward seek within the player buffer).
Service Workers can apply a cache-first strategy based on Cache-Control content. DASH players (dash.js, Shaka) use MSE through SourceBuffer; a segment placed in the buffer remains available without a repeat HTTP request until it slides out of the window.
For cross-domain requests the Access-Control-Allow-Origin: * header allows caching in shared caches without Vary: Origin. Switching ACAO to a specific Origin requires Vary: Origin, which reduces shared-cache efficiency.
8. Distribution via CDN¶
Perfect Streamer is compatible with pull-from-origin CDNs (Cloudflare, Akamai, Fastly, BunnyCDN, Amazon CloudFront).
Origin shield. Placing one or more shield nodes between CDN edge and origin is recommended to reduce origin request rate when clients are globally distributed.
Purge. Content-addressed segments require no purge. When stream metadata changes (codec, resolution), manifests refresh within max-age=1 without an explicit purge.
Cache warming. When a specific stream is expected to spike, the CDN may be warmed from several geographic points before broadcast start.
Geo-distribution. Segments (max-age=60) are well suited for geographically distributed caching. Manifests (max-age=1) tolerate up to one-second delivery delay — acceptable for non-low-latency live.
9. Monitoring¶
9.1. X-Cache-Status¶
Add `add_header X-Cache-Status $upstream_cache_status;` in every cached location. Values:

| Value | Description |
|---|---|
| `HIT` | Response served from cache |
| `MISS` | Not in cache; fetched from origin and stored |
| `EXPIRED` | Entry expired; refreshed from origin |
| `UPDATING` | Stale copy returned to a concurrent request during refresh |
| `STALE` | Stale copy returned per `proxy_cache_use_stale` (e.g. upstream error) |
| `REVALIDATED` | Origin returned 304 Not Modified |
| `BYPASS` | Cache bypassed (`proxy_cache_bypass`) |
9.2. Access-log format¶
```nginx
log_format pss_cache '$remote_addr $status $request_method "$request" '
                     '$body_bytes_sent rt=$request_time ut=$upstream_response_time '
                     'cache=$upstream_cache_status key=$pss_cache_key';

server {
    access_log /var/log/nginx/pss.log pss_cache;
}
```
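A quick way to derive the HIT ratio from such logs (a minimal sketch, not a full parser of the format; the sample lines are invented for illustration):

```python
import re

# Extract the cache= field emitted by the pss_cache log format.
_CACHE_RE = re.compile(r"cache=(\S+)")

def hit_ratio(lines):
    statuses = [m.group(1) for line in lines if (m := _CACHE_RE.search(line))]
    hits = sum(s == "HIT" for s in statuses)
    return hits / len(statuses) if statuses else 0.0

sample = [
    '203.0.113.5 200 GET "GET /h0123456789abcdef/aa.ts HTTP/1.1" 1500000 '
    'rt=0.002 ut=- cache=HIT key=stream:/aa.ts',
    '203.0.113.6 200 GET "GET /h0123456789abcdee/aa.ts HTTP/1.1" 1500000 '
    'rt=0.110 ut=0.108 cache=MISS key=stream:/aa.ts',
]
print(hit_ratio(sample))  # 0.5
```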
9.3. Metrics¶
The nginx-vts module exports per-zone metrics in Prometheus format:
`GET /status/format/prometheus`
Recommended alert thresholds:
| Metric | Threshold | Possible cause |
|---|---|---|
| Segment HIT rate | < 90% over 5 minutes | Cache-key normalisation broken (check the `$pss_cache_key` map) |
| Manifest MISS rate | > 50% over 1 minute | `proxy_cache_lock` disabled or the manifest location is not cached |
| Upstream response time p95 | > 500 ms over 1 minute | Origin overload |
| Cache zone fill | > 90% over 10 minutes | Approaching `max_size`; live segments may be evicted |
10. Diagnostics¶
| Symptom | Likely cause | Resolution |
|---|---|---|
| Low segment HIT rate | Cache key still contains the per-session `/h<sess>/` prefix | Inspect response headers and the regex in the `map` block |
| 404 on segments after they leave the sliding window | Cached 404 for a segment that fell out of the sliding window | Add `proxy_cache_valid 404 403 0s;` so error responses are not cached |
| Playback start delay of 2–5 s | Chunk duration too long | Lower it to 1–2 s; enable OTT mode |
| Manifest does not refresh | Manifest cached for longer than its 1 s lifetime | Set `proxy_cache_valid 200 1s` for manifest locations |
| Growing TIME_WAIT on upstream | No keepalive connections to the origin | Add `keepalive` to the `upstream` block, with `proxy_http_version 1.1` and an empty `Connection` header |
| 403 on segment requests after the DASH 302 redirect | Client resolves relative URLs against the pre-redirect URL | Server emits absolute URLs in the manifest (`absPath=1`) |
11. Security¶
11.1. Session URL¶
A URL of the form `/h<sess>/...` acts as the session token; no repeat authentication is required. The lifetime is bounded by an idle timeout (30 s): on inactivity the session is removed by the cleaner task.
Requirements:
- HTTPS on every OTT path (`/hls/`, `/dash/`, `/h<sess>/`) in production
- The session ID in the `Location` header of the 302 is not cached (`no-cache, no-store`)
11.2. Rate limiting¶
```nginx
limit_req_zone $binary_remote_addr zone=dash_top:10m rate=5r/s;
limit_req_zone $binary_remote_addr zone=hls_top:10m rate=5r/s;

server {
    location /dash/ {
        limit_req zone=dash_top burst=20 nodelay;
        proxy_pass http://pss_backend;
    }
    location /hls/ {
        limit_req zone=hls_top burst=20 nodelay;
        proxy_pass http://pss_backend;
    }
}
```
Session URLs (/h<sess>/) do not require rate limiting — handling is cheap and responses are cached.
11.3. Caching of error responses¶
```nginx
proxy_cache_valid 200 60s;
proxy_cache_valid 301 302 0s;
proxy_cache_valid 404 403 0s;
proxy_cache_valid any 1s;
```
Disables caching of redirects (unique sess in Location) and of authorisation/missing-resource error responses.
11.4. Restricting network access to origin¶
Port 41972 (41982 for HTTPS) must be closed to external traffic. Acceptable configurations:
- Bind Perfect Streamer to `127.0.0.1` (when nginx is co-located)
- Firewall rule:

```shell
iptables -A INPUT -p tcp --dport 41972 ! -s 10.0.0.0/8 -j DROP
```
12. Middleware integration¶
12.1. Prefix-login model¶
Perfect Streamer supports delegating user identification to a middleware/billing system via the prefix-login mechanism. An external connector to the billing system is not included in the current release.
Embedded-user configuration:
```json
{
  "id": 9,
  "login": "sub",
  "password": "xxx",
  "is-prefix": true,
  "max-conn-http-hls": 1,
  "accept-stream": [ ... ]
}
```
With `"is-prefix": true` the server accepts URLs whose login has the form `<prefix><billing_user_id>`:
```
/dash/test1/sub42/xxx/index.mpd
/hls/test1/sub43/xxx/index.m3u8
```
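Prefix matching can be sketched as follows (the helper is illustrative; the actual server-side logic is not documented):

```python
def match_prefix_login(url_login: str, configured: dict):
    """Return the billing suffix if url_login matches a prefix login.

    `configured` mirrors the embedded-user JSON above; the helper
    itself is an illustration, not the server API.
    """
    prefix = configured["login"]
    if configured.get("is-prefix") and url_login.startswith(prefix) \
            and len(url_login) > len(prefix):
        return url_login[len(prefix):]   # e.g. "sub42" -> "42"
    return None

assert match_prefix_login("sub42", {"login": "sub", "is-prefix": True}) == "42"
assert match_prefix_login("sub", {"login": "sub", "is-prefix": True}) is None
```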
12.2. Statistics format¶
```xml
<clients>
  <client login-id="-1974387287" login="sub" match-login="sub42"
          sess-id="11331..." ott-type="dash" stream-id="10000" .../>
  <client login-id="-2147031294" login="sub" match-login="sub43"
          sess-id="11132..." ott-type="dash" stream-id="10000" .../>
</clients>
```
The login-id field holds the hash of the URL login. The login field is the configured value. The match-login field is the URL login used by the client.
12.3. Prefix-login limitations¶
- Shared password. All subscribers of a prefix pool use a single password value. Compromising the password grants access to any `<prefix><string>`.
- ACL granularity. `accept-stream` applies to the whole prefix pool. A per-subscriber ACL is not available without external billing.
- Password rotation. Changing the password disconnects all active subscribers. Gradual rotation requires temporarily using two prefix logins.
13. WebVTT subtitles¶
The subtitle source is DVB Teletext / DVB Subtitling from the input MPEG-TS. Teletext subtitle tracks must be present in the Media Information or Original Media Information sections. The Analyzer section can also be used to verify that packets of the corresponding PIDs are active.
For OTT HLS/DASH the OTT mode must be enabled (in Peer mode WebVTT subtitles are not available). The chunk counter OTT WebVTT buffer chunk count in the Output # OTT section must become non-zero.
To diagnose subtitles, enable Analyze and Trace on the stream. At stream start the stream log should contain:
```
Start Teletext subtitle decoder
[ttxsubdec] ttx: pid=331 magazine=8 page=0x88 lang=***
```
Subsequent log entries contain the decoded subtitle text.
13.1. VTT segment URLs¶
| Scheme | URL | Content |
|---|---|---|
| HLS master | | |
| HLS subtitle playlist | | list of `.vtt` segment URLs |
| HLS VTT segment | | VTT with HLS-flavoured X-TIMESTAMP-MAP |
| DASH MPD AdaptationSet | inside `index.mpd` | |
| DASH VTT segment | | VTT with DASH-flavoured X-TIMESTAMP-MAP |

`<keyHex>` is the 16-character hex of CRC64(startTime, streamID, pid). `<seq>` is the decimal subtitle-storage chunk number (a counter separate from the video storage).