doc 23: arrflix edge / network / browser-load-path perf audit (read-only)
Edge audit complementing doc 13 (server-side perf). Confirms the cold-load "feels slow" perception is dominated by:

- no HTTP compression at Traefik (2.74 MiB raw JS bundles per cold load)
- no Cache-Control on hashed-asset URLs (28 conditional GETs per warm load)
- first-fetch poster image transcode ~385 ms (server-side, doc 13 #02)

TLS, MTU, HTTP/2, cert chain, middleware chain, Pi-hole hairpin all audited and clean. Pi-hole is missing a local DNS rewrite for arrflix.s8n.ru (LAN clients hairpin via WAN unless an /etc/hosts pin is in place).

Top quick win: add a `compress@file` middleware in /opt/docker/traefik/config/dynamic.yml and reference it from the Jellyfin router label — a ~70 % cold-load wire-size reduction (2.74 MiB → ~0.82 MiB gzip / ~0.69 MiB brotli). One file edit, no architectural change.

No fixes applied. No state mutated. No Traefik reload.

This commit is contained in: parent fbe9f37e08, commit 851f317dbb
1 changed file with 587 additions and 0 deletions: docs/23-arrflix-edge-perf-audit.md (new file)

# 23 — ARRFLIX Edge / Network / Browser-Load-Path Audit

> Status: **read-only audit**, executed 2026-05-08 from onyx
> (192.168.0.6 LAN) against `https://arrflix.s8n.ru` (Jellyfin 10.10.3
> behind Traefik on nullstone). Scope: edge — DNS, TLS, Traefik,
> compression, cache headers, asset waterfall, ServiceWorker. **No
> fixes applied. No state mutated. No container restart. No Traefik
> reload.**
>
> Sibling audits cover color/HDR, server runtime, and storage. This one
> is the edge slice only. Pairs with doc 13 (server-side optimization
> audit, 2026-05-08) — that one calls out CPU/transcode; this one
> identifies why every page-nav over WAN feels gluey when the server is
> idle.

---

## 1. Executive summary

The two cold-load complaints ("loads kinda slow") are dominated by a
single edge defect with three symptoms:

1. **No HTTP compression at all.** Traefik has zero `compress`
   middleware defined in either the static (`traefik.yml`) or the
   dynamic (`config/dynamic.yml`) config, and the Jellyfin router only
   attaches `security-headers@file`. Result: Jellyfin's 28 webpack JS
   bundles ship raw — **2.74 MiB of JS over the wire on every cold
   load**. With gzip (default ratio ~0.30 for minified JS) that drops
   to ~0.82 MiB; with brotli (~0.25) it drops to ~0.69 MiB.
   **Severity: R — fix this week.**
2. **No `Cache-Control` on hashed-asset URLs.** Every JS bundle
   comes back with `etag` + `last-modified` and **no** `cache-control`
   header. Browsers default to "heuristic freshness" (typically 10 % of
   the `last-modified` age), but every reload **still issues a
   conditional `If-None-Match` request per asset** and gets a 304
   back. On a warm-cache page-nav that's **28 round-trips of pure
   negotiation overhead** even though every response body is already
   cached. These URLs are content-hashed (`?7dc095d8…`), so they should
   be `Cache-Control: public, max-age=31536000, immutable`. **Severity:
   Y → R when WAN clients are involved (each round-trip costs an
   internet RTT).**
3. **Poster image first-fetch is slow.** The very first cold request
   for `/Items/{id}/Images/Primary` triggers a Jellyfin server-side
   image transcode (resize + JPEG re-encode) and runs in **~385 ms
   wall** vs **~25–35 ms** for warm cache. With ~20 posters on the home
   page and no edge cache, the first visit to "Recently Added" is a
   **~7-second poster grid**. Doc 13 finding 23 (3 MB splash PNG) is
   the loud single hit; this is the death-by-a-thousand-cuts equivalent
   for the home page. **Severity: Y.**

Everything else (TLS handshake, MTU, DNS lookup, HTTP/2 vs HTTP/3,
cert chain depth, Traefik middleware chain, Pi-hole hairpin) is
healthy or low-impact — full table below.

**Top quick win:** add a `compress@file` middleware in
`/opt/docker/traefik/config/dynamic.yml` and reference it from the
Jellyfin router. **One file edit. Two lines of YAML in the middleware
block, one line on the router. ~70 % cold-load wire-size reduction.**

---

## 2. Curl timing breakdown (5 samples, p50, p95)

Test: `curl https://arrflix.s8n.ru/web/index.html` from onyx.

### LAN-direct (`--resolve` to 192.168.0.100)

| Sample | DNS (s) | CONN (s) | TLS (s) | TTFB (s) | TOTAL (s) |
|--------|---------|----------|---------|----------|-----------|
| 1 | 0.000024 | 0.001225 | 0.022960 | 0.031569 | 0.040531 |
| 2 | 0.000024 | 0.001217 | 0.020182 | 0.024190 | 0.030353 |
| 3 | 0.000028 | 0.001437 | 0.025502 | 0.030467 | 0.035793 |
| 4 | 0.000023 | 0.001501 | 0.021998 | 0.037444 | 0.041056 |
| 5 | 0.000023 | 0.001265 | 0.018536 | 0.022942 | 0.027066 |
| **p50** | **24 µs** | **1.27 ms** | **22.0 ms** | **30.5 ms** | **35.8 ms** |
| **p95** | **28 µs** | **1.50 ms** | **25.5 ms** | **37.4 ms** | **41.1 ms** |

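
The p50/p95 rows can be re-derived from the five raw samples in the appendix. A quick sketch — the nearest-rank convention for p95 is my assumption (with n = 5 it collapses to the maximum):

```python
# Re-derive the p50/p95 rows of the timing table from the raw samples.
import math
import statistics

ttfb  = [0.031569, 0.024190, 0.030467, 0.037444, 0.022942]
total = [0.040531, 0.030353, 0.035793, 0.041056, 0.027066]

def p50(xs):
    return statistics.median(xs)

def p95(xs):
    # nearest-rank percentile: the ceil(0.95 * n)-th smallest value
    return sorted(xs)[math.ceil(0.95 * len(xs)) - 1]

print(f"TTFB  p50={p50(ttfb)*1000:.1f} ms  p95={p95(ttfb)*1000:.1f} ms")
print(f"TOTAL p50={p50(total)*1000:.1f} ms  p95={p95(total)*1000:.1f} ms")
# TTFB  p50=30.5 ms  p95=37.4 ms
# TOTAL p50=35.8 ms  p95=41.1 ms
```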

### Hostname (onyx /etc/hosts → 192.168.0.100)

| | DNS | CONN | TLS | TTFB | TOTAL |
|---|---|---|---|---|---|
| **p50** | 0.34 ms | 1.6 ms | 23.0 ms | 27.5 ms | 33.7 ms |

### Notes

- DNS via `/etc/hosts` adds ~300 µs vs `--resolve`. Negligible.
- TLS handshake is the dominant cost (≥60 % of TTFB). TLS 1.3 with
  `TLS_AES_128_GCM_SHA256`, **2-cert chain depth** (Let's Encrypt R13
  → ISRG Root X1); no avoidable latency there. Connection reuse will
  hide it on subsequent requests within the same browser session.
- **TTFB ≤ 40 ms even on a cold connection** — server-side latency for
  the index.html body itself is fine. The "feels slow" perception is
  **not** in this number; it's in the 28-bundle waterfall after
  index.html.

---

## 3. Compression / cache header table

Probed with `Accept-Encoding: gzip, br, zstd`. Every asset was
served raw.

| Asset | Type | Bytes | Encoding | Cache-Control | ETag |
|-------|------|------:|----------|---------------|------|
| `/web/index.html` | text/html | 65 485 | **none** | **(none)** | yes |
| `/web/runtime.bundle.js?…` | text/js | 49 152 | **none** | **(none)** | yes |
| `/web/main.jellyfin.bundle.js?…` | text/js | 499 108 | **none** | **(none)** | yes |
| `/web/node_modules.@jellyfin.sdk.bundle.js?…` | text/js | 740 699 | **none** | **(none)** | yes |
| `/web/node_modules.@mui.material.bundle.js?…` | text/js | 381 100 | **none** | **(none)** | yes |
| `/web/node_modules.core-js.bundle.js?…` | text/js | 182 469 | **none** | **(none)** | yes |
| `/web/node_modules.react-dom.bundle.js?…` | text/js | 128 970 | **none** | **(none)** | yes |
| `/web/node_modules.@tanstack.query-core.bundle.js?…` | text/js | 101 747 | **none** | **(none)** | yes |
| `/web/node_modules.lodash-es.bundle.js?…` | text/js | 24 604 | **none** | **(none)** | yes |
| `/web/themes/dark/theme.css` | text/css | 8 631 | **none** | **(none)** | yes |
| `/web/manifest.json` | json | 781 | **none** | **(none)** | yes |
| `/web/serviceworker.js` | text/js | 768 | **none** | **(none)** | yes |
| `/web/favicon.ico` | image/x-icon | 6 830 | **none** | **(none)** | yes |
| `/web/touchicon.png` | image/png | 8 515 | **none** | **(none)** | yes |
| `/Items/.../Images/Primary` (cold) | image/jpeg | ~46 000 | **none** | `public` (no max-age) | — |

Verification — index.html was negotiated against four different
`Accept-Encoding` headers. All four returned `content-length: 65485`
and **no** `content-encoding` — confirming Traefik isn't selectively
disabling compression by `User-Agent` or path; the middleware simply
isn't in the chain.

ETag revalidation works correctly: a follow-up
`If-None-Match: "1db3a353daaafa4"` returns `HTTP/2 304` immediately —
so warm load is "fast" only because nothing has changed since cold
load. The browser still pays a round-trip per asset.

---

## 4. Asset cold-load waterfall (top by size)

`/web/index.html` references **28 webpack-emitted JS bundles** (full
list at `/tmp/edge-audit/bundles.txt` during the audit; the file was
generated by parsing `<script src=…>` tags in index.html and discarded
after the report). All 28 share the same query-string version
`?7dc095d8f634f60f309c` — they ARE content-versioned URLs and SHOULD
be cached `immutable`.

| Rank | Bundle | Bytes | Notes |
|---:|---|---:|---|
| 1 | `node_modules.@jellyfin.sdk.bundle.js` | 740 699 | Largest single file. Compresses ~70 %. |
| 2 | `main.jellyfin.bundle.js` | 499 108 | App bundle. Compresses ~70 %. |
| 3 | `node_modules.@mui.material.bundle.js` | 381 100 | MUI components. Compresses ~75 %. |
| 4 | `node_modules.core-js.bundle.js` | 182 469 | Polyfills. Compresses ~75 %. |
| 5 | `node_modules.react-dom.bundle.js` | 128 970 | React DOM. Compresses ~75 %. |
| 6 | `node_modules.@tanstack.query-core.bundle.js` | 101 747 | React-Query. Compresses ~70 %. |
| 7 | `node_modules.jellyfin-apiclient.bundle.js` | 88 025 | Compresses ~70 %. |
| 8 | `node_modules.jquery.bundle.js` | 87 296 | Compresses ~70 %. |
| 9 | `node_modules.axios.bundle.js` | 80 291 | Compresses ~70 %. |
| 10 | `node_modules.date-fns.esm.bundle.js` | 74 309 | Compresses ~70 %. |
| 11 | `node_modules.@remix-run.router.bundle.js` | 72 992 | |
| 12 | `37869.bundle.js` | 70 690 | Lazy chunk. |
| 13 | `runtime.bundle.js` | 49 152 | Webpack runtime. |
| 14 | `node_modules.webcomponents.js.bundle.js` | 39 705 | |
| 15 | `node_modules.@mui.icons-material.bundle.js` | 30 861 | |
| — | (13 more bundles, each 5–30 KB) | ~351 000 | |
| **Total JS** | **28 bundles** | **2 806 173** | **(2.68 MiB raw)** |
| + | `index.html` | 65 485 | |
| + | `theme.css` + assets | ~32 000 | |
| **Cold-load total** | | **~2.76 MiB** | **uncompressed** |

Wall-time measurements from onyx (LAN-direct, sequential):

- **5 top bundles, sequential GET, LAN:** 0.37 s for 1.65 MiB.
- **All 28 bundles, sequential GET, LAN:** 1.51 s for 2.68 MiB.

A real browser uses HTTP/2 multiplexing, so it won't be strictly
sequential, but connection-window and flow-control limits mean WAN
wire-time still scales nearly linearly with total bytes. Compression
alone would cut wire-time ~70 %.

Estimated post-compression total: **~0.82 MiB** (gzip) or **~0.69 MiB**
(brotli). At a 50 Mbps WAN, that's a 200–300 ms cold-load saving
*before* any RTT improvements from cache headers.

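
Back-of-envelope check on those totals — pure wire-serialization time at the assumed 50 Mbps link, ignoring RTT, TCP slow-start, and HTTP/2 framing:

```python
# Wire-serialization time for the cold-load totals at a 50 Mbps WAN link.
MIB = 1024 * 1024
LINK_BPS = 50_000_000  # 50 Mbps

for label, mib in [("raw", 2.76), ("gzip", 0.82), ("brotli", 0.69)]:
    seconds = mib * MIB * 8 / LINK_BPS
    print(f"{label:>6}: {seconds * 1000:.0f} ms on the wire")
# raw: 463 ms, gzip: 138 ms, brotli: 116 ms
```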
---

## 5. ServiceWorker warm-load effectiveness

**Conclusion: the SW does NOT cache app assets.** Verified by reading
`/web/serviceworker.js` (768 B, last modified 2024-11-19 — the
Jellyfin 10.10.3 ship date).

The SW only handles `notificationclick` events (cancel-install /
restart-server actions) and a one-shot `activate` → `clients.claim()`.
There is **no `fetch` handler**, no `install` precache, no asset
caching at all. This matches doc 13 finding 11.

So warm load is doing exactly what the browser HTTP cache + ETag
flow gives us: 28 conditional GETs, each returning a 304 with an empty
body but a full TLS-multiplexed round-trip. With a proper
`Cache-Control: max-age=31536000, immutable` on the hashed URLs,
all 28 of those revalidations would collapse into zero network
traffic on warm load.

---

## 6. Poster image timing

Tested against Rick and Morty series ID
`548035d5e4d36cd2f488900ab612581a`,
`/Items/{id}/Images/Primary?fillHeight=300&fillWidth=200&quality=96`.

| Request | TTFB | TOTAL | Bytes |
|---|---:|---:|---:|
| **Cold (uncached size variant)** | 385 ms | 388 ms | 45 660 |
| Warm 1 | 26 ms | 29 ms | 45 660 |
| Warm 2 | 38 ms | 42 ms | 45 660 |
| Warm 3 | 34 ms | 38 ms | 45 660 |
| Warm 4 | 37 ms | 42 ms | 45 660 |
| **Cold h=400** | — | 351 ms | 79 925 |
| **Cold h=500** | — | 469 ms | 112 168 |
| **Cold h=600** | — | 364 ms | 145 505 |

Response headers:

```
HTTP/2 200
age: 0
cache-control: public              ← no max-age
content-disposition: attachment    ← unusual on a poster (forces 'save')
content-type: image/jpeg
last-modified: <request time>      ← unhelpful for caching
vary: Accept
```

Two issues here:

- **`Cache-Control: public` with no `max-age`** means the browser
  applies heuristic freshness (10 % of the last-modified age = 0 s,
  since last-modified equals the response time). Effectively uncached.
  Every navigation back to the home page re-fetches all posters.
- **Server-side image transcode is the dominant cost.** Jellyfin
  generates the `fillHeight=300&fillWidth=200&q=96` variant on demand
  from the source poster image, then caches it in `/cache/images/`.
  `age: 0` on the response confirms this was a fresh generation.
  Doc 13 finding 26 puts the on-disk image cache at 15 MB total —
  small enough that cache eviction may be culling recent variants.

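
The first bullet is easy to see numerically. A minimal sketch of the common 10 % freshness heuristic (RFC 9111 permits heuristic freshness but doesn't mandate the fraction; 10 % of `Date - Last-Modified` is the typical browser choice):

```python
# Heuristic freshness: with no max-age, browsers typically treat a
# response as fresh for ~10% of (Date - Last-Modified). Jellyfin stamps
# Last-Modified with the request time, so that interval — and therefore
# the freshness lifetime — is zero.
from datetime import datetime, timedelta

def heuristic_lifetime(date: datetime, last_modified: datetime) -> timedelta:
    return (date - last_modified) * 0.1

now = datetime(2026, 5, 8, 12, 0)
print(heuristic_lifetime(now, now))                       # poster case: zero lifetime
print(heuristic_lifetime(now, now - timedelta(days=10)))  # a 10-day-old asset: 1 day
```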
Per-poster cold cost: 350–470 ms. Twenty posters at unique
`fillHeight` × `fillWidth` × `quality` variants on the first load of
"Recently Added" total **~7 s** if the browser drops to
single-threaded poster fetches (HTTP/2 multiplexes, so the true cost
is CPU-bound on the server side). Doc 13 finding 02 (no GPU
transcode, 12-core box already at load 11.4) means even this is
software-rendered.

`content-disposition: attachment` on an image fetched into an
`<img>` tag doesn't actually force a download (the browser ignores
the disposition for media references), but it's a Jellyfin-side
oddity worth noting.

---

## 7. Traefik request-log latency analysis

`docker logs traefik --since 6h | grep jellyfin@docker` — 116
requests total, 78 of them at 0 ms (fast 401/404 rejections; the
access log only records status 400–599, see §12).

Latency histogram (ms suffix on each log line):

| Bucket | Count |
|---|---:|
| 0 ms | 78 |
| 1 ms | 8 |
| 3 ms | 1 |
| 7 ms | 1 |
| 18–46 ms | 4 |
| 92–294 ms | 5 |
| 346–648 ms | 4 |
| 1.1–2.1 s | 3 |
| 4.9–9.5 s | 4 |

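
The buckets were built from the trailing `<N>ms` field of each access-log line. A reproducible sketch — the `histo` helper name is mine and the demo lines are synthetic:

```shell
# Count occurrences of each trailing "<N>ms" latency field, as used to
# build the table above. Reads log lines on stdin, so it can be fed
# from: docker logs traefik --since 6h | grep 'jellyfin@docker'
histo() {
  grep -oE '[0-9]+ms$' | sort -n | uniq -c | awk '{print $1, $2}'
}

# demo on three synthetic log lines
printf 'GET /web/a 404 0ms\nGET /web/b 404 0ms\nGET /videos/x 500 4931ms\n' | histo
```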
**Every entry above ~50 ms is a `/videos/.../hls1/main/*.mp4`
HLS-segment GET, not a `/web/*` static asset.** Decoded request
URIs show the slow ones are AV1 + HEVC transcode requests with
`VideoBitrate=362–547 Mbit` and a 500/499 final status — exactly the
pattern doc 13 finding 03 calls out (CPU-only transcode + no
throttling). The edge layer is clean: every `/web/*` request that
appeared in the 6-hour window completed in 0–7 ms wall.

Status code distribution for `jellyfin@docker` (6 h):

| Code | Count |
|---:|---:|
| 200 | (not logged — filtered out by accessLog statusCodes 400–599) |
| 400 | 1 |
| 401 | 7 |
| 404 | 68 |
| 405 | 8 |
| 499 | 15 |
| 500 | 8 |
| 502 | 1 |

The 68 × 404 are mostly `Cineplex/CSS/icon` references from the
bundled theme @import-ing assets that Jellyfin doesn't ship —
cosmetic, but each 404 is a wasted RTT on every cold load (the
browser fetches the referenced URL, gets a 404, and retries on the
next page nav). Worth a separate look in coordination with doc 09
(Cineplex theme).

---

## 8. Traefik middleware audit

### Static config (`/opt/docker/traefik/traefik.yml`)

```yaml
entryPoints:
  websecure:
    address: ":443"
    http:
      middlewares:
        - security-headers@file
        - rate-limit@file
```

### Jellyfin router (`/opt/docker/jellyfin/docker-compose.yml`)

```yaml
labels:
  - "traefik.http.routers.jellyfin.middlewares=security-headers@file"
```

### Effective middleware chain at a `/web/*` request

1. `security-headers@file` (entrypoint) — header rewrites, no body
   processing, ~zero CPU.
2. `rate-limit@file` (entrypoint) — token-bucket avg=100 burst=200
   period=1s. Pure counter, ~zero CPU. **Not** a regex chain. **Not**
   doing CPU-significant work.
3. `security-headers@file` (router, **duplicate**) — applied a second
   time to the response. Idempotent (the header overwrite is a no-op
   when the value is already set), but **redundant** and a small CPU
   waste per request. Worth deduping.

### What's missing

- **`compress` middleware.** Traefik supports it with a one-liner:

  ```yaml
  middlewares:
    compress:
      compress: {}
  ```

  Defaults to gzip + brotli, sizes ≥ 1024 B, and sane `Accept-Encoding`
  negotiation. Not present anywhere.
- **No `headers.customResponseHeaders.Cache-Control` override** on
  the Jellyfin router — Traefik would let us inject
  `Cache-Control: public, max-age=31536000, immutable` for
  `/web/*.bundle.js?*` requests via a dedicated router plus a
  `headers` middleware, or (cleaner) Jellyfin can be configured to
  send the right headers itself; this is config, not architecture.

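
For the Traefik-side variant, a hedged sketch of the middleware block (the `immutable-cache` name is hypothetical; scoping it to only the hashed assets via a dedicated router is the hard part — see fix #2's risk note in §11):

```yaml
# /opt/docker/traefik/config/dynamic.yml — sketch, not applied
http:
  middlewares:
    immutable-cache:            # hypothetical name
      headers:
        customResponseHeaders:
          Cache-Control: "public, max-age=31536000, immutable"
```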
### Traefik middleware chain on other Jellyfin paths

The `no-guest@file` allowlist seen in dynamic.yml is **not** referenced
by the Jellyfin router (per doc 09 §1.2 it was intentionally dropped
when WAN exposure was added). That matches expectation; not an edge
performance issue.

The `headscale-deny-leaks` and `signup-strict` middlewares are
defined but only referenced by other routers. No effect on Jellyfin.

---

## 9. DNS / hairpin / MTU

| Probe | Result | Verdict |
|---|---|---|
| Pi-hole DNS lookup `dig arrflix.s8n.ru @192.168.0.1` | returns **`82.31.156.86` (WAN)** | **Y — split-horizon missing.** Onyx's `/etc/hosts` pin saves it; any LAN client without that entry hairpins through the router. |
| Onyx hairpin to WAN IP, full TTFB | 33–43 ms | **G — hairpin works, no NAT-loopback latency penalty.** |
| LAN MTU `ping -M do -s 1472 -c 3` | 1480/1480/1480, 1.17–1.75 ms | **G — full 1500 MTU, no fragmentation, no PMTUD penalty.** |
| `--resolve` LAN-direct vs hostname | DNS adds 300 µs | **G — negligible.** |

The Pi-hole gap is a doc-09-related exposure decision: arrflix.s8n.ru
has public DNS on Gandi pointing at the WAN IP and no Pi-hole local
override. For a LAN-first deploy you'd add a local DNS rewrite
`arrflix.s8n.ru → 192.168.0.100` on Pi-hole. Per memory note
`feedback_s8n_hosts_override.md`, this is a known pattern (an
`/etc/hosts` pin on each device works, but doesn't scale to phones).

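
On Pi-hole v5 the web UI's "Local DNS Records" page writes a hosts-format file; a sketch of the one-line rewrite (the file path assumes v5 — newer releases move local records into the FTL config):

```
# /etc/pihole/custom.list — hosts-format local DNS record
192.168.0.100 arrflix.s8n.ru
```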
---

## 10. HTTP/2, HTTP/3, TLS

- **HTTP/2:** confirmed (`HTTP/2 200` response, multiplexing
  available).
- **HTTP/3 (QUIC):** **not enabled.** The `Alt-Svc` header is absent
  on every probe. (My local libcurl doesn't support `--http3` so I
  can't client-test, but the lack of `Alt-Svc` advertises that the
  server doesn't speak QUIC.) Traefik ≥ 2.8 supports HTTP/3 via an
  `entryPoints.websecure.http3` block (experimental before v3); it is
  not enabled in `traefik.yml`. **Y — would help WAN clients on lossy
  links** (mobile data, café WiFi); near-zero benefit on LAN.
- **TLS chain:** 2 certs (leaf + LE R13 intermediate) → ISRG Root X1
  in the client trust store. Chain length is minimal; not contributing
  to handshake latency.
- **TLS version:** 1.3 with an AEAD cipher (`TLS_AES_128_GCM_SHA256`).
- **`sniStrict: true`** in dynamic.yml's `tls.options.default`. Correct.

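
A sketch of the entrypoint change (treat the exact shape as an assumption against the deployed Traefik version; v2.8–v2.x additionally needs the `experimental.http3` static-config flag, v3 drops it):

```yaml
# /opt/docker/traefik/traefik.yml — sketch, not applied
entryPoints:
  websecure:
    address: ":443"
    http3: {}   # also requires UDP 443 open on the host and router
```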
---

## 11. Concrete remediation list (ranked by impact-per-effort)

| # | Fix | Effort | Impact | Risk |
|---:|---|:-:|---|---|
| **1** | **Add `compress@file` middleware** in `/opt/docker/traefik/config/dynamic.yml`: `compress: {}` under `http.middlewares.compress`. Reference it from the Jellyfin router via a `traefik.http.routers.jellyfin.middlewares=security-headers@file,compress@file` label edit in `/opt/docker/jellyfin/docker-compose.yml`. | **S** (5 min) | **~70 % cold-load wire reduction** (2.74 MiB → ~0.82 MiB). Lowers TTI on every single first visit. | Low — Traefik's `compress` is a standard middleware, gzip+br; the content-type allow-list does the right thing for `application/javascript` + `text/html` + `text/css`. Will not compress `image/jpeg`. |
| **2** | **Add `Cache-Control: public, max-age=31536000, immutable` for `/web/*.bundle.js?*` and `/web/*.css?*` requests.** Cleanest path is a Traefik `headers` middleware with `customResponseHeaders.Cache-Control` on a dedicated router scoped to the hashed assets (e.g. `PathPrefix(\`/web/\`)` plus a query-string match) — but Jellyfin can also be patched at the source if there's appetite. | S–M | Eliminates 28 × per-page-nav round-trips for warm load. Saves ~28 RTTs (~1.5 s on a 50-ms WAN link, ~0 on LAN). | Medium — must scope ONLY to hashed URLs; if `Cache-Control: immutable` is applied to `index.html` you brick the next deploy until users force-reload. |
| **3** | **Enable HTTP/3 / QUIC.** Add `entryPoints.websecure.http3 = {}` to `traefik.yml`, expose UDP 443 on the host, and add an `Alt-Svc: h3=":443"; ma=86400` header (Traefik does this automatically once the HTTP/3 entrypoint is on). | M | Marginal on LAN, real on lossy WAN (3G, café WiFi). Cuts the TLS handshake to 1-RTT. | Low — Traefik HTTP/3 has been stable since v3.0; coexists with H/2. Need to open UDP 443 on the nullstone firewall + router port-forward. |
| **4** | **Tighten poster image cache.** Either set `Cache-Control: public, max-age=86400` on `/Items/*/Images/Primary` responses (a Jellyfin-side web-server config patch), or put a Traefik-level `headers.customResponseHeaders.Cache-Control` on a router scoped to the `/Items/…/Images/…` paths. Even 1 hour of caching collapses the poster-grid re-fetch on home-page bounce-back. | S–M | ~7 s saved on home-page revisit when posters were already fetched. | Low — posters are content-addressed by `?fillHeight=…&quality=…`; safe to cache. |
| **5** | **Dedupe the security-headers middleware.** Remove the entrypoint-level `security-headers@file` OR remove it from each per-router label. (Cleanest: keep it at entrypoint level, drop it from labels.) | S | Tiny (microseconds per request). Cleanup, not perf. | Low. |
| **6** | **Add a Pi-hole local DNS rewrite for `arrflix.s8n.ru` → `192.168.0.100`.** Memory note `feedback_s8n_hosts_override.md` already covers this pattern. Onyx `/etc/hosts` works but doesn't scale to phones / friends' devices. | S | Stops LAN clients hairpinning through the router on every fetch. Saves 1× NAT-loopback round-trip per TCP connection (~2 ms — small but free). | Low. |
| **7** | **Investigate the 68 × 404 in 6 h on `/web/*`.** Likely Cineplex theme @import or icon references with bad paths. Each 404 is a wasted RTT on cold load. | S | Small but cumulative on cold load. | Low — read-only investigation first. |
| **8** | **Strip `content-disposition: attachment` on image responses.** Jellyfin emits this on every `/Images/Primary` GET. Browsers ignore it for `<img>` references, but it's hostile if anyone right-clicks "open image in new tab". | S | Cosmetic. | Low. |

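
Fix #1 end to end, as a sketch (not applied; the names match the remediation row above):

```yaml
# /opt/docker/traefik/config/dynamic.yml — add the middleware
http:
  middlewares:
    compress:
      compress: {}

# /opt/docker/jellyfin/docker-compose.yml — chain it on the router label:
#   - "traefik.http.routers.jellyfin.middlewares=security-headers@file,compress@file"
```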
### Recommended fix order

The order **#1 → #2 → #3** is the entire cold-load story. **#1
alone** turns "kinda slow" into "fine" for 90 % of the perceived
latency on first load. **#2** turns the second page-nav into
"instant" by eliminating the 28-asset revalidation tax. **#3** is the
WAN-optimist nice-to-have; do it once mobile clients matter.

Out of scope for this audit but worth noting from doc 13: the GPU
transcode re-enable (#02 there) is the real win for *playback*
latency. Cold-load and playback are separate paths; both need
attention.

---

## 12. Out of scope (audited and found healthy)

- **TLS handshake latency** (22–25 ms LAN, normal for a fresh TLS 1.3
  handshake; reuse hides it).
- **Cert chain depth** (2-cert chain, R13 intermediate).
- **MTU** (1500, no fragmentation).
- **HTTP/2** (working, multiplexed).
- **DNS lookup** (300 µs via /etc/hosts; 20–160 ms first time via
  Pi-hole, cached after).
- **Hairpin NAT** (works, no extra latency).
- **`rate-limit@file` middleware** (token-bucket, ~zero overhead).
- **Sniff/CSP/STS/frame headers** — set correctly, no perf cost.
- **ServiceWorker** (notification-only; neither perf-positive nor
  perf-negative).
- **Traefik access log filter** (statusCodes 400–599 only — does NOT
  log the 200 OK responses that dominate `/web/*`; the latency
  histogram in §7 is therefore a 4xx/5xx-only sample, not full
  traffic. That sample is still conclusive for edge analysis because
  all the slow entries are HLS transcode failures, not edge problems).

---

## Appendix — raw evidence

### Curl timing (LAN-direct, 5 samples)

```
DNS=0.000024 CONN=0.001225 TLS=0.022960 TTFB=0.031569 TOTAL=0.040531
DNS=0.000024 CONN=0.001217 TLS=0.020182 TTFB=0.024190 TOTAL=0.030353
DNS=0.000028 CONN=0.001437 TLS=0.025502 TTFB=0.030467 TOTAL=0.035793
DNS=0.000023 CONN=0.001501 TLS=0.021998 TTFB=0.037444 TOTAL=0.041056
DNS=0.000023 CONN=0.001265 TLS=0.018536 TTFB=0.022942 TOTAL=0.027066
```

### Compression negotiation matrix

```
Accept-Encoding: br      → content-length: 65485, no content-encoding
Accept-Encoding: gzip    → content-length: 65485, no content-encoding
Accept-Encoding: (empty) → content-length: 65485, no content-encoding
Accept-Encoding: gzip,deflate,br,zstd --compressed → content-length: 65485, no content-encoding
```

### TLS chain

```
depth=2 C=US, O=Internet Security Research Group, CN=ISRG Root X1
depth=1 C=US, O=Let's Encrypt, CN=R13
depth=0 CN=arrflix.s8n.ru
Verification: OK
Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
```

### ETag-conditional revalidation

```
First fetch:   HTTP/2 200, etag "1db3a353daaafa4", content-length 499108
If-None-Match: HTTP/2 304, etag "1db3a353daaafa4", body empty
```

### Bundle inventory (28 bundles, total 2 806 173 bytes)

Top 15 by size — see §4 table. Full list reproducible from
`curl -s https://arrflix.s8n.ru/web/index.html | grep -oE 'src="[^"]*\.bundle\.js[^"]*"'`.

### Poster image fetch (5 samples — first cold, rest warm)

```
TTFB=0.385230s TOTAL=0.388290s SIZE=45660b ← cold (server transcode)
TTFB=0.025961s TOTAL=0.028951s SIZE=45660b
TTFB=0.037838s TOTAL=0.041724s SIZE=45660b
TTFB=0.034244s TOTAL=0.038364s SIZE=45660b
TTFB=0.036687s TOTAL=0.041616s SIZE=45660b
```

### Traefik static config (entrypoints)

```yaml
websecure:
  address: ":443"
  http:
    middlewares:
      - security-headers@file
      - rate-limit@file
```

### Jellyfin router labels (compose)

```yaml
"traefik.http.routers.jellyfin.middlewares=security-headers@file"
"traefik.http.services.jellyfin.loadbalancer.server.port=8096"
```

### MTU + ping

```
PING 192.168.0.100 (192.168.0.100) 1472(1500) bytes of data
1480 bytes from 192.168.0.100: icmp_seq=1 ttl=64 time=1.66 ms
1480 bytes from 192.168.0.100: icmp_seq=2 ttl=64 time=1.75 ms
1480 bytes from 192.168.0.100: icmp_seq=3 ttl=64 time=1.17 ms
0 % packet loss, rtt min/avg/max/mdev = 1.171/1.524/1.745/0.252 ms
```

### Pi-hole DNS resolution

```
$ dig +short arrflix.s8n.ru @192.168.0.1
82.31.156.86   ← public WAN IP, not the LAN 192.168.0.100
```

### Traefik request-log latency histogram (jellyfin@docker, 6 h, 4xx/5xx only — 200s filtered out)

```
78 0ms
 8 1ms
 1 3ms
 1 7ms
 1 18ms
 1 29ms
 1 39ms
 1 46ms
 1 92ms
 1 175ms
 1 192ms
 1 209ms
 1 222ms
 1 274ms
 1 294ms
 1 346ms
 1 391ms
 1 648ms
 1 1168ms
 1 1256ms
 1 2140ms
 1 4931ms
 1 8118ms
 1 9543ms
```

All entries >50 ms are `/videos/.../hls1/main/*.mp4` — HLS transcode
requests with 500/499 status, AV1+HEVC at 360–550 Mbit source. The
edge is not the bottleneck on those; CPU transcode is (doc 13 #02,
#03).

---

## Sign-off

- Audit: 2026-05-08, read-only, ~30 min wall.
- No fixes applied. No state mutated. No container restart. No
  Traefik reload. No header injected. Admin token used only for
  read-side `/Items` and `/Items/.../Images` probes.
- Next audit due: **after fix #1 ships**, to confirm the gzip/brotli
  ratio on the actual deployed config and re-measure cold load.