NETWORK AND STORAGE

Storage policy is part of the pipeline architecture.

Vetrra can operate against local disks, attached arrays, and network-mounted libraries, but the path design has to be explicit. A bad storage layout will look like an application bug even when the real problem is a sloppy boundary between intake, staging, QC, and deploy. A minimal layout separates those four roles into dedicated directories:

  • intake/ for source files arriving from a downloader or manual drop.
  • staging/ for intermediate encode and mux work.
  • qc/ for reports, review artifacts, and failed-job inspection.
  • deploy/ for final files presented to Plex or other downstream scanners.
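The four roles above can be made deterministic with a single helper. This is a hypothetical sketch: the role names come from the layout above, but `PIPELINE_ROOT` and `pipeline_paths` are illustrative, not part of Vetrra itself.

```python
import tempfile
from pathlib import Path

# Illustrative root; point this at your actual pipeline volume.
PIPELINE_ROOT = Path("/srv/vetrra")
ROLES = ("intake", "staging", "qc", "deploy")

def pipeline_paths(root: Path = PIPELINE_ROOT) -> dict[str, Path]:
    """Return one deterministic directory per pipeline role."""
    return {role: root / role for role in ROLES}

# Demo against a throwaway root so the sketch is safe to run anywhere.
demo_root = Path(tempfile.mkdtemp())
for p in pipeline_paths(demo_root).values():
    p.mkdir(exist_ok=True)
```

Because every stage derives its paths from one function, a misrouted file is a code bug you can grep for, not a configuration mystery.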

Network share guidance

  • Prefer stable mapped paths or UNC paths that the desktop process can resolve consistently.
  • Do not place temporary encode work on intermittent network shares when local scratch space is available.
  • Validate write permissions before enabling unattended deployment.
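The permission check in the last bullet is only trustworthy if it performs a real write: `os.access()` can report stale or misleading results on network filesystems. A minimal probe, assuming nothing beyond the standard library:

```python
import tempfile
from pathlib import Path

def can_write(target: Path) -> bool:
    """Probe a share by actually creating and deleting a file there.

    A real create-and-delete is the only reliable test on network
    mounts; permission-bit checks can lie. Function name is illustrative.
    """
    try:
        # NamedTemporaryFile creates the file on entry and removes it on exit.
        with tempfile.NamedTemporaryFile(dir=target, prefix=".vetrra-probe-"):
            pass
        return True
    except OSError:
        return False
```

Run this against the deploy target at startup and refuse to enter unattended mode if it fails, rather than discovering the problem mid-job.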

Throughput and reliability

  • Large remux or transcode workloads should use local scratch storage whenever possible.
  • Reserve network paths for source intake and final deploy if your array throughput is inconsistent.
  • Keep poster OCR caches and metadata writes on storage that does not stall under small-file churn.
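Routing heavy work to local scratch can be a one-line decision at job start. A sketch, assuming a hypothetical `pick_scratch` helper and a tunable free-space threshold; the 50 GiB figure is an assumption, not a Vetrra default:

```python
import shutil
from pathlib import Path

# Hypothetical headroom requirement; tune for your typical remux size.
MIN_FREE_BYTES = 50 * 1024**3

def pick_scratch(local: Path, network: Path,
                 min_free: int = MIN_FREE_BYTES) -> Path:
    """Prefer local scratch for encode work; fall back to the network
    path only when the local volume is missing or too full."""
    if local.exists() and shutil.disk_usage(local).free >= min_free:
        return local
    return network
```

Deciding once per job, rather than per file, keeps all intermediates for a job on one volume and avoids cross-mount renames later.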

Failure containment

  • If deployment fails, the job should still be recoverable from staging or QC.
  • If the network disappears, the pipeline should fail visibly rather than half-writing final library state.
  • If a library scanner reacts too early, tighten the boundary between QC and deploy instead of weakening file validation.
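All three containment rules fall out of one deployment pattern: copy into a hidden temporary name inside deploy/, then publish with an atomic rename. `os.replace()` is atomic on a single filesystem, so a scanner never observes a half-written file; a vanished network share raises loudly; and because the staged file is copied rather than moved, the job stays recoverable. Names here are illustrative.

```python
import os
import shutil
from pathlib import Path

def deploy_file(staged: Path, deploy_dir: Path) -> Path:
    """Publish a finished file into deploy/ without half-written state."""
    final = deploy_dir / staged.name
    tmp = deploy_dir / f".{staged.name}.partial"
    shutil.copy2(staged, tmp)   # fails visibly if the share is gone
    os.replace(tmp, final)      # atomic publish on the same filesystem
    return final                # staged copy is left intact for recovery
```

The `.partial` prefix also gives the QC/deploy boundary teeth: a scanner configured to ignore dotfiles cannot react before the rename completes.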

Operator rule

Treat every path as a contract. If the path layout is deterministic, the rest of the pipeline becomes much easier to reason about and much easier to debug.
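Treating paths as a contract implies the contract can be checked. A small validator, sketched under the assumption that the four role directories described earlier live under one root; `check_layout` is an illustrative name:

```python
from pathlib import Path

REQUIRED_ROLES = ("intake", "staging", "qc", "deploy")

def check_layout(root: Path) -> list[str]:
    """Return contract violations for the pipeline layout; empty means OK."""
    problems = []
    for name in REQUIRED_ROLES:
        d = root / name
        if not d.is_dir():
            problems.append(f"missing directory: {d}")
    # Symlinks or mapped drives must not collapse two roles into one place.
    resolved = {(root / n).resolve() for n in REQUIRED_ROLES}
    if len(resolved) != len(REQUIRED_ROLES):
        problems.append("two roles resolve to the same directory")
    return problems
```

Running this at startup turns a silently broken layout into an explicit, debuggable failure, which is exactly what the operator rule asks for.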