
Software-defined broadcast production
Vindral Composer replaces the traditional broadcast production stack with a single GPU-accelerated application. Compositing, audio mixing, graphics, and switching run on standard server hardware, controlled through REST and WebSocket APIs.
Built for remote production
Modern broadcast separates the production team from the venue. Cameras and microphones stay on-site. Production runs in a datacenter.
Composer is built for this setup. It subscribes to live SRT feeds, processes them on GPU servers, and delivers outputs as SRT, RTMP, NDI, or SDI.
Operators work remotely using web tools, Bitfocus Companion, or custom interfaces built on HTTP and WebSocket APIs.
Composer runs headlessly. Each instance is assigned to a single production. Configuration, management, and monitoring are handled through the API.
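Because every operator action maps to an HTTP or WebSocket message, a control tool is ultimately just code that builds and sends commands. As a minimal sketch only — the host, endpoint path, and payload fields below are hypothetical illustrations, not Composer's documented API — a scene cut from a custom interface might be constructed like this:

```python
import json

COMPOSER_URL = "http://composer.example.internal:8080"  # hypothetical host/port

def build_command(action: str, **params) -> dict:
    """Build a JSON command payload for a hypothetical control endpoint."""
    return {"action": action, "params": params}

def cut_to_scene(scene_id: str) -> tuple[str, bytes]:
    """Return the (url, body) a control tool would POST to switch scenes."""
    payload = build_command("cut", scene=scene_id)
    return f"{COMPOSER_URL}/api/v1/switcher", json.dumps(payload).encode()

url, body = cut_to_scene("camera-2")
```

The same pattern applies to any API-driven action: the operator's interface serializes intent as a message, and the instance executes it.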

What broadcast teams deal with today
Remote production increases complexity across the workflow.
Production teams must manage:
On-site overhead
Equipment, travel, and setup increase cost for every production.
Multi-vendor stacks
Separate tools for switching, audio, graphics, encoding, and delivery create integration work and unclear ownership.
Audio complexity
Mix-minus, routing, and processing are often handled outside the main production pipeline.
Protocol fragmentation
SRT, NDI, SDI, RTMP, and other formats coexist. Each conversion adds configuration and risk.
Remote control
Operators need reliable control over production from a distance.
Signal reliability
IP contribution requires handling of packet loss, latency variation, and failover.
Composer handles this within a single application running in your datacenter.
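To make the failover challenge concrete, the logic reduces to a policy decision over live link statistics: stay on the primary contribution feed unless its measured packet loss crosses a threshold and the backup is actually healthier. The sketch below is an illustrative policy, not Composer's internal implementation:

```python
def pick_feed(loss_pct: dict[str, float], primary: str, backup: str,
              max_loss_pct: float = 2.0) -> str:
    """Choose which contribution feed to use, given per-feed packet
    loss percentages. Prefer the primary unless it is degraded and
    the backup is in better shape."""
    if loss_pct[primary] <= max_loss_pct or loss_pct[backup] >= loss_pct[primary]:
        return primary
    return backup

# Primary healthy: stay on it.
choice_a = pick_feed({"srt-a": 0.1, "srt-b": 0.0}, "srt-a", "srt-b")
# Primary degraded, backup clean: switch.
choice_b = pick_feed({"srt-a": 5.0, "srt-b": 0.3}, "srt-a", "srt-b")
```

Keeping the switch condition asymmetric (only leave the primary when the backup is strictly better) avoids flapping between two equally lossy feeds.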
SVT and Milano Cortina
In production with Composer
Background
Sveriges Television (SVT) used Composer as the core video and audio processing engine for its coverage of the 2026 Winter Games in Milano Cortina.
The deployment was part of Project NEO, SVT’s transition from hardware-based broadcast to software-defined remote production.
Technical implementation
Live SRT signals from Italy were routed to a datacenter in Stockholm, where multiple Composer instances ran in parallel.
Operators controlled each instance using web tools, Bitfocus Companion, and a custom web-based audio mixer built on Composer’s WebSocket API.
SVT’s orchestration system handled configuration and monitoring through the API without manual intervention.

Each Composer instance managed the full production chain.
“Vindral’s API-driven approach gave our teams direct control over the production workflow. Being able to produce where we want to and in the way we want to, without the need for specialist engineering at every step, is exactly the kind of outcome NEO was designed to deliver.”
One application. The full production chain.
Composer replaces what would traditionally require multiple broadcast systems.
Audio
Full mixer with channel strips, AUX/BUS routing, mix-minus, and EBU R128 loudness metering.
Audio processing includes EQ, compression, limiting, gating, delay, and Crystal Speech AI noise cancellation.
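Two of the audio features above are easy to state precisely. A mix-minus bus is the full mix with one contributor's own channel removed, so a remote guest never hears themselves echoed back; and loudness metering reduces to a logarithm of mean-square signal power. The sketch below illustrates both ideas in simplified form — it skips the K-weighting filter and gating that EBU R128 / ITU-R BS.1770 require, and it is not Composer's mixer code:

```python
import math

def mix_minus(levels: dict[str, float], exclude: str) -> float:
    """Sum all channel levels except the excluded contributor's own
    feed, so that contributor's return mix contains no echo."""
    return sum(v for ch, v in levels.items() if ch != exclude)

def momentary_loudness(samples: list[float]) -> float:
    """Approximate loudness in LUFS from mean-square power, using the
    BS.1770 constant but omitting K-weighting and gating for brevity."""
    ms = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10 * math.log10(ms)

channels = {"host": 0.8, "guest": 0.6, "music": 0.3}
guest_return = mix_minus(channels, "guest")  # host + music only
```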
Video compositing
GPU-based compositing with keying, layering, and effects.
Supports multi-layer scenes and real-time video processing.
Graphics
HTML5-based graphics rendered via Chromium.
Used for lower thirds, scoreboards, timers, and data-driven overlays, controlled through the API.
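Driving an HTML5 overlay through an API means a control tool sends a structured message and the renderer updates the page. As an illustration — the message type, layer id, and field names here are hypothetical, not Composer's documented graphics protocol — updating a lower third might look like:

```python
import json

def lower_third_update(name: str, title: str) -> str:
    """Serialize a hypothetical WebSocket message a control tool
    might send to update the text fields of a lower-third layer."""
    return json.dumps({
        "type": "graphics.update",   # hypothetical message type
        "target": "lower-third",     # hypothetical layer id
        "fields": {"name": name, "title": title},
    })

msg = lower_third_update("A. Operator", "Technical Director")
```

Because the overlay itself is an HTML5 page, the same mechanism can feed scoreboards or timers from any data source that can emit JSON.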
Switching
Cut and crossfade transitions, multiviewer with up to 13 sources, overlay layers, and fade-to-black.
API control
Integration
All functionality is exposed through HTTP REST and WebSocket APIs.
Composer integrates into existing broadcast workflows and can be controlled through custom systems, third-party tools, or standard interfaces like Bitfocus Companion.
Automation
A scripting engine supports automation, callbacks, and custom logic.
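The value of callbacks is easiest to see in a concrete rule. The sketch below is a generic event-callback pattern of the kind a scripting engine enables — the event names and the rule are hypothetical, not Composer's actual scripting API:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal callback registry: scripts register handlers for named
    events, and emitted events invoke every registered handler."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, payload: dict) -> None:
        for handler in self._handlers[event]:
            handler(payload)

bus = EventBus()
actions: list[str] = []
# Hypothetical automation rule: when an input drops, cut to a standby scene.
bus.on("input.lost", lambda p: actions.append(f"cut:standby ({p['input']})"))
bus.emit("input.lost", {"input": "srt-cam-1"})
```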
Monitoring
Prometheus metrics are available for monitoring.
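Prometheus metrics are exposed in a plain text format that any tool can scrape and parse. The parser below is a minimal sketch that handles label-free samples only, and the metric names in the example are hypothetical placeholders, not Composer's actual metric names:

```python
def parse_prometheus(text: str) -> dict[str, float]:
    """Parse simple Prometheus text-format samples (no labels or
    timestamps), skipping # HELP and # TYPE comment lines."""
    metrics: dict[str, float] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP composer_dropped_frames_total Frames dropped since start
# TYPE composer_dropped_frames_total counter
composer_dropped_frames_total 0
composer_gpu_utilization 0.42
"""
stats = parse_prometheus(sample)
```

In a real deployment a Prometheus server scrapes this endpoint on an interval and handles alerting, so a parser like this is only needed for ad-hoc tooling.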
API capabilities
Proven in production
SVT built orchestration, control interfaces, and audio tooling on top of Composer’s APIs without modifying the core system.
Production at scale
Runs in your datacenter
Available on Windows and Linux, with a desktop mode for setup and testing and a headless mode for production. The same projects run in both environments.
Parallel production
Multiple Composer instances can run in parallel, each handling an independent production.
GPU-native processing
Compositing, encoding, and effects run on GPU, avoiding unnecessary CPU transfers and reducing latency.
Get in touch
Talk to us about your production workflow, or explore what Composer can do with a hands-on demo.