Composer R1 2026: Professional audio, SRT, and production-grade workflow for live broadcast

Proven in production at Sveriges Television’s Olympics coverage – professional audio mixer, SRT support, and 20+ new GPU operators for real-time compositing.

Vindral Composer R1 2026 (“Daytona”) is now available. This release introduces professional broadcast audio processing, full SRT input and output support, an expanded library of GPU-accelerated video operators, and significant workflow improvements—all developed to meet the demands of modern remote production environments.

Proven at scale: SVT’s Olympics production

This release reflects real-world production requirements from Sveriges Television’s Project NEO, where Composer is being used for coverage of the 2026 Winter Olympics. In this deployment, Composer processes 25 SRT signals from Italy in a datacenter environment in Sweden, handling the full production chain: audio routing with mix-minus submixes for commentators, dynamics processing, video compositing, alpha overlays, HTML graphics, and multi-format output—all controlled through APIs using Bitfocus Companion, Stream Decks, and custom web-based tools.

Each Composer instance outputs SRT streams at 1080p50 with 16 channels of audio over primary and backup paths, along with RTMP feeds for multiview monitoring and program output. The entire workflow operates remotely, with producers controlling production from wherever they are—demonstrating Composer’s capability for large-scale REMI (Remote Integration Model) deployments.

Professional audio inside the compositing engine

A complete broadcast-grade audio mixer is now integrated directly into Composer, eliminating the need for a separate audio infrastructure. The mixer provides 8-channel routing per channel strip, mix-minus submixes for commentator and reporter workflows, pre-fader and post-fader monitoring, and EBU R128 loudness metering with an FFT frequency analyzer.
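A mix-minus feed is the full program mix with one source removed, so a commentator never hears their own voice returned with delay. The arithmetic behind such a submix can be sketched as follows (source names and levels are illustrative, not taken from Composer):

```python
# Conceptual sketch of a mix-minus bus: each commentator's return feed
# is the sum of all sources except their own channel. Levels are linear
# gains; names and values are invented for illustration.
def mix_minus(levels: dict[str, float], exclude: str) -> float:
    """Sum all source levels except the excluded one."""
    return sum(v for k, v in levels.items() if k != exclude)

sources = {"commentator_a": 0.8, "commentator_b": 0.7, "ambience": 0.5}
feed_for_a = mix_minus(sources, "commentator_a")  # hears b + ambience only
```

In a real mixer this happens per bus and per audio frame; the sketch only shows the defining property of the feed.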

Integrated multichannel audio mixer

Audio processing includes parametric EQ, compressor, limiter, noise gate, low-cut filter, and adjustable delay. Input trim allows gain adjustment before the fader without changing fader position—essential for consistent level management across multiple sources.
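Trim operates in the gain stage before the fader, so it is usually expressed in decibels. The standard amplitude relationship between dB and linear gain, which any such trim stage applies, looks like this:

```python
import math

def db_to_linear(db: float) -> float:
    # Amplitude (voltage) gain: 20 dB per factor of 10.
    return 10 ** (db / 20)

def apply_trim(sample: float, trim_db: float) -> float:
    # Scale a sample by the trim gain; the fader stage would follow.
    return sample * db_to_linear(trim_db)
```

Because trim and fader are separate stages, matching input levels with trim leaves the fader free for mixing moves.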

The entire mixer is controllable through the WebSocket API, making it possible to build custom web-based audio mixing interfaces tailored to specific production workflows. Performance improvements deliver up to 200% faster audio processing compared to the previous version.
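A custom mixing surface built on that API would send JSON control messages over a WebSocket connection. The message shape below is purely hypothetical (Composer's actual schema is in its API documentation); the sketch only shows the pattern of building such commands:

```python
import json

# Hypothetical control message for a WebSocket mixer API. The "type",
# "strip", and "levelDb" field names are invented for illustration;
# consult Composer's API reference for the real schema.
def set_fader(strip: int, level_db: float) -> str:
    return json.dumps({"type": "mixer.setFader",
                       "strip": strip,
                       "levelDb": level_db})

msg = set_fader(3, -6.0)  # send over any WebSocket client connection
```

The same pattern extends to EQ, dynamics, and routing commands, with the server pushing state changes back so multiple surfaces stay in sync.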

SRT for remote production

Full support for SRT (Secure Reliable Transport) is now available for both inputs and outputs. SRT is the industry-standard protocol for low-latency video streaming over unreliable networks, making it essential for remote production workflows that involve signals traveling over the public internet.

Remote production using Composer

Composer supports both SRT Listener and Caller modes. SRT inputs accept up to 4 stereo pairs per stream. SRT outputs deliver up to 16 channels of AAC audio with configurable bitrate and resolution, and support primary and backup output paths for redundancy—critical for ensuring uninterrupted delivery in live production environments.
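SRT endpoints are conventionally addressed with `srt://` URLs carrying the mode and receive-buffer latency as query parameters, as used by tools like FFmpeg and srt-live-transmit. A small helper illustrating that addressing scheme (how Composer itself exposes these settings is up to its UI and API):

```python
from urllib.parse import urlencode

def srt_url(host: str, port: int, mode: str = "caller",
            latency_ms: int = 120) -> str:
    """Build a conventional SRT URL.

    mode:       "caller" connects out; "listener" waits for a connection.
    latency_ms: SRT receive buffer, traded against packet-loss recovery.
    """
    assert mode in ("caller", "listener")
    return f"srt://{host}:{port}?{urlencode({'mode': mode, 'latency': latency_ms})}"
```

Higher latency gives SRT more time to retransmit lost packets, which is why long-haul contribution links typically run with a larger buffer than local ones.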

20+ new video operators for professional compositing

The GPU processing pipeline has been expanded with operators designed for broadcast production workflows:

Keying and Compositing

The HSV Keyer adds secondary key color support for improved keying in challenging lighting conditions, adjustable garbage matte positioning, and on-screen display of selected key colors. Spill Suppression reduces color spill in chroma key applications. Light Wrap improves the appearance of chroma key edges by simulating light wrapping around the subject. Difference Matte creates mattes based on the difference between two input sources—useful for motion detection and background subtraction.
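The core idea of a difference matte is simple: a pixel becomes opaque in the matte wherever the two inputs differ by more than a threshold. A minimal single-channel sketch of that operation (real operators work per RGB pixel on the GPU):

```python
def difference_matte(a: list[float], b: list[float],
                     threshold: float = 0.1) -> list[float]:
    # Per-pixel: matte is opaque (1.0) where the sources differ by more
    # than the threshold, transparent (0.0) elsewhere.
    return [1.0 if abs(x - y) > threshold else 0.0 for x, y in zip(a, b)]

# One frame against a reference background: only the changed pixel keys.
matte = difference_matte([0.2, 0.9, 0.5], [0.2, 0.1, 0.5])
```

With a clean plate as the second input, this yields background subtraction; comparing consecutive frames yields motion detection.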

Image Processing

New image-processing operators include Gaussian Blur for standard blur effects, Surface Blur for selective smoothing that preserves edges, Edge Detection for highlighting edges in video content, Film Grain for adding realistic film texture, Vignette for controlled darkening of frame edges, White Balance for color temperature and tint adjustment, Regional Contrast for enhancing local contrast, and UV Remap for remapping UV coordinates.

Utility

QR Code generation with multiple error-correction levels and customizable output options. All operators can now be saved as presets and loaded into other projects or layers, making it easy to standardize effects across multiple productions. A context menu option allows deleting all operators from a layer at once.

Integrated live preview views

Workflow improvements for complex productions

The input list has been redesigned with new iconography featuring customizable colors and text labels—making it easier to identify sources at a glance in productions with dozens of inputs. Inputs can be sorted by name, type, or status, and filtered using a search function with wildcard support. Playback status icons show the state of media inputs in real time.

A new preview function allows viewing the content of any input without adding it to a scene layer. Preview images can be saved to disk for reference or documentation purposes. UI components—inputs, operators, targets, and connectors—can be extracted to separate windows, enabling efficient multi-monitor workflows.

Autosave is now available with configurable save intervals, history limits, and the ability to restore projects from previous autosaves—providing protection against data loss during long production sessions.

Composite layers and new targets

Composite Layers are a new layer type that flattens all layers below it into a single image. Operators can then be applied to this flattened result, allowing multiple layers to be processed as if they were a single element. This is particularly useful for creating complex scene compositions that are then color-corrected or treated as a unit.
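Flattening a layer stack amounts to repeatedly applying the standard Porter-Duff "over" operation from the bottom up, producing one image with one alpha. A single-channel sketch of that composite (real layers carry RGB and run on the GPU):

```python
def over(top: tuple[float, float],
         bottom: tuple[float, float]) -> tuple[float, float]:
    # Porter-Duff "over" with straight (non-premultiplied) alpha.
    # Each layer is (value, alpha) for one channel.
    ct, at = top
    cb, ab = bottom
    a = at + ab * (1 - at)
    c = (ct * at + cb * ab * (1 - at)) / a if a else 0.0
    return (c, a)

def flatten(layers: list[tuple[float, float]]) -> tuple[float, float]:
    # Composite back to front: layers[0] is the bottom of the stack.
    result = layers[0]
    for layer in layers[1:]:
        result = over(layer, result)
    return result
```

Once flattened, a single set of operators (a color grade, for example) applies to the combined result instead of to each layer individually.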

New target types include the URL Sequencer Target for scheduling HTTP calls to external APIs in a sequenced manner—useful for triggering actions in external systems based on production events—and the Snapshots Target for capturing images from scenes at specified intervals for archival or compliance purposes.
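Conceptually, a URL sequencer walks an ordered list of (delay, URL) steps and fires each HTTP call in turn. The sketch below only illustrates that pattern; Composer's target handles scheduling internally, and the endpoints here are invented:

```python
import time

def run_sequence(steps, fire=print, sleep=time.sleep):
    """Fire URLs in order, waiting the given delay before each.

    steps: list of (delay_seconds, url). `fire` and `sleep` are
    injectable so the sequence can be tested without real HTTP calls.
    """
    fired = []
    for delay, url in steps:
        sleep(delay)
        fire(url)  # in real use: issue an HTTP request to the URL
        fired.append(url)
    return fired
```

Driving external systems this way keeps production events (scene changes, graphics triggers) as the single source of timing truth.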

Enhanced API and scripting

WebSocket support has been added for real-time control and monitoring of the audio mixer and input audio properties—enabling external control surfaces and custom mixing interfaces to respond to changes instantly.

A Prometheus metrics endpoint is available for integration with observability platforms.
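Prometheus endpoints expose plain-text lines of the form `metric_name value`, with `# HELP` and `# TYPE` comments. A minimal parser for that exposition format, with invented metric names for illustration:

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Parse the simple (label-free) Prometheus text exposition format."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

# Example scrape body; the metric names are hypothetical.
sample = """# HELP frames_rendered_total Frames rendered
frames_rendered_total 12345
render_time_ms 4.2"""
```

In practice a monitoring stack scrapes the endpoint on an interval and alerts on these values, rather than parsing them by hand.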

Performance and platform updates

On Windows, FFmpeg has been updated to a patched version 6.1 with improvements to SRT handling for packet loss and jitter—enhancing reliability for SRT-based remote production workflows.

Detailed performance statistics now report maximum, minimum, average, standard deviation, and jitter for all major processing components—providing visibility into system behavior for capacity planning and troubleshooting.

The Web Page Renderer (Ultralight) delivers up to 10x performance improvement for HTML5 graphics overlays, making it more responsive for data-driven graphics and live updates.

Additional capabilities

The Decklink Capture input now supports still store images—allowing quick switching between live video and a still image, or providing a fallback image when the video input signal is lost. An option to render an error message in case of a black video input signal helps identify signal path issues during setup.

An AI Speech Generator input, powered by the ElevenLabs API, enables realistic text-to-speech generation in multiple languages and voices—useful for automated announcements, accessibility features, or multi-language production workflows (requires a license from ElevenLabs).

An Audio Oscillator input generates test tones for audio testing and calibration, supporting sine, square, triangle, and sawtooth waveforms, as well as white noise.
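The math behind such test tones is compact: each waveform is a function of the phase position within one cycle. A reference implementation of the four waveforms (sample rate and frequency values are just examples):

```python
import math

def oscillator(waveform: str, freq: float, sample_rate: int, n: int):
    """Generate n samples of a test tone in [-1.0, 1.0]."""
    out = []
    for i in range(n):
        phase = (freq * i / sample_rate) % 1.0  # position in cycle, 0..1
        if waveform == "sine":
            out.append(math.sin(2 * math.pi * phase))
        elif waveform == "square":
            out.append(1.0 if phase < 0.5 else -1.0)
        elif waveform == "sawtooth":
            out.append(2.0 * phase - 1.0)     # ramps -1 -> +1 each cycle
        elif waveform == "triangle":
            out.append(1.0 - 4.0 * abs(phase - 0.5))
        else:
            raise ValueError(f"unknown waveform: {waveform}")
    return out
```

Square, sawtooth, and triangle waves are rich in harmonics, which is what makes them useful for exercising filters and checking a signal chain beyond a single sine frequency.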

Built for remote production at scale

Composer is designed to run headlessly on servers in data centers or cloud environments (verified on AWS GPU instances), with all functionality accessible via HTTP REST and WebSocket APIs. Multiple instances can operate in parallel, each handling an independent production, with lifecycle management—provisioning, configuration, launch, and monitoring—performed through the API.
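An orchestration layer on top of such an API typically loops over instances, issuing lifecycle calls and collecting status. The endpoint paths and the injected `http` transport below are entirely hypothetical; the sketch only shows the control-plane pattern:

```python
# Conceptual orchestration sketch for headless instances managed over a
# REST-style API. Paths are invented; inject any real HTTP client as
# the `http(method, path)` callable.
def launch_all(instances: list[str], http) -> dict[str, str]:
    statuses = {}
    for name in instances:
        http("POST", f"/instances/{name}/start")
        statuses[name] = http("GET", f"/instances/{name}/status")
    return statuses
```

Keeping the transport injectable makes the orchestration logic testable offline and portable across environments (on-premises or cloud).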

This architecture enables broadcasters to centralize production in a data center and operate remotely via web-based tools, physical control surfaces like Stream Decks, or custom operator interfaces—reducing the equipment and personnel required at event venues while maintaining full control over every aspect of the output.

From production to delivery

Composer is developed alongside Vindral Live, an ultra-low-latency streaming platform from the same team. While each product works independently, they are designed to integrate seamlessly when needed.

Vindral Live handles internal production distribution—delivering sub-second latency streams to producers, operators, and monitoring systems with synchronized playout (<50ms drift). Composer’s native MOQ (Media over QUIC) output feeds directly into Vindral Live without requiring third-party infrastructure.

This combination is ideal for internal workflows: remote production monitoring, multi-location collaboration, quality control feeds, and contribution networks—where low latency and frame-accurate synchronization across all internal viewers are critical.

For production environments that need both real-time compositing and internal low-latency distribution, Composer and Live provide an integrated solution from a single vendor.

Composer R1 2026 is available for Windows and Linux in both desktop and headless runtime versions.

Read the full release notes

Request a demo