PyTrickle

High-performance Python framework for real-time video streaming and processing

Overview

PyTrickle provides a complete Python framework for real-time video and audio streaming with custom processing. Built on the trickle protocol, it enables you to:

  • Real-time Processing: process live video streams with your custom Python functions, using PyTorch tensors
  • Stream Management: start, stop, and monitor streams via an HTTP REST API
  • Dynamic Parameters: update processing parameters in real time without restarting
  • Audio Support: handle mono, stereo, and multi-channel audio
  • Extensible: easily add custom frame-processing algorithms and AI models
  • Reliable: automatic reconnection and error recovery built in

Installation

Prerequisites

The commands below assume Python 3.11 and the uv package manager; install uv first if you don't already have it.
Install PyTrickle

# Clone the repository
git clone https://github.com/livepeer/pytrickle.git
cd pytrickle

# Create a virtual environment (optional but recommended)
uv venv .virtualenv --python python3.11
source .virtualenv/bin/activate

# Install dependencies
uv pip install -r requirements.txt

# Install PyTrickle
uv pip install -e .

Verify Installation

# Test the CLI
pytrickle list

CLI Guide

PyTrickle provides a powerful CLI tool to scaffold streaming pipeline applications quickly. The CLI helps you create starter apps from pre-built templates.

List Available Templates

View all available templates with descriptions:

pytrickle list

This will display:

Available templates:

  grayscale_chipmunk   Example demonstrating decorator-based handlers that styli...
  passthrough          Minimal passthrough example for PyTrickle
  process_video        OpenCV Green Processor using decorator-based handlers

Usage: pytrickle init <app_name> --template <template_name>

Create a New Pipeline App

Use the pytrickle init command to scaffold a new application:

Basic Usage

# Create app from default template (passthrough)
pytrickle init my_app

# Create app from specific template
pytrickle init my_video_app --template process_video

# Specify custom port
pytrickle init my_app --template grayscale_chipmunk --port 9000

# Create in specific directory
pytrickle init my_app --out ./apps/ --force

Command Options

  Option          Description                                    Default
  name            App name (used in service name and filename)   Required
  --template, -t  Template to use                                passthrough
  --port          Port to bind the server                        8000
  --out           Output file or directory                       ./ (current dir)
  --force         Overwrite the target if it exists              False

Running Your App

Once created, run your app with:

python my_app.py

Available Templates

1. Passthrough Template

Use Case: Starting point for building custom processing pipelines

Features:

  • Minimal example that passes frames through unchanged
  • Demonstrates basic structure and decorator patterns
  • Includes model loader, video/audio handlers, parameter updates
  • Perfect for understanding the framework
pytrickle init my_passthrough --template passthrough

2. Grayscale Chipmunk Template

Use Case: Video and audio processing example

Features:

  • Converts video frames to grayscale using PyTorch
  • Applies pitch-shifting to audio (chipmunk effect)
  • Demonstrates real-time parameter updates
  • Shows how to work with both video and audio streams
pytrickle init my_effects --template grayscale_chipmunk
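
Not the shipped template code, but the grayscale step it describes might be sketched like this (assuming `frame.tensor` is an (H, W, C) RGB float tensor, as in the handler examples below):

```python
import torch

def to_grayscale(tensor: torch.Tensor) -> torch.Tensor:
    """Convert an (H, W, 3) RGB tensor to grayscale, keeping three channels.

    Uses ITU-R BT.601 luma weights; assumes channel order R, G, B.
    """
    weights = torch.tensor([0.299, 0.587, 0.114], dtype=tensor.dtype)
    luma = (tensor * weights).sum(dim=-1, keepdim=True)  # (H, W, 1)
    return luma.expand(-1, -1, 3).clone()                # replicate luma to all 3 channels

frame_rgb = torch.rand(4, 4, 3)
gray = to_grayscale(frame_rgb)
```

Inside a video handler, this would be returned with `frame.replace_tensor(to_grayscale(frame.tensor))`.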

3. Process Video Template

Use Case: OpenCV-based video processing

Features:

  • Applies green tint effect to video frames
  • Uses OpenCV for image processing
  • Configurable processing intensity
  • Great for learning video-only processing while audio passes through unchanged
pytrickle init my_processor --template process_video
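
The template itself uses OpenCV, but the core of a green-tint effect can be sketched with plain NumPy (the `intensity` parameter here is illustrative, not necessarily the template's name for it):

```python
import numpy as np

def green_tint(image: np.ndarray, intensity: float = 0.5) -> np.ndarray:
    """Blend an (H, W, 3) uint8 RGB image toward pure green.

    intensity=0 leaves the image unchanged; intensity=1 yields solid green.
    """
    green = np.zeros_like(image)
    green[..., 1] = 255  # pure green in RGB channel order
    blended = (1.0 - intensity) * image + intensity * green
    return blended.astype(np.uint8)

img = np.full((2, 2, 3), 100, dtype=np.uint8)
tinted = green_tint(img, intensity=0.5)
```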

Code Examples

Basic Pipeline Structure

All PyTrickle applications follow this decorator-based pattern:

from pytrickle import StreamProcessor, VideoFrame, AudioFrame
from pytrickle.decorators import (
    video_handler, audio_handler, model_loader,
    param_updater, on_stream_stop, on_stream_start
)

class MyHandlers:
    def __init__(self):
        self.config = {}
    
    @model_loader
    async def load(self, **kwargs):
        """Initialize resources when stream starts"""
        # Load your AI model here
        pass
    
    @video_handler
    async def handle_video(self, frame: VideoFrame) -> VideoFrame:
        """Process each video frame"""
        tensor = frame.tensor
        # Apply your processing
        return frame.replace_tensor(tensor)
    
    @audio_handler
    async def handle_audio(self, frame: AudioFrame) -> list[AudioFrame]:
        """Process each audio frame"""
        samples = frame.samples
        # Apply your processing
        return [frame.replace_samples(samples)]
    
    @param_updater
    async def update_params(self, params: dict):
        """Update parameters in real-time"""
        self.config.update(params)
    
    @on_stream_stop
    async def on_stop(self):
        """Cleanup when stream stops"""
        pass

    @on_stream_start
    async def on_start(self):
        """Initialize when stream starts"""
        pass

async def main():
    handlers = MyHandlers()
    processor = StreamProcessor.from_handlers(
        handlers, name="my-app", port=8000
    )
    await processor.run_forever()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
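
To make the audio path concrete, a simple gain adjustment could be dropped into `handle_audio` (a sketch assuming `frame.samples` holds float samples in a NumPy array, which may not match the actual `AudioFrame` layout):

```python
import numpy as np

def apply_gain(samples: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Scale audio samples by `gain`, clipping to the [-1.0, 1.0] float range."""
    return np.clip(samples * gain, -1.0, 1.0)

chunk = np.array([0.2, -0.8, 1.0], dtype=np.float32)
quieter = apply_gain(chunk, gain=0.5)
```

In the handler above, this would be used as `return [frame.replace_samples(apply_gain(frame.samples))]`.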

BYOC Integration

An introduction to BYOC and setup instructions for the gateway and orchestrator will go here. ... Once your app is running, register its HTTP endpoint URL with your orchestrator.

Start Processing

When you start the livepeer-app-pipelines UI, you can stream from your webcam and view the processed video in the UI. ... (links to the UI documentation and BYOC setup will go here)

Creating a Custom Pipeline

Follow these steps to create your own video processing pipeline:

Step 1: Scaffold Your App

pytrickle init my_pipeline --template passthrough --port 8000

Step 2: Add Your Processing Logic

Edit the generated file and modify the handle_video method:

@video_handler
async def handle_video(self, frame: VideoFrame) -> VideoFrame:
    tensor = frame.tensor  # PyTorch tensor (H, W, C)
    
    # Your custom processing here
    # Example: Apply your AI model
    processed = your_model(tensor)
    
    return frame.replace_tensor(processed)
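
If you don't have a model yet, a simple brightness adjustment makes a good stand-in for `your_model` (a sketch assuming float pixel values in [0, 1]):

```python
import torch

def brighten(tensor: torch.Tensor, gain: float = 1.2) -> torch.Tensor:
    """Scale pixel values by `gain`, clamping back into the valid [0, 1] range."""
    return (tensor * gain).clamp(0.0, 1.0)

frame_tensor = torch.full((2, 2, 3), 0.9)
out = brighten(frame_tensor, gain=1.2)
```

The handler would then return `frame.replace_tensor(brighten(frame.tensor))`.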

Step 3: Run Your Pipeline

python my_pipeline.py

Step 4: Test with a Local Setup

A local test setup that runs a trickle server and sends video to PyTrickle for processing would let you verify your pipeline without connecting it to an orchestrator; this setup is still to be added.

📚 Additional Resources