High-performance Python framework for real-time video streaming and processing
PyTrickle provides a complete Python framework for real-time video and audio streaming with custom processing. Built on the trickle protocol, it enables you to:
- Process live video streams with your own Python functions using PyTorch tensors
- Start, stop, and monitor streams via an HTTP REST API
- Update processing parameters in real time without restarting
- Handle mono, stereo, and multi-channel audio
- Plug in custom frame-processing algorithms and AI models
- Rely on built-in automatic reconnection and error recovery
# Clone the repository
git clone https://github.com/livepeer/pytrickle.git
cd pytrickle
# Create a virtual environment (optional but recommended)
uv venv .virtualenv --python python3.11
source .virtualenv/bin/activate
# Install dependencies
uv pip install -r requirements.txt
# Install PyTrickle
uv pip install -e .
# Test the CLI
pytrickle list
PyTrickle provides a powerful CLI tool to scaffold streaming pipeline applications quickly. The CLI helps you create starter apps from pre-built templates.
View all available templates with descriptions:
pytrickle list
This will display:
Available templates:
grayscale_chipmunk Example demonstrating decorator-based handlers that styli...
passthrough Minimal passthrough example for PyTrickle
process_video OpenCV Green Processor using decorator-based handlers
Usage: pytrickle init <app_name> --template <template_name>
Use the pytrickle init command to scaffold a new application:
# Create app from default template (passthrough)
pytrickle init my_app
# Create app from specific template
pytrickle init my_video_app --template process_video
# Specify custom port
pytrickle init my_app --template grayscale_chipmunk --port 9000
# Create in specific directory
pytrickle init my_app --out ./apps/ --force
| Option | Description | Default |
|---|---|---|
| `name` | App name (used in service name and filename) | Required |
| `--template`, `-t` | Template to use | `passthrough` |
| `--port` | Port to bind the server | `8000` |
| `--out` | Output file or directory | `./` (current dir) |
| `--force` | Overwrite target if it exists | `False` |
Once created, run your app with:
python my_app.py
Use Case: Starting point for building custom processing pipelines
Features:
pytrickle init my_passthrough --template passthrough
Use Case: Video and audio processing example
Features:
pytrickle init my_effects --template grayscale_chipmunk
Use Case: OpenCV-based video processing
Features:
pytrickle init my_processor --template process_video
All PyTrickle applications follow this decorator-based pattern:
from pytrickle import StreamProcessor, VideoFrame, AudioFrame
from pytrickle.decorators import (
    video_handler, audio_handler, model_loader,
    param_updater, on_stream_stop, on_stream_start
)

class MyHandlers:
    def __init__(self):
        self.config = {}

    @model_loader
    async def load(self, **kwargs):
        """Initialize resources when the stream starts"""
        # Load your AI model here
        pass

    @video_handler
    async def handle_video(self, frame: VideoFrame) -> VideoFrame:
        """Process each video frame"""
        tensor = frame.tensor
        # Apply your processing
        return frame.replace_tensor(tensor)

    @audio_handler
    async def handle_audio(self, frame: AudioFrame) -> list[AudioFrame]:
        """Process each audio frame"""
        samples = frame.samples
        # Apply your processing
        return [frame.replace_samples(samples)]

    @param_updater
    async def update_params(self, params: dict):
        """Update parameters in real time"""
        self.config.update(params)

    @on_stream_start
    async def on_start(self):
        """Initialize when the stream starts"""
        pass

    @on_stream_stop
    async def on_stop(self):
        """Clean up when the stream stops"""
        pass

async def main():
    handlers = MyHandlers()
    processor = StreamProcessor.from_handlers(
        handlers, name="my-app", port=8000
    )
    await processor.run_forever()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
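As a concrete example of the per-frame math a `video_handler` might apply, here is a grayscale conversion sketch using standard BT.601 luminance weights. The array math is shown with NumPy so the demo is self-contained; the same elementwise operations apply unchanged to the `(H, W, C)` PyTorch tensor exposed by `frame.tensor`. The weights, shapes, and function name are illustrative assumptions, not part of the PyTrickle API.

```python
import numpy as np

# ITU-R BT.601 luminance weights for RGB -> grayscale
LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114], dtype=np.float32)

def to_grayscale(frame_hwc: np.ndarray) -> np.ndarray:
    """Collapse an (H, W, 3) RGB frame to grayscale, keeping 3 channels.

    Weighted sum over the channel axis, then broadcast back so the
    result still has the (H, W, 3) shape a video pipeline expects.
    """
    luma = frame_hwc.astype(np.float32) @ LUMA_WEIGHTS  # (H, W)
    return np.repeat(luma[..., None], 3, axis=-1)       # (H, W, 3)

# Demo on a tiny synthetic 2x2 RGB frame
frame = np.zeros((2, 2, 3), dtype=np.float32)
frame[0, 0] = [1.0, 0.0, 0.0]  # pure red pixel -> luma 0.299
gray = to_grayscale(frame)
print(gray.shape)  # (2, 2, 3)
```

Inside `handle_video` this would become `return frame.replace_tensor(processed)`, with the matmul expressed in torch instead of NumPy.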
TODO: introduce BYOC and explain how to set up the gateway and orchestrator. Once your app is running, register its HTTP endpoint URL with your orchestrator.
When you start the livepeer-app-pipelines UI, you can stream from your webcam and view the processed video in the UI. TODO: link to the UI documentation and BYOC setup guide.
Follow these steps to create your own video processing pipeline:
pytrickle init my_pipeline --template passthrough --port 8000
Edit the generated file and modify the handle_video method:
@video_handler
async def handle_video(self, frame: VideoFrame) -> VideoFrame:
    tensor = frame.tensor  # PyTorch tensor (H, W, C)
    # Your custom processing here
    # Example: apply your AI model
    processed = your_model(tensor)
    return frame.replace_tensor(processed)
python my_pipeline.py
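If you don't have a model yet, a simple placeholder step such as brightness scaling is enough to see the pipeline working end to end. The sketch below shows the math with NumPy for a self-contained demo; the same operations work on the PyTorch tensor from `frame.tensor`. The function name and the idea of reading the gain from `self.config` are illustrative assumptions.

```python
import numpy as np

def adjust_brightness(frame_hwc: np.ndarray, gain: float) -> np.ndarray:
    """Scale pixel intensities by `gain`, clamping to the valid [0, 255] range."""
    scaled = frame_hwc.astype(np.float32) * gain
    return np.clip(scaled, 0.0, 255.0).astype(np.uint8)

# Demo: doubling brightness saturates the brighter channels
frame = np.array([[[100, 150, 200]]], dtype=np.uint8)  # 1x1 RGB frame
out = adjust_brightness(frame, 2.0)
print(out.tolist())  # [[[200, 255, 255]]]
```

In `handle_video` the gain could come from `self.config` (updated live through the `param_updater` handler), so you can tweak the effect mid-stream without restarting.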
TODO: add a local test setup that runs a trickle server and sends video to PyTrickle for processing, so users can verify their pipeline without having to connect it to an orchestrator.