Local inference. Multi-model orchestration. Edge/cloud hybrid execution.
All in one SDK.
Easy Integration
Chain models together or run them standalone. Define complex workflows in simple YAML.
pipeline: voice-agent
steps:
  - model: whisper-tiny
    task: transcribe
  - model: gpt-4
    task: reason
    fallback: cloud
  - model: kokoro-82m
    task: synthesize

final result = await Xybrid.runPipeline(
  'voice-agent',
  input: audioBytes,
);

Run a single model or chain them into complex pipelines.
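For the single-model path, a minimal sketch; the `Xybrid.runModel` entry point and its arguments are assumptions for illustration, mirroring the `runPipeline` call above:

import 'package:xybrid/xybrid.dart'; // package name assumed

// Hypothetical single-model call: run whisper-tiny standalone
// on raw audio bytes and await its transcript.
final transcript = await Xybrid.runModel(
  'whisper-tiny',
  input: audioBytes,
);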
The same YAML config works across all platforms.
name: voice-assistant
stages:
  - id: transcribe
    model: whisper-tiny
    route: device
  - id: think
    model: gpt-4o-mini
    route: cloud
    input: ${transcribe.text}
  - id: speak
    model: kokoro-tts
    route: device
    input: ${think.text}

// Load the pipeline from assets
final loader = Xybrid.pipeline('assets/voice-assistant.yaml');
final pipeline = await loader.load();

// Create an audio envelope
final envelope = Envelope.audio(
  audioBytes: audioBytes,
  sampleRate: 16000,
  channels: 1,
);

// Run the pipeline and play the result
final result = await pipeline.run(envelope: envelope);
await audioPlayer.play(result.audio);

From speech recognition to text-to-speech, run powerful ML models anywhere.
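Note what the envelope buys you: the sample rate and channel count travel with the audio bytes, so a stage never has to guess the input format, and the same voice-assistant.yaml can accept audio from any microphone configuration.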
Run ASR and TTS models directly on-device for low latency and offline support.
Seamlessly route to cloud APIs when device capabilities are insufficient (see the routing sketch below).
Chain models together with YAML pipelines: ASR → LLM → TTS in one config.
Flutter, iOS, Android, and Rust SDKs with a unified API.
Leverage Core ML, QNN, and Metal for maximum performance.
Track inference metrics and device capabilities across your fleet.
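To make the routing concrete, a sketch in the schema of the configs above; combining `route` and `fallback` on one stage is an assumption, since the examples show each key separately:

name: dictation
stages:
  - id: transcribe
    model: whisper-tiny
    route: device      # prefer on-device execution
    fallback: cloud    # assumed: route to the cloud API if the device can't run it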
Join developers building the next generation of voice-enabled applications.