Audio
OpenRouter supports both sending audio files to compatible models and receiving audio responses via the API. This guide covers how to work with audio inputs and outputs.
Audio Inputs
Send audio files to compatible models for transcription, analysis, and processing. Audio input requests use the /api/v1/chat/completions endpoint with the input_audio content type. Audio files must be base64-encoded and include a format specification.
Note: Audio files must be base64-encoded; direct URLs are not supported for audio content.
You can search for models that support audio input by filtering to audio input modality on our Models page.
Sending Audio Files
Here’s how to send an audio file for processing:
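A minimal sketch in Python using only the standard library. The model name is illustrative, and the OPENROUTER_API_KEY environment variable is assumed to hold your API key; the helper names are for this example only.

```python
import base64
import json
import os
import urllib.request

def build_request(audio_bytes: bytes, audio_format: str, prompt: str) -> dict:
    """Build a chat completions payload carrying base64-encoded audio."""
    return {
        "model": "google/gemini-2.5-flash",  # illustrative; use any audio-input model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "input_audio",
                    "input_audio": {
                        # Audio must be base64-encoded, with the format stated
                        "data": base64.b64encode(audio_bytes).decode("ascii"),
                        "format": audio_format,
                    },
                },
            ],
        }],
    }

def send(payload: dict) -> dict:
    """POST the payload to the chat completions endpoint."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example usage (reads a local WAV file and sends it for transcription):
# payload = build_request(open("speech.wav", "rb").read(), "wav", "Transcribe this audio.")
# reply = send(payload)
# print(reply["choices"][0]["message"]["content"])
```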
Supported Audio Input Formats
Supported audio formats vary by provider. Common formats include:
- wav - WAV audio
- mp3 - MP3 audio
- aiff - AIFF audio
- aac - AAC audio
- ogg - OGG Vorbis audio
- flac - FLAC audio
- m4a - M4A audio
- pcm16 - PCM16 audio
- pcm24 - PCM24 audio
Note: Check your model’s documentation to confirm which audio formats it supports. Not all models support all formats.
Audio Output
OpenRouter supports receiving audio responses from models that have audio output capabilities. To request audio output, include the modalities and audio parameters in your request.
You can search for models that support audio output by filtering to audio output modality on our Models page.
Requesting Audio Output
To receive audio output, set modalities to ["text", "audio"] and provide the audio configuration with your desired voice and format:
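As a sketch, a request body asking for audio output. The model name and the voice and format values are illustrative; consult your model's documentation for the values it supports.

```python
import json

# Request both text and audio in the response. Audio output requires
# streaming, so stream is set to True.
request_body = {
    "model": "openai/gpt-4o-audio-preview",  # illustrative audio-output model
    "modalities": ["text", "audio"],
    "audio": {
        "voice": "alloy",   # illustrative voice name
        "format": "pcm16",  # illustrative output format
    },
    "stream": True,
    "messages": [
        {"role": "user", "content": "Say 'hello' out loud."},
    ],
}

print(json.dumps(request_body, indent=2))
```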
Streaming Chunk Format
Audio output requires streaming (stream: true). Audio data and transcript are delivered incrementally via the delta.audio field in each chunk:
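A sketch of assembling the streamed audio on the client side, assuming each parsed chunk carries partial "data" (base64 audio) and "transcript" strings under choices[0].delta.audio, and that each data field is independently valid base64:

```python
import base64

def accumulate_audio(chunks):
    """Collect audio bytes and transcript text from streamed chunks.

    Each chunk is a parsed streaming JSON object; audio arrives
    incrementally via the delta.audio field.
    """
    audio_parts, transcript_parts = [], []
    for chunk in chunks:
        choices = chunk.get("choices") or [{}]
        delta = choices[0].get("delta") or {}
        audio = delta.get("audio") or {}
        if audio.get("data"):
            # Assumption: each streamed data field decodes on its own
            audio_parts.append(base64.b64decode(audio["data"]))
        if audio.get("transcript"):
            transcript_parts.append(audio["transcript"])
    return b"".join(audio_parts), "".join(transcript_parts)
```

The decoded bytes can then be written to a file in the format requested in the audio configuration.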
Audio Configuration Options
The audio parameter accepts the following options:
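A sketch of the audio configuration object used in the request above, assuming the two options shown earlier (voice and format); the specific values are illustrative, not an exhaustive list.

```python
# The audio object passed alongside modalities=["text", "audio"].
# Values are illustrative; check the model's documentation for
# the voices and formats it actually supports.
audio_config = {
    "voice": "alloy",   # which voice to synthesize speech with
    "format": "pcm16",  # encoding of the returned audio data
}
```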