Soniox

Switch your LiveKit STT or TTS provider to Soniox

Step-by-step guide to replacing Deepgram, ElevenLabs, AssemblyAI, Cartesia, or OpenAI STT/TTS with Soniox in your LiveKit bot — config mapping and code diffs.

Overview

Each provider section below shows how to swap an STT or TTS service for Soniox in an AgentSession, plus how the source settings map across. The rest of your LiveKit setup (Agent, room, VAD, LLM, tools) stays the same.

For configuration depth, see the STT and TTS reference pages. For an end-to-end walkthrough, see Build a voice agent.

Test Soniox before you migrate

The fastest way to evaluate is on our live comparison pages: record a sample on the STT comparison page or enter text on the TTS comparison page, and compare the results side by side with your current provider.

Cases to evaluate when building a reliable voice agent across multiple languages:

  • Mid-utterance language switching. Start a sentence in English and finish in Spanish, or drop a French name into an English question. Soniox detects and transcribes both.
  • Non-English, end-to-end. Run a full conversation in your target language. Both STT and TTS cover 60+ languages.
  • Tricky inputs. Order numbers, postal codes, emails, phone numbers, brand names, and product codes with non-English spellings. Soniox preserves them on the STT side and pronounces them correctly on the TTS side.
  • Low latency. Soniox leads on time-to-final-transcript so the LLM picks up the moment the user stops talking.

Before you migrate

Install the Soniox extra for LiveKit Agents:

pip install "livekit-agents[soniox]~=1.5"

Create an API key in the Soniox Console. The same key works for STT and TTS.

export SONIOX_API_KEY=...

LiveKit configures turn-taking on AgentSession rather than on the STT service. To use Soniox's built-in endpoint detection, pass turn_handling=TurnHandlingOptions(turn_detection="stt") to AgentSession. Keep a local VAD (e.g. Silero) in the session — AgentSession also uses it for interruption detection (catching when the caller starts speaking while the agent is talking), and interruption.mode accepts "adaptive" or "vad", not "stt". See endpoint detection.
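A minimal wiring sketch for the session-level settings described above. The `TurnHandlingOptions` import path shown here is an assumption — check where it lives in your livekit-agents version; the `stt`/`vad` plugin calls match the snippets later in this guide.

```python
from livekit.agents import AgentSession
from livekit.plugins import silero, soniox

# Assumed import location for TurnHandlingOptions; verify against
# your installed livekit-agents version.
from livekit.agents import TurnHandlingOptions

session = AgentSession(
    stt=soniox.STT(),
    # Keep a local VAD: AgentSession uses it for interruption
    # detection even when turn detection is delegated to the STT.
    vad=silero.VAD.load(),
    turn_handling=TurnHandlingOptions(turn_detection="stt"),
    # llm=..., tts=...  # unchanged from your existing setup
)
```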

From Deepgram

STT

Before:

from livekit.plugins import deepgram

stt = deepgram.STT(
    model="nova-3",
    language="en",
    keyterm=["Bright Smile Dental", "checkup", "cavity"],
    punctuate=True,
    interim_results=True,
    endpointing=25,
    enable_diarization=True,
)

After:

from livekit.plugins import soniox
from livekit.plugins.soniox import ContextObject, STTOptions

stt = soniox.STT(
    params=STTOptions(
        language_hints=["en"],
        context=ContextObject(terms=["Bright Smile Dental", "checkup", "cavity"]),
    ),
)
| Deepgram setting | Soniox equivalent | Notes |
| --- | --- | --- |
| language | language_hints (list) | A hint, not a filter. Soniox still transcribes other languages |
| interim_results | Always on | Soniox streams non-final and final tokens by default |
| punctuate | Automatic in Soniox | |
| smart_format, numerals | Automatic in Soniox | |
| keyterm / keywords | context.terms | Wrap with ContextObject(terms=[...]) |
| enable_diarization | enable_speaker_diarization | |
| endpointing (ms) | Session-level turn_handling | Set turn_detection="stt" on AgentSession; tune via max_endpoint_delay_ms |

From ElevenLabs

TTS

Before:

from livekit.plugins import elevenlabs

tts = elevenlabs.TTS(
    voice_id="ODq5zmih8GrVes37Dizd",
    model="eleven_multilingual_v2",
    language="en",
)

After:

from livekit.plugins import soniox

tts = soniox.TTS(voice="Adrian", language="en")
| ElevenLabs TTS setting | Soniox equivalent | Notes |
| --- | --- | --- |
| voice_id | voice | See Soniox voices |
| language | language | |
| model | model | Soniox default tts-rt-v1-preview |

STT

Before:

from livekit.plugins import elevenlabs

stt = elevenlabs.STT(model_id="scribe_v2_realtime")

After:

from livekit.plugins import soniox
from livekit.plugins.soniox import STTOptions

stt = soniox.STT(
    params=STTOptions(
        language_hints=["en"],
        enable_language_identification=True,
    ),
)
| ElevenLabs Realtime STT setting | Soniox equivalent | Notes |
| --- | --- | --- |
| model_id | model (default stt-rt-v4) | |
| VAD / endpointing extra kwargs | Session-level turn_handling | Set turn_detection="stt" on AgentSession; tune via max_endpoint_delay_ms |
| language_code | language_hints (list) | |

From AssemblyAI

STT

Before:

from livekit.plugins import assemblyai

stt = assemblyai.STT(
    model="u3-rt-pro",
    language_detection=True,
    speaker_labels=True,
    keyterms_prompt=["Bright Smile Dental", "checkup", "cavity"],
    format_turns=False,
    end_of_turn_confidence_threshold=0.4,
    min_end_of_turn_silence_when_confident=400,
    max_turn_silence=1280,
)

After:

from livekit.plugins import soniox
from livekit.plugins.soniox import ContextObject, STTOptions

stt = soniox.STT(
    params=STTOptions(
        enable_language_identification=True,
        enable_speaker_diarization=True,
        context=ContextObject(terms=["Bright Smile Dental", "checkup", "cavity"]),
    ),
)
| AssemblyAI setting | Soniox equivalent | Notes |
| --- | --- | --- |
| language_detection | enable_language_identification | On by default in Soniox |
| keyterms_prompt | context.terms | Wrap with ContextObject(terms=[...]) |
| speaker_labels | enable_speaker_diarization | |
| format_turns | Automatic in Soniox | |
| end_of_turn_confidence_threshold, min_end_of_turn_silence_when_confident, max_turn_silence | Session-level turn_handling | Set turn_detection="stt" on AgentSession; tune via max_endpoint_delay_ms |

From Cartesia

TTS

Before:

from livekit.plugins import cartesia

tts = cartesia.TTS(
    model="sonic-3",
    voice="f786b574-daa5-4673-aa0c-cbe3e8534c02",
    language="en",
)

After:

from livekit.plugins import soniox

tts = soniox.TTS(voice="Adrian", language="en")
| Cartesia TTS setting | Soniox equivalent | Notes |
| --- | --- | --- |
| model | model | Soniox default tts-rt-v1-preview |
| voice | voice | See Soniox voices |
| language | language | |

Cartesia supports inline SSML-like tags (e.g. <spell>) in input text. Strip them before passing text to Soniox, otherwise the bot will read the tags aloud literally.
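A minimal sketch of such a pre-pass. The tag list here is illustrative — extend it with whichever tags your Cartesia prompts actually used.

```python
import re

# Matches opening and closing forms of a few inline tags, e.g.
# <spell>, </spell>, <break time="1s"/>. Illustrative list only.
_TAG_RE = re.compile(r"</?(?:spell|break|emphasis)[^>]*>")

def strip_inline_tags(text: str) -> str:
    """Remove inline SSML-like tags before sending text to Soniox TTS."""
    return _TAG_RE.sub("", text)

print(strip_inline_tags("Your code is <spell>AB12</spell>."))
# → Your code is AB12.
```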

STT

Before:

from livekit.plugins import cartesia

stt = cartesia.STT(model="ink-whisper", language="en")

After:

from livekit.plugins import soniox
from livekit.plugins.soniox import STTOptions

stt = soniox.STT(params=STTOptions(language_hints=["en"]))
| Cartesia STT setting | Soniox equivalent | Notes |
| --- | --- | --- |
| model | model (default stt-rt-v4) | |
| language | language_hints (list) | A hint, not a filter. Soniox still transcribes other languages |

From OpenAI

OpenAI provides both STT and TTS services in LiveKit.

STT

Before:

from livekit.plugins import openai

stt = openai.STT(
    model="gpt-4o-transcribe",
    language="en",
    prompt="Expect medical terminology and patient names.",
    detect_language=True,
)

After:

from livekit.plugins import soniox
from livekit.plugins.soniox import ContextObject, STTOptions

stt = soniox.STT(
    params=STTOptions(
        language_hints=["en"],
        context=ContextObject(text="Expect medical terminology and patient names."),
    ),
)
| OpenAI STT setting | Soniox equivalent | Notes |
| --- | --- | --- |
| model | model (default stt-rt-v4) | |
| language | language_hints (list) | A hint, not a filter. Soniox still transcribes other languages |
| detect_language | enable_language_identification | On by default in Soniox |
| prompt | context.text | Wrap with ContextObject(text="...") |
| use_realtime | Soniox is always realtime | |

TTS

Before:

from livekit.plugins import openai

tts = openai.TTS(
    model="gpt-4o-mini-tts",
    voice="ash",
)

After:

from livekit.plugins import soniox

tts = soniox.TTS(voice="Adrian")
| OpenAI TTS setting | Soniox equivalent | Notes |
| --- | --- | --- |
| voice | voice | See Soniox voices |
| model | model | Soniox default tts-rt-v1-preview |