Multiple streams
Run multiple concurrent Text-to-Speech streams over a single WebSocket connection using stream_id.
Overview
A single WebSocket connection can host up to 5 concurrent streams.
Each stream is an independent Text-to-Speech generation identified by a unique stream_id.
Use multiple streams when you need to:
- Generate speech for multiple users or conversations over one connection.
- Produce audio in different voices or languages simultaneously.
- Reduce connection overhead by reusing a single WebSocket instead of opening one per request.
Each stream has its own lifecycle: it is started with a config message, receives its own text chunks, and produces its own audio output. Streams do not interfere with each other — an error in one stream does not affect the others.
Core rules
- Start each stream with a config message that includes a unique stream_id.
- Use the same stream_id for all text chunks belonging to that stream.
- You can run up to 5 active streams per connection.
- A stream error does not terminate other streams on the same connection.
- Keep the WebSocket connection alive with keepalive messages during idle periods.
- A stream can be canceled at any time.
Starting multiple streams
Send a config message for each stream before sending text for that stream. See Real-time generation for the full config payload and parameters.
Example: two streams on one connection
Stream A config:
{
"api_key": "<SONIOX_API_KEY|SONIOX_TEMPORARY_API_KEY>",
"model": "tts-rt-v1-preview",
"language": "en",
"voice": "Adrian",
"audio_format": "pcm_s16le",
"sample_rate": 24000,
"stream_id": "stream-A"
}

Stream B config:
{
"api_key": "<SONIOX_API_KEY|SONIOX_TEMPORARY_API_KEY>",
"model": "tts-rt-v1-preview",
"language": "en",
"voice": "Adrian",
"audio_format": "pcm_s16le",
"sample_rate": 24000,
"stream_id": "stream-B"
}

Important: Always send a stream's config message before sending any text for that stream.
Text for stream A:
{
"text": "This audio belongs to stream A.",
"text_end": false,
"stream_id": "stream-A"
}

Text for stream B:
{
"text": "This audio belongs to stream B.",
"text_end": false,
"stream_id": "stream-B"
}

Interleave text chunks across streams as long as each message carries the correct stream_id.
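One simple interleaving strategy is round-robin over per-stream text queues: each message only needs the correct stream_id, and each stream is closed with text_end: true after its last chunk. A sketch, using the text message shape shown above (the generator name is illustrative):

```python
import json
from itertools import zip_longest


def interleaved_text_messages(texts_by_stream: dict[str, list[str]]):
    """Yield JSON text messages round-robin across streams.

    texts_by_stream maps stream_id -> list of text chunks. After all
    chunks, a final message with text_end=true is yielded per stream.
    """
    stream_ids = list(texts_by_stream)
    for round_chunks in zip_longest(*texts_by_stream.values()):
        for sid, text in zip(stream_ids, round_chunks):
            if text is not None:
                yield json.dumps(
                    {"text": text, "text_end": False, "stream_id": sid}
                )
    # Close each stream after its last chunk.
    for sid in stream_ids:
        yield json.dumps({"text": "", "text_end": True, "stream_id": sid})
```

Each yielded string can be passed directly to the WebSocket send call.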
Route messages by stream ID
The server sends audio for all active streams over the same connection.
Use the stream_id field to route each message to the correct output:
- Inspect stream_id on every incoming message.
- Route audio chunks to the matching output buffer or player.
- Track termination state separately for each stream.
- Mark a stream as finished only after receiving terminated: true for that stream_id.
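These routing rules can be captured in a small demultiplexer that keeps per-stream state and marks a stream finished only on terminated: true. A sketch: the field names match the server messages on this page, while route_message and the state-dict shape are illustrative:

```python
import base64


def route_message(msg: dict, streams: dict) -> None:
    """Route one decoded server message into per-stream state.

    streams maps stream_id -> {"audio": bytearray, "terminated": bool,
    "error": str | None}. Field names follow the server messages.
    """
    sid = msg["stream_id"]
    state = streams.setdefault(
        sid, {"audio": bytearray(), "terminated": False, "error": None}
    )
    if msg.get("error_code") is not None:
        # Stream-level error; other streams keep running.
        state["error"] = f"{msg['error_code']}: {msg.get('error_message', '')}"
    if msg.get("audio"):
        # Audio arrives base64-encoded; append decoded bytes.
        state["audio"] += base64.b64decode(msg["audio"])
    if msg.get("terminated"):
        # Only now is this stream finished.
        state["terminated"] = True
```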
Audio message:
{
"audio": "<base64-audio-bytes>",
"audio_end": false,
"stream_id": "stream-A"
}

Termination message:
{
"terminated": true,
"stream_id": "stream-A"
}

Error handling
Stream-level error
An error in one stream does not affect other streams on the connection. The server returns an error message followed by termination for that stream:
{
"stream_id": "stream-B",
"error_code": 400,
"error_message": "Stream stream-B already exists"
}

{
"terminated": true,
"stream_id": "stream-B"
}

Connection-level error
If the WebSocket connection itself is closed (e.g., network failure or server error), all active streams on that connection end immediately.
Example connection-close event:
WebSocket closed (connection-level failure): all active streams ended.

Validation error
If a message fails validation (e.g., missing fields), the server returns an error for the affected stream but keeps the connection open for other valid streams.
Example validation error message:
{
"stream_id": "stream-A",
"error_code": 400,
"error_message": "Missing required field: model"
}

Code example
Prerequisite: Complete the steps in Get started.
See on GitHub: soniox_sdk_realtime.py.
import argparse
import os
import threading
import time
from pathlib import Path
from uuid import uuid4
from soniox import SonioxClient
from soniox.errors import SonioxRealtimeError
from soniox.types import RealtimeTTSConfig
from soniox.utils import output_file_for_audio_format
VALID_SAMPLE_RATES = [8000, 16000, 24000, 44100, 48000]
VALID_BITRATES = [32000, 64000, 96000, 128000, 192000, 256000, 320000]
VALID_AUDIO_FORMATS = [
"pcm_f32le",
"pcm_s16le",
"pcm_mulaw",
"pcm_alaw",
"wav",
"aac",
"mp3",
"opus",
"flac",
]
DEFAULT_LINES = [
"Welcome to Soniox real-time Text-to-Speech. ",
"As text is streamed in, audio streams back in parallel with high accuracy, ",
"so your application can start playing speech ",
"within milliseconds of the first word.",
]
def get_config(
model: str,
language: str,
voice: str,
audio_format: str,
sample_rate: int | None,
bitrate: int | None,
stream_id: str | None,
) -> RealtimeTTSConfig:
config = RealtimeTTSConfig(
# Stream id for this realtime TTS session.
# If omitted, a random id is generated.
stream_id=stream_id or f"tts-{uuid4()}",
#
# Select the model to use.
# See: soniox.com/docs/tts/models
model=model,
#
# Set the language of the input text.
# See: soniox.com/docs/tts/languages
language=language,
#
# Select the voice to use.
# See: soniox.com/docs/tts/voices
voice=voice,
#
# Set output audio format and optional encoding parameters.
# See: soniox.com/docs/tts/api-reference/websocket-api
audio_format=audio_format,
sample_rate=sample_rate,
bitrate=bitrate,
)
return config
def run_session(
client: SonioxClient,
lines: list[str],
model: str,
language: str,
voice: str,
audio_format: str,
sample_rate: int | None,
bitrate: int | None,
stream_id: str | None,
output_path: str | None,
) -> None:
# Build a realtime Text-to-Speech session configuration.
config = get_config(
model=model,
language=language,
voice=voice,
audio_format=audio_format,
sample_rate=sample_rate,
bitrate=bitrate,
stream_id=stream_id,
)
sanitized_lines = [line.strip() for line in lines if line.strip()]
if not sanitized_lines:
raise ValueError("Text is empty after parsing.")
destination = (
Path(output_path)
if output_path
else output_file_for_audio_format(audio_format, "tts_realtime")
)
print("Connecting to Soniox...")
audio_chunks: list[bytes] = []
try:
with client.realtime.tts.connect(config=config) as session:
print("Session started.")
send_errors: list[Exception] = []
def send_worker() -> None:
try:
for line in sanitized_lines:
session.send_text_chunk(line, text_end=False)
time.sleep(0.1)
session.finish()
except Exception as exc:
send_errors.append(exc)
threading.Thread(target=send_worker, daemon=True).start()
# Receive streamed audio chunks from the websocket.
for audio_chunk in session.receive_audio_chunks():
audio_chunks.append(audio_chunk)
if send_errors:
raise RuntimeError(f"Failed to send realtime text: {send_errors[0]}")
print("Session finished.")
finally:
audio = b"".join(audio_chunks)
if audio:
destination.write_bytes(audio)
print(f"Wrote {len(audio)} bytes to {destination.resolve()}")
else:
print("No audio file was written.")
def main() -> None:
parser = argparse.ArgumentParser()
parser.add_argument(
"--line",
action="append",
default=None,
help="Line to send to realtime TTS (repeat --line for multiple lines).",
)
parser.add_argument("--model", default="tts-rt-v1-preview")
parser.add_argument("--language", default="en")
parser.add_argument("--voice", default="Adrian")
parser.add_argument("--audio_format", default="wav")
parser.add_argument("--sample_rate", type=int)
parser.add_argument("--bitrate", type=int)
parser.add_argument("--stream_id", help="Optional stream id.")
parser.add_argument(
"--output_path",
help="Optional output file path. If omitted, a timestamped path is generated.",
)
args = parser.parse_args()
if args.audio_format not in VALID_AUDIO_FORMATS:
raise ValueError(f"audio_format must be one of {VALID_AUDIO_FORMATS}")
if args.sample_rate is not None and args.sample_rate not in VALID_SAMPLE_RATES:
raise ValueError(f"sample_rate must be None or one of {VALID_SAMPLE_RATES}")
if args.bitrate is not None and args.bitrate not in VALID_BITRATES:
raise ValueError(f"bitrate must be None or one of {VALID_BITRATES}")
api_key = os.environ.get("SONIOX_API_KEY")
if not api_key:
raise RuntimeError(
"Missing SONIOX_API_KEY.\n"
"1. Get your API key at https://console.soniox.com\n"
"2. Run: export SONIOX_API_KEY=<YOUR_API_KEY>"
)
client = SonioxClient(api_key=api_key)
try:
run_session(
client=client,
lines=args.line or DEFAULT_LINES,
model=args.model,
language=args.language,
voice=args.voice,
audio_format=args.audio_format,
sample_rate=args.sample_rate,
bitrate=args.bitrate,
stream_id=args.stream_id,
output_path=args.output_path,
)
except SonioxRealtimeError as exc:
print("Soniox realtime error:", exc)
finally:
client.close()
if __name__ == "__main__":
    main()

# Generate speech with default settings (wav output)
python soniox_sdk_realtime.py --line "Hello from Soniox realtime Text-to-Speech."
# Generate raw PCM output
python soniox_sdk_realtime.py --audio_format pcm_s16le --sample_rate 24000 --output_path tts-output.pcm

See on GitHub: soniox_sdk_realtime.js.
import { RealtimeError, SonioxNodeClient } from "@soniox/node";
import fs from "fs";
import path from "path";
import { parseArgs } from "node:util";
import process from "process";
const VALID_SAMPLE_RATES = [8000, 16000, 24000, 44100, 48000];
const VALID_BITRATES = [32000, 64000, 96000, 128000, 192000, 256000, 320000];
const VALID_AUDIO_FORMATS = [
"pcm_f32le",
"pcm_s16le",
"pcm_mulaw",
"pcm_alaw",
"wav",
"aac",
"mp3",
"opus",
"flac",
];
const RAW_PCM_FORMATS = ["pcm_s16le", "pcm_f32le", "pcm_mulaw", "pcm_alaw"];
const DEFAULT_LINES = [
"Welcome to Soniox real-time Text-to-Speech. ",
"As text is streamed in, audio streams back in parallel with high accuracy, ",
"so your application can start playing speech ",
"within milliseconds of the first word.",
];
// Initialize the client.
// The API key is read from the SONIOX_API_KEY environment variable.
const client = new SonioxNodeClient();
// Resolve a concrete output file path.
// If the provided path has no extension, derive one from audio_format:
// * pcm_s16le -> .wav (we wrap the bytes in a WAV container below)
// * other pcm_* -> .pcm (raw, no container)
// * anything else -> the format name (e.g. .flac, .mp3, .opus)
function resolveOutputPath(outputPath, audioFormat) {
if (outputPath && path.extname(outputPath)) {
return outputPath;
}
const ext =
audioFormat === "pcm_s16le"
? "wav"
: RAW_PCM_FORMATS.includes(audioFormat)
? "pcm"
: audioFormat;
const base = outputPath || "tts_realtime";
return `${base}.${ext}`;
}
function pcmS16leToWav(pcm, { sampleRate, numChannels = 1 }) {
const bitsPerSample = 16;
const byteRate = sampleRate * numChannels * (bitsPerSample / 8);
const blockAlign = numChannels * (bitsPerSample / 8);
const dataSize = pcm.byteLength;
const header = Buffer.alloc(44);
header.write("RIFF", 0, "ascii");
header.writeUInt32LE(36 + dataSize, 4);
header.write("WAVE", 8, "ascii");
header.write("fmt ", 12, "ascii");
header.writeUInt32LE(16, 16);
header.writeUInt16LE(1, 20);
header.writeUInt16LE(numChannels, 22);
header.writeUInt32LE(sampleRate, 24);
header.writeUInt32LE(byteRate, 28);
header.writeUInt16LE(blockAlign, 32);
header.writeUInt16LE(bitsPerSample, 34);
header.write("data", 36, "ascii");
header.writeUInt32LE(dataSize, 40);
return Buffer.concat([header, Buffer.from(pcm)]);
}
// Build a realtime TTS stream config.
function getStreamConfig({
model,
language,
voice,
audioFormat,
sampleRate,
bitrate,
streamId,
}) {
const config = {
// Client-defined stream id (auto-generated if omitted).
...(streamId && { stream_id: streamId }),
// Select the model to use.
// See: soniox.com/docs/tts/models
model,
// Set the language of the input text.
// See: soniox.com/docs/tts/languages
language,
// Select the voice to use.
// See: soniox.com/docs/tts/voices
voice,
// Set output audio format and optional encoding parameters.
// See: soniox.com/docs/tts/api-reference/websocket-api
audio_format: audioFormat,
};
if (sampleRate !== undefined) config.sample_rate = sampleRate;
if (bitrate !== undefined) config.bitrate = bitrate;
return config;
}
async function runSession({
lines,
model,
language,
voice,
audioFormat,
sampleRate,
bitrate,
streamId,
outputPath,
}) {
const sanitizedLines = lines
.map((line) => line.trim())
.filter((line) => line.length > 0);
if (sanitizedLines.length === 0) {
throw new Error("Text is empty after parsing.");
}
const destination = resolveOutputPath(outputPath, audioFormat);
const config = getStreamConfig({
model,
language,
voice,
audioFormat,
sampleRate,
bitrate,
streamId,
});
console.log("Connecting to Soniox...");
const stream = await client.realtime.tts(config);
console.log("Session started.");
// Send text chunks in the background while receiving audio.
let sendError = null;
const sendPromise = (async () => {
try {
for (const line of sanitizedLines) {
stream.sendText(line);
// Sleep for 100 ms to simulate real-time streaming.
await new Promise((res) => setTimeout(res, 100));
}
stream.finish();
} catch (err) {
sendError = err;
}
})();
// Collect streamed audio chunks.
const audioChunks = [];
try {
for await (const chunk of stream) {
audioChunks.push(chunk);
}
} finally {
await sendPromise;
stream.close();
}
if (sendError) {
throw new Error(`Failed to send realtime text: ${sendError.message}`);
}
console.log("Session finished.");
const audio = Buffer.concat(audioChunks.map((c) => Buffer.from(c)));
if (audio.length > 0) {
// Wrap raw pcm_s16le in a WAV container so the .wav file plays everywhere.
const bytes =
audioFormat === "pcm_s16le" &&
path.extname(destination).toLowerCase() === ".wav"
? pcmS16leToWav(audio, { sampleRate })
: audio;
fs.writeFileSync(destination, bytes);
console.log(`Wrote ${bytes.length} bytes to ${path.resolve(destination)}`);
} else {
console.log("No audio file was written.");
}
}
async function main() {
const { values: argv } = parseArgs({
options: {
line: { type: "string", multiple: true },
model: { type: "string", default: "tts-rt-v1-preview" },
language: { type: "string", default: "en" },
voice: { type: "string", default: "Adrian" },
audio_format: { type: "string", default: "pcm_s16le" },
sample_rate: { type: "string" },
bitrate: { type: "string" },
stream_id: { type: "string" },
output_path: { type: "string" },
},
});
if (!VALID_AUDIO_FORMATS.includes(argv.audio_format)) {
throw new Error(
`audio_format must be one of ${VALID_AUDIO_FORMATS.join(", ")}`,
);
}
let sampleRate =
argv.sample_rate !== undefined ? Number(argv.sample_rate) : undefined;
if (sampleRate === undefined && RAW_PCM_FORMATS.includes(argv.audio_format)) {
sampleRate = 24000;
}
if (sampleRate !== undefined && !VALID_SAMPLE_RATES.includes(sampleRate)) {
throw new Error(
`sample_rate must be one of ${VALID_SAMPLE_RATES.join(", ")}`,
);
}
const bitrate = argv.bitrate !== undefined ? Number(argv.bitrate) : undefined;
if (bitrate !== undefined && !VALID_BITRATES.includes(bitrate)) {
throw new Error(`bitrate must be one of ${VALID_BITRATES.join(", ")}`);
}
try {
await runSession({
lines: argv.line && argv.line.length > 0 ? argv.line : DEFAULT_LINES,
model: argv.model,
language: argv.language,
voice: argv.voice,
audioFormat: argv.audio_format,
sampleRate,
bitrate,
streamId: argv.stream_id,
outputPath: argv.output_path,
});
} catch (err) {
if (err instanceof RealtimeError) {
console.error("Soniox realtime error:", err.message);
} else {
throw err;
}
}
}
main().catch((err) => {
console.error("Error:", err.message);
process.exit(1);
});

# Generate speech with default settings (wav output)
node soniox_sdk_realtime.js --line "Hello from Soniox realtime Text-to-Speech."
# Generate raw PCM output
node soniox_sdk_realtime.js --audio_format pcm_s16le --sample_rate 24000 --output_path tts-output.pcm

See on GitHub: soniox_realtime.py.
import argparse
import base64
import json
import os
import threading
import time
from typing import Any
from websockets import ConnectionClosedOK
from websockets.sync.client import connect
SONIOX_TTS_WEBSOCKET_URL = "wss://tts-rt.soniox.com/tts-websocket"
MODEL = "tts-rt-v1-preview"
VALID_SAMPLE_RATES = [8000, 16000, 24000, 44100, 48000]
VALID_BITRATES = [32000, 64000, 96000, 128000, 192000, 256000, 320000]
VALID_AUDIO_FORMATS = [
"pcm_f32le",
"pcm_s16le",
"pcm_mulaw",
"pcm_alaw",
"wav",
"aac",
"mp3",
"opus",
"flac",
]
DEFAULT_LINES = [
"Welcome to Soniox real-time Text-to-Speech. ",
"As text is streamed in, audio streams back in parallel with high accuracy, ",
"so your application can start playing speech ",
"within milliseconds of the first word.",
]
def get_output_path(*, output_path: str, audio_format: str) -> str:
"""
Generates the resulting output path for the given audio format.
"""
if "." in os.path.basename(output_path):
return output_path
ext = "pcm" if audio_format.startswith("pcm_") else audio_format
return f"{output_path}.{ext}"
# Get Soniox TTS config.
def get_config(
api_key: str,
stream_id: str,
language: str,
voice: str,
audio_format: str,
sample_rate: int | None,
bitrate: int | None,
) -> dict:
config: dict[str, Any] = {
# Get your API key at console.soniox.com, then run: export SONIOX_API_KEY=<YOUR_API_KEY>
"api_key": api_key,
#
# Client-defined stream id to identify this realtime request.
"stream_id": stream_id,
#
# Select the model to use.
# See: soniox.com/docs/tts/models
"model": MODEL,
#
# Set the language of the input text.
# See: soniox.com/docs/tts/languages
"language": language,
#
# Select the voice to use.
# See: soniox.com/docs/tts/voices
"voice": voice,
#
# Audio format.
# See: soniox.com/docs/tts/audio-formats
"audio_format": audio_format,
}
if sample_rate is not None:
config["sample_rate"] = sample_rate
if bitrate is not None:
config["bitrate"] = bitrate
return config
def get_text_request(text: str, stream_id: str, text_end: bool) -> dict:
return {
"text": text,
"text_end": text_end,
"stream_id": stream_id,
}
# Stream text lines to the websocket.
def stream_text(lines: list[str], stream_id: str, ws) -> None:
for line in lines:
clean_line = line.strip()
if not clean_line:
continue
ws.send(json.dumps(get_text_request(clean_line, stream_id, text_end=False)))
# Sleep for 100 ms to simulate real-time streaming.
time.sleep(0.1)
# Send text_end=true after the last chunk.
ws.send(json.dumps(get_text_request("", stream_id, text_end=True)))
def send_requests(
ws,
api_key: str,
lines: list[str],
language: str,
voice: str,
audio_format: str,
sample_rate: int | None,
bitrate: int | None,
stream_id: str,
) -> None:
config = get_config(
api_key=api_key,
stream_id=stream_id,
language=language,
voice=voice,
audio_format=audio_format,
sample_rate=sample_rate,
bitrate=bitrate,
)
ws.send(json.dumps(config))
stream_text(lines, stream_id, ws)
def run_session(
api_key: str,
lines: list[str],
language: str,
voice: str,
audio_format: str,
sample_rate: int | None,
bitrate: int | None,
stream_id: str,
output_path: str,
) -> None:
print("Connecting to Soniox...")
with connect(SONIOX_TTS_WEBSOCKET_URL) as ws:
send_errors: list[Exception] = []
def send_worker() -> None:
try:
send_requests(
ws,
api_key,
lines,
language,
voice,
audio_format,
sample_rate,
bitrate,
stream_id,
)
except Exception as exc:
send_errors.append(exc)
# Send config and text in the background while receiving responses.
threading.Thread(
target=send_worker,
daemon=True,
).start()
print("Session started.")
audio_chunks: list[bytes] = []
try:
while True:
if send_errors:
raise RuntimeError(f"Failed to send realtime requests: {send_errors[0]}")
message = ws.recv()
res = json.loads(message)
# Error from server.
if res.get("error_code") is not None:
print(f"Error: {res['error_code']} - {res['error_message']}")
break
# Collect audio bytes from base64-encoded chunks.
audio_b64 = res.get("audio")
if audio_b64:
audio_chunks.append(base64.b64decode(audio_b64))
# Session finished.
if res.get("terminated"):
break
except ConnectionClosedOK:
# Normal, server closed after finished.
pass
except KeyboardInterrupt:
print("\nInterrupted by user.")
except Exception as e:
print(f"Error: {e}")
finally:
audio_data = b"".join(audio_chunks)
if audio_data:
destination = get_output_path(output_path=output_path, audio_format=audio_format)
with open(destination, "wb") as fh:
fh.write(audio_data)
print(f"Wrote {len(audio_data)} bytes to {destination}")
else:
print("No audio file was written.")
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"--line",
action="append",
default=None,
help="Line to send to realtime TTS (repeat --line for multiple lines).",
)
parser.add_argument("--language", default="en")
parser.add_argument("--voice", default="Adrian")
parser.add_argument("--audio_format", default="wav")
parser.add_argument("--stream_id", default="stream-1")
parser.add_argument("--output_path", default="tts-ws")
parser.add_argument("--sample_rate", type=int)
parser.add_argument("--bitrate", type=int)
args = parser.parse_args()
if args.audio_format not in VALID_AUDIO_FORMATS:
raise ValueError(f"audio_format must be one of {VALID_AUDIO_FORMATS}")
if args.sample_rate is not None and args.sample_rate not in VALID_SAMPLE_RATES:
raise ValueError(f"sample_rate must be None or one of {VALID_SAMPLE_RATES}")
if args.bitrate is not None and args.bitrate not in VALID_BITRATES:
raise ValueError(f"bitrate must be None or one of {VALID_BITRATES}")
api_key = os.environ.get("SONIOX_API_KEY")
if not api_key:
raise RuntimeError(
"Missing SONIOX_API_KEY.\n"
"1. Get your API key at https://console.soniox.com\n"
"2. Run: export SONIOX_API_KEY=<YOUR_API_KEY>"
)
run_session(
api_key=api_key,
lines=args.line or DEFAULT_LINES,
language=args.language,
voice=args.voice,
audio_format=args.audio_format,
sample_rate=args.sample_rate,
bitrate=args.bitrate,
stream_id=args.stream_id,
output_path=args.output_path,
)
if __name__ == "__main__":
    main()

# Generate speech with default settings (wav output)
python soniox_realtime.py --line "Hello from Soniox websocket Text-to-Speech."
# Generate raw PCM output
python soniox_realtime.py --audio_format pcm_s16le --sample_rate 24000 --output_path tts-output

See on GitHub: soniox_realtime.js.
import fs from "fs";
import path from "path";
import WebSocket from "ws";
import { parseArgs } from "node:util";
import process from "process";
const SONIOX_TTS_WEBSOCKET_URL = "wss://tts-rt.soniox.com/tts-websocket";
const MODEL = "tts-rt-v1-preview";
const VALID_SAMPLE_RATES = [8000, 16000, 24000, 44100, 48000];
const VALID_BITRATES = [32000, 64000, 96000, 128000, 192000, 256000, 320000];
const VALID_AUDIO_FORMATS = [
"pcm_f32le",
"pcm_s16le",
"pcm_mulaw",
"pcm_alaw",
"wav",
"aac",
"mp3",
"opus",
"flac",
];
const RAW_PCM_FORMATS = ["pcm_s16le", "pcm_f32le", "pcm_mulaw", "pcm_alaw"];
const DEFAULT_LINES = [
"Welcome to Soniox real-time Text-to-Speech. ",
"As text is streamed in, audio streams back in parallel with high accuracy, ",
"so your application can start playing speech ",
"within milliseconds of the first word.",
];
// Resolve a concrete output file path.
// If the provided path has no extension, derive one from audio_format:
// * pcm_s16le -> .wav (we wrap the bytes in a WAV container below)
// * other pcm_* -> .pcm (raw, no container)
// * anything else -> the format name (e.g. .flac, .mp3, .opus)
function resolveOutputPath(outputPath, audioFormat) {
if (outputPath && path.extname(outputPath)) {
return outputPath;
}
const ext =
audioFormat === "pcm_s16le"
? "wav"
: RAW_PCM_FORMATS.includes(audioFormat)
? "pcm"
: audioFormat;
const base = outputPath || "tts-ws";
return `${base}.${ext}`;
}
function pcmS16leToWav(pcm, { sampleRate, numChannels = 1 }) {
const bitsPerSample = 16;
const byteRate = sampleRate * numChannels * (bitsPerSample / 8);
const blockAlign = numChannels * (bitsPerSample / 8);
const dataSize = pcm.byteLength;
const header = Buffer.alloc(44);
header.write("RIFF", 0, "ascii");
header.writeUInt32LE(36 + dataSize, 4);
header.write("WAVE", 8, "ascii");
header.write("fmt ", 12, "ascii");
header.writeUInt32LE(16, 16);
header.writeUInt16LE(1, 20);
header.writeUInt16LE(numChannels, 22);
header.writeUInt32LE(sampleRate, 24);
header.writeUInt32LE(byteRate, 28);
header.writeUInt16LE(blockAlign, 32);
header.writeUInt16LE(bitsPerSample, 34);
header.write("data", 36, "ascii");
header.writeUInt32LE(dataSize, 40);
return Buffer.concat([header, Buffer.from(pcm)]);
}
// Get Soniox TTS config.
function getConfig({
apiKey,
streamId,
language,
voice,
audioFormat,
sampleRate,
bitrate,
}) {
const config = {
// Get your API key at console.soniox.com, then run: export SONIOX_API_KEY=<YOUR_API_KEY>
api_key: apiKey,
// Client-defined stream id to identify this realtime request.
stream_id: streamId,
// Select the model to use.
// See: soniox.com/docs/tts/models
model: MODEL,
// Set the language of the input text.
// See: soniox.com/docs/tts/languages
language,
// Select the voice to use.
// See: soniox.com/docs/tts/voices
voice,
// Audio format.
// See: soniox.com/docs/tts/audio-formats
audio_format: audioFormat,
};
if (sampleRate !== undefined) config.sample_rate = sampleRate;
if (bitrate !== undefined) config.bitrate = bitrate;
return config;
}
function getTextRequest(text, streamId, textEnd) {
return {
text,
text_end: textEnd,
stream_id: streamId,
};
}
// Stream text lines to the websocket.
async function streamText(lines, streamId, ws) {
for (const line of lines) {
const cleanLine = line.trim();
if (!cleanLine) continue;
ws.send(JSON.stringify(getTextRequest(cleanLine, streamId, false)));
// Sleep for 100 ms to simulate real-time streaming.
await new Promise((res) => setTimeout(res, 100));
}
// Send text_end=true after the last chunk.
ws.send(JSON.stringify(getTextRequest("", streamId, true)));
}
function runSession({
apiKey,
lines,
language,
voice,
audioFormat,
sampleRate,
bitrate,
streamId,
outputPath,
}) {
return new Promise((resolve, reject) => {
console.log("Connecting to Soniox...");
const ws = new WebSocket(SONIOX_TTS_WEBSOCKET_URL);
const audioChunks = [];
const finalize = (err) => {
const destination = resolveOutputPath(outputPath, audioFormat);
if (audioChunks.length > 0) {
const audio = Buffer.concat(audioChunks);
// Wrap raw pcm_s16le in a WAV container so the .wav file plays everywhere.
const bytes =
audioFormat === "pcm_s16le" &&
path.extname(destination).toLowerCase() === ".wav"
? pcmS16leToWav(audio, { sampleRate })
: audio;
fs.writeFileSync(destination, bytes);
console.log(`Wrote ${bytes.length} bytes to ${path.resolve(destination)}`);
} else {
console.log("No audio file was written.");
}
if (err) reject(err);
else resolve();
};
ws.on("open", () => {
const config = getConfig({
apiKey,
streamId,
language,
voice,
audioFormat,
sampleRate,
bitrate,
});
// Send first request with config.
ws.send(JSON.stringify(config));
// Start streaming text in the background.
streamText(lines, streamId, ws).catch((err) => {
console.error("Text stream error:", err);
});
console.log("Session started.");
});
ws.on("message", (msg) => {
let res;
try {
res = JSON.parse(msg.toString());
} catch {
return;
}
// Error from server.
// See: https://soniox.com/docs/tts/api-reference/websocket-api#error-response
if (res.error_code) {
console.error(`Error: ${res.error_code} - ${res.error_message}`);
ws.close();
return;
}
// Collect audio bytes from base64-encoded chunks.
if (res.audio) {
audioChunks.push(Buffer.from(res.audio, "base64"));
}
// Session finished.
if (res.terminated) {
console.log("Session finished.");
ws.close();
}
});
ws.on("close", () => {
finalize(null);
});
ws.on("error", (err) => {
console.error("WebSocket error:", err.message);
finalize(err);
});
});
}
async function main() {
const { values: argv } = parseArgs({
options: {
line: { type: "string", multiple: true },
language: { type: "string", default: "en" },
voice: { type: "string", default: "Adrian" },
audio_format: { type: "string", default: "pcm_s16le" },
stream_id: { type: "string", default: "stream-1" },
output_path: { type: "string", default: "tts-ws" },
sample_rate: { type: "string" },
bitrate: { type: "string" },
},
});
if (!VALID_AUDIO_FORMATS.includes(argv.audio_format)) {
throw new Error(
`audio_format must be one of ${VALID_AUDIO_FORMATS.join(", ")}`,
);
}
let sampleRate =
argv.sample_rate !== undefined ? Number(argv.sample_rate) : undefined;
if (sampleRate === undefined && RAW_PCM_FORMATS.includes(argv.audio_format)) {
sampleRate = 24000;
}
if (sampleRate !== undefined && !VALID_SAMPLE_RATES.includes(sampleRate)) {
throw new Error(
`sample_rate must be one of ${VALID_SAMPLE_RATES.join(", ")}`,
);
}
const bitrate = argv.bitrate !== undefined ? Number(argv.bitrate) : undefined;
if (bitrate !== undefined && !VALID_BITRATES.includes(bitrate)) {
throw new Error(`bitrate must be one of ${VALID_BITRATES.join(", ")}`);
}
const apiKey = process.env.SONIOX_API_KEY;
if (!apiKey) {
throw new Error(
"Missing SONIOX_API_KEY.\n" +
"1. Get your API key at https://console.soniox.com\n" +
"2. Run: export SONIOX_API_KEY=<YOUR_API_KEY>",
);
}
await runSession({
apiKey,
lines: argv.line && argv.line.length > 0 ? argv.line : DEFAULT_LINES,
language: argv.language,
voice: argv.voice,
audioFormat: argv.audio_format,
sampleRate,
bitrate,
streamId: argv.stream_id,
outputPath: argv.output_path,
});
}
main().catch((err) => {
console.error("Error:", err.message);
process.exit(1);
});

# Generate speech with default settings (wav output)
node soniox_realtime.js --line "Hello from Soniox websocket Text-to-Speech."
# Generate raw PCM output
node soniox_realtime.js --audio_format pcm_s16le --sample_rate 24000 --output_path tts-output