Voice AI Fundamentals: The 500ms Threshold

Param Harrison

In our previous text-based agents, a 2-second delay was acceptable. The user sees a "typing..." indicator and waits.

In Voice AI, 2 seconds is an eternity.

If you say "Hello" and the bot waits 2 seconds to reply, the illusion breaks immediately. You assume it didn't hear you, or you start talking over it. The conversation collapses.

This post is for engineers moving from text bots to Voice Agents. We will explore the unique architecture required to achieve sub-500ms latency, moving from standard HTTP request chains to WebRTC streaming pipelines.

The problem: The "HTTP Chain" is too slow

A naive approach to building a voice bot is to simply chain standard REST APIs together.

  1. Record Audio -> Save to WAV file.
  2. Upload to STT API (e.g., OpenAI Whisper). -> Wait for text.
  3. Send Text to LLM API (e.g., GPT-4o). -> Wait for token.
  4. Send Text to TTS API (e.g., ElevenLabs). -> Wait for audio file.
  5. Download and Play.

graph LR
    A[User Speaks] --> B(STT API)
    B --> C(LLM API)
    C --> D(TTS API)
    D --> E[User Hears]
    
    style A fill:#e3f2fd,stroke:#0d47a1
    style E fill:#e8f5e9,stroke:#388e3c
    style B fill:#ffebee,stroke:#b71c1c
    style C fill:#ffebee,stroke:#b71c1c
    style D fill:#ffebee,stroke:#b71c1c

The Math: 1s (Transcribe) + 1s (Think) + 1s (Synthesize) + Network Overhead = ~4s Latency.
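
Concretely, that chain looks something like the sketch below. It uses the OpenAI Python SDK for all three hops (any provider behaves the same way); the point is that every call blocks until the previous step has produced its complete output.

# naive_voice_bot.py: the blocking "HTTP chain" (illustrative sketch only)
from openai import OpenAI

client = OpenAI()

def respond_to_audio(wav_path: str) -> bytes:
    # 1. Transcribe: upload the entire recording, wait for the full transcript
    with open(wav_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. Think: wait for the complete LLM response
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": transcript.text}],
    )
    reply_text = completion.choices[0].message.content

    # 3. Synthesize: wait for the entire audio file before playing anything
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
    return speech.content  # raw audio bytes; only now can playback start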

In human conversation, the typical gap between turns is roughly 200-500ms. The HTTP chain is unusable.

The solution: The streaming pipeline (WebRTC)

To fix this, we need to stop thinking in "Files" and start thinking in "Streams."

We use LiveKit as our infrastructure layer. It handles WebRTC, allowing us to stream tiny packets of audio data in real-time, rather than waiting for full files.

We build a pipeline where every component streams:

  1. VAD (Voice Activity Detection): Detects when the user stops talking, within milliseconds.
  2. STT (Speech-to-Text): Streams partial transcripts while you speak.
  3. LLM: Receives text streams and outputs token streams.
  4. TTS (Text-to-Speech): Starts playing audio as soon as the first sentence is generated (not the whole paragraph).

graph TD
    A[User Audio Stream] --> B{VAD}
    B -- "Silence Detected: User stopped" --> C(STT Stream)
    C -- "Partial Text: Hello..." --> D(LLM Stream)
    D -- "Token Stream: Hi..." --> E(TTS Stream)
    E -- "Audio Byte Stream" --> F[User Hears]
    
    style B fill:#fff9c4,stroke:#fbc02d
    style F fill:#e8f5e9,stroke:#388e3c
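
Step 4 in the list above is where most of the perceived latency disappears: instead of waiting for the whole LLM answer, you flush text to TTS at every sentence boundary. Here is a minimal sketch of that idea using the OpenAI streaming API; speak() is a stand-in for whatever pushes text into your TTS stream, not a real library call.

from openai import OpenAI

client = OpenAI()

def stream_reply(user_text: str, speak) -> None:
    # Stream LLM tokens and hand each completed sentence to TTS immediately.
    buffer = ""
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_text}],
        stream=True,  # tokens arrive as they are generated
    )
    for chunk in stream:
        buffer += chunk.choices[0].delta.content or ""
        # Flush on sentence boundaries so audio starts after the *first* sentence
        while any(p in buffer for p in ".!?"):
            idx = min(buffer.index(p) for p in ".!?" if p in buffer) + 1
            speak(buffer[:idx].strip())
            buffer = buffer[idx:]
    if buffer.strip():
        speak(buffer.strip())  # flush whatever is left

# Example: stream_reply("Book me a table for two", speak=print)

The VoicePipelineAgent in the next section does this buffering for you; the takeaway is that audio can start playing while the LLM is still generating.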

The "How": Building with LiveKit agents

We don't need to write raw WebRTC code (which is notoriously difficult). We use the livekit-agents library in Python.

1. Define the pipeline

We connect the best-in-class providers for each step.

from livekit.agents import AutoSubscribe, JobContext, WorkerOptions, cli
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import deepgram, openai, silero

async def entrypoint(ctx: JobContext):
    # 1. Connect to the Room (Audio Only)
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)

    # 2. Define the Assistant
    # This VoicePipelineAgent handles the complex buffering and 
    # threading between STT, LLM, and TTS automatically.
    agent = VoicePipelineAgent(
        vad=silero.VAD.load(),            # Voice Activity Detection (The Trigger)
        stt=deepgram.STT(),               # Speech-to-Text (The Ear)
        llm=openai.LLM(),                 # Large Language Model (The Brain)
        tts=openai.TTS(),                 # Text-to-Speech (The Mouth)
    )

    # 3. Start the Agent
    agent.start(ctx.room)
    
    # 4. Say Hello (Latency < 500ms)
    await agent.say("Hello! I'm your booking assistant. How can I help?", allow_interruptions=True)

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))

2. Handling interruptions (Barge-In)

The magic of this pipeline is Interruption Handling (often called "Barge-In").

If the bot is speaking a long paragraph: "I can certainly help you with that. First, I need to check the calendar for..."

And the user says: "Actually, wait."

  1. The VAD detects user speech.
  2. The Pipeline instantly kills the TTS stream.
  3. The Pipeline clears the LLM buffer.
  4. The Agent goes back to Listening mode.

This makes the bot feel human. In the code above, this is enabled simply by allow_interruptions=True.
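
Under the hood, barge-in is essentially task cancellation. The sketch below shows the mechanism in plain asyncio; play_tts_stream and the user_is_speaking event are placeholders for illustration, not LiveKit APIs.

import asyncio

async def play_tts_stream(sentences):
    # Placeholder: in a real agent this writes audio frames to the room
    for sentence in sentences:
        print(f"speaking: {sentence}")
        await asyncio.sleep(1.0)  # simulate playback time

async def speak_interruptible(sentences, user_is_speaking: asyncio.Event) -> None:
    playback = asyncio.create_task(play_tts_stream(sentences))
    interrupted = asyncio.create_task(user_is_speaking.wait())
    done, pending = await asyncio.wait(
        {playback, interrupted}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()  # whichever didn't finish first gets killed
    if interrupted in done:
        # The VAD fired: TTS is cancelled mid-sentence, queued LLM output
        # would be dropped here, and the agent goes back to listening.
        print("interrupted -> back to listening")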

Deep dive: The "VAD" (The hidden hero)

The most critical component isn't the LLM; it's the VAD (Voice Activity Detection). It acts as the "Enter Key" for voice.

If the VAD is too sensitive, the bot will interrupt you while you take a breath.

If the VAD is too slow, there will be awkward silence before the bot replies.

We use Silero VAD, which runs locally on the CPU for near-zero latency.

# Configuring VAD for a natural feel (livekit-plugins-silero takes durations in seconds)
vad=silero.VAD.load(
    min_silence_duration=0.25,    # Wait 250ms of silence before assuming the user is done
    prefix_padding_duration=0.3,  # Keep a little audio from just before speech so words aren't clipped
)
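
To sanity-check the "local CPU, near-zero latency" claim, you can load the upstream Silero model from torch.hub and time a single frame. This is a rough benchmark sketch against the raw silero-vad interface (512-sample frames at 16kHz), not the LiveKit plugin.

import time
import torch

# Load the upstream Silero VAD model (a few MB, CPU-only is fine)
model, _utils = torch.hub.load("snakers4/silero-vad", "silero_vad")

frame = torch.zeros(512)  # one 32ms frame at 16kHz
start = time.perf_counter()
speech_prob = model(frame, 16000).item()  # probability this frame contains speech
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"speech prob={speech_prob:.2f}, inference took {elapsed_ms:.1f}ms")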

Summary: Why streaming matters

| Component       | HTTP Chain               | Streaming Pipeline         |
|-----------------|--------------------------|----------------------------|
| STT             | Wait for full audio file | Stream partial transcripts |
| LLM             | Wait for complete text   | Stream tokens as generated |
| TTS             | Wait for full response   | Stream audio chunks        |
| Total Latency   | ~4 seconds               | <500ms                     |
| User Experience | Unnatural pauses         | Natural conversation       |

Challenge for you

Scenario: You are building a voice bot for a Drive-Thru. The environment is noisy (car engines, wind).

The Problem:

The standard VAD settings keep triggering on the sound of the car engine, causing the bot to say "I'm sorry, I didn't catch that" while the user is silent.

Your Task:

  1. Look at the silero.VAD or livekit documentation.
  2. Which parameter would you adjust to ignore background noise? (Hint: Look for threshold or noise_gate settings).
  3. How would you change min_silence_duration_ms? Should it be longer (to allow for thinking time while ordering) or shorter (for speed)?

Key takeaways

  • HTTP is for text, WebRTC is for voice: You cannot build a good voice agent using standard request/response APIs. You need streaming.
  • Latency is cumulative: Saving 100ms on STT, 100ms on LLM, and 100ms on TTS adds up to a massive difference in "feel."
  • Interruption is mandatory: A voice agent that cannot be interrupted feels like a lecture, not a conversation.
  • VAD is the trigger: Tuning your Voice Activity Detection is the difference between a snappy bot and an annoying one.
  • Streaming enables sub-500ms latency: By processing audio in real-time streams rather than waiting for complete files, you achieve natural conversation timing.
  • Barge-in makes conversations natural: Allowing users to interrupt the bot mid-sentence creates a human-like interaction pattern.
  • Local VAD reduces latency: Running VAD on CPU locally eliminates network round-trips for voice detection.

For more on real-time systems, see our streaming guide and our multi-agent coordination guide.


For more on building production AI systems, check out our AI Bootcamp for Software Engineers.
