ServiceNow Insights Podcast

EVA - A Framework for Evaluating Voice Agents by ServiceNow


Voice AI agent evaluation — why it's fundamentally harder than text, how cascade failures derail conversations invisibly, and ServiceNow's open-source framework to establish industry evaluation standards. Featuring real audio examples showing authentication failures, leaked reasoning, and latency problems.

WHAT WE COVER 

TARA BOGAVELLI — Research Engineer, ServiceNow
Leading the open-source voice agent evaluation framework. Explains why existing benchmarks don't measure what matters and what ServiceNow is releasing to establish industry standards.

KATRINA STANKIEWICZ — Staff Machine Learning Engineer, ServiceNow
Cascade model architecture expert. Breaks down STT → LLM → TTS failure modes, named entity transcription challenges, and real audio example analysis.

GABRIELLE GAUTHIER MELANÇON — Staff Applied Research Scientist, ServiceNow
Multi-language evaluation specialist. Reveals why Large Audio Language Models lag behind, the native speaker requirement, and bot-to-bot simulation methodology. 

CHAPTERS
0:00 Introduction — The evaluation gap
1:11 ServiceNow's Open-Source Framework Announcement — Tara Bogavelli
2:43 Meet the Researchers
3:43 Voice-Specific Challenges — Tara Bogavelli
5:03 Cascade Architecture: STT → LLM → TTS — Katrina Stankiewicz
7:57 The Named Entity Problem — Katrina Stankiewicz
10:06 Evaluation Metrics: Accuracy vs Experience — Gabrielle Gauthier Melançon
11:23 Bot-to-Bot Testing at Scale — Gabrielle Gauthier Melançon 
14:30 The LALM Gap: Why Audio AI Judges Struggle — Tara Bogavelli
16:57 Real Audio Example: Flight Rebooking Gone Wrong
21:58 Breaking Down the Failures — Katrina Stankiewicz
28:30 Wrap-Up & Resources

KEY INSIGHTS

The Cascade Failure Problem: STT → LLM → TTS errors propagate invisibly.
Named Entity Transcription: the #1 enterprise blocker. Names, confirmation codes, and emails break authentication.
Accuracy vs Experience: perfect task completion means nothing if users hang up over a poor experience.
The LALM Gap: Large Audio Language Models lag behind text LLMs, so human evaluators remain essential.
Latency Kills Conversations: five-second pauses make users think the call dropped, breaking the experience even when the task completes.
Open-Source Framework: ServiceNow is releasing evaluation tools, metrics, and bot-to-bot simulation methodology for the industry.
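The cascade failure mode discussed in the episode can be sketched in a few lines. This is a hypothetical toy pipeline, not the EVA framework's API: the stand-in stt(), llm(), and tts() functions simply show how one named-entity transcription error early in the chain silently derails everything downstream.

```python
# Toy sketch of a cascade voice-agent turn (STT -> LLM -> TTS).
# All functions are hypothetical stand-ins for illustration only.

def stt(audio: str) -> str:
    # Stand-in STT: mis-hears the confirmation code "AB12" as "AV12",
    # the kind of named-entity error highlighted in the episode.
    return audio.replace("AB12", "AV12")

def llm(transcript: str) -> str:
    # Stand-in agent logic: looks up a booking by confirmation code.
    bookings = {"AB12": "Your flight is rebooked for 3pm."}
    code = transcript.split()[-1]
    return bookings.get(code, f"Sorry, I can't find booking {code}.")

def tts(text: str) -> str:
    # Stand-in TTS: passes the text through unchanged.
    return text

def run_turn(user_audio: str) -> str:
    # Each stage consumes the previous stage's output, so the STT
    # error reaches the user as an authentication failure.
    return tts(llm(stt(user_audio)))

print(run_turn("My confirmation code is AB12"))
# -> Sorry, I can't find booking AV12.
```

The point of the sketch: no single component "crashed", yet the conversation failed, which is why end-to-end evaluation (rather than per-component word error rates) is the episode's focus.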

LEARN MORE

Website: https://servicenow.github.io/eva/
GitHub: https://github.com/servicenow/eva
Blog Post: https://huggingface.co/blog/ServiceNow-AI/eva
Dataset: https://huggingface.co/datasets/ServiceNow-AI/eva

ABOUT

Hosted by Bobby Brill. The ServiceNow Insights podcast explores AI research, real-world applications, and the people building the future of work. #VoiceAI #AIEvaluation #ServiceNow #MachineLearning #OpenSource #ConversationalAI #STT #TTS #LLM #VoiceAgents #AIResearch #Podcast
