About Resonance
Resonance is an adaptive AI conversational agent that detects emotional cues in real time and self-corrects its behavior to keep the dialogue productive.
How It Works
You speak into the microphone (or capture audio from a live call in Teams, Meet, Zoom, or any browser-based platform). Hume EVI transcribes the audio and returns prosody scores across 48 emotion dimensions.
The browser extracts confusion, doubt, and frustration metrics from each turn and sends them to the server-side Policy Engine.
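The extraction step can be sketched in client-side JavaScript. The message field names and the frustration proxy below are assumptions of this sketch, not the verbatim Hume EVI schema: EVI's prosody output is treated here as a plain object mapping emotion names to 0..1 scores, and "frustration" (not itself one of the 48 named dimensions) is approximated from related signals.

```javascript
// Hypothetical helper: pull the three distress-related metrics from one
// turn's prosody scores (assumed shape: { "Confusion": 0.7, "Doubt": 0.2, ... }).
function extractDistressMetrics(scores) {
  const get = (name) => scores[name] ?? 0;
  return {
    confusion: get("Confusion"),
    doubt: get("Doubt"),
    // "Frustration" is not a named Hume dimension; this sketch approximates
    // it as the stronger of two related signals.
    frustration: Math.max(get("Anger"), get("Distress")),
  };
}

// The resulting metrics object is what the browser would then POST to the
// server-side Policy Engine, e.g. via fetch("/api/policy/turn", ...).
```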
The Policy Engine compares current scores against calibrated thresholds and tracks momentum across a rolling window of the last 10 turns.
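The rolling-momentum idea can be sketched as follows. The actual Policy Engine is server-side C#; this JavaScript version is illustrative only, and the window size, thresholds, and momentum formula are assumptions, not the shipped values.

```javascript
// Minimal sketch: keep the last N composite distress scores and compare the
// newer half of the window against the older half to detect a rising trend.
class MomentumTracker {
  constructor(windowSize = 10) {
    this.windowSize = windowSize;
    this.turns = []; // one composite distress score per turn, 0..1
  }

  add(score) {
    this.turns.push(score);
    if (this.turns.length > this.windowSize) this.turns.shift();
  }

  // Momentum: mean of the newer half minus mean of the older half.
  // Positive values mean distress is trending upward.
  momentum() {
    if (this.turns.length < 2) return 0;
    const mid = Math.floor(this.turns.length / 2);
    const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
    return mean(this.turns.slice(mid)) - mean(this.turns.slice(0, mid));
  }

  // Fires on an absolute threshold or on a rising trend (illustrative values).
  isDistressed(threshold = 0.6, trend = 0.15) {
    const latest = this.turns[this.turns.length - 1] ?? 0;
    return latest >= threshold || this.momentum() >= trend;
  }
}
```

Comparing window halves rather than single turns smooths over one-off spikes, which is the point of tracking momentum instead of reacting to each turn in isolation.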
If distress is detected, the Policy Engine injects a new conversational strategy and system prompt in real time; the next AI response reflects the adaptation.
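The injection step amounts to sending a settings update over the EVI WebSocket. The message shape below is an assumption of this sketch modeled on Hume's session-settings update; check the EVI documentation for the exact schema before relying on the field names.

```javascript
// Hypothetical builder for the mid-session adaptation message
// (type and field names are assumptions, not a confirmed EVI schema).
function buildAdaptationMessage(strategyPrompt) {
  return JSON.stringify({
    type: "session_settings",
    system_prompt: strategyPrompt,
  });
}

// Hypothetical usage once a strategy change is decided:
// socket.send(buildAdaptationMessage("Use shorter sentences and step-by-step guidance."));
```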
Strategy Catalog
Standard empathetic assistant. No distress detected.
Shorter sentences, step-by-step guidance, plain language. Triggered by confusion.
Confident tone, evidence-backed, decisive responses. Triggered by doubt.
Acknowledgment, calm pacing, solution-focused. Triggered by frustration.
Warm check-ins and open questions. Triggered by composite distress across multiple signals.
Pre-emptive clarification before distress peaks. Triggered by rising trend.
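The catalog above can be sketched as a priority-ordered selector. The strategy names and thresholds here are illustrative assumptions (the document does not name the strategies, and the real engine lives server-side); only the trigger-to-behavior mapping comes from the catalog.

```javascript
// Hypothetical dispatcher: map one turn's metrics plus momentum to a strategy.
// Checks run in priority order; values of 0.6 / 0.15 are illustrative.
function selectStrategy({ confusion, doubt, frustration, momentum }) {
  const composite = (confusion + doubt + frustration) / 3;
  if (momentum >= 0.15 && composite < 0.6) return "preempt";  // rising trend, before the peak
  if (composite >= 0.6) return "reconnect";                   // composite distress
  if (frustration >= 0.6) return "deescalate";
  if (doubt >= 0.6) return "reassure";
  if (confusion >= 0.6) return "simplify";
  return "baseline";                                          // no distress detected
}
```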
Tech Stack
ASP.NET Core MVC on .NET 10. Server-side Policy Engine with rolling momentum tracking.
Hume AI EVI v3 via WebSocket. Real-time prosody analysis across 48 emotion dimensions.
Vanilla JavaScript (ES modules). Web Audio API for microphone and tab capture. CSS Grid card-based layout.