METRICS-LAB · OpenAI gpt-realtime-2 demo

Voice-to-action: an AI analyst that drives a dashboard.

This is a working demo of OpenAI's gpt-realtime-2 voice model embedded in a fake product-analytics workspace called MetricLoop. You speak to it like a colleague and it operates the dashboard for you — applying filters, running a root-cause investigation, and drafting an engineering ticket — without you touching the mouse.

Important — the agent stays silent on purpose. Unlike most "talk to ChatGPT" demos, this agent doesn't narrate every action. You give it a task, it does it visually on the dashboard, and it stays quiet. Only when you explicitly ask for a verbal explanation does it actually speak. This is the whole point of the demo: voice as a control surface, not a chat partner.
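The silent-by-default behavior comes from the session instructions rather than anything structural. A minimal sketch of what such a config could look like — the field names follow OpenAI's Realtime session shape, but the prompt wording here is illustrative, not the demo's actual config:

```typescript
// Hypothetical session config producing the "act silently, speak only
// on request" behavior. Prompt text is an assumption, not the demo's.
const sessionConfig = {
  model: "gpt-realtime-2",
  instructions: [
    "You operate the MetricLoop dashboard via tool calls.",
    "After completing an action, stay silent; do not narrate.",
    "Only produce audio when the user explicitly asks you to explain",
    "something out loud.",
  ].join(" "),
};
```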

What you'll see

  • A product-analytics dashboard for a fake company "Supply Co."
  • Left rail: nav. Center: filters, KPIs, funnel chart, voice-search conversion table, cohorts, session replays.
  • Right rail: a "Root-cause investigation" notebook artifact that fills in once you ask the model to investigate.
  • Bottom bar: assistant status + action count + a Stop button.

What to expect

  • The agent calls multiple tools in parallel (e.g. when you list 3 filters in one breath, you'll see 3 filter pills appear at once instead of one at a time).
  • Filter pills show up below the dropdown grid as the agent applies them. The action counter ticks up.
  • When you ask for an investigation, a "Root-cause artifact generated" callout appears and the right-rail notebook fills in.
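The "pills appear at once" effect falls out of applying a whole turn's tool calls in one pass. A sketch of that dispatch logic — `ToolCall`, `DashboardState`, and `applyToolCalls` are illustrative names, not the demo's actual code:

```typescript
// All tool calls from one model turn arrive together and are applied
// in a single state update, so three filter pills render at once.
type ToolCall = { name: string; args: Record<string, string> };
type DashboardState = { filterPills: string[]; actionCount: number };

function applyToolCalls(state: DashboardState, calls: ToolCall[]): DashboardState {
  const pills = calls
    .filter((c) => c.name === "set_filter")
    .map((c) => `${c.args.dimension}: ${c.args.value}`);
  return {
    filterPills: [...state.filterPills, ...pills],
    actionCount: state.actionCount + calls.length, // bottom-bar counter
  };
}

// One spoken request ("voice search, first-time shoppers, footwear")
// yields three parallel set_filter calls in the same turn:
const next = applyToolCalls({ filterPills: [], actionCount: 0 }, [
  { name: "set_filter", args: { dimension: "channel", value: "Voice search" } },
  { name: "set_filter", args: { dimension: "segment", value: "First-time shoppers" } },
  { name: "set_filter", args: { dimension: "category", value: "Footwear" } },
]);
// next.filterPills has 3 entries; next.actionCount is 3
```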

Try this script (in order)

  1. "Filter by Europe — that's what I'm hearing issues about." silent · 1 set_filter call
  2. "Suggest other filters that look relevant." silent · suggest_filters then several parallel set_filter calls
  3. "Filter by voice search, first-time shoppers, and footwear." silent · 3 parallel set_filter calls in one turn
  4. "Kick off the root cause investigation. Compare Mobile Safari to Chrome." silent · run_root_cause_investigation, notebook appears
  5. "MetricLoop, give me a two-sentence overview out loud explaining this so I can pass it to engineering." SPEAKS · summarize_investigation, ~2 sentences
  6. "Generate the engineering ticket." silent · generate_engineering_ticket, ticket card appears
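The tools named in the script could be declared to the model roughly as below. The parameter schemas are assumptions for illustration; the demo's real ones may differ:

```typescript
// Hypothetical tool declarations for the calls named in the script.
const tools = [
  {
    type: "function",
    name: "set_filter",
    description: "Apply one dashboard filter (renders a filter pill).",
    parameters: {
      type: "object",
      properties: { dimension: { type: "string" }, value: { type: "string" } },
      required: ["dimension", "value"],
    },
  },
  {
    type: "function",
    name: "suggest_filters",
    description: "Propose filters relevant to the current view.",
    parameters: { type: "object", properties: {} },
  },
  {
    type: "function",
    name: "run_root_cause_investigation",
    description: "Fill in the right-rail investigation notebook.",
    parameters: {
      type: "object",
      properties: { compare: { type: "array", items: { type: "string" } } },
    },
  },
  {
    type: "function",
    name: "summarize_investigation",
    description: "Speak a short verbal summary (the one step that produces audio).",
    parameters: { type: "object", properties: {} },
  },
  {
    type: "function",
    name: "generate_engineering_ticket",
    description: "Draft the engineering ticket card.",
    parameters: { type: "object", properties: {} },
  },
];
```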

Tip: paraphrase freely. The model is steerable — interrupt or change your mind mid-sentence.

Privacy & cost

  • Your microphone audio streams to OpenAI in real time over WebRTC.
  • The OpenAI API key lives in AWS Secrets Manager and never reaches your browser. Each session uses a short-lived ephemeral token.
  • Nothing is recorded or persisted on our side.
  • Cost is paid by the demo owner — roughly a few cents per minute. Please hit Stop in the bottom bar (or close the tab) when you're done so it doesn't keep billing.
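The ephemeral-token flow means the server mints a short-lived secret per session and the real API key never reaches the page. A sketch of the server-side request — the endpoint and body shape follow OpenAI's Realtime sessions API at the time of writing, so treat both as assumptions:

```typescript
// Builds the server-side request that mints an ephemeral client token.
// Only the short-lived client_secret from the response is forwarded to
// the browser for the WebRTC handshake; the API key stays server-side.
function buildSessionRequest(apiKey: string) {
  return {
    url: "https://api.openai.com/v1/realtime/sessions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model: "gpt-realtime-2" }),
    },
  };
}
```

The server would `fetch(req.url, req.init)` and return only the `client_secret` value from the response to the page.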

What's NOT real

  • The company "Supply Co.", the KPI numbers, the funnel data, the cohort table, the session replays, the "engineering ticket" — all static demo fixtures.
  • The "investigation" doesn't actually do data work. It just emits a pre-baked notebook. The interesting part is the model's orchestration of the tool calls.
