Kixie AI Voice Agent

AI Sales Agent MVP
From Insight to Impact

The AI Sales Agent MVP used real call data to solve key sales problems and improve results.

This project focused on building the Kixie AI Voice Agent by analyzing over 100 million minutes of sales calls across thousands of reps. The team identified where human reps struggled, like talking too much before qualifying, missing buyer hesitation cues, and mishandling price objections. These insights shaped the first version of the agent. We didn’t guess. We listened, built, and improved based on real-world use.

We started with the data, our competitive advantage

We had over 100 million minutes of recorded sales calls across thousands of reps. I partnered with the data team to study what was actually happening in those conversations. Where reps stalled. Where buyers checked out. What triggered objections. We didn’t rely on dashboards. We listened.

We used Whisper for transcription and OpenAI models to surface early patterns. Claude Opus, though expensive, was a godsend. We weren’t chasing perfection, just enough signal, fast enough to build.
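The mining step can be pictured as a simple post-transcription pass over each call. The transcript format, cue lists, and keyword heuristics below are illustrative assumptions for the sketch, not Kixie's actual pipeline (which relied on Whisper output fed to LLMs rather than keyword rules):

```python
# Illustrative sketch: flagging the failure patterns described above
# in a transcribed call. Turn format, cue phrases, and heuristics are
# assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str  # "rep" or "buyer"
    text: str

# Hypothetical cue phrases; a real system would use an LLM classifier.
QUALIFYING_CUES = ("what's your budget", "what are you looking for", "timeline")
PRICE_CUES = ("too expensive", "price", "cost")

def rep_words_before_first_question(turns):
    """Count rep words spoken before the rep asks a qualifying question."""
    words = 0
    for t in turns:
        if t.speaker == "rep":
            if any(cue in t.text.lower() for cue in QUALIFYING_CUES):
                return words
            words += len(t.text.split())
    return words

def has_price_objection(turns):
    """True if any buyer turn contains a price-objection cue."""
    return any(
        t.speaker == "buyer" and any(c in t.text.lower() for c in PRICE_CUES)
        for t in turns
    )

call = [
    Turn("rep", "Hi there, thanks for taking my call today."),
    Turn("rep", "Let me walk you through our whole feature set first."),
    Turn("buyer", "That sounds too expensive for us."),
    Turn("rep", "Fair point. What's your budget for this quarter?"),
]
print(rep_words_before_first_question(call))  # → 18
print(has_price_objection(call))              # → True
```

Run at scale, even crude signals like these surface the "talking too much before qualifying" pattern; LLM-based classifiers then refine the labels.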

We discovered common points where reps fumbled:

  • Talking too much before qualifying
  • Missing tonal cues when buyers hesitated
  • Mishandling price objections

That gave us the raw input. We mapped it to training tasks and built our first agent.

Kixie's Data Foundation

  • 100M+ call minutes analyzed
  • 3 key failure points identified
  • 100% data-driven approach

Built the MVP with constraints, not hypotheticals

The first version did what it needed to. It answered calls, followed a script, and handled simple objections. We didn’t try to boil the ocean. We focused on high-volume, low-stakes calls like top-of-funnel lead capture and first touches.

MVP Constraints

What it could do:

  • Answer incoming calls
  • Follow structured scripts
  • Handle basic objections
  • Capture lead information

What we focused on:

  • High-volume scenarios
  • Low-stakes conversations
  • Top-of-funnel interactions
  • First-touch experiences

After a week of research, I defined the requirements with engineering, stripped out anything that didn't serve that use case, and got it out for internal dogfooding within 3 weeks total.

MVP Development Timeline: Research → Requirements → Development → Dogfooding

Let the market tell us where to go

We ran structured surveys across five verticals, sent demo calls, and watched who leaned in. Hard Money Lending stood out immediately. They had compliance needs, heavy phone usage, and lean teams that couldn't scale reps.

I helped design the survey, ran working sessions with sales and CS, and pushed for a vertical-first go-to-market instead of a generic pitch.

We focused. Built objections that mattered to lenders. Added disclaimers. Tuned the tone.

Market Interest by Vertical

  • Hard Money Lending: 3.0x higher intent
  • Finance: 1.0x (baseline)
  • Insurance: 0.8x
  • SMB Sales: 0.9x
  • Services: 0.7x

Hard Money Lending showed 3x higher engagement than any other vertical in our structured surveys.


Improved based on real usage

The first calls were brutal. Dead air. Clunky phrasing. Missed cues. The hours were long, but the energy was real. Everyone wanted to ship. We watched every call, flagged what broke, and fixed it fast.

After four weeks of iteration:

  • Call completion improved from 60% to 85%
  • Sentiment detection accuracy more than tripled
  • Objection handling jumped from 10% to 40% success

I ran the feedback loop directly. No middle layers. PM to engineer to call review to fix.

4-Week Improvement Trajectory

  • Call completion rate: 60% (week 1) → 85% (week 4), +25 points
  • Sentiment detection accuracy: 3.2x over the week-1 baseline
  • Objection handling success: 10% (week 1) → 40% (week 4), +30 points

Weekly iteration cycles based on real call analysis drove measurable improvements across all key metrics.


Made hard calls when it helped the product

We tried to serve five verticals at once. It didn't work. The scripts got watered down. The model got noisy. The product felt unfocused.

I pushed for the pivot and got alignment with GTM and leadership. We dropped the rest and focused on lending: rewrote the flows from real lender calls, trained on sharper data.

The difference was immediate. Higher adoption, fewer errors, and real customer engagement.

Before: Multi-Vertical Approach

  • Generic scripts for 5 verticals
  • Confused AI model
  • Diluted value proposition
  • Low customer engagement

After: Lending-Focused

  • Specialized lending scripts
  • Focused AI training data
  • Clear value proposition
  • High customer adoption

Pivot Decision Process: Poor Results → Stakeholder Buy-in → Immediate Impact

Why it worked

We turned repetitive, error-prone first-touch calls into consistent outcomes. We mined 100M+ minutes, picked one job, and built only what moved the metric.

We shipped the MVP in 3 weeks, then iterated weekly from real calls. Completion went from 60% to 85%, objection handling from 10% to 40%, sentiment accuracy up 3.2x. Not magic: focus, data, tight loops.

Closed alpha with select customers.

Ship an AI MVP that lifts a metric.

One narrow use case. Real usage. Weekly improvement. If you have volume and a clear problem, I’ll help you scope, ship in ~3 weeks, and iterate from calls.

Talk About Your Use Case
Kixie AI Voice Agent
Powered by 100M+ minutes of real sales conversations
Learn more at kixie.com