Conversational agent
Nov 2025 - present • Freed
One of the most important parts of the project was designing a human-like, genuinely helpful conversational agent to collect patient information. I designed the user flow and wrote the principles that define the agent's behavior.
Picking the substrate
Before any prompt could work, I had to pick the substrate that made real-time conversation possible. We started on ElevenLabs Turbo for ease of use and its catalog of warm preset voices. Under HIPAA-routed production traffic the latency was bad enough that callers were hanging up. After three months we switched to OpenAI Realtime — ~3× the per-minute cost, fewer voices, none as warm — and traded warmth for presence.
Principles → prompt → behavior
I wrote nine behavioral principles for the voice agent. Three of them are below — paired with the exact prompt rule each became, and the production behavior that confirms the rule is doing its job.
#1 - Sound human
User quote
“I really don’t want a Walgreens robot for the patients.”
Dr. Salas, solo practitioner
Prompt change
### Personality and Tone
Be the warm, efficient person at the front desk — genuinely helpful, not robotic.
### Pronunciation
- **Dates:** say naturally ("January first, nineteen ninety") not formatted
- **Phone numbers:** group digits, skip +1 country code
- Example: `+1 (505) 123-4567` → "five zero five, one two three, four five six seven"

Observed: “Dead giveaway” moments disappeared within one revision.
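The pronunciation rule for phone numbers is a simple deterministic transform, so it can also be enforced in preprocessing rather than left entirely to the model. A minimal sketch (the function name `speak_phone` is hypothetical, not part of the production system):

```python
# Map each digit character to its spoken word.
DIGIT_WORDS = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
}

def speak_phone(raw: str) -> str:
    """Render a formatted US phone number as spoken digit groups,
    dropping the +1 country code per the prompt rule."""
    digits = [c for c in raw if c.isdigit()]
    # An 11-digit number starting with 1 carries a country code; skip it.
    if len(digits) == 11 and digits[0] == "1":
        digits = digits[1:]
    # Group as area code, prefix, line number.
    groups = [digits[:3], digits[3:6], digits[6:]]
    return ", ".join(" ".join(DIGIT_WORDS[d] for d in g) for g in groups)

# speak_phone("+1 (505) 123-4567")
# → "five zero five, one two three, four five six seven"
```

Pre-formatting the string this way removes one class of "dead giveaway" moments regardless of which voice model reads it.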
#2 - Never trap the patient
User quote
“She repeats back like five times. Patients get frustrated and just hang up.”
Casey Cash, owner of the Iris Center
Prompt change
- **Unclear input / noise** — if the caller's audio is unclear, garbled, or you cannot understand what they said, ask them to repeat using a short, unique phrase each time. After 3 failed attempts, apologize for the audio trouble, let them know someone from the clinic will call back, and call `end_call`. NEVER respond to unclear audio by repeating your previous message.
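The rule above is a small state machine: vary the phrasing on each failure, then escalate after three attempts. A minimal sketch of that logic, assuming a hypothetical `handle_unclear_audio` handler (the phrasings and `end_call` escalation mirror the prompt rule; none of these names are from the production code):

```python
MAX_RETRIES = 3

# A different re-ask each attempt, so the agent never repeats itself.
REPHRASES = [
    "Sorry, I didn't catch that. Could you say it again?",
    "Apologies, the line cut out for a second. One more time?",
    "I'm still having trouble hearing you. Could you repeat that?",
]

def handle_unclear_audio(failed_attempts: int) -> tuple[str, bool]:
    """Return (reply, should_end_call) for the nth unclear-audio event."""
    if failed_attempts >= MAX_RETRIES:
        reply = ("I'm sorry, we seem to have a bad connection. "
                 "Someone from the clinic will call you back shortly.")
        return reply, True  # caller of this function would invoke end_call
    return REPHRASES[failed_attempts], False
```

The key design choice is indexing into distinct phrasings instead of replaying the last message, which is exactly the loop that was frustrating callers into hanging up.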
Observed: Abandonment fell 32.6% → 10.6% over four revisions.
#3 - Front desk, not doctor
User needs: The most consistent worry across 20+ clinic interviews was an AI giving medical advice.
Prompt change
**Symptoms and medical questions:**
- Do NOT diagnose, assess, give medical advice, or ask clarifying medical questions
- Do NOT proactively ask about emergency symptoms or screen for emergencies
- ONLY mention 911 as an option if the patient EXPLICITLY describes a clear emergency unprompted — never instruct them to call
- For everything else, simply take a message for the clinical team
Observed: Zero clinical-advice incidents in 10K+ calls.