Smiles Solutions

Making new‑patient booking details reliable

Smiles Solutions Hero Image

At a Glance

Smiles Solutions runs a multi-location dental practice in the Northeast US. Their schedule is the clinic's coordination hub: every clinician, assistant, and admin depends on it to know what's coming and how to prepare.

But new‑patient bookings were consistently unreliable. Wrong details, missing info, notes nobody could read the next morning. Patients sat in unprepared rooms. Staff scrambled. Nobody was careless. The workflow just made errors inevitable.

The fix started with understanding why the errors kept happening, and finding that the obvious explanation wasn't the whole story.

Overview

Company

  • Smiles Solutions
  • Dental practice chain, Northeast US

Project

  • New‑patient scheduling
  • Intake context integrity

Team

  • Product Designer (Me)
  • PM
  • Devs x 3

My role

  • Product thinking: define the right problem, scope, and success metrics
  • UX: end‑to‑end workflow design + edge case handling
  • UI: admin workflow + patient intake form
  • Post‑launch: QA and iteration based on real usage

Status

  • Shipped

What I shipped

A two‑part experience that fixes how new‑patient details get into the schedule: move capture to the patient, then give admins a structured channel to verify and complete those details before anything reaches the practice management system.

Patient side (B2C)

A mobile intake form sent via SMS or email, completed on the patient's own time

Staff side (B2B)

An admin workflow (send → receive → review → finalize) that turns raw patient responses into a locked, schedule‑ready summary

Overview - full workflow with all four screens arranged left to right

Impact

The redesigned intake flow reduced appointment context incidents and shortened new-patient booking calls.

3 to 8 → 1 to 2

Context incidents / week

19.3 min → 2.5 min

Avg new-patient booking call time

The Problem

The schedule wasn't just wrong. It was trusted anyway. That's what made it dangerous.

The schedule is the clinic's nervous system

At Smiles Solutions, the schedule is the document everyone works from. Clinicians, assistants, and admins all depend on it to know what's coming and how to prepare. When it's accurate, the clinic runs. When it isn't, everyone downstream pays the price.

Dependency diagram showing how the schedule flows downstream

All booking records, patient notes, and appointment summaries live in EagleSoft, the practice's management platform. It's the source of truth. And it's where unreliable data caused the most damage.

Existing EagleSoft new-patient booking view

New‑patient bookings were the consistent weak point: wrong procedure types, missing insurance details, shorthand notes that only made sense the moment they were typed. By the time someone needed that information (usually 8am on the morning of the appointment), the admin who wrote it had taken dozens of other calls since and couldn't reconstruct what they meant, let alone what they left out.

A day‑of story

4-panel storyboard of a day-of scenario

The patient never finds out why. From where they're sitting, the practice just wasn't ready for them. It's their first visit. It's the impression they'll carry.

Nobody made a bad decision. The workflow made this outcome likely.

What this looked like across the practice

Stat callout graphic showing practice-wide impact

Staff estimates (not formally tracked at the time): ~60% wrong details / ~25% missing info / ~15% unclear or illegible notes.

The numbers made the cost legible. But they didn't explain the cause. For that, I needed to get closer.

Research

I went in assuming I knew what was broken. The research agreed with me, and then kept going.

What I did and why

The numbers pointed to an obvious explanation: long, pressure-heavy phone calls were causing the errors. What I didn't know yet was whether that was the full picture.

I ran interviews with front-desk admins, dental assistants, and a clinician. I asked each of them to walk me through a recent new-patient booking: not the ideal version, but what actually happens, including when it goes sideways. I also reviewed the schedule artifacts directly: existing EagleSoft notes, recurring error patterns, and how different admins used (or avoided) the same fields.

Research methods overview

Who I designed for first

Both admins and patients were affected, but I chose to focus on front‑desk admins first. They're the entry point for every new‑patient record, the place where most errors originate, and the people who have to fix things when context goes wrong. Solving for them would reduce the burden on everyone else downstream.

What I found, and what expanded the scope

The capture problem was what I expected: admins were collecting patient details over the phone, transcribing what they heard under time pressure, and the results were unreliable. But one admin said:

"I know the information is probably wrong when I hang up. But there's nowhere to go back."

The first half confirmed the capture problem. The second half clarified the scope: even if you fix capture, details will still come in wrong or incomplete sometimes. And right now, when that happens, the only options are to call the patient back (poor service, they often don't pick up) or ask again when they walk in (too late). The solution would need to fix capture and give the clinic a way to close gaps before the appointment.

Before and after mental model diagram

Without a channel to catch those gaps before appointment day, the clinic ends up right back where it started.

The failure modes fell into three consistent buckets:

Research synthesis card showing three failure mode buckets

Together, these patterns pointed to a bigger design problem: the clinic needed a way to improve capture and a way to close gaps before those details became schedule truth.

HMW Statement

How might we make the capture of new-patient details more reliable, and give the clinic a way to verify and complete those details before they become schedule truth?

Design Principles

The research surfaced two problems, not one. Any direction needed to improve how details get captured and give the clinic a structured way to verify and complete them before they reach the schedule. With that framing, I set three principles to evaluate directions against.

1. Reduce pressure on the capture moment

The research showed that most wrong and unclear details traced back to the same condition: admins transcribing what they heard under time pressure. Any direction should move as much capture as possible to a lower-pressure context.

2. Verification should be done by someone who can judge completeness

The admin who said "there's nowhere to go back" had the awareness to catch errors. She knew details were probably wrong when she hung up. The person verifying needs that same ability to judge whether information is sufficient for a clinician prepping a room, not just whether it matches what was said on the phone.

3. The verification process should be reliable at scale

Missing details persisted because the only options were call back (unreliable) or ask day-of (too late). Both depended on someone remembering to act. The process needs structure that surfaces what's missing and makes follow-up a built-in step.

Ideation

Each direction attempted to meet the design principles differently, and that's what separated them.

I explored three directions. All three reduced pressure on the capture moment and created some form of verification; the differences showed up against the second and third principles.

Three-option comparison

Options 2 and 3 each addressed part of the problem, but both left gaps the principles were designed to catch: the wrong person verifying, or no structure to surface what's missing. Option 1 was the only direction that scored strongly across all three. The remaining question was whether the effort to build it was proportional to the impact.

The principle ratings narrowed the field. The next question was whether the strongest direction was worth the added effort to build.

Effort-impact matrix

A structured intake form, a review tool, and a finalize step take more effort to build than a post-call confirmation or an SMS thread. But the principle ratings made the tradeoff clear: the options that required less effort also produced weaker results on the problems that mattered most.

With the direction chosen, the next step was designing how the staff-controlled intake workflow would actually work.

Solution: The staff-controlled intake workflow

Option 1 scored strongest against all three principles. Here's how it works in detail.

The workflow splits into two phases. In the first, capture moves off the phone and onto the patient. In the second, the admin has a structured channel to verify what came back, close gaps, and lock a clean summary before anything reaches EagleSoft.

End-to-end flow diagram
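
To make the two phases concrete, here's a minimal sketch of how the record's lifecycle could be modeled. The status names and transitions are illustrative assumptions, not the shipped implementation; the point is that nothing reaches EagleSoft without passing through review.

  // Illustrative only: hypothetical status names for an intake record.
  type IntakeStatus =
    | "stub_created"       // Phase 1: admin creates the stub during the call
    | "link_sent"          // Phase 1: intake link goes out via SMS or email
    | "patient_submitted"  // Phase 1: patient completes the form
    | "under_review"       // Phase 2: admin verifies and closes gaps
    | "finalized";         // Phase 2: locked summary pushed to EagleSoft

  // Each status can only advance to the next one, so a record
  // cannot skip review on its way to the schedule.
  const nextStatus: Record<IntakeStatus, IntakeStatus | null> = {
    stub_created: "link_sent",
    link_sent: "patient_submitted",
    patient_submitted: "under_review",
    under_review: "finalized",
    finalized: null,
  };

  function advance(status: IntakeStatus): IntakeStatus {
    const target = nextStatus[status];
    if (target === null) throw new Error("Record is already finalized");
    return target;
  }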

Phase 1: Capture. Move it to the patient

A lightweight booking stub (name, contact, reason for visit) keeps the phone moment minimal and hands capture to the person who actually has the information. The patient fills out the form on their own time, producing data the admin can review rather than data the admin has to reconstruct.

Phase 2: Verify. Give the admin a structured channel

This phase didn't exist before. Previously, once details were in EagleSoft, they were effectively final. Now the admin can see what came back, close gaps asynchronously, and lock a clean summary before anything reaches the schedule.

From workflow to prototype

The workflow held at a high level. What I needed to test next was whether the first screen-level version of it would actually work in practice.

How I explored UI directions with AI

I used Uizard and Galileo AI to generate multiple layout directions quickly before investing time in any single one. Both tools let me describe a screen in plain language and get a rendered result in seconds. Uizard was faster for generating multi-screen flows; Galileo produced higher-fidelity individual screens I could pull directly into Figma. The goal in using both was the same: compress exploration time by spending less of it generating and more of it evaluating.

Variant exploration - direction 1 Variant exploration - direction 2

The more useful discovery was not about layout. It was about content fidelity. Once I populated the prototype with realistic AI-generated patient details, insurance questions, and error states, participants stopped reacting to the wireframe and started reacting to the actual experience. That made the feedback more specific and more useful.

Content fidelity comparison - placeholder vs realistic copy

Screen 1: Admin – Create stub + send intake link

Goal: keep the phone moment fast. The less that happens here, the less that can go wrong.

The central decision in this screen was restraint: specifically, what not to ask. Early explorations tried to collect too much during the call, which recreated the original problem in a new interface. Filtered through the first design principle (reduce pressure on the capture moment), the stub settled on three fields: name, contact, and a one-line reason for visit. Everything else defers to the patient.

Key screen - Create stub and send intake link
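
As a sketch of how little the stub holds, the on-call capture could be modeled like this. Field names are assumptions for illustration, not the production schema.

  // Illustrative sketch: the stub carries only the three on-call fields.
  // Everything else defers to the patient's intake form.
  interface BookingStub {
    name: string;
    contact: { phone?: string; email?: string }; // at least one, to deliver the link
    reasonForVisit: string;                      // one line of free text
  }

  // Sending the intake link is the only action the call needs to produce.
  function queueIntakeLink(stub: BookingStub): string {
    const channel = stub.contact.phone ? "SMS" : "email";
    return `Intake link queued via ${channel} for ${stub.name}`;
  }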

Screen 2: Patient – Mobile intake form

Goal: collect structured, trustworthy details without phone pressure.

This screen shifts capture into the environment where patients are most likely to answer accurately: on their own time, with their information in front of them. The design goal was not just to collect more data, but to reduce pressured guessing by making straightforward answers easy, uncertainty acceptable, and sensitive questions easier to understand.

Key screen - Patient mobile intake form

Screen 3: Admin – Review intake + resolve gaps

Goal: give admins a view of everything the patient submitted so they can verify before it enters the schedule.

This is the core of the verification channel. The patient has submitted their information. Now the admin needs to see what came back and decide whether it's ready for EagleSoft.

The screen shows every field organized by section, with edit buttons for direct corrections and a way to contact the patient about anything left blank. Format validation has already filtered out obviously wrong entries. What reaches this screen has passed that check.

Key screen - Staff review intake and resolve gaps

Screen 4: Admin – Finalize + lock summary

Goal: give admins a final check before the record enters EagleSoft.

This screen sits at the end of the verification channel. The admin has reviewed the intake, followed up on any gaps, and is ready to push a clean summary into EagleSoft. The design shows a structured summary of the record with a Finalize button at the bottom, positioned at the close of the workflow once all prior steps are complete.

Key screen - Finalize and lock summary

Validation & Iteration

With the first version in place, I tested whether each phase worked under realistic booking scenarios.

What I focused on

I put the screens from the previous section in front of front-desk admins with realistic booking scenarios and observed how clinicians downstream used the output. I tested four things, chosen because they were the closest proxies to whether both phases were actually working:

Admin speed

Time from end of call to intake link sent. A proxy for whether the stub was actually fast enough to keep the phone moment light

Patient completion

Drop‑off points, confusing fields, time to complete. A proxy for whether capture was producing trustworthy data

Review quality

How quickly admins could identify what needed attention. A proxy for whether the verification channel was doing its own work

Summary trust

Could clinicians prep from the final summary without a follow‑up call? The ultimate proxy for whether clean, verified information was actually reaching the schedule

What we found

What testing revealed - overview of findings

Finding 1: Patients who wanted to complete the form couldn't, because real life got in the way.

Form completion rate was lower than expected. The helper text was doing its job: patients understood what was being asked. But many hit a required field they simply couldn't answer right now. Their insurance card was at home. They were on a spouse's plan and didn't know the group number. They weren't sure whether their medical and dental coverage were separate. Some hesitated on fields that felt too personal for a practice they hadn't visited yet.

The original form gave these patients no way forward. Required fields blocked the next step. The only options were to abandon the form or guess. Most abandoned.

Finding 1 - form completion barriers

The redesign added a "Skip for now" option with a reason attached, changing the form's job from forcing completion to capturing honest progress. Patients could move forward without guessing, and admins received enough context to decide whether to remind, clarify, or collect the information later.
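
A minimal sketch of that data shape, with hypothetical reason codes mapped to the barriers above (card at home, unknown details on a spouse's plan, questions that felt too personal):

  // Hypothetical shape: a field is either answered or skipped with a reason,
  // and the reason travels with the record to the admin's review screen.
  type SkipReason = "dont_have_it_now" | "dont_know" | "prefer_not_to_say";

  type FieldResponse<T> =
    | { kind: "answered"; value: T }
    | { kind: "skipped"; reason: SkipReason };

  // The admin's follow-up depends on why the patient skipped.
  function followUp(response: FieldResponse<string>): string {
    if (response.kind === "answered") return "no action needed";
    switch (response.reason) {
      case "dont_have_it_now":  return "send a reminder to finish later";
      case "dont_know":         return "clarify the question with the patient";
      case "prefer_not_to_say": return "collect at the first visit";
    }
  }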

Finding 2: Format-valid is not the same as correct.

"Skip for now" solved the completion problem. But patients who did fill in every field weren't necessarily giving the admin trustworthy data.

The original review screen treated every format-valid entry as Complete. If the patient entered something and it passed validation, the field showed green. But in practice, patients entered information that looked right and wasn't. A patient on a family plan entered the subscriber's member ID instead of their own. Another entered their medical insurance details instead of dental. The format was correct. The data was wrong. And the review screen gave the admin no reason to question it.

Finding 2 - format-valid vs correct data

The review screen needed a third state beyond Complete and Incomplete. Some entries require the admin to verify against an external system, like running insurance eligibility in EagleSoft, before they can be trusted. Combined with the "Skip for now" items from Finding 1, the review screen now has three states:

  • Ready: entered, validated, and verified by the admin
  • Needs verification: entered and format‑valid, but requires a check against EagleSoft
  • Skipped: patient tapped "Skip for now"; the reason is attached, and the admin follows up based on reason type
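
As a sketch, assuming hypothetical field flags, the state could be derived like this. Format-invalid entries never reach this screen, so these three states cover everything the admin sees.

  // Illustrative only. Skipped wins; otherwise the EagleSoft check decides
  // between Ready and Needs verification.
  type ReviewState = "ready" | "needs_verification" | "skipped";

  function reviewState(field: { skipped: boolean; adminVerified: boolean }): ReviewState {
    if (field.skipped) return "skipped"; // follow up based on the attached reason
    return field.adminVerified ? "ready" : "needs_verification";
  }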

Finding 3: A checkpoint that doesn't feel like one won't function as one.

The finalize screen passed every design review. It was structurally correct. It sat at the right point in the workflow. But it wasn't changing admin behavior. Admins clicked through it the same way they'd always clicked through EagleSoft saves: quickly, without pausing. Submissions with unverified items still got through. Not because admins were careless, but because nothing in the experience signaled that this moment was different from any other save action.

Finding 3 - finalize checkpoint behavior

The redesign worked by changing the meaning of the moment. Finalize no longer felt like the last click in a workflow. It felt like committing a record that someone else would rely on the next morning.

Learnings

1. Correctness is not the same as effectiveness.

The finalize screen had everything it needed and still didn't work. A design can have the right content, the right structure, and the right placement, and still fail because the person using it doesn't experience it as new. They experience it as familiar. And familiar means automatic. The fix wasn't adding warnings or extra steps. It was showing the admin the generated EagleSoft note: the actual output a clinician would read the next morning. If you need someone to pause, the design has to break the pattern they're bringing in, not just meet the requirements you wrote down.

2. The problems that matter most in testing are the ones you didn't design for.

Every finding in this project came from a situation I hadn't anticipated. A patient whose insurance card was in a drawer at home. A patient on a family plan entering the subscriber's ID instead of their own. An admin clicking through a structurally correct checkpoint with years of built-up muscle memory. None of these were edge cases I could have reasoned my way to at a whiteboard. They only became visible when real people used the screens under real conditions. The most important work on this project happened after the designs looked done.

3. Simple problems attract solutions that look right and break later.

This problem looked easy to solve at first glance. It invited obvious fixes: a paper form, a mandatory checklist, an SMS thread, etc. Each one addressed something real. But each one also moved the problem somewhere else: the paper form still required transcription under pressure, the checklist had no way to follow up on gaps, the SMS thread depended on both parties being available. The real work was not coming up with something that sounded plausible. It was following each direction far enough to see where it would fail.

Thanks for taking the time to read!
