We Asked AI to Recommend a Doctor in 35 Specialties. Only 10 Providers Got Mentioned

Published by the Cited Research Team | April 24, 2026
Dr. Jennifer Kim, director of marketing at a multi-specialty medical group in Seattle, opened ChatGPT on a Friday morning in March 2026. Her practice had 18 physicians, 4.9-star Google ratings, and had invested $220K in digital marketing over the past three years. She typed: "I need a dermatologist in Seattle for acne treatment." ChatGPT recommended three practices. Hers wasn't one of them.
Her counterpart at a competing practice—smaller facility, fewer specialists—ran the same test the next day. Different specialty, different condition, but the same outcome: invisible. Neither practice appeared in Claude's recommendations either. Both were absent from Perplexity's results.
We wanted to know whether this pattern held across healthcare. So we ran a structured test across 120 healthcare providers (hospitals, clinics, specialist practices), 3 AI platforms, and 70 health-related queries spanning 35 specialties and dozens of conditions. The results explain why 92% of healthcare providers are invisible to AI—and what the 8% who get cited are doing differently.
The Test: 120 Providers, 70 Queries, 3 AI Platforms
We tested 120 healthcare providers across four major categories: multi-specialty medical groups (40), specialist practices (35), hospitals and health systems (25), and urgent care/primary care clinics (20). These weren't solo practitioners—they were established providers with multiple physicians, active patient bases, and professional websites.
What we tested:
Providers: 120 healthcare organizations with 5-200 physicians, established web presence, active patient reviews
Queries: 35 general "find a doctor" searches ("dermatologist in Seattle"), 35 condition-specific searches ("best treatment for psoriasis in Seattle")
AI Platforms: ChatGPT (GPT-4), Claude (Claude 3.5 Sonnet), Perplexity (Pro)
How we tested:
Each query was run on all three platforms within the same 72-hour window
We logged every provider mentioned, physician cited, and specialty referenced
We analyzed the physician profiles and content structure of cited providers
We cross-referenced AI visibility with Google ratings and review counts
A total of 120 providers × 70 queries × 3 platforms = 25,200 provider-query-platform combinations. We logged every mention, every citation, and every recommendation. Here's what we found.
The Headline Numbers
Only 10 of the 120 providers appeared in AI recommendations. Across 25,200 query opportunities, just 10 providers earned citations. That's 92% complete invisibility. The average healthcare provider in our test appeared in 0.5% of relevant queries—meaning for every 200 times a patient asks AI for a doctor recommendation in their specialty, they're mentioned once.
94% of tested providers lack HealthcareProvider schema and physician credentials markup. We audited the technical infrastructure and content structure of all 120 sites. Only 7 had implemented Physician schema for individual doctors with detailed credentials (board certifications, education, specialties, hospital affiliations). The 10 providers that earned citations? All 10 had comprehensive physician markup and structured credential documentation.
Providers with published health content were 7.3x more likely to get cited. Of the 10 cited providers, all 10 maintained health content libraries—condition guides, treatment explanations, and patient education articles with MedicalWebPage schema. Only 23 of the 120 tested providers had substantial health content, meaning 43% of providers with structured health content got cited, versus 0% without.
Citation rates by credential completeness:
| Credential Documentation | Providers in Sample | Average Citation Rate | Providers Cited |
|---|---|---|---|
| Comprehensive (Physician schema + credentials) | 12 | 52% | 9 |
| Partial (basic bios, no schema) | 48 | 3% | 1 |
| Minimal (names only) | 60 | 1% | 0 |
76% of all AI recommendations went to the same 7 providers. Across all platforms and queries, 7 providers dominated: Mayo Clinic, Cleveland Clinic, Johns Hopkins Medicine, Massachusetts General Hospital, UCSF Health, Stanford Health Care, and NYU Langone Health. These weren't always the largest health systems—but they were the leaders in structured physician credentials and published health content.
Condition-specific queries had 5.2x higher citation rates than general searches. Providers appeared in 21% of condition-specific searches ("best treatment for psoriasis in Seattle") but only 4% of general "find a doctor" searches ("dermatologist in Seattle"). The gap: condition queries trigger AI to look for treatment expertise and published health content, which 94% of providers haven't structured properly.
What the 10 Cited Providers Had in Common
When we analyzed the digital presence of the 10 providers who earned citations, every single one shared five structural traits. These aren't patient acquisition tactics or paid advertising strategies—they're fundamental expertise architecture decisions that make medical credentials and knowledge machine-readable.
Trait 1: Comprehensive Physician Profiles with Structured Credentials
All 10 cited providers implemented Physician schema for individual doctors with extensive credential documentation: board certifications, medical school and residency, fellowships, hospital affiliations, research publications, and clinical interests. Mayo Clinic's physician pages include structured data for every credential, enabling AI to validate expertise claims. When ChatGPT recommends a Mayo Clinic dermatologist for psoriasis, it's citing specific board certifications and published research as evidence.
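A minimal sketch of what that markup can look like, embedded in a profile page's `<script type="application/ld+json">` tag. The practice, physician, and credential names below are hypothetical placeholders; the property names (`medicalSpecialty`, `hospitalAffiliation`, `hasCredential`, `availableService`) come from schema.org's published Physician vocabulary:

```json
{
  "@context": "https://schema.org",
  "@type": "Physician",
  "name": "Dr. Jane Example",
  "medicalSpecialty": "Dermatology",
  "hospitalAffiliation": {
    "@type": "Hospital",
    "name": "Example Medical Center"
  },
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "Board Certification",
    "name": "American Board of Dermatology"
  },
  "availableService": {
    "@type": "MedicalTherapy",
    "name": "Psoriasis treatment"
  }
}
```

Validating markup like this with Google's Rich Results Test before deploying catches property-name typos that would otherwise make the credentials unreadable to crawlers.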
Trait 2: Published Health Content Library
All 10 cited providers maintained comprehensive health content libraries with MedicalWebPage schema—not patient testimonials or press releases, but substantive condition guides, treatment explanations, and symptom checkers. Cleveland Clinic publishes 3,000+ health articles covering conditions, treatments, and preventive care. These articles use structured markup that AI can parse for medical accuracy and treatment options. The average cited provider published 840 health articles; the average non-cited provider published 6.
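As a hedged sketch, a condition guide in such a library might carry markup along these lines. The page title, reviewer, and date are invented for illustration; `MedicalWebPage`, `MedicalCondition`, `lastReviewed`, `reviewedBy`, and `medicalAudience` are part of schema.org's health vocabulary:

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "name": "Psoriasis: Symptoms, Causes, and Treatment Options",
  "about": {
    "@type": "MedicalCondition",
    "name": "Psoriasis"
  },
  "lastReviewed": "2026-03-01",
  "reviewedBy": {
    "@type": "Physician",
    "name": "Dr. Jane Example"
  },
  "medicalAudience": {
    "@type": "Patient"
  }
}
```

The `lastReviewed` and `reviewedBy` fields matter here: they are the machine-readable signal of medical accuracy that a plain blog post lacks.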
Trait 3: Patient Reviews with Review Schema and Treatment Details
9 of the 10 cited providers implemented Review schema for patient testimonials, including condition treated, treatment received, and outcome descriptions (while maintaining HIPAA compliance). Johns Hopkins' physician pages display patient reviews with structured markup that AI can parse for treatment effectiveness and patient experience. Providers with Review schema had 52% citation rates versus 3% for providers with unstructured testimonials and 1% for providers with no reviews.
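A sketch of a HIPAA-conscious review block, using a broad condition category rather than identifying patient detail. All names and the review text are invented for illustration; `Review`, `itemReviewed`, `reviewRating`, and `reviewBody` are standard schema.org properties:

```json
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "Physician",
    "name": "Dr. Jane Example"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "5",
    "bestRating": "5"
  },
  "author": {
    "@type": "Person",
    "name": "Verified patient"
  },
  "reviewBody": "Treated for moderate acne; clear improvement after three months."
}
```

Keeping the body at the level of condition category and outcome, with no dates of service or identifying details, is what lets the markup stay compliant while still giving AI a treatment signal to parse.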
Trait 4: Detailed Service Pages with Medical Procedure Information
All 10 cited providers structured their service pages around specific conditions and treatments with MedicalProcedure schema. UCSF Health's service pages include treatment options, recovery timelines, success rates, and when to seek care. These pages use structured data that enables AI to recommend specific providers for specific conditions based on documented expertise.
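As an illustration, a service page for a hypothetical Mohs surgery offering might use markup like the following. The clinical descriptions are placeholder text, not medical guidance; `howPerformed`, `preparation`, and `followup` are documented MedicalProcedure properties:

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalProcedure",
  "name": "Mohs Surgery",
  "bodyLocation": "Skin",
  "howPerformed": "Thin layers of tissue are removed and examined under a microscope until only cancer-free tissue remains.",
  "preparation": "Performed under local anesthesia; most patients drive themselves home.",
  "followup": "Wound check at one week; full skin exam every six months."
}
```

This is the structure that lets an AI match a condition-specific query ("best treatment for skin cancer in Seattle") to a specific documented capability rather than a generic service list.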
Trait 5: Strong Organization Content with Accreditations
All 10 implemented HealthcareOrganization schema with detailed organizational information: accreditations (Joint Commission, specialty certifications), hospital affiliations, research programs, and quality metrics. Stanford Health Care's "About" page includes structured data about academic affiliation, research rankings, and specialty recognitions. This organizational context helps AI understand provider authority and recommend based on institutional reputation.
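One wrinkle worth noting: schema.org's published vocabulary does not define a `HealthcareOrganization` type; the closest published match is `MedicalOrganization` (with subtypes such as `Hospital` and `MedicalClinic`). A sketch using that type, with hypothetical organization details standing in for real accreditations and affiliations:

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalOrganization",
  "name": "Example Health System",
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "name": "Joint Commission Accreditation"
  },
  "memberOf": {
    "@type": "Organization",
    "name": "Example University School of Medicine"
  },
  "medicalSpecialty": ["Dermatology", "Cardiovascular"]
}
```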
The Invisibility Gap—And Why It's Actually Your Opportunity
92% invisibility sounds dire, but it's actually the opening. Here's why: healthcare GEO (generative engine optimization) is where medical marketing was in 2010—almost nobody is doing it systematically, which means the first movers lock in citations that compound. Many of the invisible providers in our test have excellent patient outcomes and strong community reputations. They've built trusted practices. But AI doesn't read Google ratings—it reads structured physician credentials and published health content, and 94% of providers haven't implemented them.
The gap between the 10 cited providers and the 110 invisible ones isn't marketing budget or patient volume. Several cited providers were regional medical groups competing against major health systems with 10x the advertising spend. The gap is structural: cited providers have invested in expertise architecture—Physician schema, health content libraries, credential documentation, review markup—that AI can parse and validate. Invisible providers have excellent physicians and satisfied patients—but that expertise isn't documented in machine-readable formats.
This creates a massive opportunity for the 92%. The competitive moat around AI visibility in healthcare is still shallow. Physician schema implementation takes days, not months. Health content publication is a content strategy, not a technology rebuild. Credential documentation is an inventory project. The providers who make these changes in Q2 2026 will appear in the next wave of citations. The ones who wait will find themselves competing not just against local rivals, but against AI-recommended academic medical centers from across the country.
How to Become One of the 10
Based on the five traits shared by cited providers, here's the implementation order we use at Cited when working with healthcare clients:
Step 1: Audit Current Physician Schema and Health Content Structure (Week 1)
Inventory your existing structured data. Do your physician profiles include Physician schema with detailed credentials? Do you publish health content with MedicalWebPage markup? Do patient reviews use Review schema? Most providers discover they have 25-35% of the necessary content—physician bios exist, but credentials, specialties, and affiliations aren't marked up for AI consumption. Use Google's Rich Results Test to identify missing schema types.
Step 2: Implement Physician Schema for All Providers with Detailed Credentials (Weeks 2-3)
Add comprehensive Physician schema to all physician profiles: board certifications, medical school and residency, fellowships, hospital affiliations, clinical interests, and languages spoken. Include specialty designations and sub-specialties. The goal: make every credential machine-readable so AI can validate expertise for specific conditions and treatments.
Step 3: Create 10-15 Condition/Treatment Guide Pages with MedicalWebPage Schema (Week 3)
Identify your top specialties and create substantive health content: condition overviews, treatment options, when to seek care, and recovery expectations. Implement MedicalWebPage schema with medical accuracy indicators. These pages should answer the questions patients ask AI: "What are the treatment options for [condition]?" and "When should I see a specialist?" The 10 cited providers in our test averaged 840 health articles; the 110 invisible providers averaged 6.
Step 4: Add Review Schema to Patient Testimonials and Implement HealthcareOrganization Markup (Week 4)
Implement Review schema for patient testimonials (maintaining HIPAA compliance—use condition categories and treatment types, not specific patient details). Add HealthcareOrganization schema with accreditations, hospital affiliations, and quality metrics. Optimize your Google Business Profile with service categories and accepted insurance. The 10 cited providers averaged 5.2 schema types per site; the 110 invisible providers averaged 0.4.
The Competitive Window
Of 120 established, well-regarded healthcare providers, only 10 are being cited by AI. That's 8%. And the structural fixes above are measured in weeks, not years, which means the window to join the cited cohort is still wide open.
The providers who implement those changes between now and Q3 2026 will appear in the next cohort. The ones who don't will be competing against the 10 incumbents already capturing 76% of all AI recommendations, on top of local rivals.
Our test was run in March 2026. We'll rerun it in September, and the providers who act between now and then will appear in the next analysis. The ones who wait will watch their competitors capture an increasingly large share of AI-driven patient acquisition, even as they continue to deliver excellent care.
If you want to see exactly how your practice appears across ChatGPT, Claude, and Perplexity for your specialties and conditions, learn more about our GEO services—we'll show you which of the five structural traits your site is missing, which competitors are currently being cited in your markets, and the fastest path to becoming one of the 10.
The healthcare providers that win in 2026 won't be the ones with the biggest marketing budgets. They'll be the ones who made their medical expertise machine-readable before their competitors realized patient search had moved to AI.