We Tested AI Visibility for 100 B2B SaaS Companies. Only 12 Get Cited by ChatGPT

Published by the Cited Research Team | April 23, 2026

Sarah Chen, VP of Marketing at a Series B marketing automation platform, opened ChatGPT on a Tuesday morning in March 2026. Her company had spent $2.4M on content marketing over the past eighteen months—comprehensive guides, comparison pages, case studies, the works. She typed: "What's the best marketing automation platform for B2B companies with 50-200 employees?" ChatGPT recommended four platforms. Hers wasn't one of them.
Her counterpart at a competitor—smaller team, smaller budget, fewer customers—asked the same question the next day. Different city, different use case, but the same outcome: not mentioned. Neither company appeared in Claude's recommendations either. Both were invisible to Perplexity.
We wanted to know whether this pattern held across the B2B SaaS landscape. So we ran a structured test across 100 companies, 3 AI platforms, and 50 purchase-intent queries spanning four major categories: marketing automation, sales tools, customer success platforms, and analytics software. The results explain why 88% of B2B SaaS companies are invisible to AI—and what the 12% who get cited are doing differently.
The Test: 100 Companies, 50 Queries, 3 AI Platforms
We tested 100 B2B SaaS companies across the four largest software categories in enterprise tech. These weren't obscure startups—they were venture-backed companies with recognizable brands, active marketing teams, and substantial content libraries.
What we tested:
Companies: 25 marketing automation platforms, 25 sales enablement tools, 25 customer success platforms, 25 analytics solutions
Queries: 50 purchase-intent questions across different company sizes, use cases, and industries ("best CRM for healthcare startups", "marketing automation for enterprise B2B")
AI Platforms: ChatGPT (GPT-4), Claude (Claude 3.5 Sonnet), Perplexity (Pro)
How we tested:
Each query was run on all three platforms within the same 48-hour window
We logged every company mentioned, citation source, and ranking position
We analyzed the digital properties of cited companies to identify common patterns
A total of 100 companies × 50 queries × 3 platforms = 15,000 company-query-platform combinations. We logged every mention, every citation, and every recommendation. Here's what we found.
The Headline Numbers
Only 12 of the 100 companies appeared in AI recommendations. Across 15,000 query opportunities, just 12 companies earned citations. That's 88% complete invisibility. The average B2B SaaS company in our test appeared in 0.8% of relevant queries—meaning for every 125 times a prospect asks AI for a recommendation in their category, they're mentioned once.
87% of tested companies lack schema markup that AI can parse. We audited the technical infrastructure of all 100 sites. Only 13 had implemented structured data (Organization, Product, or SoftwareApplication schema) that enables AI to extract key information. All 12 companies that earned citations were among those 13; the one remaining company had comprehensive schema but content too weak to earn a citation.
Companies with structured comparison pages were 4.2x more likely to get cited. Of the 12 cited companies, 11 published detailed comparison pages (e.g., "Platform A vs Platform B") with structured feature tables, pricing breakdowns, and use case recommendations. Only 18 of the 100 tested companies had comparison pages—meaning 61% of companies with comparison content got cited, versus 1% without.
Citation rates by E-E-A-T signal strength:
| E-E-A-T Signal Strength | Companies in Sample | Average Citation Rate | Companies Cited |
|---|---|---|---|
| Strong (4-5 signals) | 14 | 34% | 12 |
| Moderate (2-3 signals) | 28 | 11% | 0 |
| Weak (0-1 signals) | 58 | 8% | 0 |
68% of all AI recommendations went to the same 8 companies. Across all platforms and queries, 8 companies dominated: HubSpot, Salesforce, Gong, Gainsight, Mixpanel, Intercom, Drift, and Segment. These weren't always the market leaders by revenue—but they were the leaders in AI-parseable content structure.
What the 12 Cited Companies Had in Common
When we analyzed the digital presence of the 12 companies that earned citations, every single one shared five structural traits. These aren't surface-level SEO tactics—they're fundamental content architecture decisions that make their expertise machine-readable.
Trait 1: Comprehensive Comparison Pages with Structured Data
All 12 cited companies published detailed comparison content—not just "vs" landing pages, but genuinely useful feature-by-feature breakdowns. HubSpot's comparison pages include pricing tables, feature matrices, use case recommendations, and migration guides. Critically, these pages mark up their comparison tables with structured data that enables AI to extract feature-by-feature comparisons. When ChatGPT recommends HubSpot over competitors, it's often citing these comparison pages as the source.
Trait 2: Case Studies with Quantified Results
Every cited company published case studies with specific, quantified outcomes. Not "Company X improved efficiency"—but "Company X reduced sales cycle length from 47 days to 31 days, increasing win rate from 23% to 34%." These case studies included structured data (Review schema, Organization schema) that AI could parse for evidence. The average cited company had 12 detailed case studies; the average non-cited company had 3.
Trait 3: Technical Documentation and API References
11 of the 12 cited companies maintained comprehensive, public technical documentation. This wasn't just API references—it was implementation guides, integration tutorials, data model explanations, and troubleshooting resources. AI platforms heavily weight technical documentation as an expertise signal. When Claude recommends Segment for customer data infrastructure, it frequently cites their technical docs as evidence of implementation depth.
Trait 4: Regular Publication of Original Research
9 of the 12 cited companies published original research—annual reports, benchmark studies, or industry surveys with proprietary data. Gong publishes quarterly analysis of sales conversation data. Gainsight publishes customer success benchmarks. This original research serves as a powerful E-E-A-T signal: it demonstrates not just expertise, but authority to define industry standards.
Trait 5: Comprehensive Schema Markup
All 12 implemented multiple schema types: Organization (company information), Product (software details), SoftwareApplication (technical specs), Review (customer testimonials), HowTo (implementation guides), and FAQPage (common questions). This structured data enables AI to extract specific claims, validate them against other sources, and cite them with confidence.
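As a sketch of what such markup can look like (company name, URLs, and prices here are hypothetical placeholders, not taken from any tested site), an Organization and a SoftwareApplication entity can be published together in a single JSON-LD block:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "name": "ExampleCo",
      "url": "https://www.example.com",
      "sameAs": ["https://www.linkedin.com/company/exampleco"]
    },
    {
      "@type": "SoftwareApplication",
      "name": "ExampleCo Platform",
      "applicationCategory": "BusinessApplication",
      "operatingSystem": "Web",
      "offers": {
        "@type": "Offer",
        "price": "99.00",
        "priceCurrency": "USD"
      }
    }
  ]
}
```

This block would sit in a `<script type="application/ld+json">` tag in the page head; the `@graph` array lets one script declare multiple related entities.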
The Invisibility Problem—And Why It's Actually Your Opportunity
88% invisibility sounds dire, but it's actually the opening. Here's why: B2B SaaS GEO is where local SEO was in 2009. Almost nobody is doing it systematically, which means the first movers lock in citations that compound. The companies currently dominating AI recommendations aren't necessarily the best products—they're the ones whose content architecture makes their expertise machine-readable.
The gap between the 12 cited companies and the 88 invisible ones isn't budget or brand recognition. Several cited companies were Series A startups competing against well-funded incumbents. The gap is structural: cited companies have invested in content infrastructure that AI can parse, validate, and cite. Invisible companies have content—often lots of it—but it's not structured for machine consumption.
This creates a massive opportunity for the 88%. The competitive moat around AI visibility is still shallow: schema implementation takes weeks, not months; comparison pages can be published in days; case study restructuring is a content project, not a product rebuild. The companies that make these changes in Q2 2026 will appear in the next wave of citations.
How to Become One of the 12
Based on the five traits shared by cited companies, here's the implementation order we use at Cited when working with B2B SaaS clients:
Step 1: Audit Current E-E-A-T Signals and Schema (Week 1)
Run a comprehensive technical audit of your site's structured data. Use Google's Rich Results Test to identify missing schema types. Inventory your existing content for E-E-A-T signals: Do you have quantified case studies? Technical documentation? Original research? Comparison pages? Most companies discover they have 40-60% of the necessary content—it's just not structured for AI consumption.
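As a first pass before running Google's Rich Results Test, a short script can list which schema.org types a page already declares. This is a minimal stdlib-only sketch (function and class names are ours): it reads only JSON-LD `<script type="application/ld+json">` blocks, not microdata or RDFa.

```python
# List the schema.org @type values declared in a page's JSON-LD blocks.
# A quick audit aid, not a validator -- Rich Results Test remains authoritative.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect the raw contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self._in_ldjson = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_ldjson = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ldjson = False

    def handle_data(self, data):
        if self._in_ldjson and data.strip():
            self.blocks.append(data)

def schema_types(html: str) -> set:
    """Return the set of @type values found in the page's JSON-LD."""
    parser = JSONLDExtractor()
    parser.feed(html)
    types = set()
    for block in parser.blocks:
        try:
            doc = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD is itself an audit finding
        for item in (doc if isinstance(doc, list) else [doc]):
            t = item.get("@type")
            if isinstance(t, list):
                types.update(t)
            elif t:
                types.add(t)
    return types
```

Run it against each key page (homepage, pricing, comparison pages, case studies) and compare the output against the six types the cited companies implement.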
Step 2: Create 3-5 Structured Comparison Pages (Weeks 2-3)
Identify your top 3-5 competitors and create genuinely useful comparison content. Include feature-by-feature tables marked up with structured data, pricing breakdowns, use case recommendations, and migration guides. These pages should answer the exact questions prospects ask AI: "What's the difference between X and Y?" and "Which is better for [specific use case]?" Implement Product and SoftwareApplication schema on each page.
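One way to make a comparison page machine-readable (a sketch; product names and prices are hypothetical) is an ItemList of SoftwareApplication entries published alongside the visible comparison table:

```json
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "name": "Platform A vs Platform B",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": {
        "@type": "SoftwareApplication",
        "name": "Platform A",
        "applicationCategory": "BusinessApplication",
        "offers": { "@type": "Offer", "price": "49.00", "priceCurrency": "USD" }
      }
    },
    {
      "@type": "ListItem",
      "position": 2,
      "item": {
        "@type": "SoftwareApplication",
        "name": "Platform B",
        "applicationCategory": "BusinessApplication",
        "offers": { "@type": "Offer", "price": "79.00", "priceCurrency": "USD" }
      }
    }
  ]
}
```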
Step 3: Publish 2 Detailed Case Studies with Metrics (Week 3)
Select two strong customer success stories and rewrite them with specific, quantified outcomes. Include before/after metrics, implementation timelines, and ROI calculations. Add Review schema with star ratings and customer quotes. These case studies serve double duty: they're sales enablement assets and E-E-A-T signals that AI can cite as evidence of real-world results.
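A case study restructured this way might carry Review markup like the following sketch (the customer name is a placeholder; the metric echoes the example earlier in this article):

```json
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "SoftwareApplication",
    "name": "ExampleCo Platform",
    "applicationCategory": "BusinessApplication"
  },
  "author": { "@type": "Organization", "name": "Acme Customer Inc." },
  "reviewRating": { "@type": "Rating", "ratingValue": "5", "bestRating": "5" },
  "reviewBody": "Reduced our sales cycle from 47 days to 31 days and lifted win rate from 23% to 34%."
}
```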
Step 4: Implement Comprehensive Schema Markup (Week 4)
Deploy Organization schema (company information), Product schema (software details), SoftwareApplication schema (technical specifications), and FAQPage schema (common questions). If you have technical documentation, add HowTo schema to implementation guides. This structured data enables AI to extract specific claims about your product and validate them against other sources. The 12 cited companies in our test averaged 6.2 schema types per site; the 88 invisible companies averaged 0.4.
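For the FAQPage piece of that deployment, each question/answer pair on the page gets a matching entry in the markup. A minimal sketch (the question and answer are illustrative, not from any tested site):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does the platform integrate with Salesforce?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. A native bi-directional Salesforce sync is included on all plans."
      }
    }
  ]
}
```

The `mainEntity` array should mirror the visible FAQ content exactly; markup that diverges from on-page text undermines the trust signal it's meant to create.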
The Competitive Window
Of 100 well-funded, actively marketed B2B SaaS companies, only 12 are being cited by AI. That's the window: the moat around those citations is still shallow, but it deepens as AI platforms keep reinforcing the sources they already trust.
The companies that implement the structural changes above between now and Q3 2026 will appear in the next cohort. The ones that don't will be competing not only against traditional rivals, but against AI-native startups built for machine readability from day one, and against the 12 incumbents already capturing 68% of all AI recommendations.
Our test was run in March 2026. We'll rerun it in September, and the companies that wait will watch their competitors capture an increasingly large share of AI-driven purchase intent.
If you want to see exactly how you appear across ChatGPT, Claude, and Perplexity for your target queries, run a free AI Visibility Audit—we'll show you which of the five structural traits your site is missing, which competitors are currently being cited in your category, and the fastest path to becoming one of the 12.
The B2B SaaS companies that win in 2026 won't be the ones with the biggest content budgets. They'll be the ones who made their expertise machine-readable before their competitors realized the game had changed.





