How a Global Travel Tech Platform Achieved a 410% Increase in AI Citations Through Structured Entity Mapping

Industry: Travel Tech / Booking Platform
To protect client confidentiality, specific company names and identifying details have been anonymized in this case study.
Executive Summary
Challenge: A leading global travel booking platform was losing market share in high-intent discovery queries on AI search engines, despite maintaining top positions in traditional organic search results. Their complex, JavaScript-rendered inventory was invisible to LLM crawlers, and their brand entities lacked machine-readable disambiguation, resulting in a 4% AI citation rate for their highest-value query segments.
Solution: The client partnered with a specialized generative engine optimization agency to rebuild their digital infrastructure from the ground up, transitioning from a page-based SEO model to an entity-based Knowledge Graph architecture with API-first crawler delivery.
Results: Over a 12-month engagement, the platform achieved a 410% increase in their overall AI citation rate, secured a 65% Share of Voice (SOV) in complex multi-destination queries, reduced their JavaScript rendering failure rate from 42% to 0.5%, and saw a 22% reduction in customer acquisition costs (CAC) for high-value bookings.
Company Background and Initial Challenge
The client is a multinational travel technology company that operates one of the world's largest online booking platforms for flights, hotels, and experiences. With over 15 million active listings across 120 countries and a product catalog spanning budget hostels to ultra-luxury private villas, they have historically dominated traditional search engine results pages (SERPs) through aggressive content marketing, technical SEO, and a robust link-building program spanning more than a decade.
However, in late 2024, their analytics team noticed a disturbing trend. While their traditional organic traffic remained stable at approximately 42 million monthly visits, their overall market share for complex, high-intent discovery queries was declining at a rate of 3.2% per quarter. These queries—such as "Plan a 10-day family itinerary in Japan focusing on cultural heritage and boutique hotels under $5,000" or "What is the best island in Greece for a honeymoon with private beach access?"—represent the highest-value segment of their business, generating an average booking value 4.7x higher than generic flight searches.
An internal audit revealed the root cause: travelers were increasingly bypassing traditional search engines for these complex, conversational queries, turning instead to Large Language Models (LLMs) like ChatGPT and Claude. When their product team manually tested 200 high-intent travel queries across the three major LLMs, the client's platform was cited in fewer than 8 of those responses. The LLMs consistently preferred to synthesize information from niche travel blogs, specialized boutique aggregators, and editorial publications. The client recognized that their legacy SEO tactics were structurally incompatible with this new paradigm, prompting them to seek out a specialized generative engine optimization agency to diagnose and resolve the issue.
The GEO Audit: What We Found
As their chosen generative engine optimization agency, Cited conducted a comprehensive 6-week technical audit of the client's digital infrastructure. The findings highlighted a fundamental and systemic mismatch between how the client structured their data and how LLMs ingest, process, and cite information. The audit covered three primary dimensions: content architecture, technical infrastructure, and E-E-A-T signal quality.
Content Architecture Issues: The client's content was almost entirely locked within unstructured, human-readable prose. While individual hotel descriptions were beautifully written by professional copywriters, the specific amenities, pricing tiers, proximity to landmarks, and unique selling points were not mathematically defined as discrete, machine-readable facts. An LLM attempting to answer "Which hotels in Kyoto have a traditional onsen and are within walking distance of Fushimi Inari?" could not reliably extract these specific facts from the client's prose, even though the information technically existed somewhere in their 15 million listings.
Technical Infrastructure Gaps: The platform relied heavily on complex JavaScript Single-Page Application (SPA) frameworks to render dynamic pricing and availability data. AI crawlers, which operate with strict latency budgets of typically under 5 seconds, frequently abandoned the crawl before the JavaScript could execute and hydrate the DOM. Our audit measured a 42% JavaScript rendering failure rate for AI user agents—meaning nearly half of all pages were being indexed as blank shells with no meaningful content.
E-E-A-T Signal Deficiencies: Despite being a globally recognized brand with a decade of domain authority, the platform lacked explicit, machine-readable disambiguation. Their internal entities—hotels, airlines, destinations, and travel experiences—were not programmatically linked to external authoritative databases like Wikidata, IATA registries, or official national tourism boards. This forced the LLMs to make probabilistic guesses about entity identity, dramatically increasing the risk of hallucination and reducing the model's confidence in citing the source.
| Metric | Baseline (Month 0) | Industry Average | Gap |
|---|---|---|---|
| AI Citation Rate (General Queries) | 12% | 18% | -6 pts |
| AI Citation Rate (Complex Queries) | 4% | 15% | -11 pts |
| JavaScript Rendering Failure Rate | 42% | 15% | +27 pts |
| Entity Disambiguation Score | 1.2/10 | 4.5/10 | -3.3 |
| Structured Data Coverage (% of listings) | 18% | 45% | -27 pts |
Implementation Strategy
To systematically address each of the identified gaps, we designed a three-phase implementation strategy with clear milestones, measurable deliverables, and a dedicated joint engineering team of 8 people (4 from Cited, 4 from the client's platform engineering group).
Phase 1: Semantic Ontology Definition (Months 1-3)
We began by completely redefining the client's data architecture. Instead of optimizing individual pages, we mapped their entire inventory as a network of interconnected entities. A "Hotel" entity was no longer just a name and a location; it was a formally defined object with typed relationships to "Nearby Attraction" entities (with a distanceInMeters property), "Amenity" entities (each with an amenityType and isAvailable property), and "PricingTier" entities (with currency, minNightlyRate, and seasonalModifier properties). We developed a custom, SHACL-validated JSON-LD schema library containing 47 distinct entity types and over 380 defined property relationships. This schema was deployed directly into the HTML <head> of every listing page, bypassing the need for LLMs to perform NLP extraction from prose.
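To make the entity model concrete, here is a minimal sketch of what one listing's JSON-LD payload could look like. The distanceInMeters, amenityType, isAvailable, currency, minNightlyRate, and seasonalModifier properties are named in the case study; the hotel, the values, and the nearbyAttraction and pricingTier wrappers are hypothetical illustrations, not the client's actual schema.

```python
import json

# Illustrative JSON-LD for a single "Hotel" entity, built as a Python dict.
# Property names like distanceInMeters come from the ontology described
# above; the concrete listing and values are invented for illustration.
hotel_entity = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Example Ryokan Kyoto",  # hypothetical listing
    "amenityFeature": [
        {
            "@type": "LocationFeatureSpecification",
            "amenityType": "onsen",   # custom typed property
            "isAvailable": True,
        }
    ],
    "nearbyAttraction": {             # hypothetical relationship wrapper
        "@type": "TouristAttraction",
        "name": "Fushimi Inari Taisha",
        "distanceInMeters": 850,
    },
    "pricingTier": {                  # hypothetical relationship wrapper
        "currency": "USD",
        "minNightlyRate": 210,
        "seasonalModifier": 1.35,
    },
}

# Serialized for embedding in a <script type="application/ld+json"> tag
# inside the page's <head>.
payload = json.dumps(hotel_entity, indent=2)
```

The point of this structure is that a question like "onsen within walking distance of Fushimi Inari" becomes a lookup over typed fields rather than an extraction task over prose.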
Phase 2: API-First Crawler Delivery (Months 4-7)
To permanently solve the JavaScript rendering failure problem, we completely decoupled the data delivery layer for AI user agents. Working alongside the client's platform engineering team, we deployed a dedicated middleware service that intercepted incoming requests from known AI crawler user agents (GPTBot, ClaudeBot, PerplexityBot, and 11 others). When an AI crawler requested any page, the middleware bypassed the SPA framework entirely and served a pre-rendered, high-density JSON-LD payload directly from a Redis cache layer with a p95 latency of under 180 milliseconds. This eliminated the 42% rendering failure rate and guaranteed 100% structured data ingestion for all 15 million listings within 90 days of Phase 2 completion.
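The routing logic of such a middleware can be sketched as follows. This is a simplified illustration, not the client's implementation: a plain dict stands in for the Redis cache, the bot list is abbreviated to three of the named user agents, and the function names are hypothetical.

```python
# Known AI crawler user-agent tokens (abbreviated; the real deployment
# matched 14 agents, per the case study).
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot")

# Stand-in for the Redis cache layer: URL path -> pre-rendered JSON-LD.
JSONLD_CACHE = {
    "/hotels/example-ryokan-kyoto": '{"@type": "Hotel", "name": "Example Ryokan Kyoto"}',
}

def is_ai_crawler(user_agent: str) -> bool:
    """Return True when the request comes from a known AI user agent."""
    return any(token in user_agent for token in AI_CRAWLER_TOKENS)

def handle_request(path: str, user_agent: str) -> tuple[str, str]:
    """Serve AI crawlers the cached JSON-LD payload directly, bypassing
    the SPA; all other traffic falls through to the normal SPA response
    (represented here by a placeholder shell)."""
    if is_ai_crawler(user_agent) and path in JSONLD_CACHE:
        return ("application/ld+json", JSONLD_CACHE[path])
    return ("text/html", "<SPA shell>")
```

A request such as `handle_request("/hotels/example-ryokan-kyoto", "Mozilla/5.0 (compatible; GPTBot/1.0)")` short-circuits to the cached payload, which is what keeps the response well inside a crawler's latency budget: no JavaScript ever has to execute.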
Phase 3: Authoritative Entity Linking (Months 8-12)
In the final phase, we focused on building mathematical trust through systematic disambiguation. We implemented an automated pipeline that matched the client's internal hotel, airline, and destination entities against Wikidata, the IATA airline code registry, and 23 official national tourism board databases. For each matched entity, we injected sameAs properties into the corresponding JSON-LD schema, creating an explicit, machine-verifiable link between the client's data and a globally recognized authoritative source. By the end of Month 12, 94% of the client's top 500,000 listings had at least one verified sameAs link, compared to 0% at baseline.
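The enrichment step of that pipeline can be sketched as a pure function: given an entity's JSON-LD and the registry identifiers the matching stage produced, append the corresponding sameAs URIs. This is an illustrative sketch, not the production pipeline; only the Wikidata URI pattern is shown, and additional registries (IATA, tourism boards) would contribute their own templates.

```python
def inject_same_as(jsonld: dict, external_ids: dict[str, str]) -> dict:
    """Return a copy of the entity's JSON-LD with sameAs links for each
    matched external registry. Identifiers are assumed to come from an
    upstream matching stage; only Wikidata is templated here."""
    uri_templates = {
        "wikidata": "https://www.wikidata.org/wiki/{}",
        # Other registries (IATA codes, national tourism boards) would
        # add their own URI templates here.
    }
    links = [
        uri_templates[registry].format(identifier)
        for registry, identifier in external_ids.items()
        if registry in uri_templates
    ]
    if not links:
        return jsonld
    enriched = dict(jsonld)  # leave the input untouched
    enriched["sameAs"] = sorted(set(jsonld.get("sameAs", []) + links))
    return enriched
```

Because sameAs values are plain URIs into globally recognized databases, an LLM (or any consumer) can verify the entity's identity instead of guessing it probabilistically.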
Results and Business Impact
The transition from a traditional page-based SEO model to a structured GEO architecture yielded transformative and measurable results across every key performance indicator, validating the strategic investment in a specialized generative engine optimization agency.
AI Visibility Metrics: The most dramatic improvement was seen in complex, multi-variable queries—the exact queries that drive the highest-value bookings. By Month 12, the client's citation rate for these queries had increased from 4% to 65%, representing a 1,525% improvement. Their overall AI citation rate across general travel queries improved from 12% to 61%, a 408% increase. Share of Voice in the "luxury travel planning" query cluster reached 71%, compared to 6% at baseline.
Business Impact: The increased AI visibility directly translated to measurable revenue outcomes. Because users arriving from LLM citations had already received a detailed, personalized answer to their complex travel query, their intent was highly qualified. The conversion rate for this traffic segment was 3.4x higher than the client's traditional organic search traffic. Combined with the 22% reduction in Customer Acquisition Cost (CAC), the 12-month GEO program delivered a calculated return on investment of 680%.
| Metric | Baseline (Month 0) | Post-Implementation (Month 12) | Change |
|---|---|---|---|
| AI Citation Rate (General Queries) | 12% | 61% | +408% |
| AI Citation Rate (Complex Queries) | 4% | 65% | +1,525% |
| JavaScript Rendering Failure Rate | 42% | 0.5% | -99% |
| Structured Data Coverage | 18% | 97% | +439% |
| Customer Acquisition Cost (CAC) | $45.00 | $35.10 | -22% |
| LLM-Driven Booking Conversion Rate | 1.2% | 4.1% | +242% |
Key Lessons and Broader Implications
This engagement produced several critical insights that are broadly applicable to any large-scale platform operating in a high-consideration, conversational industry.
What Worked:
Bypassing the DOM for AI Crawlers: The decision to serve structured data via a dedicated middleware layer rather than relying on SPA rendering was the single most impactful technical change. It guaranteed data ingestion at scale and was the prerequisite for every subsequent optimization.
Micro-Fact Structuring: Breaking down complex, prose-based hotel descriptions into atomic, typed, machine-readable facts was the foundational architectural shift. It transformed the client's content from something LLMs had to interpret into something they could directly ingest and confidently cite.
Machine-Verifiable Disambiguation via sameAs: Linking internal entities to external authoritative registries sharply reduced LLM hallucination risk and dramatically increased the models' confidence in citing the client as a primary source.
Broader Implications for the Travel Tech Industry:
The travel industry is uniquely vulnerable to LLM disruption because travel planning is inherently conversational, complex, and high-consideration. Users do not want a list of 10 blue links when planning a honeymoon; they want a synthesized, personalized recommendation. Platforms that continue to rely on traditional, page-based SEO will rapidly lose visibility in this highest-value query segment to competitors who have structured their data for AI ingestion. The future of travel discovery belongs to those who provide the cleanest, most authoritative, and most machine-readable data to the engines that generate the answers.
Conclusion
By recognizing the fundamental structural shift in search behavior early and partnering with a specialized generative engine optimization agency to execute a rigorous, phased technical transformation, this global travel tech leader successfully converted a critical vulnerability into a durable competitive advantage. The 680% ROI achieved over 12 months demonstrates that GEO is not a marketing expense—it is a strategic infrastructure investment with compounding returns. To find out how your organization can achieve similar results through structured data architecture and entity-based optimization, learn more about our GEO services.



