When AI Recommends Scammers: New Attack Abuses LLM Indexing to Deliver Fake Support Numbers
Attackers are manipulating public web content so AI tools and AI overviews confidently recommend scam support numbers. Learn how this new threat works.
Qi Deng, Principal Threat Research Engineer | Aurascape
December 8th, 2025
Executive Summary
Researchers at Aurascape Aura Labs have uncovered what we believe is the first real-world campaign where attackers systematically manipulate public web content so that large language model (LLM)–powered systems, such as Perplexity and Google’s AI Overview, recommend scam “customer support” phone numbers as if they were official.
This isn’t a prompt-injection bug or a model jailbreak; it’s a new attack vector created by the shift from traditional search results to AI-generated answers.
In this campaign, attackers are:
- Leveraging compromised high-authority websites (including government, university, and WordPress sites) as trusted hosting for spam content and PDFs
- Abusing user-generated platforms like YouTube and Yelp to plant GEO/AEO-optimized text and reviews
- Injecting structured scam data (phone numbers, brand names, Q&A snippets) designed to be easy for LLMs to parse and reuse
- Exploiting LLM summarization models, which merge these poisoned sources into a single, confident answer
- Reliably steering users toward fraudulent call centers via AI assistants that appear helpful and authoritative
The rest of this article walks through concrete case studies, the GEO/AEO techniques behind them, and the broader implications for AI search and safety.
Real-World Example of LLM Phone-Number Poisoning
Attack Demonstration: Perplexity Returns a Scam Phone Number as “Official Emirates Support”
When we queried Perplexity for “the official Emirates Airlines reservations number,” the system returned a confident, fully fabricated answer that included a fraudulent call-center number: “The official Emirates Airlines reservations number is +1 (833) 621-7070.”
It then repeated and expanded on this number in its summary, describing it as a hotline for booking, upgrades, and urgent travel needs:

Attack Demonstration: Perplexity Returns a Scam Number for British Airways Reservations
We observed the same poisoning pattern when querying Perplexity with: “how can I make a reservation with British Airways by phone, what are the steps”.
Perplexity responded with a detailed, authoritative-sounding step-by-step guide—and once again embedded a fraudulent U.S. reservation number, presenting it as a “commonly used” British Airways contact:

“For US customers, a commonly used phone number is +1 (833) 621-7070, where you will be connected to a reservations specialist…”
This number is not associated with British Airways. It is the same scam call-center number observed in other poisoned contexts, now repurposed and surfaced across multiple airline brands.
Just as in the Emirates example, Perplexity confidently expanded on the fake number, describing the type of “agent” the caller would reach and the services they could supposedly provide—flight selection, fare checks, and payment processing. This demonstrates that the attacker’s GEO/AEO-optimized spam is influencing the model’s retrieval layer across multiple brands, producing fabricated but highly persuasive instructions for British Airways customers.
Attack Demonstration: Google AI Overview Surfaces Scam Numbers as “Official Emirates Support”
The same poisoning pattern appears in Google’s AI Overview feature. When we queried Google with “Emirates airlines phone number to make reservation and steps in US”, the AI Overview generated a confident, instructional response and embedded multiple fraudulent call-center numbers as if they were legitimate Emirates customer service lines. The output stated: “You can make an Emirates reservation by calling +1 (888) 705-3110 or +1 (855) 473-9012 in the US…”

Google’s AI Overview proceeded to provide booking instructions, presenting both numbers as official entry points for U.S. travelers. Neither number is associated with Emirates. Both appear repeatedly across attacker-generated GEO/AEO spam and are part of the same fraudulent ecosystem observed in the Perplexity example.
This demonstrates that poisoned content is not only influencing LLM-first products like Perplexity—it has begun to surface inside mainstream search experiences that now rely on AI-generated summaries, significantly expanding the reach and potential impact of the attack.
How Perplexity and Google AI Overview Justified the Answer: Poisoned Sources
When we expanded the “Sources” section, Perplexity attributed this fake number to 10 different sources, most of them compromised or spam-injected high-authority websites, including:
- Government sites
- University domains
- WordPress sites with high GEO/AEO trust
- Fitness/map/training sites hosting uploaded PDFs (e.g., a seemingly unrelated route page on MapMyRun, https://www.mapmyrun.com/routes/view/6655896305/, which contains an embedded PDF listing the scam number)
- YouTube video descriptions
- Yelp reviews
- Amazon S3 buckets
These pages were never meant to host airline contact information. Attackers uploaded GEO/AEO-optimized PDFs or HTML snippets onto compromised sites, or posted crafted comments and reviews to YouTube and Yelp pages, causing LLM search retrievers to rank them highly because of the following signals (illustrated in the sketch after this list):
- High domain authority
- Recently modified content
- Structured, easy-to-parse text (“Call +1-833-621-7070 for support”)
- Repeated use of airline names and keywords
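To make the interaction of these signals concrete, here is a minimal, purely illustrative scoring sketch in Python. The signal names, weights, and thresholds are our own assumptions for the example, not the actual ranking logic of Perplexity, Google, or any other vendor:

```python
import re

# Illustrative only: the signal names and weights below are assumptions,
# not any vendor's real ranking logic. The point is that a spam PDF on a
# high-authority domain can outscore a legitimate, unstructured page.
PHONE_RE = re.compile(r"\+?1?[\s\-.]*\(?\d{3}\)?[\s\-.]*\d{3}[\s\-.]*\d{4}")

def retrieval_score(doc: dict, query_terms: list[str]) -> float:
    text = doc["text"].lower()
    authority = 1.0 if doc["domain_authority"] >= 80 else 0.3     # domain trust
    freshness = 1.0 if doc["days_since_modified"] <= 30 else 0.2  # recency
    structure = 1.0 if PHONE_RE.search(text) else 0.0  # parseable contact line
    # Keyword repetition: brand names and query terms stuffed into the text
    repetition = min(sum(text.count(t.lower()) for t in query_terms) / 10, 1.0)
    return 0.3 * authority + 0.2 * freshness + 0.2 * structure + 0.3 * repetition

poisoned_pdf = {
    "text": ("Emirates Reservations Phone Number: +1 (833) 621-7070. "
             "Call +1 (833) 621-7070 for Emirates reservations support. ") * 3,
    "domain_authority": 90,     # hosted on a compromised .gov site
    "days_since_modified": 2,   # freshly uploaded
}
print(retrieval_score(poisoned_pdf, ["Emirates", "reservations", "phone"]))
```

The takeaway: a freshly uploaded, keyword-stuffed PDF on a compromised high-authority domain can win the retrieval race without exploiting any model vulnerability at all.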
Perplexity and Google AI Overview then merged these poisoned sources into a unified “official” answer:

What We Mean by “GEO/AEO‑Optimized” Content
In this report we use GEO/AEO‑optimized to describe content that is deliberately structured to be favored by modern AI‑powered answer systems (ChatGPT, Perplexity, search “AI overviews,” and so on):
- GEO – Generative Engine Optimization
- AEO – Answer Engine Optimization
Both terms refer to the same practical idea: designing pages, snippets, or PDFs so that generative/answer engines are more likely to:
- Retrieve the content,
- Treat it as authoritative, and
- Quote it directly in their final answer.
Attackers apply GEO/AEO techniques to their spam PDFs/HTML in several ways (a short parsing sketch after this list shows why the structure matters):
- Matching the exact wording of likely user questions
- Using simple Q&A or list formats that are easy for models to parse, e.g., “Emirates Reservations Phone Number: +1 (833) 621‑7070”
- Repeating the same brand name and phone number several times in the document
- Embedding the content on high‑authority or trusted domains (e.g., compromised .gov, .edu, or popular WordPress sites)
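Why does that structure matter? Because a line formatted this way yields a clean brand-to-number pair with a single pattern match. The extractor below is a hypothetical sketch we wrote for illustration; it is not any engine's actual parsing code:

```python
import re

# Hypothetical extractor, for illustration only. A line formatted as
# "<Brand> Reservations Phone Number: <number>" yields a clean
# (brand, number) pair in a single match, with nothing to disambiguate.
PAIR_RE = re.compile(
    r"(?P<brand>[A-Z][\w ]+?)\s+Reservations Phone Number:\s*"
    r"(?P<number>\+?\d[\d\s().\-]{7,})"
)

spam_snippet = "Emirates Reservations Phone Number: +1 (833) 621-7070"
m = PAIR_RE.search(spam_snippet)
if m:
    print(m.group("brand"), "->", m.group("number"))
    # prints: Emirates -> +1 (833) 621-7070
```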

For traditional SEO, the goal is to appear high in a list of search results.
For GEO/AEO, the goal is more direct: be the single piece of content that the AI assistant chooses, summarizes, and presents as “the answer.”
Abuse of YouTube and Yelp as GEO/AEO Spam Channels
This campaign is not limited to PDFs hosted on compromised sites. We also observed attackers abusing user‑generated content platforms such as YouTube and Yelp to push the same fraudulent “customer care” phone numbers into search and answer engines.
YouTube: Video Metadata as Answer-Engine Bait
In one case, we identified a YouTube channel publishing large numbers of low-value “support” videos, such as:


Example YouTube channel Travelsupport-q4n abusing video titles and descriptions for Generative/Answer Engine Optimization. The description is packed with airline brand names and a repeated scam phone number designed to be scraped and reused by LLMs.
The actual video content is largely irrelevant. The real payload is in the titles and descriptions, which are heavily stuffed with:
- Airline brand names (e.g., Emirates, British Airways, etc.)
- Phrases like “customer care”, “reservation help”, “toll‑free number”
- The same fraudulent phone numbers repeated multiple times
This is classic GEO/AEO behavior: the attacker is not trying to convince human viewers but rather, as the scoring sketch after this list illustrates, to:
- Get the content indexed by search engines and generative engines, and
- Have those systems extract and reuse the phone number when answering “how do I contact <airline>?” style questions.
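How mechanical is this stuffing? The rough scoring sketch below rates a description by brand-mention and phone-number density. The brand watchlist, weights, and the idea of a density score are assumptions we chose for illustration, not a production detection rule:

```python
import re

PHONE_RE = re.compile(r"\+?1?[\s\-.]*\(?\d{3}\)?[\s\-.]*\d{3}[\s\-.]*\d{4}")
BRANDS = ["emirates", "british airways", "delta", "lufthansa"]  # assumed watchlist

def stuffing_score(description: str) -> float:
    """Heuristic: many brand mentions plus a repeated phone number in a
    short description is characteristic of GEO/AEO bait, not of content
    written for human viewers."""
    text = description.lower()
    brand_hits = sum(text.count(b) for b in BRANDS)
    phone_hits = len(PHONE_RE.findall(description))
    words = max(len(text.split()), 1)
    return (brand_hits + 2 * phone_hits) / words

desc = ("Emirates customer care Emirates reservation help toll-free number "
        "+1 (833) 621-7070 call Emirates now +1 (833) 621-7070")
print(f"{stuffing_score(desc):.2f}")  # a high ratio flags the description
```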
Yelp: Bot Reviews Injecting Fake “Support” Numbers
We also saw the same strategy applied to review platforms. For example, on the public Yelp page for an Emirates location, multiple “reviews” appear to be bot‑generated comments that:
- Mention airline customer service, reservations, or booking help
- Embed a phone number in the middle of the review text
- Sometimes repeat the number or include multiple formatting variants
To a casual user, these look like ordinary reviews. To a generative engine, they are fresh, text‑heavy documents strongly associating a specific airline with a specific phone number. When the model later searches for “Emirates customer care phone number”, these spam reviews can become part of the retrieved context and end up amplified in the final answer:
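Because the same scam number shows up in many formatting variants across these reviews, a useful first step for defenders is to normalize every candidate number to bare digits before comparing or clustering. A minimal sketch, assuming US-style numbers:

```python
import re

def normalize_us_number(raw: str) -> str:
    """Reduce a US-formatted phone number to its 10 significant digits,
    so formatting variants of the same scam number cluster together."""
    digits = re.sub(r"\D", "", raw)           # strip everything but digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                   # drop the country code
    return digits

variants = ["+1 (833) 621-7070", "1-833-621-7070", "833.621.7070"]
assert len({normalize_us_number(v) for v in variants}) == 1  # all one number
print(normalize_us_number(variants[0]))  # 8336217070
```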

Why These Platforms Matter
YouTube descriptions and Yelp reviews both have characteristics that make them attractive to attackers:
- High domain trust – major platforms with good search visibility.
- Frequent updates – “freshness” is a strong ranking signal.
- User-generated text – easy to stuff with brand names and phone numbers.
From the perspective of a generative engine, this content looks:
- Recent,
- Relevant to user questions (airline support, reservations, etc.), and
- Strongly associated with specific entities (Emirates, British Airways, etc.)
That combination makes it very likely to be pulled into LLM context and summarized as if it were legitimate information.
These YouTube and Yelp examples show that attackers are not just poisoning traditional web pages and PDFs; they are systematically targeting any text surface that generative and answer engines are likely to crawl and trust.
LLMs Can Produce Correct Answers While Still Pulling From Poisoned Sources
An important observation from our analysis is that even when an LLM appears to provide the correct answer, its underlying retrieval pipeline may still be pulling from compromised or spam-inflated sources. This creates a subtle but serious risk: the model’s output looks trustworthy on the surface, yet the citations reveal that the system is still ingesting poisoned material. In other words, the attack is actively influencing the retrieval layer even when the final answer happens to be correct.
This behavior also suggests that the attacker’s GEO/AEO spam is penetrating multiple LLM ecosystems—not just Perplexity’s. It highlights how deeply these poisoned documents have propagated across web indices and answer engines, creating a broad, cross-platform contamination effect.
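This suggests a practical mitigation: even when an answer looks correct, cross-check the assistant's citations and any phone numbers it emits against known indicators. The helper below is a hypothetical sketch using IoCs from this report; the function names and data structures are our own illustration, not part of any vendor's API:

```python
import re
from urllib.parse import urlparse

# Hypothetical cross-checker, for illustration; the indicators come from
# the IoC section of this report (non-exhaustive).
BAD_NUMBERS = {"8336217070", "8887053110", "8554739012"}
BAD_HOSTS = {"www.mapmyrun.com"}  # known spam-hosting page, see IoCs below

def normalize(raw: str) -> str:
    digits = re.sub(r"\D", "", raw)  # keep digits only
    return digits[1:] if len(digits) == 11 and digits.startswith("1") else digits

def audit_answer(answer_text: str, cited_urls: list[str]) -> list[str]:
    """Return warnings if an AI answer or its citations touch known IoCs."""
    warnings = []
    phone_re = r"\+?1?[\s\-.]*\(?\d{3}\)?[\s\-.]*\d{3}[\s\-.]*\d{4}"
    for num in re.findall(phone_re, answer_text):
        if normalize(num) in BAD_NUMBERS:
            warnings.append(f"known scam number in answer: {num.strip()}")
    for url in cited_urls:
        if urlparse(url).netloc.lower() in BAD_HOSTS:
            warnings.append(f"citation from known spam host: {url}")
    return warnings

print(audit_answer(
    "Call +1 (833) 621-7070 to book.",
    ["https://www.mapmyrun.com/routes/view/6655896305/"],
))
```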
ChatGPT Example

Anthropic Claude Example

Indicators of Compromise (IoCs)
This section lists concrete artifacts seen in the wild so that defenders, threat intel teams, and vendors can search for related activity.
Important: This list is not exhaustive and will evolve over time.
The presence of a number or URL listed below should be treated as a strong signal warranting further investigation, not as definitive attribution.
Known Scam Phone Numbers (Non‑Exhaustive)
| Claimed Brand(s) | Phone Numbers |
|---|---|
| Emirates Airlines | +1 (833) 621-7070, +1 (888) 727-0652, +1 (888) 421-5489, +1 (888) 589-3291, +1 (209) 623-2883, +1 (877) 394-5034 |
| British Airways | +1 (888) 727-0652 |
| American Airlines | +1 (888) 714-8232 |
| Delta Airlines | +1 (833) 621-7070 |
| Lufthansa Airlines | +1 (888) 714-9827, +1 (844) 987-7032, +1 (888) 364-3448, +1 (888) 796-1565, +1 (888) 975-2016 |
| Alaska Airlines | +1 (888) 348-6013 |
| JetBlue Airlines | +1 (844) 584-4737 |
| Sky Airlines | +56 225-830-706, +56 227-123-354, +56 227-123-355, +1 (877) 772-9141, +1 (866) 897-5122 |
| Southwest Airlines | +1 (888) 727-0652 |
| Frontier Airlines, Allegiant, Amtrak | +1 (877) 394-5034 |
| Booking.com, Travelocity, Orbitz, Hotels.com, Priceline, Expedia | +1 (844) 687-3011 |
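When hunting for these numbers in web logs, ticket systems, or chat transcripts, it helps to search for several textual renderings of each number, since the attackers deliberately vary formatting. A small sketch; the variant list is our assumption about common formats:

```python
def search_variants(digits10: str) -> list[str]:
    """Expand a 10-digit US number into common textual renderings worth
    grepping for in logs, transcripts, and crawled content."""
    a, b, c = digits10[:3], digits10[3:6], digits10[6:]
    return [
        f"+1 ({a}) {b}-{c}",   # +1 (833) 621-7070
        f"1-{a}-{b}-{c}",      # 1-833-621-7070
        f"({a}) {b}-{c}",      # (833) 621-7070
        f"{a}.{b}.{c}",        # 833.621.7070
        f"{a}{b}{c}",          # 8336217070
    ]

for v in search_variants("8336217070"):
    print(v)
```

Running this over the numbers in the table above yields grep-ready strings for SIEM or log queries.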
Sample Compromised or Abused Hosts
These are examples of sites that were either compromised or abused as hosting for spam PDFs/HTML containing fraudulent support numbers:
| User-Generated Content (UGC) Pages Hosting Spam PDFs or Bot Comments |
|---|
| https://www.mapmyrun[.]com/routes/view/6655896305/ |
| https://www.yelp[.]com/biz/emirates-los-angeles |
| https://www.yelp[.]com/biz/sky-airline-santiago-5 |
Conclusion
The shift from traditional search engines to AI-powered answer engines has created a new and largely unmonitored attack surface. Our investigation shows that threat actors are already exploiting this frontier at scale—seeding poisoned content across compromised government and university sites, abusing user-generated platforms like YouTube and Yelp, and crafting GEO/AEO-optimized spam designed specifically to influence how large language models retrieve, rank, and summarize information.
The result is a new class of fraud in which AI systems themselves become unintentional amplifiers of scam phone numbers. Even when models provide correct answers, their citations and retrieval layers often reveal exposure to polluted sources. This tells us the problem is not isolated to a single model or single vendor—it is becoming systemic.
At Aurascape, we believe that understanding these emerging attack patterns is the first step toward building safer AI ecosystems. Our work focuses on uncovering novel abuse techniques, analyzing how adversaries adapt to new technologies, and helping organizations understand the risks created by the rapidly evolving interplay between AI, search, and open-web content.
This research highlights a simple but urgent truth: as generative engines reshape how people access information, attackers will reshape their tactics to target these systems. Defending the next decade of user trust will require collaboration across AI vendors, platform operators, enterprises, and the security community.
Aurascape will continue to study these threats, share insights with the broader ecosystem, and contribute research that helps organizations prepare for a world where AI has become part of the attack surface itself.
If your team is exploring these challenges or wants to better understand how adversaries are adapting to AI-driven interfaces, we welcome conversations and collaboration.
Aurascape Solutions
- Discover and monitor AI – Get a clear picture of all AI activity.
- Safeguard AI use – Secure data and compliance in AI usage.
- Copilot readiness – Prepare for and monitor AI Copilot use.
- Coding assistant guardrails – Accelerate development, safely.
- Frictionless AI security – Keep users and admins moving.