Scam Sites at Scale: LLMs Fueling a GenAI Criminal Revolution

Cybercrime groups, like other businesses, can create more content in less time using GenAI tools. Over the last six months, Netcraft identified threat actors using these technologies across a range of attacks, from reinventing advance-fee fraud to spamming the crypto space. In total, our observations show LLM-generated text being used across a variety of the 100+ attack types we cover, with tens of thousands of sites showing these indicators.

Netcraft’s first-party research into the use of generative artificial intelligence (GenAI) to create text for fraudulent websites in 2024 includes:

  • A 3.95x increase in websites with AI-generated text observed between March and August 2024, with a 5.2x increase over a 30-day period starting July 6, and a 2.75x increase in July alone—a trend we expect to continue over the coming months
  • A correlation between the July spike in activity and one specific threat actor
  • Thousands of malicious websites across the 100+ attack types we support
  • AI-generated text appearing in phishing emails as well as in copy on fake online shopping websites, unlicensed pharmacies, and investment platforms
  • AI improving search engine optimization (SEO) rankings for malicious content
  • July 2024 saw a surge in large language models (LLMs) being used to generate content for phishing websites and fake shops. Netcraft was routinely identifying thousands of websites each week using AI-generated content. However, in that month alone we saw a 2.75x increase (165 domains per day on the week centered July 1 vs. 450 domains per day on the week centered July 31) with no influencing changes to detection. This spike can be attributed to one specific threat actor setting up fake shops, whose extensive use of LLMs to rewrite product descriptions contributed to a 30% uplift in the month’s activity.

    These numbers offer insight into how quickly the volume of fraudulent online content could grow in the coming year: if more threat actors adopt the same GenAI-driven tactics, we can expect to see more of these spikes in activity and a greater upward trend overall.

    “As an AI language model, I can make scam emails more believable”

    Threat actors in the most traditional forms of cybercrime—like phishing and advance fee fraud emails—are enhancing their craft with GenAI.

    Netcraft observed signs of threat actors’ prompts being leaked in responses, providing insight into how they are now employing LLMs. In our Conversational Scam Intelligence service—which uses proprietary AI personas to interact with criminals in real-time—Netcraft has observed scammers using LLMs to rewrite emails in professional English to make them more convincing.

    Fake investment platforms are particularly well positioned for LLM enhancement, because the templates we’ve typically seen for these scams are often generic and poorly written, lacking credibility. With the help of GenAI, threat actors can now tailor their text more closely to the brand they are imitating and invent compelling claims at scale. By using an LLM to generate text that has a professional tone, cadence, and grammar, the website instantly becomes more professional, mimicking legitimate marketing content. That is, if they remember to remove any artifacts the LLM leaves behind.
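The leftover artifacts mentioned above are often detectable with simple pattern matching against a page's visible text. A minimal illustrative sketch in Python (the phrase list and helper function are hypothetical examples of well-known LLM tells, not Netcraft's actual detection logic):

```python
import re

# Illustrative only: a few telltale phrases that LLMs commonly leave behind
# when scammers paste model output into a page without proofreading.
LLM_ARTIFACTS = [
    r"as an ai language model",
    r"i cannot fulfill (?:this|that|your) request",
    r"i'm sorry, but i can(?:not|'t) assist",
    r"certainly! here(?: is|'s)",
    r"\[insert [^\]]+\]",  # unfilled template placeholders, e.g. [insert brand name]
]

def llm_artifact_hits(page_text: str) -> list[str]:
    """Return the artifact patterns found in a page's visible text."""
    text = page_text.lower()
    return [p for p in LLM_ARTIFACTS if re.search(p, text)]

hits = llm_artifact_hits(
    "Certainly! Here's a professional product description for [insert brand name]..."
)
```

A real pipeline would treat matches like these as one signal among many, since a stylometric classifier is needed once attackers learn to strip the obvious phrases.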

    There’s no honor among thieves, of course. Just as criminals are happy to siphon credentials from other phishing sites, Netcraft observed that when they see a convincing LLM-generated template, they may replicate the content almost verbatim. To evade detection and correct errors in the original template, some threat actors appear to be using LLMs to rewrite existing LLM-drafted text.

    Threat actors are becoming more effective at using GenAI tools in a highly automated fashion. This enables them to deploy attacks at scale in languages they don’t speak, though it also means they can overlook LLM-produced errors in the content. For example, Netcraft came across numerous websites where the page content itself warns against the very fraud it’s enabling.

    It’s no surprise that threat actors are beginning to utilize GenAI to both create efficiencies and improve the effectiveness of their malicious activities. Netcraft has been observing this trend for some time and developing suitable countermeasures in response. Netcraft’s platform flags attacks with indicators of LLM-generated content quickly and accurately, ensuring customers get visibility of the tactics being used against them.

    The complete research report is available on Netcraft’s website.

    Featured

    • Cloud Security Alliance Brings AI-Assisted Auditing to Cloud Computing

      The Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining standards, certifications, and best practices to help ensure a secure cloud computing environment, today introduced an innovative addition to its suite of Security, Trust, Assurance and Risk (STAR) Registry assessments with the launch of Valid-AI-ted, an AI-powered, automated validation system. The new tool provides an automated quality check of assurance information of STAR Level 1 self-assessments using state-of-the-art LLM technology.

    • Report: Nearly 1 in 5 Healthcare Leaders Say Cyberattacks Have Impacted Patient Care

      Omega Systems, a provider of managed IT and security services, today released new research that reveals the growing impact of cybersecurity challenges on leading healthcare organizations and patient safety. According to the 2025 Healthcare IT Landscape Report, 19% of healthcare leaders say a cyberattack has already disrupted patient care, and more than half (52%) believe a fatal cyber-related incident is inevitable within the next five years.

    • AI Is Now the Leading Cybersecurity Concern for Security, IT Leaders

      Arctic Wolf recently published findings from its State of Cybersecurity: 2025 Trends Report, offering insights from a global survey of more than 1,200 senior IT and cybersecurity decision-makers across 15 countries. Conducted by Sapio Research, the report captures the realities, risks, and readiness strategies shaping the modern security landscape.

    • Analysis of AI Tools Shows 85 Percent Have Been Breached

      AI tools are becoming essential to modern work, but their fast, unmonitored adoption is creating a new kind of security risk. Recent surveys reveal a clear trend – employees are rapidly adopting consumer-facing AI tools without employer approval, IT oversight, or any clear security policies. According to Cybernews Business Digital Index, nearly 90% of analyzed AI tools have been exposed to data breaches, putting businesses at severe risk.

    • Software Vulnerabilities Surged 61 Percent in 2024, According to New Report

      Action1, a provider of autonomous endpoint management (AEM) solutions, today released its 2025 Software Vulnerability Ratings Report, revealing a 61% year-over-year surge in discovered software vulnerabilities and a 96% spike in exploited vulnerabilities throughout 2024, amid an increasingly aggressive threat landscape.

    New Products

    • Mobile Safe Shield

      SafeWood Designs, Inc., a manufacturer of patented bullet resistant products, is excited to announce the launch of the Mobile Safe Shield. The Mobile Safe Shield is a moveable bullet resistant shield that provides protection in the event of an assailant and supplies cover in the event of an active shooter. With a heavy-duty steel frame, quality castor wheels, and bullet resistant core, the Mobile Safe Shield is a perfect addition to any guard station, security desk, courthouse, police station, school, office space, and more. The Mobile Safe Shield is incredibly customizable. Bullet resistant materials are available in UL 752 Levels 1 through 8 and include glass, white board, tack board, veneer, and plastic laminate. Flexibility in bullet resistant materials allows the Mobile Safe Shield to blend with current interior décor for a seamless design aesthetic. Optional custom paint colors are also available for the steel frame.

    • HD2055 Modular Barricade

      Delta Scientific’s electric HD2055 modular shallow foundation barricade is tested to ASTM M50/P1 with negative penetration from the vehicle upon impact. With a shallow foundation of only 24 inches, the HD2055 can be installed without worrying about buried power lines and other below-grade obstructions. The modular make-up of the barrier also allows you to cover wider roadways by adding additional modules to the system. The HD2055 boasts an Emergency Fast Operation of 1.5 seconds, giving the guard ample time to deploy in a high-threat situation.

    • Luma x20

      Snap One has announced its popular Luma x20 family of surveillance products now offers even greater security and privacy for home and business owners across the globe by giving them full control over integrators’ system access to view live and recorded video. According to Snap One Product Manager Derek Webb, the new “customer handoff” feature provides enhanced user control after initial installation, allowing the owners to have total privacy while also making it easy to reinstate integrator access when maintenance or assistance is required. This new feature is now available to all Luma x20 users globally. “The Luma x20 family of surveillance solutions provides excellent image and audio capture, and with the new customer handoff feature, it now offers absolute privacy for camera feeds and recordings,” Webb said. “With notifications and integrator access controlled through the powerful OvrC remote system management platform, it’s easy for integrators to give their clients full control of their footage and then to get temporary access from the client for any troubleshooting needs.”