Dive Brief:
- Companies’ scattershot adoption of AI constitutes a major security risk, according to a report the security firm Netskope published on Tuesday.
- Many employees continue using AI tools through personal accounts that lack the proper security guardrails and fall outside the purview of their organizations’ IT teams, creating opportunities for hackers to manipulate those tools and breach corporate networks.
- “This combination of novel AI-driven threats and legacy security concerns defines the evolving threat landscape for 2026,” Netskope said in its report.
Dive Insight:
Shadow AI has been a known issue for years, but it remains a persistent challenge for organizations that are racing to incorporate AI into their workflows.
Nearly half (47%) of people using generative AI platforms are doing so through personal accounts that their companies aren’t overseeing, according to Netskope’s report, which is based on cloud security analytics from October 2024 to October 2025. Unmonitored AI use creates gaps in companies’ security defenses that hackers could exploit.
“A substantial share of employees are relying on tools such as ChatGPT, Google Gemini and Copilot, using credentials not associated with their organization,” Netskope said.
The data paints a mixed picture of trends in personal AI use. On one hand, the percentage of people using personal AI apps (47%) dropped significantly from the prior year, when it was 78%, and the percentage of people using company-approved accounts rose from 25% to 62%. On the other hand, the percentage of people switching between personal and enterprise accounts more than doubled year over year, from 4% to 9%. That finding, Netskope said, indicated that companies “still have work to do to provide the levels of convenience or features that users desire.”
Personal AI use in corporate environments creates multiple risks, including regulatory compliance gaps and unsecured API connections between external AI services and internal company servers. Data exposure remains one of the most common consequences of unvetted AI use, and Netskope said it had observed a year-over-year doubling in “the number of incidents of users sending sensitive data to AI apps,” with the average company experiencing 223 such incidents per month.
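Netskope’s report doesn’t describe specific tooling, but the data-exposure figure reflects the kind of incident that DLP-style prompt inspection is designed to catch. As a rough illustration only, the sketch below shows a minimal pattern check a security gateway might run on an outbound prompt; the pattern names and example prompt are hypothetical and not drawn from Netskope’s findings.

```python
import re

# Hypothetical patterns a DLP-style check might flag before a prompt
# leaves the corporate network; real deployments use far richer detectors.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this record: SSN 123-45-6789, card 4111 1111 1111 1111"
    hits = scan_prompt(prompt)
    if hits:
        # A gateway would block or redact here and log a data-exposure incident.
        print(f"Blocked: prompt contains {', '.join(hits)}")
    else:
        print("Prompt allowed")
```

When a personal, unmanaged account is used, no corporate gateway sits in that path, which is why these incidents go undetected.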
Security experts say the best way for organizations to crack down on shadow AI use and prevent such incidents is to prioritize the adoption of AI governance processes.
“The shift toward managed [AI] accounts is encouraging,” Netskope said, “yet it also highlights how quickly employee behavior can outpace governance.” The company recommended that organizations implement “clearer policies, better provisioning, and ongoing visibility into how AI tools are actually being used across the workforce.”
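That “ongoing visibility” typically comes from secure web gateway or CASB telemetry, the same kind of data Netskope’s analytics draw on at much larger scale. As a loose illustration of the idea, the sketch below tallies AI app logins from hypothetical proxy records by whether the credential belongs to the corporate domain; the host list, domain, and log format are invented for the example.

```python
from collections import Counter

# Hypothetical inputs: a corporate domain, a watchlist of AI app hosts,
# and a few proxy log records showing which account logged in where.
CORPORATE_DOMAIN = "example.com"
AI_APP_HOSTS = {"chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}

log_entries = [
    {"host": "chat.openai.com", "login": "alice@example.com"},
    {"host": "gemini.google.com", "login": "bob@gmail.com"},
    {"host": "copilot.microsoft.com", "login": "carol@example.com"},
]

def classify(entry: dict) -> str | None:
    """Label an AI app access as 'managed' or 'personal' by login domain."""
    if entry["host"] not in AI_APP_HOSTS:
        return None
    domain = entry["login"].rsplit("@", 1)[-1]
    return "managed" if domain == CORPORATE_DOMAIN else "personal"

counts = Counter(label for e in log_entries if (label := classify(e)))
print(counts)  # e.g. Counter({'managed': 2, 'personal': 1})
```

Tracking that ratio over time is one simple way to measure whether provisioning and policy changes are actually moving employees onto managed accounts.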