Dive Brief:
- Hackers are increasingly using AI in their attacks, and defenders should follow suit, Check Point Software Technologies said in a report published Wednesday.
- The company’s AI security report, announced at the 2025 RSAC Conference in San Francisco, also found that one in 13 generative AI prompts contained potentially sensitive information, and one in every 80 prompts posed “a high risk of sensitive data leakage.”
- Unauthorized AI tools, data loss, and AI platform vulnerabilities topped the list of AI risks for enterprises, according to Check Point.
Dive Insight:
Check Point’s report represents a road map for businesses interested in developing AI tools to streamline their operations. It also offers warnings to companies that aren’t interested in using AI — namely, that poor access controls and software management could allow their employees to use AI tools without permission.
Whether authorized or unauthorized, AI tools can expose corporate secrets, user data, and other sensitive information. The Check Point report catalogs the risks that companies need to be aware of. For example, it warns that unapproved AI software “can lead to security vulnerabilities, compliance issues, and inconsistent data management, exposing a company to operational risk or data breaches.”
Regarding data loss, the report urges businesses to “assess AI applications for data protection and industry best practices” before adopting them, noting that some applications store data, share it with third parties, or fail to protect it from hackers.
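The report doesn’t spell out a mechanism for catching risky prompts before they leave the enterprise, but the kind of screening it implies can be sketched simply. Below is a minimal, hypothetical Python example; the regex patterns, category names, and `scan_prompt` helper are illustrative assumptions, not Check Point’s methodology or tooling:

```python
import re

# Hypothetical patterns for a few common classes of sensitive data.
# Real DLP tools use far more robust detection; these are assumptions
# for illustration only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of potentially sensitive data found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this customer record: jane@example.com, SSN 123-45-6789"
    hits = scan_prompt(prompt)
    if hits:
        # Flag or block the prompt before it reaches an external AI service.
        print(f"Blocked: prompt contains possible {', '.join(hits)}")
    else:
        print("Prompt passed screening")
```

A screen like this, sitting between employees and external AI services, is one way an organization could surface the roughly one-in-13 prompts that Check Point found contained potentially sensitive information.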
The report also offers a sample of Check Point researchers’ findings about hackers’ use of AI. “Currently, ChatGPT and OpenAI’s API are the most popular models for cyber criminals, while others like Google Gemini, Microsoft Copilot, and Anthropic Claude are quickly gaining popularity,” the report said, citing information gathered on the dark web.
At the same time, Check Point said, the AI software “landscape is changing with the launch of open-source models like DeepSeek and Qwen by Alibaba,” which “have minimal usage restrictions.” In addition, the rise of bespoke hacking-focused AI programs, such as WormGPT and the FunkSec ransomware group’s AI-powered DDoS tool, marks an important development in the cybercrime ecosystem, one that cyber defenders should watch closely.
Check Point’s report makes the case that defenders should begin incorporating AI, including the company’s own anomaly detection platform, into their defensive strategies to counter the accelerating effects of offensive AI.
“The use of AI in cyber crime is no longer theoretical,” the researchers wrote. “It’s evolving in parallel with mainstream AI adoption, and in many cases, it’s moving faster than traditional security controls can adapt.”