Dive Brief:
- Executives and workers are misusing AI tools and disregarding company protocols, contributing to higher security risks, according to a CalypsoAI survey published earlier this month. The AI security provider surveyed more than 1,000 U.S.-based office workers.
- More than half of workers said they would ignore policies if an AI tool made their job easier, and one-quarter of those surveyed admitted to using AI without confirming whether the tools are allowed. Nearly 30% of workers have sent out AI-generated content without reviewing it.
- Higher-ups aren’t following the rules either. More than one-quarter of C-suite executives admitted to entering proprietary company information into AI tools, and the same proportion said they had used the technology to access sensitive data and documents. More than two-thirds of C-suite leaders said they’d use AI to make their jobs easier even if doing so clashed with internal policies.
Dive Insight:
Enterprises that have tried to safely implement AI typically invest in governance and update companywide policies to provide better guidance.
But if employees at all levels dismiss AI policies at work, these efforts are effectively sabotaged.
“We’re seeing executives racing to implement AI without fully understanding the risks, frontline employees using it unsupervised, and even trusted security professionals breaking their own rules,” CalypsoAI CEO Donnchadh Casey said in a release accompanying the report. “We know inappropriate use of AI can be catastrophic for enterprises, and this isn’t a future threat — it’s already happening inside organizations today.”
Most businesses have already run into at least one type of problematic AI incident, often resulting in financial losses, according to an Infosys report published earlier this month. Enterprises struggling to rein in shadow AI and misuse face further fallout.
It’s a film IT leaders have seen before.
“When cloud adoption took off, many enterprises rushed in without the controls, visibility, or shared responsibility models needed to manage it safely,” Casey said in the report. “AI is even more disruptive.”
The rise of AI agents only adds to the complexity.
CIOs and their C-suite partners continue to rely on training and other controls to ensure employees know the risks of misuse and what it could mean for the organization. Executives are playing to a receptive audience: 3 in 5 employees would welcome more education on the risks involved in AI use, according to a ManageEngine survey published last month.