Dive Brief:
- While one-third of IT decision-makers believe it’s critical to quickly adopt AI coding tools, those working directly with the solutions have security concerns, according to a Snyk survey published Tuesday.
- Nearly 3 in 5 C-suite tech decision-makers cited security fears associated with AI coding tools, but application security teams and developers were three times more likely than C-suite executives to rate AI-generated code security as bad.
- Only 1 in 5 organizations run proofs of concept prior to introducing AI coding tools, and more than half don’t provide developers with tool-related training, according to the survey of 406 global IT professionals.
Dive Insight:
AI coding tools are becoming more popular, but the clock is ticking for companies to adopt protocols that ensure safe, secure use.
From big banks to large retailers, software engineers are warming up to the technology. Nearly two-thirds of organizations are in pilot or deployment stages with AI coding tools, according to a Gartner survey published in April. The analyst firm expects AI coding tool use to snowball in the next four years.
“The GenAI Era has arrived, and there is no ‘putting the genie back in the bottle,’” Danny Allan, CTO at Snyk, said in the report. “We believe it’s now incumbent on the cybersecurity industry to recommend clear guidelines that will allow all of us to benefit from this increased productivity, without the associated security sacrifices.”
Snyk’s report recommends that enterprises establish formal proof-of-concept processes for all new AI technologies, prioritize feedback from security teams, document and audit AI-generated code, invest in secure tech and augment companywide AI training.