AI's ability to write software has been building steadily. GitHub's Copilot, AWS' CodeWhisperer, Tabnine and other platforms assist developers through natural language interfaces, drawing on open-source code to produce software.
Analyst firm Forrester refers to solutions like these as TuringBots: AI-powered software that can help plan, design, build, test and deploy application code. But the surge of interest in generative AI is raising questions about what impact the technology could have on the software creation process.
For enterprises aiming to leverage AI's capabilities in software production, clear guardrails are needed to keep applications safe and processes running smoothly. Since adoption has already started, even at an experimental stage, CIOs must create policies to shape how these tools will be added to the development lifecycle.
"Shutting them down I don't think is the right policy," said Mike Gualtieri, VP, principal analyst at Forrester, speaking on a panel last month.
Instead, executives need to stay informed about the latest developments from the vendor landscape, figure out what works within the current ecosystem and make adoption decisions based on that, Gualtieri said.
Testing is key
AI tools can generate code, suggesting line after line even with the simplest prompt. But a protective layer must exist between machine-made code and production.
Adoption has grown: in February, GitHub reported that Copilot was generating an average of 46% of code among developers using the tool, up from 27% in June.
It's important to remember AI writes code based on what's previously been built by humans, said Diego Lo Giudice, VP and principal analyst at Forrester.
"Is all the source code that exists out there perfectly secure and with no vulnerabilities? No, it's not," said Lo Giudice. "You still need to do your step of security checking and running security scanning tools."
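In practice that checking step means running dedicated security scanners in the pipeline, but the idea of a protective layer between machine-made code and production can be sketched in a few lines. The snippet below is a hypothetical, minimal pre-merge gate, not any vendor's actual check; the function name and the banned-call list are illustrative assumptions.

```python
import ast

# Illustrative deny-list; real scanners check far more than this.
BANNED_CALLS = {"eval", "exec"}

def review_generated_code(source: str) -> list[str]:
    """Return a list of findings; an empty list means the snippet passed this gate."""
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"does not parse: {err.msg}"]
    findings = []
    for node in ast.walk(tree):
        # Flag direct calls to names on the deny-list, e.g. eval(...).
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                findings.append(f"banned call: {node.func.id}() at line {node.lineno}")
    return findings

# Code that parses cleanly can still be rejected on security grounds.
print(review_generated_code("x = eval(input())"))
```

A real pipeline would replace this toy gate with established scanners and a human review step, which is the point the analysts make: generated code goes through the same scrutiny as human-written code before it reaches production.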
Human involvement will be key to how generative AI will shape the software development lifecycle.
"You can never blame ChatGPT, you can never blame a TuringBot," said Gualtieri. "You're still responsible."
Most organizations are still in the experimentation stage of their generative AI journeys. And despite data privacy risks and the unknowns that come with an emerging technology, executives think the benefits of generative AI outweigh the risks.
"Each company might have a different approach, but have to start working with this technology, and understanding quickly how it can make our developers and development teams, and even the entire IT [unit], much more efficient," Lo Giudice said.