Generative AI is expected to help write secure code, improve code analysis, create tests, write documentation, and assist with many other DevSecOps functions. But the technology is still in its infancy, and early results are mixed.
The optimistic view is that by training the AI on libraries of clean, secure code, teaching it best practices, and exposing it to a company's internal policies and frameworks, its code suggestions would be secure from the start. Beyond code generation, the technology could also find security problems in existing code, help with debugging, generate tests, write documentation, and handle many other DevSecOps tasks.
The danger, however, is that generative AI could instead generate insecure code, and do so quickly and authoritatively, creating more problems for companies down the line.
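To make that risk concrete, here is a minimal, hypothetical sketch of the kind of insecure pattern an AI assistant might confidently suggest, alongside the safer alternative. The table name, function names, and data are invented for illustration; the underlying flaw, building a SQL query by string interpolation, is a classic injection vulnerability.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Insecure: user input is interpolated directly into the SQL string,
    # so input like "x' OR '1'='1" matches every row (SQL injection).
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Secure: a parameterized query lets the driver handle escaping.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    # Hypothetical in-memory database for demonstration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    malicious = "x' OR '1'='1"
    print(find_user_insecure(conn, malicious))  # leaks every row
    print(find_user_secure(conn, malicious))    # returns nothing
```

Both versions compile, run, and return results, which is exactly why an insecure suggestion delivered quickly and authoritatively can slip through review.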
So, how many developers are already using generative AI? According to most industry surveys, the majority. A CoderPad survey of more than 13,000 developers, released in January, found that 67% of tech professionals already use AI as part of their job, with ChatGPT the top tool, followed by GitHub Copilot, a generative AI development tool, and Bard. Nearly 59% said they use it for code assistance, more than half use it for learning and tutorials, and around 45% use it for code generation.