
While AI continues to dominate popular conversation, experts have historically focused on two prominent schools of AI: machine learning (ML) and symbolic AI. ML is effective in ambiguous situations but requires abundant training data and offers little transparency into how it reaches its outputs.
Symbolic AI, thanks to the level of human direction involved, can operate transparently, but it does not shine in ambiguous scenarios. Often seen as the “old school” iteration of AI (dating back to the 1950s), symbolic reasoning is now widely used in software and application development for specialized problem-solving.
Enter hybrid AI. By combining multiple forms of AI/ML technology, hybrid AI smooths the path for open source coding, scales the developer experience and offers a more secure approach to development. Let’s look at a few key factors necessary to leverage a hybrid AI approach successfully.
Beware Software Hallucinations
One of the issues with generative ML is that it suffers from hallucinations: cases where a model confidently produces a literal (but incorrect) output that differs from the expected result.
For example, a recent case study found that when AI was asked to handle routine software development tasks, the code it produced was usually near perfect. However, when the developers supplied a common input without specific context, the AI took the input literally and could not produce output in the right context. These types of hallucinations will always be a byproduct of AI solutions like generative AI/ML. However, the better the input, the more likely the output is to be free of hallucinations.
Combining generative ML with symbolic AI can protect against hallucinations and provide the best chance of success. When developers implement both symbolic and generative AI solutions simultaneously, with human experts cross-checking and overseeing development, these tools dramatically boost efficiency and effectiveness across security teams.
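As a concrete illustration, the sketch below pairs a generative step with a transparent symbolic screen: generated code only reaches a human reviewer if it passes a set of explicit, auditable rules. The rules, function names and example snippet are all assumptions for illustration, not a reference implementation; Python’s standard ast module stands in here for a real static analyzer.

```python
import ast

def symbolic_screen(source: str) -> list[str]:
    """Transparent, rule-based (symbolic) checks applied to code that a
    generative model produced. The two rules below are illustrative; a
    production pipeline would run a full static analyzer and policy engine."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"line {exc.lineno}: does not parse ({exc.msg})"]

    findings = []
    for node in ast.walk(tree):
        # Rule 1: flag eval()/exec(), a common injection risk in
        # plausibly worded but hallucinated code.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec"}):
            findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Rule 2: flag bare 'except:' blocks that silently swallow errors.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare except clause")
    return findings

def gate(generated_code: str) -> bool:
    """Only code with zero symbolic findings moves on to human review;
    nothing is merged automatically."""
    findings = symbolic_screen(generated_code)
    for finding in findings:
        print("blocked:", finding)
    return not findings

if __name__ == "__main__":
    # The kind of snippet a generative model might plausibly return.
    candidate = "def run(expr):\n    return eval(expr)\n"
    print("passes symbolic screen:", gate(candidate))
```

The key property is that the symbolic layer is fully inspectable: when it blocks something, a human can see exactly which rule fired and why, which is precisely the transparency generative models lack on their own.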
Don’t Rush Due to AI Hype
Given the decentralized nature of development today, teams should not rush to incorporate the newest code generation tools simply because they are new.
Any code crafted through generative AI tools will almost certainly contain flaws. In fact, 61% of organizations believe that automation has increased false positives and hinders developer productivity. Generative AI may index every public website as part of its predictive analytics, parceling out that content for use as source data. Yet 60% of businesses report using generative AI tooling without a secondary form of verification. It’s the modern-day version of Googling a question and copy-pasting the response.
Like an unverified Google search, unchecked generative AI can lead to inaccuracies and legal concerns. For example, if employees feed confidential information to ChatGPT, the tool could leverage that data to find solutions, exposing sensitive company data in the process. This is why human intervention is so important.
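One simple guardrail, sketched below under the assumption that prompts are screened before leaving the organization, is to redact obviously sensitive strings before they ever reach an external service. The patterns and hostnames are hypothetical placeholders; a real deployment would rely on a vetted data-loss-prevention tool rather than hand-rolled regular expressions.

```python
import re

# Illustrative patterns only; real deployments should use a vetted
# data-loss-prevention (DLP) library, not hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Hypothetical internal domain, standing in for company infrastructure.
    "internal_host": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks like a secret before the prompt
    is sent to an external generative AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Why does auth fail with key sk-abcdef1234567890AB "
           "against db1.internal.example.com for ops@example.com?")
    print(redact(raw))
```

Even a coarse filter like this changes the default from “everything leaves” to “only scrubbed text leaves,” which is the point: verification and containment happen before the model sees anything, not after.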
The Human Matters (More)
Although AI is often used to automate mundane human tasks, it’s important to always preserve space for human intervention in AI processes; software development is no different.
Human brains are designed to solve problems quickly by pulling information from a deep pool of knowledge. AI is built to do much the same, systematically drawing on online and pre-programmed data to generate outputs. When it comes to software and security, however, AI-generated code will usually contain flaws and vulnerabilities, no matter how carefully a given input is tailored to its target problem.
The developer community is excited about the advancement of AI because it will help automate time-consuming processes, leaving more time for larger and more critical business projects. In fact, autonomous agents (AI that can create its own applications based on learned information) could streamline software development and ease some of the human responsibilities.
However, even with this future technology, AI-generated code won’t be secure enough to operate independently; a human perspective is still, and will continue to be, an essential piece of the developer security puzzle. To best support the humans working alongside AI solutions, organizations should work with their developers and security teams to use and understand AI findings, adding metadata and insights to those findings at every step.
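What that support can look like in practice is sketched below: each AI finding carries provenance metadata and an explicit human sign-off field, so a reviewer always knows where a result came from and whether anyone has vetted it. The schema and field names are hypothetical, assumed for illustration rather than drawn from any particular tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Finding:
    """One AI-reported issue, wrapped in the provenance a human reviewer
    needs. Field names are illustrative, not a standard schema."""
    rule: str                          # which check or model produced it
    file: str
    line: int
    message: str
    model: str                         # engine/model version, for auditability
    confidence: float                  # model-reported confidence, 0..1
    reviewed_by: Optional[str] = None  # set only once a human signs off
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def needs_human_review(finding: Finding, threshold: float = 0.9) -> bool:
    """Route anything unreviewed or below the confidence bar to a person
    before it can block a build or auto-apply a fix."""
    return finding.reviewed_by is None or finding.confidence < threshold

if __name__ == "__main__":
    f = Finding(rule="sqli-string-concat", file="app/db.py", line=42,
                message="query built by string concatenation",
                model="demo-model-v1", confidence=0.72)
    print("route to human:", needs_human_review(f))
```

Carrying this metadata at every step is what makes the human oversight workable: the reviewer sees not just a finding, but which model produced it, how confident it was and who, if anyone, has already vetted it.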
The Answer Lies Within the Hybrid Approach
Trustworthy security fixes rely on purpose-built AI. Very few organizations today have the right logic AI in place, or the layers of language models, to recognize a coding flaw in real time before real damage is done.
Organizations can truly secure AI-generated code and avoid hallucinations and developer vulnerabilities, but doing so requires a deep understanding of AI’s impact on software development and the active use of best practices at every step along the way.