How to Harness AI for Effective Security Operations

Here are three ways security teams can harness AI to strengthen their security operations by going beyond just detection and response.


AI has been buzzing around the industry lexicon for so long that we’ve become desensitized to it. But as overhyped as the word is, the actual technology is underapplied. Most AI use cases center on detection and response, delivering a laundry list of alerts; AI isn’t yet widely leveraged to understand an organization’s unique environment and defense capabilities. Applying those insights is a major step toward continual security maturity. Below are three ways security teams can harness AI to strengthen their security operations by going beyond just detection and response.

Fight AI with AI – Creating automated attacks

In a world where threat actors increasingly use AI, AI needs to be core to security. Threat actors have used AI to generate highly realistic and effective attacks, from targeted phishing emails to sophisticated malware code. Rather than playing hide-and-seek with these attacks, security teams can learn from them, studying the threats that impact them most to create training data for their security tools. Large language models (LLMs) can create phishing attacks to better train phishing detectors or to run employee training exercises. LLMs also hold powerful code-generation capabilities, allowing teams to create malware variants that better train malware detection tools, or to generate artificial network data for Network Detection and Response (NDR) tools. By taking a step back and intently studying the attacks against your networks and infrastructure, you can avoid the cat-and-mouse game that leaves SecOps teams high and dry.
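As a minimal sketch of this data-augmentation idea, the toy Python below pads a small phishing corpus with synthetic examples before training a crude keyword model. The `generate_synthetic_phish` helper is a placeholder for a real LLM call, and the bag-of-words scoring is deliberately simplistic; the names and logic here are illustrative assumptions, not a production detector.

```python
# Sketch: augmenting a phishing detector's training set with synthetic samples.
from collections import Counter

def generate_synthetic_phish():
    # Placeholder for an LLM prompt such as:
    # "Write a phishing email impersonating our IT helpdesk."
    return [
        "urgent: verify your password now or lose account access",
        "your mailbox is full, click here to reset your credentials",
    ]

def train_keyword_model(phish, benign):
    """Toy bag-of-words model: per-word phishing frequency minus benign frequency."""
    phish_counts = Counter(w for msg in phish for w in msg.lower().split())
    benign_counts = Counter(w for msg in benign for w in msg.lower().split())
    return phish_counts, benign_counts

def score(model, message):
    phish_counts, benign_counts = model
    return sum(phish_counts[w] - benign_counts[w]
               for w in message.lower().split())

real_phish = ["click this link to claim your prize"]
benign = ["agenda for tomorrow's project meeting attached"]

# Augment the small real corpus with synthetic examples before training.
model = train_keyword_model(real_phish + generate_synthetic_phish(), benign)
print(score(model, "please verify your password via this link"))
```

The point is the workflow, not the model: the synthetic samples stand in for the scarce real attacks a team has observed, letting even a tiny corpus push the detector in the right direction.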

Localize your insights – Asset criticality and contextual defense

In a similar vein, AI is a powerful tool for prioritizing which vulnerabilities and security gaps need immediate attention or investment, and which present less of a business-shattering risk. AI algorithms quickly learn from past exploits and system behavior which vulnerabilities are most likely to be exploited, so patching can be prioritized accordingly. In this way, AI can be used to tailor SecOps to the operational and structural specifics of an organization.

According to Ponemon Institute research, 45% of security professionals say the lack of knowledge of applications and assets across security and IT teams causes a major delay in the vulnerability patching process. Risk scoring is only beneficial when looked at through the context of the assets within the business. A vulnerability with a high CVSS score may not present a huge business risk if it is isolated from business-critical services. With AI, contextual analysis becomes “smarter” as the AI collects historical data about the assets in your localized network, which then provides rich context for vulnerability prioritization. Automatic and continuous identification of critical assets sets a foundation for effective, tailored vulnerability management.
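A hedged sketch of what contextual prioritization might look like: the function below blends a vulnerability's CVSS base score and a learned exploit likelihood with the business criticality of the affected asset. The field names, weights, and the 0.25 isolation discount are illustrative assumptions, not a standard formula.

```python
# Sketch: contextual vulnerability prioritization (illustrative weights).

def priority(vuln, asset):
    """Blend technical severity with business context.

    vuln:  dict with 'cvss' (0-10) and 'exploit_likelihood' (0-1, e.g.
           learned from historical exploit data).
    asset: dict with 'criticality' (0-1, how vital the asset is) and
           'isolated' (True if cut off from business-critical services).
    """
    score = vuln["cvss"] / 10 * vuln["exploit_likelihood"] * asset["criticality"]
    if asset["isolated"]:
        score *= 0.25  # a high CVSS score matters less on an isolated host
    return round(score, 3)

backlog = [
    ("CVE-A on billing server",
     {"cvss": 7.5, "exploit_likelihood": 0.9},
     {"criticality": 1.0, "isolated": False}),
    ("CVE-B on lab sandbox",
     {"cvss": 9.8, "exploit_likelihood": 0.9},
     {"criticality": 0.2, "isolated": True}),
]

# Patch order: highest contextual priority first, not highest CVSS first.
ranked = sorted(backlog, key=lambda t: priority(t[1], t[2]), reverse=True)
for name, vuln, asset in ranked:
    print(name, priority(vuln, asset))
```

Note how the sandboxed CVSS 9.8 finding drops below the CVSS 7.5 finding on the billing server once asset context is factored in; that inversion is exactly the point of the paragraph above.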

Get a copilot – AI as a force multiplier for human intelligence

At this point, LLMs can’t sit in the driver’s seat when it comes to decision-making. They can, however, work alongside security analysts to enhance and streamline their work. AI is a powerful force multiplier for human intelligence, giving analysts a natural-language interface to the data they need to be more efficient at threat hunting, incident investigation, malware analysis, and other security activities. AI can be used to understand an environment so that humans are better able to employ the right remediation activities. One example is Microsoft connecting GPT to security tools such as Defender, Sentinel, Intune, and more through Security Copilot, which is informed by 65 trillion daily signals.
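The copilot pattern itself is simple to sketch: a natural-language question is translated into a structured query over security data, and a human reviews the result before acting. The toy Python below illustrates the shape of that loop; `ask_llm` is a stub standing in for a real model call, and none of this reflects any actual product API such as Security Copilot's.

```python
# Sketch of the analyst-copilot loop: question -> structured filter -> results.

ALERTS = [
    {"id": 1, "severity": "high", "type": "phishing", "host": "mail-01"},
    {"id": 2, "severity": "low", "type": "port-scan", "host": "web-03"},
    {"id": 3, "severity": "high", "type": "malware", "host": "hr-laptop-7"},
]

def ask_llm(question):
    # Stub: a real implementation would prompt an LLM to emit this filter.
    if "high" in question.lower():
        return {"severity": "high"}
    return {}

def query_alerts(question):
    criteria = ask_llm(question)
    # The copilot drafts the answer; the analyst decides what to remediate.
    return [a for a in ALERTS
            if all(a.get(k) == v for k, v in criteria.items())]

print(query_alerts("Show me all high-severity alerts"))
```

Keeping the model's role confined to translating questions into queries, with a human in the loop on every action, is what keeps the LLM out of the driver's seat.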

The most innovative organizations embrace the reality that AI and human intelligence working together is powerful. With data scientists, automation engineers, and security experts consistently providing feedback to a core AI platform, organizations can remain strides ahead of cybercriminals leveraging new technologies in their attacks. However, in the attempt to stay ahead of the trend, make sure not to throw too much responsibility onto AI programs. Letting AI handle too much without human oversight can lead to dangerous, easily avoidable mistakes.

Not all AI is created equal. There are as many applications of AI in security as there are opinions about the technology in the news. According to IBM’s Global AI Adoption Index, 33% of organizations are using AI for the automation of IT processes (AIOps), with 29% using it for security and threat detection. It’s important for a security team to do their research and figure out where in their localized environment they need AI for more efficient SecOps. Simply piling on AI to detect threats will only create security debt unless it acts on a contextualized understanding of the IT network. As an industry, we don’t know for certain the role AI will play in cybersecurity, but we do know how dramatic an impact one new technology can have in a short amount of time. For the time being, businesses should look beyond the hype to find the most efficient way to blend both artificial and human intelligence into their SecOps strategies.

Craig Jones

Craig Jones is Vice President of Security Operations for Ontinue. A provider of AI-powered extended managed detection and response (MXDR) services, Ontinue is on a mission to be the most trusted, 24/7, always-on security partner that empowers customers to embrace the future by using AI to operate more strategically, at scale, and with less risk.

