Is Generative AI a Huge Cyber Threat? Right now, the Hype Outweighs the Risk.

Is AI a solution to cybercrime or just another tool criminals will use against you and your customers?


While many of us have lain awake thinking about the future of AI in cybersecurity – sometimes fearful, sometimes hopeful – the simple truth is that we don’t yet know where AI is going. In other words, the hype currently outweighs the risk. As with any emerging technology, AI certainly presents new challenges for organizations to navigate, but it is not yet a game changer in terms of the quality of AI-related attacks. Note that I said quality, not quantity – more on that later.

There are a number of reasons for the current atmosphere of fear, the biggest being the self-replicating nature of hype itself. Because the technology has received so much buzz, executives and boards are paying extra attention – and so are IT teams and vendors. Numerous tools already take advantage of generative AI to simplify navigating complex UIs and to make security practitioners more effective by taking over monotonous tasks. Security teams aren’t the only ones seeing the benefit, either: the promise of making AI-driven tools more accessible to non-technical teams, such as HR, is already being realized.

This isn’t to say that bad actors haven’t been using generative AI for nefarious purposes – they, of course, have. Let’s take a look at where the risks currently reside.

The Current Risk of AI

One of the main concerns about AI is that it could be used to create malware or cyberattacks that are more adaptive, evasive, and effective than human-generated ones. Researchers have demonstrated how AI chatbots can be jailbroken, allowing tools like ChatGPT to be put to malicious purposes. While this can help bad actors work faster, the attacks generated by jailbroken generative AI tools aren’t as sophisticated or creative as those created by skilled hackers. The advantage lies more in helping non-native speakers craft better phishing text, or in helping inexperienced coders cobble together malware more quickly.

When I first started reading about “jailbroken” large language models (LLMs), I was far more worried that we’d be hearing about malicious actors compromising the AI-driven chatbots that are becoming ubiquitous on legitimate websites. I think that’s a far greater hazard to the common consumer than a phishing email with better grammar. Imagine a scenario where you’re trying to find out the status of an order you’ve actually placed, but either you’re redirected to a fake website running a malicious chatbot, or the chatbot on the legitimate website has been compromised. Since you’ve gone to that site on your own, you would think nothing of providing your credentials, personal details, and perhaps even your credit card number to “look up your order.”

This isn’t to say that GPT-style AIs aren’t an advanced phishing threat – they most certainly are. But for the moment, the risk lies more in the quantity of attacks these tools enable than in their quality. As AI becomes more accessible and affordable, it lets bad actors launch cyberattacks with less effort and cost. The attacks are still based on the same well-known techniques and tools we were familiar with before ChatGPT became ubiquitous – phishing, ransomware, and denial of service. The approach relies on sheer volume, probing organizations for human errors and system flaws. AI may help automate or optimize some aspects of these attacks, but it does not fundamentally change their nature or impact – which means organizations don’t need to overthink their response to its use.

How Organizations Should Respond to AI

The constant buzz around AI in cybersecurity can create a false sense of urgency, leading organizations to overestimate the threat or underestimate their own capabilities. Instead of falling for the hype, organizations should respond to AI threats with a balanced and rational approach.

One key step is investing in training and education for employees and stakeholders. This includes raising awareness about the potential risks and benefits of AI in cybersecurity and teaching people how to identify and prevent common cyberattacks. For example, organizations can train employees to recognize phishing emails by looking for signs like mismatched domains or suspicious links. Rather than focusing only on the risks, organizations should also look for ways to use AI to speed up the work of defending the organization. Just as internet search engines revolutionized quick and easy access to information, AI can distill and summarize large data sets in moments. Employees need to develop skills in crafting prompts and in interpreting the results – quickly discerning errors or misinformation – so that they can work effectively alongside the AI. There are already many free courses on “prompt engineering” for consumers and developers alike.
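On the point about mismatched domains: the same check employees are taught to do by eye – does the domain a link displays match the domain it actually points to? – can also be sketched in code. The snippet below is a minimal, illustrative heuristic of my own (not any product’s API); it naively compares hosts and deliberately ignores subdomain and public-suffix subtleties, so it won’t catch every trick.

```python
from urllib.parse import urlparse

def domain_of(url: str) -> str:
    """Naively extract the host from a URL or bare domain (no public-suffix logic)."""
    host = urlparse(url if "//" in url else "//" + url).hostname or ""
    return host.lower().removeprefix("www.")

def looks_mismatched(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names one domain but whose href points elsewhere.

    Returns False when the visible text contains no dotted host at all
    (e.g. "Click here"), since there is nothing to compare against.
    """
    shown, actual = domain_of(display_text), domain_of(href)
    return "." in shown and shown != actual

# A link displaying "paypal.com" but pointing at a look-alike host is flagged.
print(looks_mismatched("paypal.com", "http://paypa1-login.example.net"))   # True
print(looks_mismatched("example.com", "https://www.example.com/orders"))   # False
```

Real mail filters use far richer signals (reputation, public-suffix lists, homoglyph detection), but the core idea – compare what the user sees with where the link goes – is exactly this simple.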

The other critical step is investing in tools that give organizations a deeper understanding of their entire IT environment. Given that AI is being leveraged to help malicious actors identify flaws and mistakes more efficiently, it is more important than ever to see and manage everything that is on the network or connecting to it. Organizations will gain a far greater defensive advantage from making sure those devices are patched and properly configured than from snatching up shiny new tools to combat overly hyped AI threats. At the end of the day, whether an attack gets in via malware written by a human or by an AI, the attackers are still “inside the house.”
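At its core, that visibility-first discipline is just a comparison: what the inventory says is running versus a minimum patched baseline. Here is a minimal sketch under stated assumptions – the inventory structure, hostnames, packages, and version numbers are all invented for illustration, not drawn from any real tool.

```python
def version_tuple(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple, e.g. "3.0.2" -> (3, 0, 2)."""
    return tuple(int(part) for part in v.split("."))

def unpatched(inventory: dict, baseline: dict) -> list:
    """Return (host, package, version) triples running below the patched baseline."""
    flagged = []
    for host, packages in inventory.items():
        for package, version in packages.items():
            floor = baseline.get(package)
            if floor is not None and version_tuple(version) < version_tuple(floor):
                flagged.append((host, package, version))
    return flagged

# Hypothetical inventory and baseline, for illustration only.
inventory = {
    "web-01": {"openssl": "3.0.2", "nginx": "1.24.0"},
    "db-01": {"openssl": "3.0.13"},
}
baseline = {"openssl": "3.0.13", "nginx": "1.24.0"}

print(unpatched(inventory, baseline))  # [('web-01', 'openssl', '3.0.2')]
```

Production asset-management platforms add discovery, agent reporting, and vendor-specific version schemes, but the defensive payoff comes from the same loop: enumerate everything, compare against a known-good baseline, and fix what falls short.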

The Bottom Line

While generative AI’s recent popularity has lowered the barrier to entry into cybercrime for new and inexperienced criminals, we haven’t yet seen data showing that AI-generated attacks are more effective than their human-generated equivalents. At this point, we seem to be in relative balance: criminals are experimenting with AI in their attacks while security teams implement generative AI in their defenses. That balance can change, but with continued investment and research into generative AI, it’s arguably more likely that the current advantage organizations hold will be expanded. Perhaps generative AI can be a technology that makes the future safer rather than bleaker. In the meantime, however, we can’t get too caught up in AI’s potential, whether for our benefit or our detriment.

The cyberthreats currently emerging from the use of generative AI are best combated by optimizing the tools and approaches that let IT teams address critical security controls through visibility and active management – tools that are themselves now being enhanced with generative AI. Organizations need to secure their vulnerabilities regardless of the source or avenue of attack, because generative AI has not changed the game – yet.