How ChatGPT Can Be Misused by Threat Actors

There are already reports of ChatGPT being used for malicious purposes, and there will inevitably be more.

ChatGPT (Chat Generative Pre-trained Transformer) is an intelligent chatbot that was launched as a prototype by OpenAI on November 30, 2022. Companies are using ChatGPT for a variety of purposes, such as writing code, drafting job descriptions and interview requisitions, copywriting and content creation, customer support, and preparing meeting summaries. Some users are even using ChatGPT to help write their bios for their company's webpage.

To say that ChatGPT has taken the world by storm is an understatement. Reuters reports that the popular chatbot reached 100 million users in January, just two months after its launch. The app had 13 million daily unique visitors in January, making it the fastest-growing consumer app in history. ChatGPT showcases the influence artificial intelligence (AI)-generated content will potentially have in the future, and it’s exciting to explore the possibilities. Unfortunately, there are already reports of the chatbot being used for malicious purposes, and there will inevitably be more.

ChatGPT can be misused in a myriad of ways

Despite all of the cool things ChatGPT can be used for, it can also be used to create malicious code. However, it is not yet advanced enough to make a real impact, which is a good thing. Currently, auto-coding AI tools such as ChatGPT cannot be used to develop mission-critical code, such as programs that locate software vulnerabilities.

Much more interesting is ChatGPT’s incredible capability to generate relatively ‘cookie-cutter’ text that could plausibly have been written by a human. In fact, the biggest commercial use case for ChatGPT-like tools is generating marketing content—since such content is typically based on simple templates and standard wording.

Unfortunately, phishing content and spam follow the same marketing playbook: 'sounding like a human wrote it,' feeling relatively personalized, and carrying a clear, simple message.

ChatGPT could be used by attackers to craft slightly different variations of the same phishing email, in any language in the world. A detection system that relies heavily on natural language signatures, as traditional spam filters do, or on the uniformity of mass-mailed content, would be extremely vulnerable to such attacks.
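
To illustrate why uniformity-based filtering struggles here, the rough Python sketch below (the email text is purely hypothetical) compares two paraphrased variants of the same lure using word-shingle overlap. The similarity score lands near zero even though the intent is identical, which is exactly the gap an LLM-paraphrased campaign exploits. This is an illustrative sketch, not a real detection pipeline.

# Minimal sketch: near-duplicate detection based on content uniformity
# fails when an LLM rewrites the same lure in different words.
# The two example emails below are hypothetical.

def shingles(text, n=3):
    """Return the set of lowercase word n-grams in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

variant_1 = ("Hi, could you process the attached invoice today? "
             "Payment is overdue and the vendor is waiting.")
variant_2 = ("Hello, please settle the invoice I sent earlier as soon as possible; "
             "the supplier has been chasing us about the late payment.")

score = jaccard(shingles(variant_1), shingles(variant_2))
print(f"Shingle similarity: {score:.2f}")  # close to 0 despite identical intent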

ChatGPT-like tools could also be used to convincingly replicate existing, legitimate websites, such as the login pages of banks, social media, and other trusted brands, evading security detection through very small variations.

Now let’s look at more sophisticated ways ChatGPT might be misused.

Adversaries often need to interact with their targets. For example, in a business email compromise attack, the adversary might impersonate the Chief Executive Officer (CEO) and ask the Chief Financial Officer (CFO) to make a wire transfer.

What happens if the CFO responds with a question? Well, it turns out that ChatGPT can provide a very plausible answer and conduct an entire conversation in a professional manner, convincing the CFO that they are indeed speaking with the actual CEO and not with a bot.

Unless the CFO talks to the CEO in person or calls them, they have no way to verify the sender's identity with complete certainty.

Taking it a step further, an attacker could train the model on a large body of text written by a specific person, such as their emails, and then generate emails, contracts, or letters in that person's style, asking for sensitive information, a wire transfer, and so on. Such messages are very difficult to detect.

The attackers would be familiar with the context of the relationship with the intended recipient and would be able to generate very plausible messages.

Open source ChatGPT

It will not be long before anyone can use this technology; many versions of the large language models (LLMs) used in AI are already open source (e.g., the recently released models from Meta), and various groups are working on open-source clones of ChatGPT.

Once these models are available, anyone can access this technology and potentially disable some guardrails put up by OpenAI.

The controls and monitoring systems currently in place to prevent the malicious use of ChatGPT are unlikely to stop attackers, even in the short term, because the 'close enough' source code of open-source variants will let attackers run the models themselves.

What can security teams do?

The current spotlight is on what attackers could potentially do by leveraging ChatGPT, but AI cuts both ways. As the functionality of ChatGPT-like tools increases, cybersecurity vendors and IT professionals need to ensure they have their own deep, AI-based defenses in place to detect ever-stealthier and more convincing attacks, and they must keep enhancing and developing these technologies.
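
As one illustrative building block, the sketch below (using scikit-learn on hypothetical toy data) trains a simple TF-IDF plus logistic-regression text classifier on labeled messages. This is only a sketch of the general approach; a real AI-based defense would rely on far larger, representative datasets and many additional signals such as sender reputation, message headers, and link analysis.

# Minimal sketch of a supervised phishing-text classifier.
# The inline training data and labels are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Urgent: wire transfer needed before end of day, keep this confidential",
    "Your account will be suspended unless you verify your credentials now",
    "Attached are the meeting notes from Tuesday's project review",
    "Reminder: the quarterly all-hands is scheduled for Friday at 10am",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate (toy labels)

# TF-IDF features over unigrams and bigrams, fed to a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

test = ["Please process this payment quietly today, I will explain later"]
print(model.predict_proba(test))  # class probabilities: [legitimate, phishing]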


Asaf Cidon is a professor of electrical engineering and computer science at Columbia University and a Barracuda advisor. Previously, Asaf served as vice president of content security services at Barracuda Networks, where he was one of the leaders for Barracuda Sentinel, the company's AI solution for real-time spear phishing and cyber fraud defense. He is also the cofounder and former CEO of Sookasa, a cloud storage security startup that was acquired by Barracuda. Asaf holds a PhD and MS in Electrical Engineering from Stanford, and a BSc in Computer Engineering from the Technion.