Should We Hit Pause on the AI Race?

The dangers of AI currently outweigh the benefits as we're not equipped to handle widespread adoption without regulation.


Artificial intelligence has never been a hotter topic than it is right now. The explosion of ChatGPT and large language models (LLMs) marks a turning point in the relationship between technology and humankind as we know it…and we are wildly unprepared to deal with the negative fallout of AI.

In fact, more than 31,000 people have shown forethought and responsibility by signing an open letter published by the Future of Life Institute in support of its call to halt AI development. This technology has incredible potential and can undoubtedly be used to make the world a better place. However, right now, the dangers of AI outweigh the benefits because we are not equipped to handle technology with capabilities on such a massive scale without regulation.

While there are several reasons to halt the development of artificial intelligence, they fall into four buckets:

AI Lends Hackers a Helping Hand

Perhaps the reason cybersecurity professionals cite most often for halting the development of AI is the many advantages it hands to hackers. After watching the use of ChatGPT over the past several months, it is clear that generative AI and LLMs are masters at automating tasks for hackers, tasks that, done without this technology, would make executing successful attacks significantly more difficult.

For example, ChatGPT can write extremely convincing phishing emails that lack the typical telltale signs of malicious activity, such as poor grammar and misspellings. The ethical limits intended to prevent the tool from writing phishing emails are extremely easy to bypass. These generative AI tools can also be used to develop malicious code, and they help less sophisticated hackers, known as “script kiddies,” learn how to write it.

Executing targeted attacks also becomes a much simpler task with AI. Tools like ChatGPT can comb through publicly available information in seconds and surface details about an intended victim that hackers can use to customize their attacks, making the language in phishing messages far more convincing.

AI speeds up or even removes several of the steps hackers once needed to take to execute successful attacks, meaning they can increase their attack volumes exponentially. Phishing, malware development and automated reconnaissance are just the start of a long list of reasons we need to halt AI development until these LLMs can be accurately trained and their ethical limits can have a meaningful positive impact.

AI Poses a Major Threat to Data Management

We have already listed a few major ways AI can make hackers’ lives easier. The scary part is that adversaries have likely only scratched the surface of how this technology can benefit them. Add the risk AI poses to data storage on top of all of this, and we have a perfect storm.

Data breaches have been a problem for far longer than AI has been popular, but AI is undoubtedly going to make breaches more common and more severe. Public and private sector companies must ensure their data protection processes are as bulletproof as possible, and this includes training their employees on how to handle sensitive data properly.

The problem-solving abilities of tools like ChatGPT are very attractive to employees worldwide: they paste information into the program, ask it a question and hope for an accurate answer. Companies must ensure that the information employees input isn’t sensitive, because AI tools have become popular targets for attackers due to the large amounts of data they store and how easily their ethical guidelines can be bypassed.
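To make that concrete, here is a minimal sketch of the kind of guardrail a company could place in front of an external LLM: a pre-prompt filter that redacts obviously sensitive patterns before anything leaves the network. The patterns, the redact helper and the example prompt are illustrative assumptions, not a specific product or API.

```python
# Hypothetical pre-prompt filter: redact sensitive patterns before a prompt
# ever reaches an external LLM. Patterns here are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with labeled placeholders; report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt, findings

prompt = "Why did payment fail for jane.doe@example.com, card 4111 1111 1111 1111?"
clean, findings = redact(prompt)
if findings:
    print(f"Redacted before sending: {findings}")
print(clean)  # only this sanitized version would be forwarded to the LLM
```

A filter like this is no substitute for employee training, but it shows how a policy of "never paste sensitive data into an AI tool" can be backed up with a technical control.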

Even the largest companies struggle to keep their data management processes up to par, let alone smaller businesses that lack the resources to make data protection possible. AI poses a major risk to the businesses that serve as the backbone of the global economy.

The Importance of Humans in the Loop

While AI tools that monitor AI training systems are a valuable way to protect against attacks on training models, and can pick up on anomalies faster and more accurately than humans, it is still helpful to have a human involved. In fact, it is crucial. When a monitoring model sees an anomaly in the data and sets off an alert, a human is still needed to put it in context and figure out whether the change is due to a cyber infiltration or some other reason. For example, a data stream for a cybersecurity platform could shift sharply in the hours after an exploit is publicly released, or data at a utility company could vary widely due to the weather, and the monitoring model would flag both. These are scenarios that could catch the attention of a monitoring system but are not actually cause for alarm.
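As a rough illustration of that division of labor, the sketch below pairs a simple statistical monitor with a human review queue: the model flags outliers, but an analyst, not the code, decides whether an alert is an intrusion or a benign spike. The z-score threshold, the handle_reading function and the telemetry names are hypothetical choices for this example, not a description of any particular monitoring product.

```python
# Minimal "human in the loop" sketch: an anomaly detector queues alerts for
# analyst review instead of acting on them automatically.
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag the current reading if it sits far outside the historical distribution."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

review_queue: list[dict] = []  # stands in for a ticketing or SOC triage system

def handle_reading(history: list[float], current: float, source: str) -> None:
    if is_anomalous(history, current):
        # Do NOT auto-block: the analyst supplies the context the model lacks,
        # e.g. a newly published exploit or weather-driven demand.
        review_queue.append({"source": source, "value": current})
        print(f"ALERT from {source}: {current} queued for analyst review")
    else:
        print(f"{source}: {current} within normal range")

baseline = [100.0, 102.0, 98.0, 101.0, 99.0, 100.5]
handle_reading(baseline, 101.0, "login-telemetry")  # normal, no alert
handle_reading(baseline, 250.0, "login-telemetry")  # outlier, routed to a human
```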

Ultimately, a monitoring model alone is short-sighted. But with human involvement, it can become a very powerful tool.

The World Needs Time to Regulate

Last but not least, the world needs time. Time to train AI tools, time to educate citizens on the proper use of these tools, and, perhaps most importantly, time to write and enforce regulations on AI. Ideally, the world’s major cyber powers would have time to collaborate and establish a harmonized approach to dealing with AI.

The bottom line is that AI is more dangerous than helpful right now, and pausing its development is the only way defenders are going to have enough time to catch up with attackers, let alone get ahead. AI needs to be deployed with cybersecurity in mind, especially as machines control more critical processes in our everyday lives. We can only hope that the world’s major cyber powers will recognize that the threat AI poses to their people and governments outweighs the gains, and apply proper regulation.


Gil Cohen is Head of Application Security for CYE. CYE’s cybersecurity optimization platform enables businesses to assess, quantify, and mitigate cyber risk. CYE serves Fortune 500 and mid-market companies in multiple industries around the world.