Guardrails are Needed for Generative AI

As we continue to dive into the world of generative AI, it’s critical to put key guardrails in place today to protect against the fast-evolving threats of tomorrow.


At Black Hat 2023, the annual hacking and cybersecurity conference, generative AI took center stage, with discussions about how the technology is reshaping the cyber industry. Since ChatGPT entered the spotlight, enterprises have been zealously looking for ways to integrate it into new product innovations, business operations, and IT workflows. As with any new technology, however, the bad guys have been at the forefront as well. Securing generative AI, and turning it against cybercriminals, is an undertaking that will take many hands. At the show, the Department of Defense’s Defense Advanced Research Projects Agency (DARPA) introduced the “AI Cyber Challenge” (AIxCC), a two-year competition calling on computer scientists and developers to create AI-powered cybersecurity tools. As we continue to dive into the world of generative AI, it’s critical to put key guardrails in place today to protect against the fast-evolving threats of tomorrow.

Train the Weak Link

Humans often remain the weakest link when it comes to emerging technology, and generative AI is no exception. Neglecting to train employees on how to engage with generative AI safely leaves an organization with a larger attack surface. It’s essential that organizations communicate the danger of feeding company information into any generative AI model. Much as one wouldn’t share sensitive code or proprietary information with friends who work for competitors or with strangers at a coffee shop, employees should be cautious about the data they provide to large language models like ChatGPT.

Company leadership should prioritize education and regular Q&As so employees develop a crystal-clear understanding of the technology’s risks and limitations. Educate employees on what information is highly sensitive and show them how to handle that data. Companies should also consider investing in a non-public model for work that touches their intellectual property: public LLMs may be retrained on the information users enter, and a determined prompter could later coax that data back out of a future version of the model. Using a privately trained model helps ensure sensitive data stays securely within your organization.
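For teams building guardrails of their own, a lightweight pre-filter in front of any public LLM API can catch the most obvious leaks before a prompt ever leaves the network. The Python sketch below is a minimal illustration only; the patterns, the internal domain, and the example prompt are hypothetical placeholders to be swapped for an organization’s real secret formats and naming conventions.

import re

# Hypothetical patterns an organization might treat as sensitive; tune these to
# your own secret formats, internal domains, and address ranges.
SENSITIVE_PATTERNS = [
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED_API_KEY]"),
    (re.compile(r"\b[\w.-]+@acme-corp\.internal\b"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"), "[REDACTED_INTERNAL_IP]"),
]

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings before a prompt leaves the network."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Debug this: api_key=sk_live_123 fails when calling 10.0.4.17"
    print(redact_prompt(raw))  # the key and the internal IP come back redacted

A filter like this is no substitute for training, since determined users can paraphrase around it, but it turns an unwritten rule into an enforceable control.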

Fervently Protect Your Software Supply Chain

Cybercriminals have been ramping up software supply chain attacks for years. Between January and October 2022, the open-source repository npm saw 7,000 malicious package uploads, a nearly 100x increase from 75 uploads in 2020. It should come as no surprise, then, that threat actors are using GenAI as a delivery mechanism to push malicious packages into developers’ environments. Recent research has uncovered “AI package hallucination,” in which ChatGPT recommends a package that doesn’t actually exist; an attacker who spots the hallucinated name can publish a malicious package under it, so developers who follow the AI’s suggestion pull the attacker’s code straight into their supply chain.
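One practical countermeasure is to vet any AI-suggested dependency against the public registry before it ever reaches a build. The Python sketch below is a minimal illustration that queries the public npm registry metadata endpoint (registry.npmjs.org); the package names in the usage example are purely illustrative, and a real pipeline would add checks such as download counts, maintainer history, and handling for scoped package names.

import json
import urllib.error
import urllib.request

REGISTRY = "https://registry.npmjs.org"  # public npm metadata endpoint

def vet_npm_package(name: str) -> str:
    """Return a rough verdict on an AI-suggested npm package name."""
    try:
        with urllib.request.urlopen(f"{REGISTRY}/{name}", timeout=10) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # No such package: a hallucinated name an attacker could still claim.
            return "does not exist - never install on an AI's say-so alone"
        raise
    versions = len(meta.get("versions", {}))
    created = meta.get("time", {}).get("created", "unknown")
    # Very few releases or a very recent creation date deserves extra scrutiny.
    return f"exists - {versions} versions, first published {created}; review before use"

if __name__ == "__main__":
    print(vet_npm_package("left-pad"))                 # long-established package
    print(vet_npm_package("totally-made-up-pkg-xyz"))  # likely hallucinated name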

As with any scenario involving the compromise of supply chains through shared third-party libraries, enterprises should thoroughly test and review code intended for production environments. Any code you intend to run should be evaluated for security, and approved versions should be kept as private copies. Developers should never blindly trust a library found on the internet, let alone one recommended in a chat with an AI bot. For example, security researcher Adrian Wood recently presented at DEF CON on how machine learning models can be weaponized to attack organizational supply chains. Through red teaming and bug bounty research, he targeted machine learning pipelines, compromising targets with supply chain attacks via the open-source platform Hugging Face. By enticing engineers to pull under-utilized, under-documented machine learning models into their pipelines, he demonstrated how attackers could gain full read/write privileges.

Beyond exercising caution with shared libraries, organizations should invest in a comprehensive software bill of materials (SBOM) to achieve granular visibility into every application and open-source library in their environment. This gives developers the ability to quickly find any instance of a vulnerable component and, if needed, close off any impacted endpoints.
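As a rough illustration of how an SBOM pays off during an incident, the Python sketch below scans a directory of CycloneDX-style JSON SBOMs, assumed here to be one file per application, for any build that still contains a known-bad version of a component. The directory path, component name, and version list are hypothetical examples.

import json
from pathlib import Path

def find_component(sbom_dir: str, target_name: str, bad_versions: set[str]) -> list[str]:
    """Scan CycloneDX-style SBOM JSON files for a vulnerable component.

    Assumes one SBOM per application, each with a top-level "components" list
    of {"name": ..., "version": ...} entries, as in the CycloneDX JSON layout.
    """
    hits = []
    for sbom_path in Path(sbom_dir).glob("*.json"):
        doc = json.loads(sbom_path.read_text())
        for component in doc.get("components", []):
            if component.get("name") == target_name and component.get("version") in bad_versions:
                hits.append(f"{sbom_path.stem}: {target_name}@{component['version']}")
    return hits

if __name__ == "__main__":
    # Hypothetical: find every application SBOM still bundling an affected log4j build.
    for hit in find_component("./sboms", "log4j-core", {"2.14.1", "2.15.0"}):
        print(hit)

With that inventory in hand, the follow-up of patching affected builds or isolating impacted endpoints becomes a targeted task rather than a scramble.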

Continue to Focus on Strong Security Controls

Criminals know how to capitalize on emerging technology to advance their own goals, and that is especially true of GenAI. Consider the recent examples of WormGPT and FraudGPT, chatbots that let wannabe threat actors write malicious code, draft phishing emails, and create malware marketed as undetectable. While FraudGPT and tools like it may not enable script kiddies to launch sophisticated threat campaigns, they can make business email compromise (BEC) more persuasive and harder to detect. The ability to craft convincing phishing emails quickly and easily has lowered the barrier to entry for anyone looking to become a novice cybercriminal, and the pool of potential victims is much larger now that language is no longer an obstacle. The good news is that the tactics needed to defend against BEC – employee training, email security, payment authorization controls, and identity and access management – remain the same.

While generative AI is a powerful tool that will undoubtedly change the IT landscape, nefarious applications of the technology are inevitable, making caution crucial. At the end of the day, AI will never replace humans, because it lacks the ability to understand the wider business context and to manage risk with the full picture in view. Humans can also adapt to changing situations and weigh the ethical implications of complex issues – things generative AI may never fully grasp. Instead of blindly jumping on the bandwagon, organizations should prioritize rolling out the technology securely and strategically.