
You would have to hide underneath a rock in Mordor not to hear about Generative AI and ChatGPT.
Generative AI and Large Language Models (LLM) are disrupting the market, and companies are scrambling to implement this revolutionary new technology in every aspect of their business. When a new technology comes out of the adoption curve, it leaves larger organizations waiting to see how the market reacts. Companies aren’t standing by this time. They are excited to see how Generative AI will light up the night sky.
However, before we set the tent on fire with fireworks because we are so excited, let’s pause, breathe and take a half step back.
Whenever an industry decides to throw caution to the wind and adopt technology rapidly, there will be a price to pay. As an evangelist of security and technology, the price will likely be stability, safety, and security.
What are the Most Significant Security Risks in Generative AI?
The greatest risk around Generative AI is how Generative AI is used unrestrictedly in the wild. Meaning your end user is your most significant threat. Generative AI can be turned against itself through prompt exploits and used as a tool to create malicious content and software.
Prompt Exploit Examples:
Malicious Tool Examples:
How to Defend Your Projects
So what can an engineer, product, or business professional do to help embed secure, safe, GenerativeAI into their platform during a land grab without getting scorched like Merry and Pippen?
Learn How Generative AI Works
Generative AI and Large Language models are fantastic, but you can only gain so much knowledge by chatting with ChatGPT. The construct of the LLM blinds the user based on its own bias. If you haven’t already, I suggest taking the time to understand how Generative AI and LLMs work and how they are architected.
It’s a lot more than an omniscient chatbot; far from it. Specifically to ChatGPT, Stephen Wolfram wrote an excellent article: What is Chatgpt Doing and Why Does it Work?
ChatGPT is just one technology powered by OpenAI; many GenerativeAI technologies exist and behave slightly differently. Still, they all are a massive leap forward in how humans will interact with machines and each other.
Matt Bornstein, Guido Appenzeller, and Martin Casado wrote an excellent read about the market and the ecosystem around GenerativeAI. Understanding this ecosystem of Generative AI can help when thinking about how to build and leverage the technology safely.
Understand the Legal, Business Risks of Generative AI
Generative AI is a non-deterministic system. Non-deterministic systems mean the software output is fuzzy or not predictable. This unpredictability can make designing an application with consistent behavior challenging.
The non-deterministic nature can not only affect your platform but potentially have significant legal and business risks associated with it. Mathew F Ferrario from WilmerHale outlines a good list of what to consider with implementing ChatGPT and related Generative AI Chatbots.
-
-
- Contract Risks
- Cybersecurity Risks
- Data Privacy Risks
- Deceptive Trade Practice Risks
- Discrimination Risks
- Disinformation Risks
- Ethical Risks
- Government Contract Risks
- Intellectual Property Risks
- Validation Risks
-
Understand the Difference Between Explainable AI and Generative AI
There are significant differences between AI methodologies. When building AI into your platform, it is a good idea to understand what Explainable AI is and how it differs from the non-deterministic output of GenerativeAI. Depending on your platform, you should leverage more methodologies incorporating Explainable AI to build predictability and assurance in your software.
Combining the two systems can build a much more trustworthy platform. Explainable AI is often table stakes depending on your functional space, such as finance and healthcare.
Best Practices
Soon we will see a flurry of security and compliance products around Generative AI, but it will take some time for those to be built, published, and understood. In the meantime, Secure-by-Design, DevSecOps, and Best Practices are always great approaches to building sustainable software.
NIST has published its Secure Software Development Framework, which lays out a detailed approach to designing and architecting resilient systems.
Primary methodologies such as DevSecOps, which leverages organizations’ communication, security, and compliance automation, and a rigorous testing framework are phenomenal defenses in building resilient Generative AI Applications.
The carrot is also soon to be replaced with a stick. In the most recent Cyber Security Strategy release from the White House, the government is looking to push liability onto those organizations that do not follow secure-by-design principles.
Outside following known software design methodologies and best practices and understanding how non-deterministic Generative AI works, the best advice I can give you and your team is to ask these questions:
-
-
- What is your application supposed to do?
- How can you ensure it works as intended?
- What could it do?
- How do you mitigate bad behavior?
-
It is a raucous party; we love a fireworks show. Generative AI will make an incredible mark on software development. Those who build resilient systems can run faster, achieve more, and build foundational companies to help humanity thrive. There are no shortcuts to building secure, resilient software.