How to Think About Generative AI Risk Mitigation

Act on AI now, and don’t be left behind waiting for a perfect risk mitigation approach that may never come.


Generative AI offers businesses transformative advantages in cost-efficiently bringing new—and often innovative—solutions to market. Trained generative AI tooling can understand and follow instructions communicated in plain language, allows internal users (in many cases including those without a technical background) to assemble applications, analyze data, and create content faster and easier than ever before. Teams can dream big on use cases and discover new approaches at an unprecedented pace, making generative AI a game-changer from a market competition perspective.

At this point in 2023, though, that is all likely preaching to the choir.

What is still less understood is how to unlock the power of generative AI while mitigating risks. Businesses face a few intricacies that require the right strategies to navigate. Many of these risks stem from complexity. If a hallucinating generative AI gives company leaders incorrect business insights, generates the wrong content, or gives customers bad experiences, generative AI applications can backfire—fast. Effective policies and guardrails are essential, as are validation processes and keeping humans in the loop.

However, businesses can also miss out on generative AI advantages if they hesitate or overburden their adoption process. This is not a technology where you want an AI committee slowly exploring implementations and making approvals…unless you want to see competitors running circles around you. My advice for technology leaders: act now, and don’t be left behind waiting for a perfect approach that will never come. Put up guardrails, but move fast and expect competitors to do the same.

Mitigating generative AI risks

Strong generative AI compliance standards will arrive. For now, though, the best approach is to look at your existing company policies. For example, say you’re implementing a generative AI version of a bank helpline. What safeguards do your current policies and compliance procedures require? Available generative AI tools can help with rapid experimentation to test and validate application prompts, responses, and behaviors.

Traditional protections like firewalls and intrusion prevention systems aren’t designed to prevent attacks on AI, especially attacks that subvert or divert the AI learning process and all but poison the system. Have controls in place wherever training is involved. Don’t make your chatbot learn in real-time and don’t update it immediately: you’ll end up with the Microsoft problem, where the Tay chatbot became racist within minutes. Keep learning offline and introduce AI interference protections, especially where AI dynamically learns from input.

Recognize the value of your data and keep it private

Looking ahead to the generative-AI-driven competitive landscape of the near future, it’s easy to anticipate the coming battle for unique data. The hyperscaler companies like Google, Apple, Amazon, Facebook, and Twitter (er, X) have troves of data on how consumers use their apps. But they’re still missing valuable on-the-ground consumer data: where are consumers getting their oil changed, what beauty products are they buying, what burgers are they eating? That type of data sits with traditional enterprises and, when paired with generative AI, has tremendous power to drive innovation and competitive success.

In order to maintain control over that valuable data, many companies must commit to private large language models (LLMs) rather than public options like OpenAI or the major cloud offerings. Control over unique data will emerge as the essential differentiator that makes one business’s generative AI applications better than those of its competitors. For businesses, private LLMs mean building a unique IP with generative AI, rather than giving their data away for free.

Capture the generative AI opportunity

Generative AI is a revolution, offering capabilities and enabling techniques far different from what we’ve been used to. Organizations need to treat this differently: it’s not a hyped trend that will fade. The application of generative AI within companies is going to be deep-rooted and enduring. Certainly, secure safeguards and guardrails are crucial. But more than that, leaders need to think about generative AI as an accelerant for supercharging as many use cases as possible. Look at your departments, what you provide customers, and the critical use cases in your revenue path, then pursue the generative AI opportunities and promising innovations that present themselves—just be sure to do so with proper precautions.


Brian Sathianathan is the Chief Technology Officer at, whose Interplay platform facilitates rapid prototyping of AI-based and digital solutions, and operates as innovation middleware in production. Previously, Sathianathan worked at Apple on various emerging technology projects that included the Mac operating system and the first iPhone.