Don't Fear AI, Manage It.

I had the pleasure of connecting with Dennis Rodman, a leading expert in GxP systems and digital validation, on the topic of AI. His insights were so valuable that I asked him to write a guest blog, which is both informative and enjoyable to read.

Dennis is an award-winning quality systems director specializing in bridging the gap between regulatory compliance and technological innovation in the life sciences. Below are his key takeaway messages, followed by the blog.

  • The Core Thesis: Don't Fear AI, Manage It. Dennis dismisses fear of AI as simple technological hysteria. He posits that the rigorous principles used to control sterile pharmaceutical manufacturing facilities can and should be directly applied to managing AI models.

  • Validation is Universal: His comparison to a physical facility makes sense. A facility is proven through Commissioning, Qualification, and Validation (CQV); an AI model is proven the same way through Computer System Validation (CSV). The process is a direct parallel.

  • "Data Drift" is Digital Contamination: Just as Environmental Monitoring continuously scans a facility for contaminants, AI requires constant monitoring for "data drift"—the degradation of performance caused by new data, which is effectively a form of digital contamination.

  • Data Governance is the Antidote to "Hallucination": This digital contamination can lead to AI "hallucinations" (confident falsehoods). The solution is not a new invention but a familiar discipline: robust Data Governance, which acts as the digital equivalent of a facility's cleaning and control protocols.

 Enjoy!


One develops a certain perspective on technological hysteria. I recall my grandfather, a veteran of both economic collapse and global war, eyeing my first computer with grave suspicion. He proclaimed this beige contraption was a device for mind-control, destined to liquefy my gray matter. After facing down history's greatest threats, he believed his final nemesis was a 16-bit processor. And so, it is with a sense of seasoned irony that I, now 50, listen to my brilliant colleagues in Pharma and Biotech speak of Artificial Intelligence in hushed tones—convinced it is not a tool, but a usurper poised to 'take over.' The fear is the same; only the beige box has changed.

So, how do we quiet the hushed tones and demystify this new AI "usurper"? We simply hold it to the same standards we've applied in our labs for decades.

Consider the sterile, classified facility where we manufacture drugs. We don’t just build it and hope for the best; we subject it to a painstaking process of commissioning, qualification, and validation (CQV) to prove it is fit for purpose. We do the exact same thing for an AI model through Computer System Validation (CSV). The process is a mirror image: one qualifies a physical space, the other validates a digital tool.

But validation is merely the price of entry. The real work is in the monitoring. We have an entire discipline, Environmental Monitoring (EM), dedicated to ensuring a facility stays in a state of control through personnel changes, seasonal shifts, and new materials. AI requires the same vigilance. Its version of contamination is called “data drift” or “concept drift,” where new data causes the model’s performance to degrade.

Think of it this way: “data drift” is the digital equivalent of a painter walking into your cleanroom with mud on his shoes. That mud—poor data—contaminates your pristine environment. In AI, this doesn’t create a physical mess; it can lead the system to generate outputs that appear convincing but are incorrect—the phenomenon we call “hallucination.” In pharma terms, we’d consider a hallucinating model to be “out of control.”
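To make the cleanroom analogy concrete (my sketch, not part of Dennis's post): one simple way to "environmentally monitor" a model is to compare the distribution of its live inputs against the baseline data it was validated on, and flag an excursion when they diverge. The snippet below uses the Population Stability Index (PSI), a common drift metric; the synthetic data, single feature, and 0.2 threshold are illustrative assumptions, not a prescribed limit.

```python
import math
import random

def psi(baseline, live, bins=10):
    """Population Stability Index (PSI), a common data-drift metric.
    Rule of thumb: PSI above ~0.2 is often treated as significant drift."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins
    edges = [lo + i * width for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            # Count how many bin edges the value exceeds to find its bin
            counts[sum(v > e for e in edges)] += 1
        # Floor empty bins at one count so the log term stays defined
        return [max(c, 1) / len(values) for c in counts]

    base_f, live_f = fractions(baseline), fractions(live)
    return sum((l - b) * math.log(l / b) for b, l in zip(base_f, live_f))

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(5000)]    # data the model was validated on
stable   = [random.gauss(0, 1) for _ in range(5000)]    # live data, same process
shifted  = [random.gauss(1.5, 1) for _ in range(5000)]  # live data after an upstream change

print(f"stable PSI:  {psi(baseline, stable):.3f}")   # small: state of control
print(f"shifted PSI: {psi(baseline, shifted):.3f}")  # large: investigate, like an EM excursion
```

In practice this check would run per feature on a schedule, with alert limits set during validation, much like EM action and alert levels.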

How do we handle the muddy painter? With robust, validated cleaning protocols. How do we prevent hallucinations from bad data? With robust Data Governance. The principles are identical; the tools change, but the standards of control endure.

While I hope my grandfather is looking down on this blog with a better understanding of innovation, he is likely asking, “What the heck is a blog?”


About SecureCHEK AI

SecureCHEK AI is a Software-as-a-Service (SaaS) system that seamlessly integrates with enterprise platforms to enhance MLR efficiency. Purpose-built for pharmaceutical and medical device companies, the software helps MLR reviewers efficiently assess and mitigate compliance risks and reduce comments and re-reviews.

As the first Analytical AI software solution for pre-MLR use in life sciences, SecureCHEK AI leads the way in integrating Analytical AI with GenAI to reduce rounds of review and review time per document. Rapid deployment and the user-friendly interface minimize the learning curve, making it easy to get started.

Contact us for a demo to learn how SecureCHEK AI builds libraries and executes prechecking.
