Certification webinar

AI in GxP: The FDA’s take on AI-enabled systems

  • The use cases (and limitations) of AI in GxP
  • How to meet the FDA's requirements
  • What risks to look for, and what to document

June 3rd, 2026

Two professionals beside FDA logo and AI brain-chip graphic, representing AI in pharmaceutical regulatory compliance.

From FDA and ISPE guidance to practical steps

How do you make sure your AI usage meets FDA requirements?

3 major FDA guidance documents in 12 months.

The regulatory framework for AI in GxP is taking shape.

In 2025, the FDA finalized its CSA guidance. In January, it issued draft guidance on AI for drug and biological products and on AI-enabled device software. ISPE published the first GAMP AI Guide.

These documents are now shaping how the industry approaches AI.

But translating 300+ pages of guidance into action is not simple.

So, what do we actually mean when we say "AI in GxP," and how do you validate it?

  • GxP use cases

    The core use cases of AI in GxP and the limitations to keep in mind.

  • Risk classification

How to classify AI risk under CSA at the function level, not the system level.

  • Vendor questions

    The questions to ask your vendors about AI features before your next audit.

  • AI framework

    A 6-step framework you can start applying to your operation right away.

Sign up now

What does the FDA have to say about AI in GxP?

Understand the use cases, limitations, and how to meet FDA requirements.

When: June 3rd, 2026

  • 9:30 AM CDT
  • 10:30 AM EDT
  • 3:30 PM BST
  • 4:30 PM CEST

When AI enters, your validation approach has to change

Traditional software is deterministic: same input, same output. You validate it once, lock it down, and move on. AI doesn't work that way. Models drift, outputs are probabilistic, and "validate once" breaks down.

The FDA's CSA framework gives you a risk-based approach. But when the system includes AI, the questions multiply: How much testing does an AI feature need? What risks are specific to AI? What do you document, and what can you stop documenting?

In this webinar, Stefan Voinea and Jakob Konradsen will walk through how to validate AI-enabled systems in accordance with FDA requirements.

The webinar covers:

  • What AI means in a GxP context: How AI is being used in GMP, GDP, and GLP environments today, and where the real limitations are.
  • Why "validate once" doesn't work for AI: What makes AI different from traditional software and why your assurance approach has to change.
  • The FDA's requirements: How to validate AI-enabled systems under the CSA framework. How much testing you need, what risks to look for, and what to document.


About the speakers

  • Jakob Konradsen is co-founder and Chief Quality Officer at Eupry. He sets the quality strategy across all areas of the company, from developing SOPs to supporting product development, ensuring that Eupry’s solutions continuously meet the highest quality standards and GxP demands.
  • Stefan Voinea is a Senior Product Manager at Eupry, where he builds AI agents to scale operations and accelerate compliant innovation across GxP workflows. Previously at the World Health Organization, he served as Project Manager and Technical Lead and co-authored research on AI’s risks and opportunities in health. His focus is on building AI designed with humans in the loop and guardrails that prioritize safety, quality, and data integrity.
Two smiling men standing side by side against a light blue background, one with a beard and glasses, one in a navy sweater.