Most pharma AI initiatives do not fail because the technology does not work. They fail because no one can clearly answer a much more basic question: Who owns it? The proof of concept runs. The use case is compelling. The model performs as expected. And then progress slows to a crawl. QA waits for a validation strategy. IT waits for stable requirements. The business waits for someone to make a decision. By the time ownership is clarified, momentum is gone and the organization quietly returns to familiar, manual ways of working.
This pattern is far more common than it should be. And under the new regulatory framework, it is about to become much more expensive. EU GMP Annex 22 makes one thing explicit: AI governance in pharma is cross‑functional by design. It cannot be delegated to a single team. It cannot be outsourced to a vendor. And it cannot be treated as a compliance checkbox that QA signs off on at the end of a project.
AI in regulated environments requires business, QA, and IT to operate as a genuine triad, from the first discussion about intended use to the final decision about decommissioning. That is a deeper organizational shift than many leadership teams have fully acknowledged.
What the regulation actually says about accountability
Annex 22 is unusually specific about where accountability sits, and it's worth reading that section carefully. First, accountability for intended use is explicitly assigned to the process Subject Matter Expert. This is the person who understands the business process the AI is being applied to. That role is responsible for defining the intended use, setting accuracy expectations, and establishing acceptance criteria. This responsibility cannot be handed to a vendor, and it cannot be absorbed by QA. It belongs to the person who knows the process.
Second, accountability always remains with the regulated company, regardless of how the system is built or hosted. A cloud‑based system does not transfer responsibility. A vendor‑built model does not transfer responsibility. An AI component embedded by a software partner does not change the expectation that you can explain, document, and defend the system from your own organization, using your own evidence.
Across the regulation, four accountability anchors consistently appear:
- Qualified people with named, documented responsibilities
- A white‑box understanding of the model’s behavior, not just its outputs
- Risk management proportionate to actual GxP risk, aligned with ICH Q9
- Accountability that stays with the regulated organization, not with suppliers
None of these anchors can be fulfilled by a single function working in isolation.
Why single-function ownership always creates gaps
In practice, organizations often try to assign AI ownership to one team. That approach fails no matter which team is chosen; only the failure mode changes.
When QA owns AI alone, governance becomes heavily compliance‑oriented, but often disconnected from technical reality. QA can define regulatory expectations, but typically does not have the depth to evaluate model architecture, data pipelines, or integration risks with ERP, MES, or LIMS. The result is excellent documentation around a solution that was never quite right to begin with.
When IT owns AI alone, validation is often treated as a project milestone rather than a lifecycle commitment. The system is built, it works technically, and only then does the organization attempt to assemble a validation package. Retrospective validation is not acceptable in GMP. Without QA and business involvement from the start, intended use is underspecified, acceptance criteria are vague, and auditability is incomplete. The system functions, but it is not defensible.
When the business owns AI alone, progress is fast and controls are thin. Change control feels like friction. Audit trails are incomplete because no one specified them as requirements. Performance monitoring is informal or absent. When something eventually goes wrong, the organization cannot reconstruct what happened or why.
Each function holds a critical piece. The absence of any one of them creates gaps. And in regulated environments, gaps are precisely what inspectors find.
What the triad looks like in practice
The triad is not a steering committee or a late‑stage review board. It is business, QA, and IT working together from day one, each contributing what the others cannot.
Business defines the process with precision. What decision is being supported? What outcome is expected? What constitutes acceptable performance? What must remain under human judgment?
QA defines the compliance frame. Which steps are GxP‑critical? Where must human decision gates sit? What needs to be captured in the audit trail? What does “validated” mean for this specific use case?
IT designs the architecture that makes the system controllable. Which steps must be deterministic? Where can AI assist without introducing risk? How are access, logging, monitoring, and failure modes handled?
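To make the audit-trail and decision-gate requirements slightly more concrete, here is a minimal sketch of the kind of record such a system might write for a GxP-critical, AI-assisted decision. The field names and the enforcement rule are illustrative assumptions, not a format Annex 22 prescribes; the point is that the model version, the input, the output, and a named person's documented decision end up in one reconstructable record.

```python
from dataclasses import dataclass, replace
from datetime import datetime, timezone
from typing import Optional

# Illustrative sketch only: field names and rules are assumptions, not an Annex 22 format.
@dataclass(frozen=True)
class AiDecisionRecord:
    record_id: str
    model_name: str
    model_version: str            # exact version in use, so behaviour can be reconstructed later
    input_reference: str          # pointer to the batch record, sample, or document assessed
    ai_output: str                # what the model recommended
    gxp_critical: bool
    reviewer: Optional[str] = None        # named, qualified person at the decision gate
    human_decision: Optional[str] = None  # e.g. "accepted", "rejected", "overridden"
    rationale: Optional[str] = None       # why the reviewer decided as they did
    decided_at: Optional[datetime] = None

def close_record(record: AiDecisionRecord, reviewer: str,
                 decision: str, rationale: str) -> AiDecisionRecord:
    """Close the record; a GxP-critical output needs a named reviewer, a decision, and a rationale."""
    if record.gxp_critical and not (reviewer and decision and rationale):
        raise ValueError("GxP-critical output cannot be closed without a documented human decision")
    return replace(record, reviewer=reviewer, human_decision=decision,
                   rationale=rationale, decided_at=datetime.now(timezone.utc))
```

Whatever the real implementation looks like, that combination of model version, input, output, and a named person's signed rationale is what makes the decision reconstructable in an inspection.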
The handoffs matter, but the overlap matters more. Intended use cannot be finalized without business and QA in the same room. Validation scope cannot be set without QA and IT together. Architecture cannot be designed without IT understanding the business process. In a true triad model, these conversations happen together, not sequentially.
That, in short, is the triad: not a committee that reviews project updates, not a sign-off chain at the end of a project, but three functions working the problem together from the start.
The hybrid competence gap
One thing the triad model exposes quickly is a competence gap that most organizations haven't fully addressed: the need for people who can operate across these boundaries.
A regulatory expert who can evaluate model architecture. A QA professional who understands what a validation package for an AI system actually needs to contain. An IT architect who can speak fluently about GxP risk. A business lead who understands why the human decision gate isn't optional. These profiles are rare. And without them, the three functions struggle to genuinely collaborate: they talk past each other in their own languages, producing documents that satisfy their own requirements but never connect into a coherent governance model.
Building hybrid competence takes time. It means investing in cross-functional training, rotating people across teams, and deliberately creating forums where regulatory, digital, and business expertise are in the same conversation. It's not a quick fix, but it's the only way the triad actually works.
Five questions for your leadership team
If you're trying to translate this into action, these are the governance questions worth working through, before any technology selection happens.
- Who owns the intended use definition? This is a named individual, not a project team. That person is accountable if an inspector asks why the AI was deployed for this specific purpose.
- Who is accountable for the validation lifecycle? Again, a named person within your organization: someone who will own validation maintenance, performance monitoring, change control, and eventual decommissioning. This cannot sit with your vendor.
- Where are the human decision gates? For every GxP-critical workflow, there must be a documented point where a qualified person reviews, decides, and signs off. Who is that person, what exactly are they approving, and what rationale are they required to document?
- How will changes to the AI system be controlled? Model updates, configuration changes, data source changes: all of it needs to go through change control and trigger a revalidation assessment. Is that process designed and operational? (A sketch of what such a trigger rule might look like follows this list.)
- What is your retirement plan? Validation is a lifecycle commitment. What happens when the model is no longer fit for purpose? What is the decommissioning process? How is the transition managed? These questions need answers before go-live.
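The change-control question lends itself to the same treatment. The change categories and the trigger rule below are assumptions made for illustration, not requirements from Annex 22; in a real system the rule set would come from your change-control SOP and the system's risk assessment. The sketch simply shows that the trigger logic has to exist, be explicit, and be documented somewhere.

```python
from enum import Enum

class ChangeType(Enum):
    MODEL_UPDATE = "model update"            # retraining, new weights, new architecture
    CONFIGURATION = "configuration change"   # thresholds, prompts, parameters
    DATA_SOURCE = "data source change"       # new or altered upstream data feed
    INFRASTRUCTURE = "infrastructure change" # hosting, library, or platform change

def requires_revalidation_assessment(change: ChangeType,
                                     touches_gxp_critical_step: bool) -> bool:
    """Illustrative trigger rule, not a policy: route anything that can shift model
    behaviour, or anything touching a GxP-critical step, into a revalidation assessment."""
    if touches_gxp_critical_step:
        return True
    # Model and data source changes can shift behaviour even on non-critical steps.
    return change in (ChangeType.MODEL_UPDATE, ChangeType.DATA_SOURCE)
```

If nothing equivalent to this rule exists in your change-control process, the honest answer to the question above is that the process is not yet operational.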
One more thing: stay technology-agnostic
One of the most important lessons from Annex 22 and from practice is this: the process and the intended use come first. The model or the vendor comes second. AI is the enabler, not the foundation. Organizations that lock themselves prematurely into a single vendor or model architecture risk building governance structures that depend on that vendor’s roadmap rather than regulatory expectations. Technology will evolve. Governance must be able to evolve with it.
The GxP AI Readiness Checklist was built around exactly the kind of cross‑functional questions the triad must answer together. It covers governance, validation, human‑in‑the‑loop controls, auditability, data quality, cybersecurity, and lifecycle management across 50 structured questions.
Download the GxP AI Readiness Checklist