If you spend enough time in drug manufacturing plants, you start to notice a pattern in the questions people ask. They don’t usually start with the big questions—“Are we truly in a state of control?” or “How robust is our pharmaceutical quality system?” Instead, they start with something deceptively simple:
“How often should we audit?”
We’ve heard it in sterile filling suites in New Jersey, in cramped QC labs in Central Europe, in brand-new cell therapy facilities still smelling of fresh paint. Sometimes it’s a genuine question from a new Quality head. Sometimes it’s a nervous one from a seasoned director who already suspects the answer is, “More often than we are now.”
But what everyone is hoping for—what no regulator has ever given them—is a number. “Every six months.” “Once a year.” “Every two years if you’re stable.”
It would be comforting, but it would also be wrong. Regulators have very deliberately declined to give you a calendar. EU GMP tells you self-inspections should follow a “pre-arranged programme.” WHO suggests full coverage “at least once per year” and calls for special self-inspections when problems arise. ICH Q7 tells you internal audits should be performed “regularly in accordance with an approved schedule.” The FDA talks about planned intervals that reflect “importance and complexity” and the “results of previous audits.”
What none of them say is: “Your aseptic core shall be audited every 90 days” or “Your warehouse shall be audited every 18 months.” Instead, they put the burden back on you: build a program that is regular, risk-based, and defensible. Then live with it.
Over the past two decades, we’ve helped companies do exactly that, from small, single-site manufacturers to global networks with dozens of facilities and sprawling supplier landscapes. What follows isn’t a theoretical framework. It’s the playbook we use when a client asks us, “Can you help us build an audit schedule that actually works—and holds up when an inspector starts asking questions?”
When someone asks, “How often should we run our GMP audits?”, they’re rarely asking a math question. They’re actually asking about risk, resources, and scrutiny.
Regulators, for their part, are typically looking for three things when they review your audit program, and those three expectations form the basis of our audit and mock inspection approach:

1. Coverage: did you identify all the systems and partners that matter, and are you looking at them regularly enough to detect and correct problems?
2. Consistency: do you actually do what your procedures and schedules say, or is the calendar more of a wish list?
3. Responsiveness: when you find problems—your own or your suppliers’—do you act on them, close them, and prevent them from recurring?
Generally speaking, everything else is implementation detail. The playbook that follows is built around those three expectations. It starts with mapping what you’re responsible for, moves through risk-based prioritization and frequency setting, and ends with the practicalities of living with—and defending—the program you’ve built.
One of the first things we do when we’re brought in to “fix” an audit program is ask for the list. “Show us everything you audit internally. Then show us the list of suppliers and partners you audit. Then any IT or digital systems you consider to be within GMP scope.”
There’s usually a pause. Sometimes a scramble.
What appears is often a patchwork: a list of “the usual suspects” (production, QC, cleaning validation), maybe a few key suppliers, and then… silence. No LIMS. No MES. No third-party logistics. No pharmacovigilance partner. No data center hosting the QMS.
The first real step toward a defensible audit schedule is admitting that you don’t yet have a complete picture of your audit universe, and then building one. Think of the audit universe as a map of everywhere GMP truly lives in your organization.
That map should include, at a minimum:

- your internal GMP operations: production, QC laboratories, warehousing, and supporting utilities
- your external partners: API and material suppliers, contract laboratories, third-party logistics providers, and pharmacovigilance partners
- your GMP-relevant computerized systems: LIMS, MES, the QMS itself, and wherever those systems are hosted
The details vary by company, but the discipline is the same: if a process, system, or partner can materially affect product quality, patient safety, or data integrity, it belongs on the map.
It doesn’t have to be elegant. It does have to be complete, current, and version-controlled. A simple table maintained under document control is enough. What matters is that when an inspector asks, “What falls within your audit scope?”, you can answer with something better than a shrug and a stack of last year’s reports.
At a minimum, your audit universe document should list:
| System/Area | GMP Scope (Yes/No) | Responsible Department | Notes |
|---|---|---|---|
| Aseptic Filling | Yes | Manufacturing | High complexity, sterile |
| Warehouse | Yes | Logistics | Temperature-controlled storage |
| API Supplier (Company X) | Yes | Supply Chain | Critical material, annual audit required |
| Cafeteria | No | Facilities | Outside GMP scope |
This becomes your master list: the foundation of your audit program.
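If you keep the universe as a spreadsheet or lightweight records, a few lines of code can sanity-check it before each review cycle. A minimal sketch in Python; the field names mirror the table above, and the data is illustrative:

```python
# Sketch: the audit-universe table as simple records, plus a check that
# every in-scope entry names a responsible department. Field names and
# example data are illustrative assumptions, not a standard schema.
universe = [
    {"system": "Aseptic Filling", "gmp": True, "owner": "Manufacturing"},
    {"system": "Warehouse", "gmp": True, "owner": "Logistics"},
    {"system": "Cafeteria", "gmp": False, "owner": "Facilities"},
]

def unowned_gmp_items(universe):
    """Return in-scope entries missing an owner: exactly the gaps an
    inspector would find first."""
    return [r["system"] for r in universe if r["gmp"] and not r.get("owner")]

print(unowned_gmp_items(universe))  # prints []
```

The same pattern extends naturally to other completeness checks, such as flagging in-scope entries with no risk classification or no last-audit date.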
Once you know what’s in your universe, you face the problem that drives most of the anxiety: you can’t audit everything with the same intensity.
A sterile filling line is not the same as a carton supplier. A data-integrity-sensitive HPLC system is not the same as a well-qualified, low-complexity warehouse with automated temperature control. Treating them as equivalent is not only wasteful, it’s contrary to the risk-based principles regulators have been pushing for years.
The goal is to turn “we think this is riskier” into something more disciplined than a gut feeling. That doesn’t mean building a baroque scoring system only one person in QA understands. It means agreeing, in advance, on the questions you will ask of every system and partner, and on how those answers translate into risk.
Risk-based auditing isn't just a buzzword; it's the regulatory expectation. ICH Q9 (Quality Risk Management) and ICH Q10 (Pharmaceutical Quality System) both emphasize that quality activities, including audits, should be proportionate to the risk to product quality and patient safety.
In practical terms, that means the frequency and depth of your audits should scale with the risk a system or partner poses, and the rationale should be written down.
But how do you objectively determine what's high, medium, or low risk? A practical model we use with clients revolves around five deceptively simple questions:
1. Does the system or partner touch the product directly (sterile or aseptic operations, critical materials)?
2. Could a failure here plausibly harm patients before downstream controls would catch it?
3. Is the process or technology complex or difficult to control?
4. Is the area sensitive to data-integrity risk?
5. Is there a history of trouble: deviations, OOS results, audit or inspection findings?
You don’t need to overcomplicate this. For each system or partner, answer the questions honestly, tally the “yes” answers, and translate them into a simple classification: high, medium, or low risk. More “yes” votes doesn’t make something bad. It makes it deserving of tighter, more frequent scrutiny.
The point isn’t to create the perfect risk algorithm. It’s to be transparent and consistent. When an inspector asks, “Why is this audited every six months and that every two years?”, you want to be able to point to a simple, written rationale rather than saying, “That’s just what we’ve always done.”
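The tally-and-classify logic described above fits in a few lines. A sketch in Python, assuming five yes/no questions and illustrative thresholds (four or more “yes” answers is high, two or three is medium); the thresholds are an assumption you would set and document yourself:

```python
# Minimal sketch of a yes-tally risk classifier. The thresholds below
# are illustrative assumptions, not a regulatory standard.

def classify(yes_answers: list[bool]) -> str:
    """Translate yes/no risk answers into a simple risk tier."""
    score = sum(yes_answers)  # True counts as 1
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# Example: an aseptic filling line vs. a well-controlled warehouse.
print(classify([True, True, True, True, False]))   # prints high
print(classify([False, False, True, False, False]))  # prints low
```

The value isn’t the code; it’s that the thresholds are written down once and applied the same way to every system and partner.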
At some point, talk has to become dates!
A risk-based audit program that never moves beyond classification is like a beautifully designed CAPA that never gets implemented. Regulators expect not only that you have thought through risk, but that you’ve converted those judgments into a schedule and followed it.
There is no official table that says “High risk = every X months.” There are, however, norms that have emerged across the industry, and they align well with how regulators think about oversight.
Most of the programs we help design settle into something like this:

| Risk Classification | Typical Internal Audit Frequency |
|---|---|
| High | Every 6–12 months |
| Medium | Annually |
| Low | Every 2–3 years |
Again, none of these numbers are sacred. In some high-risk environments—early-stage cell and gene therapy, for example—we’ve instituted monthly internal reviews of specific steps until the process stabilizes. In very mature, low-risk operations, it may make sense to extend some intervals slightly once there is long-term data showing consistent control.
What matters is that frequency is visibly tethered to risk, not tradition or convenience. A warehouse doesn’t get audited less often because “it’s always fine”; it’s audited less often because a structured risk assessment, backed by performance data, says failures there are less likely to harm patients than failures in aseptic filling.
When we help clients build their master schedule, we rarely start with a blank year. We take the audit universe, the risk classification, and the frequency rules, and we plot out a multi-year plan that makes sense operationally. High-risk areas show up multiple times in the calendar. Medium-risk systems make at least one appearance per year. Low-risk areas are staggered so that no part of the GMP universe goes untouched for too long.
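Once the frequency rules are agreed, plotting next-due dates is mechanical. A sketch, with assumed tier-to-interval mappings and a 30-day month approximation; real programs would anchor these intervals to their own documented rules:

```python
# Sketch of a risk-based calendar: map each risk tier to an audit
# interval and compute next-due dates from the last completed audit.
# Intervals are illustrative assumptions, not regulatory requirements.
from datetime import date, timedelta

INTERVAL_MONTHS = {"high": 6, "medium": 12, "low": 24}

def next_due(last_audit: date, risk: str) -> date:
    """Next scheduled audit date, approximating a month as 30 days."""
    return last_audit + timedelta(days=30 * INTERVAL_MONTHS[risk])

universe = [
    ("Aseptic Filling", "high", date(2024, 1, 15)),
    ("Warehouse", "low", date(2023, 6, 1)),
]
for area, risk, last in universe:
    print(area, "next due", next_due(last, risk))
```

A multi-year plan falls out of iterating this over the whole universe and staggering the results so no quarter is overloaded.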
If your audit schedule were the only driver of scrutiny, your program would be brittle. Real operations don’t cooperate with calendars. They throw you deviations, OOS clusters, recalls, inspection findings, supplier failures, and internal reorganizations, often at the worst possible time.
Regulators know this, which is why WHO talks about “special self-inspections” after recalls or repeated rejections, and why FDA investigators routinely ask, “What triggers an unscheduled audit?”
A robust program has to make space for what we often call for-cause or event-triggered audits: focused reviews that occur because something has happened, not because a box came due on the calendar.
The triggers differ by company, but the themes are consistent:

- a serious or recurring deviation, or a cluster of OOS results
- a recall or a pattern of repeated batch rejections
- a significant inspection finding
- a supplier quality failure
- a major internal change, such as a reorganization
Any of these should prompt a deliberate question: does this event warrant a focused audit?
The answer won’t always be “yes.” But you should be able to show that you asked—and that, when the answer was yes, you moved quickly.
In practice, this means enshrining for-cause triggers in your internal audit procedure. Not as vague aspirations, but as specific criteria and expectations: the types of events that normally demand an audit, who has authority to decide, and how quickly you will act once the trigger occurs.
Some of the most effective programs we've seen convene a brief, cross-functional review—Quality, operations, sometimes regulatory—whenever a serious event or trend emerges. Part of that discussion, alongside CAPA and risk assessment, is the question: “Do we need to look at this area through an audit lens? If so, what’s the scope and timing?”
Inspections are much calmer when you can show not only a neat, pre-planned schedule but a trail of documented, for-cause audits linked to real events. It tells the investigator that you aren’t treating audits as a mechanical chore, but as a tool for managing live risk.
Once the calendar is built and for-cause triggers are in place, the hard part begins: living with the program over years. That means doing something many organizations struggle with—treating the audit schedule not as a one-time plan, but as a living document that evolves with the business.
In practical terms, that looks something like this:

- revisiting the audit universe periodically, at least annually, to add new systems, suppliers, and partners and retire obsolete ones
- re-running the risk classification when something material changes: a new product, a new technology, a shift in supplier performance
- re-balancing the schedule to reflect updated risk ratings and the findings of completed audits
- reviewing the program’s performance with management and documenting the decisions made
From the outside, this can look like a dry management review. From a regulator’s perspective, it is evidence that your audit program is not a piece of paper—it’s a mechanism for learning about your own operations and adjusting your scrutiny accordingly.
We often encourage clients to track a small handful of metrics to support these discussions. Nothing too exotic: the share of scheduled audits completed on time, the number of overdue or deferred audits, the age of open findings and CAPAs, and the rate of repeat findings.
When those numbers are healthy and trending in the right direction, they tell a reassuring story. When they aren’t, they tell you where to look harder—before an inspector does it for you.
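Metrics like these can be computed from even a simple audit log. A sketch of the on-time completion rate, with assumed field names and illustrative data; an audit counts as on time if it was completed by its due date, or isn’t yet due:

```python
# Sketch of one program-health metric a management review might track.
# Field names and the example data are assumptions for illustration.
from datetime import date

audits = [
    {"area": "Aseptic Filling", "due": date(2024, 3, 1), "done": date(2024, 2, 20)},
    {"area": "Warehouse", "due": date(2024, 1, 15), "done": None},  # overdue
]

def on_time_rate(audits, today: date) -> float:
    """Share of audits completed by their due date or not yet due."""
    on_time = sum(
        1 for a in audits
        if (a["done"] and a["done"] <= a["due"]) or (not a["done"] and a["due"] >= today)
    )
    return on_time / len(audits)

print(f"{on_time_rate(audits, date(2024, 4, 1)):.0%}")  # prints 50%
```

The same log supports the other metrics above: overdue counts, finding ages, and repeat-finding rates are all simple filters and aggregations over it.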
All of this—your universe, your risk model, your schedule, your triggers—has to live somewhere. For most companies, that somewhere is an internal audit SOP. The temptation is to write a procedure so detailed that it becomes a spreadsheet in paragraph form: every frequency, every interval, every form, every possible scenario spelled out. It feels thorough. It quickly becomes unmanageable.
The best audit procedures we've seen are surprisingly lean.
What they don’t do is embed the actual schedule. That remains a separate, controlled document: the annual or multi-year program that can be updated, re-balanced, and re-approved without sending the entire SOP through change control every time you need to move an audit by a quarter.
This separation of principles (in the SOP) and implementation (in the schedule) is one of those small structural choices that makes a huge difference in practice. It lets you respond to risk and reality without feeling like you are violating your own rules every time you make a necessary adjustment.
All of this effort comes to a head in a moment every Quality leader knows: the day an inspector sits down, flips open a notebook, and says, “Walk me through your internal audit program.”
In that moment, you are not just handing over documents. You’re telling a story about how you see your operations, how you manage risk, and how seriously you take your obligation to find and fix your own problems.
The strongest stories we see share the same structure:

- “Here is everything we consider to be in GMP scope: our audit universe.”
- “Here is how we classify risk, and why.”
- “Here is our schedule, and here is the evidence that we followed it.”
- “Here is what we found, what we did about it, and how we verified the fixes held.”
- “And here is how real events, like a deviation or a supplier failure, trigger audits outside the plan.”
You don’t have to use that exact script. But if you can tell a story like that—and your documents back it up—you’ve done the single most important thing an audit program can do: you’ve convinced an inspector that you see what they see, and often sooner.
You don’t need fancy software, but you do need structure. A final, practical note here. We are often asked whether a credible audit program demands a specific piece of software. The honest answer is no. We’ve seen very sophisticated programs run on carefully controlled spreadsheets and very weak programs run inside expensive, under-configured audit modules. What you need is less a particular tool than a set of disciplines:

- a single, controlled list of everything you audit (your audit universe)
- a written risk rationale behind every frequency
- a schedule you actually follow, with any deviations from it documented
- disciplined follow-up: findings tracked, CAPAs closed, and results fed back into your risk ratings

Software can make those things easier and more scalable, especially across multiple sites and networks of suppliers. A good QMS or audit management platform can automate reminders, calculate next-due dates, aggregate metrics, and generate tidy reports. But it cannot decide what matters, how often to look at it, or how seriously to take what you find. That part is still very human.
For many firms, the biggest challenge isn’t agreeing that they need a better audit program. It’s finding the time and expertise to build and run one while the rest of the business marches on.
That’s where The FDA Group often comes in.
Sometimes we’re asked to take a blank sheet and design the program: map the audit universe, establish the risk model, propose frequencies, draft the SOP, and build the first-year schedule. Sometimes we’re asked to pressure-test an existing program before a major inspection: does it really cover what it should, and can it withstand questioning? Sometimes we’re asked to augment internal capacity—to conduct internal audits of sterile operations, data integrity, or supplier controls that are too sensitive or complex to keep entirely in-house.
In every case, the goal is the same: to leave you with a program that belongs to you. One that is adapted to your products and processes, uses your language and systems, and can be explained by your people when we’re no longer in the room. Because that’s the standard regulators are quietly, consistently applying. Not “Have you bought the latest software?” or “Do you have a perfectly formatted SOP?” but “Do you truly see the risks in your own operation, do you look at them often enough, do you act on what you find, and can you show me how?”
If you can answer those questions with something more than a nervous shrug, your audit schedule is doing its job.
And if you can’t yet—if you know your calendar is more aspiration than reality, if your suppliers haven’t seen an auditor in years, if you’re still quietly hoping no one asks about your data systems—then this is the moment to fix it, before someone else chooses the timing for you.
Learn more about our auditing and mock inspection services—and get in touch to start the conversation.