
/ GAMP 5 Overview
Paul Fenton
January 2013

/ Overview

Introduction to GAMP5
Differences between GAMP4 and GAMP5
How to use GAMP5 effectively
What the regulations say
High level overview of the key concepts of GAMP5
Quality Management
V Model
Lifecycle Phases
System Categories
Documentation
Required Procedures
Supplier Management

/ Introduction to GAMP5
GAMP 5 - A Risk-Based Approach to Compliant GxP
Computerized Systems
Is a framework for developing, qualifying, validating
and maintaining computerized systems used in GxP
regulated activities
Is produced by ISPE
Is widely used within the pharmaceutical
industry
Is understood by inspectors
Is not a regulatory requirement but rather a
pragmatic guidance

/ Introduction to GAMP5
GAMP provides practical guidance that:
facilitates the interpretation of regulatory
requirements
establishes a common language and
terminology
promotes a system life cycle approach based on
good practice
clarifies roles and responsibilities
focuses on patient safety, product quality and
data integrity

/ Introduction to GAMP5
Aims to be compatible with other methods, models
and schemes including:
Quality systems (IEEE, ISO 9000 Series)
Organization Capability and Maturity (CMMI)
Software process models (ISO/IEC 12207)
Software development models (RAD, Agile, RUP,
XP)
IT Service Models (ITIL)
Is composed of a main body and multiple appendices
with practical resources

/ Introduction to GAMP5
Rationale for GAMP 5
To align with ICH guidance
Q8 Pharmaceutical Development
Q9 Quality Risk Management
Q10 Pharmaceutical Quality System
To align with FDA cGMPs for the 21st Century
To align with updated PIC/S PI-011 guidance
To align with other standards from ISPE

/ GAMP Drivers

Diagram: GAMP 5 at the centre, surrounded by its key drivers:
Focus on Patient Safety, Product Quality, and Data Integrity
Life Cycle Approach within QMS
Scalable Approach to GxP Compliance
Effective Supplier Relationships
Quality by Design (QbD)
Configurable Systems and Development Models
Use of Existing Documentation and Knowledge
Effective Governance to Achieve and Maintain GxP Compliance
Continuous Improvement within QMS
Improving GxP Compliance Efficiency
Science Based Quality Mgt of Risks
Critical Quality Attributes (CQA)
Source: ISPE GAMP 5 Guide

/ Differences between GAMP4 and GAMP5


The numbering of the sections of the guide has changed
significantly
The system lifecycle approach has been expanded significantly
to encompass the full system lifecycle from concept through
project to operation and retirement
Guidance on risk management has been significantly enhanced
Increased guidance on system governance
More focus has been put on leveraging supplier documentation
(new section on suppliers)
Category 2 (Firmware) no longer exists
Risk Assessment has become Quality Risk Management in
line with ICH Q9
www.ispe.org/publications/gamp4togamp5.pdf

/ How to use GAMP effectively


Remember that GAMP is a framework and not a
regulatory requirement
Use the elements of GAMP that make sense for
your company and activities
If you reference GAMP in your procedures, you
should indicate which areas of GAMP you are using
Try to adopt GAMP terminology to facilitate
understanding of your SDLC
Remember the aim of GAMP is to improve the
quality and reliability of GxP systems whilst
reducing the compliance burden associated with such systems

/ Principal Regulations and Guidance


21 CFR Part 11 Electronic Records; Electronic
Signatures
Eudralex Volume 4 Annex 11
ICH Q9 Quality Risk Management
General Principles of Software Validation; Final
Guidance for Industry and FDA Staff

/ What the regulations say


Principal Requirements
We need to have formal system documentation
which is maintained under change control
We need to validate systems to ensure that they
are fit for their intended use
We need to apply a risk based approach based on
patient safety, product quality and data integrity
We need to have adequate change and
configuration management procedures

/ Key concepts of GAMP5


Diagram: the key concepts sit between the user's lifecycle
activities and the supplier's lifecycle activities:

User: Develop Medicinal Products; Produce Medicinal Products;
Market and Distribute Medicinal Products
Supplier (of computerized systems and services): Develop Products
and Services; Deliver Products and Services; Maintain and Support
Products and Services

Key concepts:
Product and Process Understanding
Life Cycle Approach within a QMS
Scalable Life Cycle Activities
Science Based Quality Risk Management
Leverage Supplier Involvement
Source: ISPE GAMP 5 Guide

/ Quality Management
A well designed system lifecycle should:
be an intrinsic part of the company's quality system
allow for continuous system and process
improvements based on:
Periodic review
Operational and performance data
Root cause analysis of failures
provide assurance of quality and fitness for intended use
ensure regulatory compliance
facilitate a QbD approach

/ Lifecycle Approach
Diagram: lifecycle phases with their inputs and supporting
processes, underpinned by Good Engineering Practice:

Inputs: Product Knowledge; Process Knowledge; Regulatory
Requirements; Company Quality Reqs
Phases: Requirements -> Specification and Design -> Verification ->
Acceptance and Release -> Operation and Continuous Improvement
Supporting processes: Risk Management; Design Review;
Change Management
Source: ISPE GAMP 5 Guide

/ V Model


/ System Documentation General Requirements

System documentation varies based on the category, risk,
complexity and novelty of the system
If system documentation is to be produced electronically,
then it should be maintained in a 21 CFR Part 11 / Annex 11
compliant way
Ensure that all documents meet ALCOA
Establish clear versioning and documentation IDs/Names
Keep documents in draft until development is complete to
minimize overhead (ensure adequate control)
Link to the traceability matrix and maintain under version /
change control

/ System Description
High level document which describes the hardware
and software components of the system
EU GMP Annex 11, Clause 4, requires that there is
an up to date description of every GxP regulated
computerized system
It should also describe:
Principles
Objectives
Scope of the system
Security features
Main functions

/ System Description
Identify the GxP processes which will be governed by the
system
Use diagrams to describe the hardware and
software components
Use non-technical language where possible
Describe how the system is used
Describe any interactions with other systems
Define any procedures which will be used in
conjunction with the system

/ User Requirements Specification


Should be a structured document which describes
high level and detailed user requirements (the
"What")
Group requirements by functional area
Requirements should be concise and measurable
Think about how requirements will be tested
Avoid combining requirements as this complicates
testing
Provide each requirement with a unique identifier
Ensure the document is versioned and aligned with
the traceability matrix
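
A hypothetical example of a concise, measurable requirement with a
unique identifier (the identifier and wording below are illustrative,
not taken from the guide):

URS-4.10: The system shall record the user ID, date and time for
every electronic signature applied.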

/ Functional Specifications
Document which describes "How" the system should meet
the user requirements
Establish a formal standard for functional specifications
Define a high level overview of the different
components/functions and their relationships
Identify the different functions and describe:
The inputs
The process
Critical calculations or algorithms
The outputs
Error handling

/ Functional Specifications
Use screen mockups to help define user interface
specifications
Establish performance and scalability requirements
Identify each function with a unique identifier
Use process diagrams wherever possible
Establish clear links to user requirements through the use
of the traceability matrix
Use consistent naming conventions
Formal testing is performed against the FS, so ensure that it is
measurable and linked to tests in the traceability matrix
Ensure the programmer understands the FS
Identify when design review needs to occur

/ Functional Specifications
The FS should also define data characteristics
including:
Data field definition
Data range
Required fields
Data validation checks
Data relationships
Data capacity, retention and archiving
Data integrity and security
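
As an illustration only (the field name, range and retention period
are assumptions, not taken from the guide), a data field definition
and its validation check might be captured along these lines:

# Illustrative sketch of a data field definition with a simple range check.
# The field name, range and retention period are hypothetical examples.
FIELD_SPEC = {
    "name": "storage_temperature_c",
    "type": float,
    "required": True,
    "valid_range": (2.0, 8.0),    # acceptable range in degrees Celsius
    "retention_years": 10,        # retention period for the record
}

def validate(value):
    """Return a list of validation errors for a single field value."""
    errors = []
    if value is None:
        if FIELD_SPEC["required"]:
            errors.append("value is required")
        return errors
    if not isinstance(value, FIELD_SPEC["type"]):
        errors.append("wrong data type")
        return errors
    low, high = FIELD_SPEC["valid_range"]
    if not low <= value <= high:
        errors.append("value %s outside range %s-%s" % (value, low, high))
    return errors

# Example: validate(9.5) returns ["value 9.5 outside range 2.0-8.0"]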

/ Configuration Specification
Specific configuration specifications (CS) may be
required for the system if it is a category 4 system
CS may be produced for a specific client if a
category 5 system is deployed and configured in the
client's environment
CS are typically produced by the vendor and
reviewed by SMEs at the client organization
CS should clearly cross reference the version of the
system for which they have been written

/ Design Specifications
Technical document which describes how the system is to
be developed
Should allow the reconstruction of the system
Should describe all classes and reference functions in the FS
Establish a class design model
Class descriptions should include:
All inputs, outputs and parameters
System flow diagrams
Technical description of algorithms
Error handling and checking
Data mapping
Display screens and Reports (format, when generated,
which data)

/ Design Specifications
Database design should include:
Physical and logical database diagram with all
relevant keys, indexes and relationships
Data dictionary with table name, field name,
data type, size and required Y/N
Description of all stored procedures, views and
triggers
Identify when design review is required
If using an iterative or agile approach, identify
whether several DS will be developed or a
cumulative document will be produced

/ Coding
All code should be versioned using a source code
management system (e.g. SourceSafe, SVN, etc.)
Code should be properly documented through the use of
header blocks (cartouches) and in-code comments
Formal coding conventions should be used to define how
code is structured and code elements are identified
Formal, documented and independent code review should
be undertaken for each version/iteration of code
Source file names should be referenced in the DS
Released code should fall under change control
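
GAMP does not prescribe the content of a cartouche; as an illustration
only, a header block with hypothetical fields might look like this:

# -----------------------------------------------------------------
# Module      : batch_release.py   (hypothetical file name)
# Description : Calculates batch disposition status.
# Reference   : DS 3.4 (illustrative design specification section)
# Author      : <developer name>
# Version     : 1.2 (matches the tag in the source code repository)
# History     : 1.1 -> 1.2  CR-0042  corrected rounding of assay result
# -----------------------------------------------------------------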

/ Testing of Computerized Systems


Testing fulfills objectives such as:
identifying defects so they can be corrected or removed
before operational use
preventing failures that might affect patient safety,
product quality or data integrity
providing documented evidence that the system
performs as specified
demonstrating the system meets its requirements
providing confidence that the system is fit for its
intended use
providing a basis for user acceptance
meeting a key regulatory requirement

/ Test Plan
Also known as Validation Plan
The test plan (TP) should include:
Clear roles and responsibilities
Test strategy based on risk assessment, system
category, complexity and novelty
List of document deliverables
List of governing procedures
System intended use and acceptance criteria
Aim to produce the TP at the same time as the
specifications

/ Test strategy
The test strategy should include:
The types of testing required
The different test protocols/specifications
required and their purpose
Use of existing documentation (supplier)
Details regarding the different test phases
Test evidence required
Non-conformance procedure
Test metrics

/ Testing Documentation Typical Structure


Diagram: typical hierarchy of testing documentation:
Test Plan / Strategy (VP)
Test Protocol / Specification
Test Scripts and Test Cases
Test Results
Test Summary Report
Source: ISPE GAMP 5 Guide

/ Test Scripts / Test Cases


TS/TC should contain the following elements:
unique test reference
cross reference to controlling specification
title of test
description of the test including test objective
test steps
acceptance criteria
pre-test steps
data to be recorded
post-test actions
Separate test cases may be prepared for some tests
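
For illustration only, the elements listed above could be captured as
a structured test script record; all identifiers and values below are
hypothetical:

# Illustrative structure for a single test script; values are hypothetical.
test_script = {
    "test_reference": "FV3",                  # unique test reference
    "controlling_specification": "FS5.6",     # cross reference to specification
    "title": "Audit trail entry on record update",
    "description": "Verify an audit trail entry is created when a record is edited.",
    "pre_test_steps": ["Log in as a standard user", "Open an existing record"],
    "test_steps": [
        {"step": 1,
         "action": "Edit a field and save",
         "acceptance_criteria": "Audit trail entry created with user ID, date and time",
         "data_to_record": "Screenshot of the audit trail entry"},
    ],
    "post_test_actions": ["Log out", "Attach evidence to the test results"],
}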

/ Test Scripts / Test Cases


Test script results should be formally reviewed and
approved
Supporting documentary evidence should be
present, e.g. screen shots, data listings, log files, etc.
Risk assessment may be used to help define test
cases and scripts
Risk assessment can be used to focus the scope of
testing

/ White Box vs Black Box

White Box
Also known as code based testing, or structural testing.
Test cases are identified based on source code knowledge,
knowledge of Detailed Design Specifications and other
development documents.
Used for Module Testing and Integration Testing.

Black Box
Based on the functional specification, thus often known as
functional testing.
Used for Functional Testing (OQ) and Acceptance Testing (PQ).

/ Software Module and Integration Testing


Usually performed for category 5 systems
Module (or unit) testing aims to test each module
against the design specification
Integration testing aims to verify that modules
work together correctly
Testing can be automated or manual
Tests should be formally documented and executed
independently
If automated testing is used, there should be
documentation to show that the testing program is
operating correctly
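
A minimal sketch of an automated module test is shown below, using
pytest as an example test runner; the function under test and its
expected behaviour are hypothetical and defined inline so the sketch
is self-contained:

# Illustrative automated module test; the function and values are hypothetical.
import pytest

def round_assay(value):
    """Hypothetical module function: round an assay result to one decimal place."""
    if value < 0:
        raise ValueError("assay result cannot be negative")
    return round(value, 1)

def test_round_assay_rounds_to_one_decimal():
    # Expected behaviour taken from the (hypothetical) design specification
    assert round_assay(99.87) == 99.9

def test_round_assay_rejects_negative_values():
    with pytest.raises(ValueError):
        round_assay(-1.0)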

/ Installation Testing
Also known as Installation Qualification (IQ)
Purpose: To verify that the system is installed
properly in accordance with specifications, installation
instructions and local/global requirements
Verifies that adequate documents are in place, e.g.
SOPs, user/admin guides, SLA and security
procedures
Ensures that all installation steps are properly
executed (with objective evidence)
Ensures that the system is adequately protected
from power failure and data loss

/ Configuration Testing
Focuses on verifying that the configuration of the
system has been performed in line with the configuration
specification (CS)
The tests usually take the form of a checklist
The tests should be approved before execution and
produce objective evidence
The configuration testing documentation is usually
provided by the vendor and could be client specific
Testing is usually performed on the client installed
environment

/ Functional Testing
Also known as Operational Qualification (OQ)
Usually governed by its own protocol
Positive and negative functional tests on each system
module
Documented using test scripts with predefined test cases
and data
Expected results and actual results should be defined
Scope of testing is defined following the risk assessment
Can also be executed as part of system testing before
release to client
Usually executed in the client's environment

/ Requirements Testing
Also known as Performance Qualification (PQ) or User
Acceptance Testing (UAT)
Aims to verify that the system meets the user requirements
and that the system is fit for its intended use
Usually positive testing of end-to-end business processes in
the system
Expected and actual results should be defined / captured
Scope of testing is defined following the risk assessment
Usually executed in the client's environment and could be
executed by the client

/ Test Summary Report


Should document each phase of testing and the
results of testing
Should provide a summary of non-conformances
and their status
Should describe the different document
deliverables that were produced
Should evaluate if all acceptance criteria were met
and whether the system is fit for its intended use

/ System Categories

Source: ISPE GAMP 5 Guide

/ Typical Lifecycle Approach Category 1


Infrastructure Software
Record software in software inventory and / or
system description document
Perform an Installation verification (IQ)
Ensure software falls under proper configuration
control and change management procedures

/ Typical Lifecycle Approach Category 3


Non-configured Software
Abbreviated Lifecycle Approach
Establish clear user requirement specification (URS)
Risk based approach to supplier assessment
Record software in software inventory and / or
system description document
Perform risk based tests against URS (Requirements
Testing) - could also be calibration tests for very
simple systems
Ensure software falls under proper configuration
control and change management procedures to
ensure compliance and fitness for intended use

/ Typical Lifecycle Approach Category 4


Configured Software
Lifecycle Approach
Risk based approach to supplier assessment
Demonstrate Supplier has adequate QMS
Verify documentation maintained by supplier, e.g.
design specifications
Record software in software inventory and / or
system description document
Perform risk based tests in test environment to
verify application works as designed (functional
testing)

/ Typical Lifecycle Approach Category 4


Configured Software (Cont)
Perform risk based tests to verify application works
as designed within the business process
(requirements testing)
Ensure software falls under proper configuration
control and change management procedures to
ensure compliance and fitness for intended use
Ensure procedures are in place for managing data

/ Typical Lifecycle Approach Category 5


Custom Software
Same as category 4 plus:
More rigorous supplier assessment and possible
supplier audit
Develop full lifecycle documentation (e.g. FS, DS and
full testing)
Perform design and source code reviews

/ Categories of Hardware Category 1 Standard Hardware Components


Document manufacturer or supplier details and
version/model numbers/serial numbers in
hardware inventory and / or system description
Perform and document installation verification
Ensure hardware falls under proper configuration
control and change management procedures to
ensure compliance and fitness for intended use

/ Categories of Hardware Category 2


Custom Built Hardware Components
Same as category 1 plus:
Design specifications should be developed and
acceptance testing undertaken
Take a risk based approach to supplier assessment
and perform supplier audit
May need to perform hardware compatibility tests
Document any hardware configuration in design
specs
Apply configuration management and change
control procedures

/ Design Review and Traceability - Objectives

To detect system defects early in the SDLC process
To ensure that all requirements have been met
That functionality is appropriate, consistent and
meets all pre-defined standards
That the system is properly tested

/ Design Review - Characteristics


Design reviews evaluate deliverables against
standards and requirements
Issues are identified and corrective actions
proposed
Planned and systematic reviews at key points
in the lifecycle (specifications, design and
development)
Important part of the verification process
performed by SMEs
Rigour of design review and level of documentation
should be based on risk, complexity and novelty

/ Design Review aspects to be considered


Aspects that should be considered when planning Design
Reviews include:
the scope and objectives of the review
what method or process will be followed
who will be involved
what the outputs will be
There should be a formal procedure in place for design
review
A design review form could be used to document reviews
Issues and corrective actions should be documented on
forms
Design review should be done for system categories 4 and 5

/ Traceability - Characteristics
Traceability establishes the relationship between
two or more products of the development process
Traceability ensures that:
Requirements are met and can be traced to
configuration and design specifications
Requirements are verified and can be traced to
test or verification activities
Traceability can be maintained in an electronic
system (e.g. HP Quality Center) or manually in a
traceability matrix document

/ Traceability - Example
URS    | FS           | DS    | UT/IT            | IV  | FV                             | OV
UR4.10 | FS5.6        | DS3.4 | UT10.1, IT5, IT6 | n/a | FV3 Steps 1-10                 | OV5 Steps 5-7; OV6 All Steps
UR4.11 | FS5.6, FS5.7 | DS3.4 | UT10.2, IT6      | IV1 | FV3 Steps 11-15; FV4 Steps 1-5 | OV5 Steps 8-15
UR4.12 | FS5.8        | DS3.5 | UT10.3, IT6      | IV1 | FV4 Steps 6-18                 | OV7 All Steps
UR5.1  | FS5.1        | DS5.1 | UT5.1, IT5       | IV1 | FV5.1 All Steps                | OV5 All Steps

Could also add columns for:
Iterations
System version
System Document Identifier
Design review instance
Risk class
GxP Relevance
Tested Y/N
Test Run
Change Control Ids
References to SOPs
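
A traceability matrix also lends itself to simple automated checks.
The sketch below is illustrative only, reusing the identifiers from
the example table above, and flags requirements with no linked
specification or test:

# Illustrative traceability gap check; data mirrors the example table above.
matrix = {
    "UR4.10": {"FS": ["FS5.6"],          "tests": ["UT10.1", "IT5", "IT6", "FV3", "OV5", "OV6"]},
    "UR4.11": {"FS": ["FS5.6", "FS5.7"], "tests": ["UT10.2", "IT6", "IV1", "FV3", "FV4", "OV5"]},
    "UR4.12": {"FS": ["FS5.8"],          "tests": ["UT10.3", "IT6", "IV1", "FV4", "OV7"]},
    "UR5.1":  {"FS": ["FS5.1"],          "tests": ["UT5.1", "IT5", "IV1", "FV5.1", "OV5"]},
}

untested = [req for req, links in matrix.items() if not links["tests"]]
unspecified = [req for req, links in matrix.items() if not links["FS"]]
print("Requirements with no test coverage:", untested or "none")
print("Requirements with no functional specification:", unspecified or "none")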

/ Traceability - Benefits
Accurate traceability can also provide benefit by:
enabling more effective risk management and
design review processes
judging potential impact of a proposed change
facilitating risk assessment for a proposed
change
identifying scope of regression testing for
changes
enabling fast and accurate responses during an
inspection or audit

/ Required Procedures
The following procedures are required to manage the
development and maintenance of GxP Systems:

Software Development Life Cycle (SDLC)
Computer Systems Validation
Change Control for Validated Computerized Systems
Backup and Restoration
Failover Management
Disaster Recovery and Business Continuity
Routine IT Maintenance
Incident and Problem Management
Configuration Management
Logical and Physical Security
Documentation Management

/ Supplier Management
There is an expectation that you assess your
suppliers if they are providing any sub-elements of
your system or associated services
This is usually done through the use of audits
You may want to consider integrating supplier
documentation with your documentation
Your suppliers should have the same quality
standards as you
Your clients may also want to audit your suppliers
so make sure you have this in your contracts

/ Quality Risk Management


Quality risk management is a systematic process for
the assessment, control, communication, and
review of risks
It is an iterative process used throughout the entire
computerized system life cycle from concept to
retirement
There is now a regulatory requirement to
implement a risk based approach (Annex 11)
GAMP provides a framework for Quality Risk
Management

/ Quality Risk Management


Diagram: the iterative risk management process with its inputs
and outputs:

Inputs: Business Process, User and Regulatory Requirements;
Project Approach (Contracts, Methods, Timelines); System
Components and Architecture; System Functions; Experience from Use
Process: Identify Risks -> Analyze and Evaluate Risks ->
Control Risks -> Review Risks
Outputs: Improved Patient Safety, Product Quality and Data
Integrity; Informed Decisions; Achieving Compliance and Fitness
for Intended Use; Efficient Validation; Cost Effective Maintenance
and Operation; Achieving Business Benefits
Source: ISPE GAMP 5 Guide

/ What the regulations say


ICH Q9 Quality Risk Management describes a
systematic approach to risk management that is
intended for general application
It defines the following two primary principles:
The evaluation of the risk to quality should be
based on scientific knowledge and ultimately
link to the protection of the patient
The level of effort, formality, and
documentation of the quality risk management
process should be commensurate with the level
of risk

/ What the Regulations Say


Eudralex Volume 4 Annex 11 indicates:
Risk management should be applied throughout
the lifecycle of the computerised system taking
into account patient safety, data integrity and
product quality.
As part of a risk management system, decisions
on the extent of validation and data integrity
controls should be based on a justified and
documented risk assessment of the
computerised system.

/ Overview of risk based approaches


There are many methodologies that can be used to perform
risk assessments including:

Hazard and Operability Analysis (HAZOP)
Computer Hazards and Operability Analysis (CHAZOP)
Failure Mode and Effects Analysis (FMEA)
Failure Mode, Effects, and Criticality Analysis (FMECA)
Fault Tree Analysis (FTA)
Hazard Analysis and Critical Control Points (HACCP)
Basic Risk Management Facilitation Methods
Preliminary Hazard Analysis (PHA)
Risk Ranking and Filtering
GAMP 5 provides us with a methodology which is based on
a five step process
Risk management should be integrated into all activities, not
just validation

/ Quality Risk Management Process


1. Perform Initial Risk Assessment and Determine System Impact
2. Identify Functions which have an impact on Patient Safety,
Product Quality and Data Integrity
3. Perform functional risk assessments and identify controls
4. Implement and verify appropriate controls
5. Review risks and monitor controls

/ Determining risk
Determining the risks posed by a computerized system
requires a common and shared understanding of:
impact of the computerized system on patient safety,
product quality, and data integrity
supported business processes
user requirements
regulatory requirements
project approach (contracts, methods, timelines)
system components and architecture
system functions
Risks need to be determined by the SMEs that have the
knowledge to understand the above

/ Risk Management
Managing Risks can be achieved through:
elimination by design
reduction to an acceptable level
verification to demonstrate that risks are
managed to an acceptable level
Elimination by design is desirable and design
reviews play a key role
Risks that cannot be eliminated must be reduced to
an acceptable level through the use of technical or
procedural controls

/ Example of a risk based approach for a category 5 system

/ Step 1 Initial Risk Assessment


An initial risk assessment should be performed based on:
an understanding of business processes
business risk assessments
user requirements
regulatory requirements
known functional areas
This initial assessment should focus on the GxP impact of the
system
Focus on the processes that the system is to manage
Formally Document the overall system impact
If the impact of the system is minimal then it may not be
necessary to continue the risk assessment exercise

/ Step 1 Initial Risk Assessment

Use process diagrams to assess the impact of each
process step
This approach will enable you to determine overall
system impact
This also allows you to define the type of functional
assessment required

/ Step 2 - Identify functions which have GxP impact

Functions that could have an impact on patient
safety, product quality or data integrity due to
system failure should be identified in the URS, FS
and from the system impact assessment
Functions that do not have GxP impact but have
high business impact could also be included in the
functional risk assessment

/ Step 3 - Example of Risk and Hazard Identification

/ Step 3 - Perform a Functional Risk Assessment

Step 1: List the system functions
Assume each user requirement will be satisfied by a
specific system function.
Step 2: Identify the type of risk
Associate a type of risk (GxP vs. business) to each
function based on the following:
Are there applicable predicate rule requirements?
Is there an impact on patient safety?
Is there an impact on product quality?
Is there an impact on data integrity?
Is there an important impact on the ability to carry
out the daily business tasks?

/ Step 3 - Perform a Functional Risk Assessment

Step 3: Identify Risk Scenarios and controls
For each function, list the most likely risk scenarios
based on the type of analysis required (generic or
specific)
Identify any controls that could be put in place to
mitigate risk. These could be technical or
procedural
Step 4: Assess the likelihood of occurrence
Occurrence = the likelihood that a fault will occur
Step 5: Assess the severity of impact
Severity = Impact on patient safety, product
quality or data integrity

/ Step 3 - Perform a Functional Risk Assessment

Step 6: Assign a risk class
Risk Class = Severity x Probability
Step 7: Assess the probability of detection
Detection = Likelihood of detecting the fault
Step 8: Determine the Risk Priority
Risk Priority = Risk Class x Detectability

/ Step 3 - Perform a Functional Risk Assessment

Diagram: two matrices from the ISPE GAMP 5 Guide. The first maps
Severity against Risk Likelihood (High, Medium, Low) to give the
Risk Class (High, Medium, Low). The second maps Risk Class against
Probability of Detection to give the Risk Priority (High, Medium, Low).

Determination of risk priority is a practical method for deciding
which functions pose the greatest risks and, in turn, helps focus
testing activities on those functions that present the greatest
threat.
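
The risk class and risk priority determinations can be implemented as
simple table lookups. The sketch below is illustrative only; the
High/Medium/Low cell values are assumptions and should be replaced by
the mappings your organization defines for the system being assessed:

# Illustrative risk class / risk priority lookups; the cell values are
# assumptions, not the mappings from the guide.
RISK_CLASS = {                  # (severity, likelihood) -> risk class
    ("High", "High"): "High",   ("High", "Medium"): "High",     ("High", "Low"): "Medium",
    ("Medium", "High"): "High", ("Medium", "Medium"): "Medium", ("Medium", "Low"): "Low",
    ("Low", "High"): "Medium",  ("Low", "Medium"): "Low",       ("Low", "Low"): "Low",
}
RISK_PRIORITY = {               # (risk class, probability of detection) -> priority
    ("High", "Low"): "High",    ("High", "Medium"): "High",     ("High", "High"): "Medium",
    ("Medium", "Low"): "High",  ("Medium", "Medium"): "Medium", ("Medium", "High"): "Low",
    ("Low", "Low"): "Medium",   ("Low", "Medium"): "Low",       ("Low", "High"): "Low",
}

def assess(severity, likelihood, detection):
    """Return (risk class, risk priority) for one function."""
    risk_class = RISK_CLASS[(severity, likelihood)]
    return risk_class, RISK_PRIORITY[(risk_class, detection)]

# Example: a high-severity, medium-likelihood failure that is hard to detect
# assess("High", "Medium", "Low") returns ("High", "High")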

/ Example: Risk Assessment


/ Step 3 - How to Interpret the Results


Risk class enables the organization to focus
attention on the areas where they are most
exposed
The interpretation of High, Medium and Low can
vary from organization to organization and from
system to system
This should be defined for each system prior to
performing the risk assessment
This should be based on system impact

/ Step 4 Select and Implement Controls


Controls are measures that are put in place to reduce risk to
an acceptable level.
These controls could be technical in nature such as:
Data verification checks
User prompts to verify inputs
Fault tolerance
Specification documents may be updated following the
identification of technical controls
Controls could also be procedural such as:
Introduction of SOPs to control processes
Increased rigour in testing
Increased user training

/ Step 4 - Risk Based Test Scope Definition


State the acceptable risk levels, with rationale, in
the Test Plan
Restrict the scope of testing based on the outcome
of the risk assessment
Example:
Business-relevant requirements/functions with
High priorities will be tested.
GxP-relevant requirements/functions,
regardless of priority, will be tested.
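
As an illustration only (the requirement records and field names are
hypothetical), the example rule above could be applied as a simple filter:

# Illustrative scoping rule: test everything GxP-relevant, plus
# high-priority business requirements.
requirements = [
    {"id": "UR4.10", "gxp": True,  "priority": "Medium"},
    {"id": "UR5.1",  "gxp": False, "priority": "High"},
    {"id": "UR6.2",  "gxp": False, "priority": "Low"},
]

in_scope = [r["id"] for r in requirements
            if r["gxp"] or r["priority"] == "High"]
print("Requirements selected for testing:", in_scope)   # ['UR4.10', 'UR5.1']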

/ Step 5 Review Risks and Monitor Controls


Ensure that regular risk reviews are undertaken
(especially during change control and design
reviews)
Establish a mechanism for monitoring the controls
Establish a formal risk communication procedure (as
required by ICH Q9)
The reporting on risks and monitoring of controls
should be communicated to quality assurance,
business owners and suppliers/clients
This communication should be done at every stage
of the system lifecycle

/ Recommendations when performing risk assessments

Ensure that the team performing the risk
assessment fully understands the system and the
processes that the system governs
Designate a moderator to keep the risk assessment
moving
Be careful not to over-analyze: with enough
debate, everything becomes high risk
Make sure controls are realistic and manageable
Always carefully weigh the effort to put controls
in place against the risk reduction they bring

/ Conclusion
When applied properly, quality risk management
can significantly improve quality whilst reducing the
burden of system testing
Risk management is here to stay and is gradually
being applied to all aspects of GxP
Risk Management is a regulatory requirement
You need to build risk management into your
processes, systems and company culture
