Computer System Validation (CSV) has been around since the introduction of FDA 21 CFR Part 11 in 1997 and the General Principles of Software Validation guidance in 2002. Intended to ensure the proper functioning of systems, CSV had the goal of giving auditors a detailed look at every aspect of an application used in production. Today, it has mutated into a burdensome audit-proofing exercise, resulting in enormous paper-based documentation packs with large volumes of screenshot or printout attachments.
CSA versus CSV
CSV has become a major point in audit preparation, creating unnecessary activities and bureaucracy. In the worst case, it can even hinder progress by making updates or new software solutions incredibly difficult to implement due to extensive documentation requirements. It also ties up resources in documentation tasks while neglecting testing and the original intention: ensuring that systems are fit for use and that their functioning is validated.
So, what is the difference between CSV and CSA? Under the CSV methodology, validation teams spend 80% of their time documenting and only 20% of their time testing systems. The US Food and Drug Administration (FDA) has recognized that the 'test everything' approach is becoming outdated, as it leaves GMP manufacturing facilities spending more time on documentation than on actual testing. The goal is to turn this around: focus on testing and document where necessary. The FDA has therefore started to promote what it calls Computer Software Assurance, or CSA. With CSA, 80% of the time should be spent on critical thinking and applying the right level of testing to higher-risk activities, while only 20% is spent on documentation. In this process, critical thinking should be focused on three questions:
- Does this software impact patient safety (Pharma) or consumer safety (Food)?
- Does this software impact product quality?
- Does this software impact data integrity?
Using a risk-based approach is something the FDA already applies in its audits. It is important to note that the upcoming CSA guidance document brings no change to the current governing regulations or to the Good Automated Manufacturing Practice guide release 5 (GAMP 5); the requirements remain the same. However, following the CSA approach means that validation activities are designed around testing the critical aspects in an efficient and focused way. The goal is to avoid unnecessary activities and bureaucracy. Instead, critical thinking and a risk-based approach are used to ensure that testing focuses on the features and/or actions that pose a medium to high risk to patient and/or consumer safety, product quality and/or data integrity. Less effort is spent on the lower-risk features of the system. Executing an appropriate Quality Risk Assessment before starting other validation activities becomes essential.
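The risk-based logic described above can be sketched as a simple decision rule. Everything in the sketch below is illustrative: the risk tiers, the mapping to testing rigor, and the classification rule are assumptions made for this example, not levels prescribed by the FDA or GAMP 5.

```python
from dataclasses import dataclass

# Assumed mapping from risk tier to testing rigor -- an illustration,
# not a prescribed scheme.
TESTING_BY_RISK = {
    "high": "scripted testing with documented evidence",
    "medium": "unscripted (exploratory) testing with a summary record",
    "low": "leverage vendor assurance / ad-hoc testing",
}

@dataclass
class Feature:
    name: str
    impacts_patient_safety: bool   # question 1
    impacts_product_quality: bool  # question 2
    impacts_data_integrity: bool   # question 3

def assess_risk(f: Feature) -> str:
    """Apply the three CSA critical-thinking questions to one feature.

    The thresholds below are assumptions: any patient-safety impact, or
    impact on two or more dimensions, is treated as high risk.
    """
    impacts = sum([f.impacts_patient_safety,
                   f.impacts_product_quality,
                   f.impacts_data_integrity])
    if f.impacts_patient_safety or impacts >= 2:
        return "high"
    if impacts == 1:
        return "medium"
    return "low"

# Hypothetical features of an eQMS:
batch_release = Feature("batch release approval", True, True, True)
report_layout = Feature("report layout settings", False, False, False)

for feature in (batch_release, report_layout):
    tier = assess_risk(feature)
    print(f"{feature.name}: {tier} -> {TESTING_BY_RISK[tier]}")
```

The point of the sketch is that the Quality Risk Assessment produces an explicit, defensible rule for where testing effort goes, rather than testing everything to the same depth.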
Where does CSA apply?
The current indication is that the shift towards CSA should be considered for the validation of the following systems (a non-exhaustive list):
- Enterprise Resource Planning (ERP) systems
- Electronic Quality Management Systems (eQMS) with functionalities such as
- Document Management
- Learning Management
- Event Management
- Laboratory Information Management Systems (LIMS)
- Serialization systems
A good CSA program can be implemented easily and will benefit from a strong Quality Management System. With a CSA approach, these systems can be implemented in a much more agile way than with the classical CSV approach. Applying CSA is then no longer a burden but becomes a differentiator for success, while CSV is considered merely a qualifier.
Our Q7 recommendations when applying CSA:
- A thorough supplier audit may provide justification to leverage supplier testing, meaning you do not need to repeat testing already performed by your vendor. If the vendor has a strong quality management system (QMS) in place and has already validated the offered system, there may be no requirement for the user to re-validate the out-of-the-box features.
- Use a risk-based approach to define which functions, features or aspects of the computerized system are high-risk to patient/consumer safety, product quality and/or data integrity. These high-risk areas require intensive testing if they have not already been tested by the qualified vendor. Medium- or low-risk functions, features or aspects may require less testing, or even only informal testing.
- Clearly define the scope of validation. When an existing system is changed, focus the testing mainly on the changed functionality. Follow an agile approach for development and consider unscripted testing as part of the validation approach for low-risk functionality.
- Have a clear and simple policy in place that defines how testing is conducted (for example, collection of documented evidence and validation non-conformance management), how training is done and how validation is summarized. This will ease execution and make the approach transparent and understandable for everyone involved in testing.
- Where possible, use testing tools for automated assurance activities instead of manual testing. Automated tools reduce errors in testing, maximize the use of resources and can reduce risk. Finally, ensure you know the intended use of the system and keep that in mind when defining what is required.
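To illustrate the last recommendation, an automated check can replace a manual "click and screenshot" verification: the test runner records pass/fail evidence on every run. The audit-trail check below is a hypothetical example against a mock system; the class and method names are assumptions, not a real eQMS API.

```python
# Hypothetical automated assurance check: verify that a (mock) quality
# system writes an audit-trail entry whenever a record is changed --
# the kind of check a test runner can evidence automatically instead
# of relying on manually collected screenshots.
class MockQualitySystem:
    """Stand-in for an eQMS; names and behavior are illustrative."""
    def __init__(self):
        self.records = {}
        self.audit_trail = []

    def update_record(self, record_id, field, value, user):
        old = self.records.get(record_id, {}).get(field)
        self.records.setdefault(record_id, {})[field] = value
        # Every change is logged with old/new values and the user.
        self.audit_trail.append({"record": record_id, "field": field,
                                 "old": old, "new": value, "user": user})

def test_update_is_audit_trailed():
    system = MockQualitySystem()
    system.update_record("SOP-001", "status", "approved", user="qa.lead")
    entry = system.audit_trail[-1]
    assert entry["record"] == "SOP-001"
    assert entry["new"] == "approved"
    assert entry["user"] == "qa.lead"

test_update_is_audit_trailed()
print("audit-trail check passed")
```

Run under a framework such as pytest, a check like this produces a timestamped pass/fail report on every execution, which is far stronger and cheaper evidence than a folder of screenshots.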
To conclude, the introduction of CSA brings no change to the current governing regulations or to GAMP 5. What matters is making the shift to the right mindset: focus on critical thinking, define the high-risk areas, and concentrate testing on those areas accordingly.
Use CSA as a differentiator for success and move away from the burdensome exercises of CSV. If your company is ready to make the change but needs support with the implementation, Q7 Consulting will be happy to help you. We have the experienced resources to support you in the transition to an efficient validation approach, backed by more than two decades of successful track record in implementing CSV and CSA.