adesso Blog

The long-awaited draft guidance Computer Software Assurance for Production and Quality System Software was published last September. Being in draft state, it is open to comments and discussion. The non-binding final guidance will reflect the FDA's current thinking on the topic. But what can we learn from this draft guidance already?

First, this guidance addresses production and quality system software in accordance with 21 CFR Part 820, which requires medical device manufacturers to validate computer software used as part of production or the quality system for its intended use. Pharmaceutical systems are therefore out of scope of the guidance, but this article will look at both the medical device and the pharmaceutical world. By coincidence, the second edition of GAMP 5 (Good Automated Manufacturing Practice) was also published recently. That guide covers both pharmaceutical and medical device manufacturing.

Evidence of acceptance vs. evidence of approval

Two statements in the guidance documents are quite interesting.

In the FDA Draft Guidance in the section Establishing the Appropriate Record (p.16): “Documentation of assurance activities need not include more evidence than necessary…”

and

in the GAMP5 guide (p.250), they talk of evidence of acceptance vs. evidence of approval.

What does this seeming deviation from the traditional documentation approach mean?

What has been done in the past, and is still common practice, is that many stakeholders need to sign records for review and/or approval. Anybody who has been in charge of collecting the signatures will confirm that this can be quite time-consuming and prone to project delays, or at least creates pressure on the stakeholders. Furthermore, due to this time pressure, there is a risk that people do not review the document as thoroughly as they should. So why require five, six, seven or more signatures on one document, when it creates avoidable burden and does nothing to make the record more compliant?

In addition, the traditional computer system validation process usually demands a number of document templates that need to be completed, without necessarily allowing the flexibility to exclude or combine templates. Sometimes document templates have to be completed even though the source of the information is an electronic system that already holds these records. Here, a lot of copying of redundant information could easily be avoided. Again, redundant information in various places does not increase the compliance of the system; in my view it can, on the contrary, lead to discrepancies across the records.

Further, the collection of test evidence in the form of screenshots at almost every test step should also be reconsidered. It may be helpful at certain stages to capture a screenshot as additional evidence, but a reduction of the overall number of screenshots taken during formal tests should be the aim. This could be one interpretation of the sentence “...need not include more evidence than necessary”.

Scripted testing vs. unscripted testing

Both guidance documents introduce another term: unscripted testing. In contrast to scripted testing (written, pre-approved test scripts), unscripted testing does not give the tester any approved instructions. The tester could, for example, be someone very knowledgeable in the underlying business processes and workflows of the system, who then tests the system in an exploratory manner without following a pre-approved test case.

So how can the effort of scripted testing be reduced? First, there must be a project-level definition of scripted and unscripted testing. Usually, the definition centers around the risk level of the system, its requirements and its specifications. For a GMP application with a high-risk profile, there may be justified hesitation about unscripted testing. For low or moderate risk systems, however, it may be sufficient to formally test only the high-risk requirements and specifications. For low and medium risk requirements and specifications, unscripted testing can be deployed to varying degrees depending on the stated risk level. To give a simple example: it makes absolutely no sense for a user to (re)test a feature like a free-text comment field with a 256-character limit in a formal functional test to verify that every length up to this very limit can really be entered in this field. It is sufficient for a developer to check that the field has been built according to the specification and to document that this verification has been performed.
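To make the example concrete, such a developer-level check could be nothing more than a few automated boundary assertions, run once and recorded as verification. The sketch below is purely illustrative; the field name, limit constant and validator function are assumptions, not taken from either guidance document.

```python
# Hypothetical sketch: a developer-level boundary check for a free-text
# comment field with a 256-character limit. Running these assertions and
# recording the result can replace a formal user test of the same feature.

MAX_COMMENT_LENGTH = 256  # assumed limit from the (hypothetical) specification


def accept_comment(text: str) -> bool:
    """Return True if the comment fits within the specified length limit."""
    return len(text) <= MAX_COMMENT_LENGTH


# Boundary-value checks: empty input, exactly at the limit, one over the limit.
assert accept_comment("")
assert accept_comment("x" * 256)
assert not accept_comment("x" * 257)
print("comment-field boundary checks passed")
```

A short note in the verification record stating that these checks were run, and by whom, provides the "evidence of acceptance" without a signed test script.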

It is crucial to apply critical thinking to what could happen if, for example, a sequential step of a workflow can be bypassed and a falsified output results. This is verified far better by “playing” with the workflow and trying out different combinations of data input than by writing a formal test case that only checks that the linear sequence is correct.
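The kind of bypass attempt an exploratory tester would try by hand can also be sketched in code. The workflow, its step names and the `Workflow` class below are hypothetical illustrations of the idea, not part of either guidance document: the point is to attack the sequence rather than merely confirm it.

```python
# Hypothetical sketch: probing whether a sequential workflow can be bypassed.
# An exploratory tester would try such out-of-order actions interactively;
# here the same probe is expressed as code against an illustrative workflow.

class WorkflowError(Exception):
    """Raised when a workflow step is attempted out of sequence."""


class Workflow:
    STEPS = ["draft", "review", "approve", "release"]

    def __init__(self) -> None:
        self._index = -1  # no step completed yet

    def complete(self, step: str) -> None:
        """Complete the next step; reject any attempt to skip ahead."""
        expected = Workflow.STEPS[self._index + 1]
        if step != expected:
            raise WorkflowError(f"expected '{expected}', got '{step}'")
        self._index += 1


wf = Workflow()
wf.complete("draft")
wf.complete("review")
try:
    wf.complete("release")  # attempt to skip the 'approve' step
except WorkflowError as err:
    print(f"bypass rejected: {err}")
```

If the bypass were *not* rejected, the tester has found exactly the kind of defect a linear scripted test case would never surface.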

The described change to the traditional approach is similar to an agile approach, which empowers the project team to take on more responsibility and work more independently on its own tasks. Only with the right mindset will an agile project succeed, and the same is true for an unscripted testing approach. In scripted testing, the test scripts are designed and written by a test designer and usually reviewed, both formally and for content, by various stakeholders. The final test scripts are then subjected to a dry run to reduce the number of potential test defects even further, and formally approved before formal testing in the quality environment commences. This approach does not leave much room for chance (read: test deviations), but it also does not leave much room for critical thinking. With the unscripted testing approach, similar to an agile setting, the right people need to perform the testing: they need to know what they are doing and understand the business processes behind the system, but they must also be able to think outside the box, try to “break” the system, and be aware and attentive when a bug occurs.

Summary

Both guidelines attempt to reduce the amount of (over)documentation by introducing the concept of evidence of acceptance for more informal activities versus evidence of approval of almost everything. The challenge is where to strike the balance between documented activities (like formal testing) and informal activities that need to be tracked but not approved by signature. This depends heavily on the nature of the project, the risk level of the system and its requirements, and the maturity and willingness of the organization to engage in such an approach.

Author Lars Schmiedeberg

Lars Schmiedeberg leads the quality management and compliance team in the Life Sciences line of business at adesso Schweiz AG in Basel. He serves both pharmaceutical and medical device customers on quality and compliance topics, predominantly in software development.
