Change Requests – Preventive Measures

“If you don’t have time to do it right, then you must have time to do it over.” – Russian Proverb.

During the clinical database setup phase, the critical milestones are CRF approval, completion of the CRF build, validation check approval, and completion of validation check programming and UAT. On so many occasions we find that tasks must be completed in a curtailed timeframe. This has a great impact on the quality of the deliverable, and the risk of re-work is always looming. How often do we see well-coordinated task completion, per the agreed timelines, by all stakeholders? Often there are delays; some within our control, but most out of our control. The inability to anticipate delays and a lack of planning lead to crunched timelines. While better planning will resolve a lot of problems, it is equally important to accept that such delays may not always be avoidable.

So how do we achieve and maintain the quality standard while working within our constraints of timelines and delays? How do we make sure that we have enough capacity to function with these constraints? Well, the answer is not that simple. To achieve the quality standard, we need to identify and understand the root causes of these errors. One easy approach is to analyze the existing repository of change requests (CRs).

The CRs can be broadly categorized into Protocol Amendments, Specification Amendments and Programming Errors.

Protocol amendments are often driven by science and are at times inherent to the nature of the clinical trial design, e.g. adaptive clinical trials. These amendments are almost always outside the control of the clinical data manager. Protocol amendments often lead to the creation of new standards, additions to the terminologies, etc. The clinical data manager must take a bigger role in creating an error-free new standard by following the basics of CRF creation and data validation. This will help ensure no re-work.

Specification amendments very often arise from the realization that the CRF is not intuitive to the site data entry personnel, or that the data validation checks are not triggering as expected. Most of the time, a lack of robust test cases during UAT is also a reason for such failures. When such CRs are implemented, it is the responsibility of the data manager to make sure the learnings are documented at the TA level and passed on to the relevant teams. Sharing such learnings will not only prevent these errors from recurring but will also give the data manager the ability to direct the clinical teams on the design of the CRF and validation checks during the setup phase. It is a good practice to maintain a repository of such learnings, and a best practice to refer to this repository frequently during the set-up phase.

The programming errors are the ones that can be minimized or eliminated by having a strong quality check framework. The quality checks can be manual (cumbersome) or metadata driven (automated reports), and should be dynamic/self-updating whenever possible. The programming quality is a function of how accurately the programming is done with respect to:

  • The final specifications,
  • The available standards, and
  • The programming conventions.

The final specifications must be worked upon to produce the database. On the CRF there are labels and fields. While a label should have a default value, a field should have the correct attributes. This is often not explicitly mentioned in the specification provided to the programmer, e.g. category and sub-category variables. The failure to consider such implied specifications leads to programming errors. A very simple metadata-level check for missing default values, incorrect attributes, etc. can help fix these errors in the study set-up phase.
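A metadata-level check of this kind can be sketched as a short script. This is an illustrative example only: the field dictionary layout, the attribute names, and the expected-attribute table are assumptions, not the schema of any particular EDC system.

```python
# Illustrative metadata-level check: flag labels with missing default
# values and fields whose attributes differ from the expected ones.
# Variable names and attributes below are hypothetical examples.

EXPECTED_ATTRIBUTES = {
    # variable name -> expected (data_type, length)
    "AESER": ("text", 1),
    "AESTDTC": ("date", 10),
}

def check_crf_metadata(fields):
    """Return a list of findings for missing defaults / wrong attributes."""
    findings = []
    for field in fields:
        name = field["name"]
        # Labels should carry a default value.
        if field.get("role") == "label" and not field.get("default"):
            findings.append(f"{name}: label has no default value")
        # Fields should match the expected attributes, where defined.
        expected = EXPECTED_ATTRIBUTES.get(name)
        actual = (field.get("data_type"), field.get("length"))
        if expected and actual != expected:
            findings.append(f"{name}: attributes {actual} differ from expected {expected}")
    return findings

fields = [
    {"name": "AETERM_LBL", "role": "label", "default": ""},                  # missing default
    {"name": "AESER", "role": "field", "data_type": "text", "length": 200},  # wrong length
    {"name": "AESTDTC", "role": "field", "data_type": "date", "length": 10}, # conforms
]
for finding in check_crf_metadata(fields):
    print(finding)
```

Running such a check against the study metadata before UAT surfaces these implied-specification gaps while they are still cheap to fix.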

When the standard CRF and validation checks are available, they must be used on every occasion to minimize errors. Even then, to ensure that the standard is correctly replicated at the study level, the study-level metadata can be compared with the standard metadata. Any mismatch can be fixed in the set-up phase itself. There should be validated programs to perform these checks.
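The standard-versus-study comparison described above can be sketched as a simple dictionary diff. The variable names and attribute structure here are hypothetical; a real implementation would read both sides from the metadata repository.

```python
# Illustrative comparison of study-level metadata against the library
# standard. Variables and attributes are made-up examples.

def compare_metadata(standard, study):
    """Report variables that are missing from the study build or whose
    attributes diverge from the standard."""
    mismatches = []
    for var, attrs in standard.items():
        if var not in study:
            mismatches.append(f"{var}: missing from study build")
        elif study[var] != attrs:
            mismatches.append(f"{var}: study has {study[var]}, standard is {attrs}")
    return mismatches

standard = {
    "VSORRES": {"type": "float", "length": 8},
    "VSORRESU": {"type": "text", "length": 20},
}
study = {
    "VSORRES": {"type": "text", "length": 8},  # type diverges; VSORRESU is absent
}
for mismatch in compare_metadata(standard, study):
    print(mismatch)
```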

The programming conventions can also be converted into a quality check program that is executed to verify compliance with the conventions.
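As one illustration of turning a convention into an executable check, a naming convention can be expressed as a pattern and applied to every variable. The convention used here (upper-case names, at most 8 characters) is only an assumed example; the actual conventions would come from the unit's programming standards.

```python
import re

# Hypothetical convention: variable names must be upper-case
# alphanumerics/underscores, starting with a letter, max 8 characters.
CONVENTIONS = [
    (r"^[A-Z][A-Z0-9_]{0,7}$", "name must be upper-case, max 8 characters"),
]

def check_conventions(names):
    """Return one finding per variable name that violates a convention."""
    findings = []
    for name in names:
        for pattern, rule in CONVENTIONS:
            if not re.match(pattern, name):
                findings.append(f"{name}: {rule}")
    return findings

# "vsorres" is lower-case and "LBORRESULTVALUE" is too long; both are flagged.
for finding in check_conventions(["AESTDTC", "vsorres", "LBORRESULTVALUE"]):
    print(finding)
```

Because the conventions are data rather than code, adding a new rule is a one-line change, which keeps the check self-updating in the spirit described above.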

The quality framework should facilitate communication of these errors to the appropriate stakeholders. It is a best practice that an error identified on one study is looked for in all other studies.

As a data management unit, one should always look to automate the quality checks, especially when working within our constraints of timelines and delays.

Quality by design is the crux of avoiding CRs and providing a complete and error-free data collection and validation tool.

As always …think disruptive..think right…think basic…


I am a clinical data management professional with 13 years of experience in healthcare and clinical trial data management. I am focused on bringing disruption in the area of clinical trials by conceptualising break through data management practices.
