Lessons Learned in Implementing RBM Data Review for Clinical Trials: What you see now may not be what you need to see.

By Penny Manasco - September 18, 2017

Since the FDA and EMA released draft guidance on Risk-Based Monitoring (RBM) and Electronic Source in 2011, pharmaceutical companies, biotechs, and CROs have focused tremendous effort on determining how these guidances affect monitoring activities, both onsite and remote.

Surprisingly, there has not been a similar, significant effort to determine how best to conduct data review. In general, most monitors still review data page by page in the eCRF, an approach similar to that used in source data verification (SDV): reviewing each data set by visit (e.g., vital signs, then physical exam, then ECG for visit 1, then repeating for each subsequent visit).

Unfortunately, this approach does not reliably identify data discrepancies because the data are not formatted in a way that exposes errors. Because monitors review the data page by page, errors or discrepancies that occur across visits, or across subjects at a site, cannot be easily recognized.
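To make the point concrete, here is a minimal, hypothetical sketch of a cross-visit check. The field names, values, and the 10 kg threshold are invented for illustration; the point is that each individual page looks plausible on its own, and the discrepancies only surface when visits are compared to each other.

```python
from datetime import date

# Hypothetical visit records for one subject; field names are illustrative only.
visits = [
    {"visit": 1, "date": date(2017, 1, 10), "weight_kg": 82.0},
    {"visit": 2, "date": date(2017, 2, 14), "weight_kg": 28.0},  # likely transcription error
    {"visit": 3, "date": date(2017, 2, 1),  "weight_kg": 81.5},  # date out of sequence
]

def cross_visit_findings(visits, max_weight_change_kg=10.0):
    """Flag discrepancies that are only visible when visits are compared."""
    findings = []
    ordered = sorted(visits, key=lambda v: v["visit"])
    for prev, curr in zip(ordered, ordered[1:]):
        # Visit dates should move forward in time.
        if curr["date"] <= prev["date"]:
            findings.append(f"visit {curr['visit']}: date not after visit {prev['visit']}")
        # Large between-visit changes suggest a transcription or unit error.
        change = abs(curr["weight_kg"] - prev["weight_kg"])
        if change > max_weight_change_kg:
            findings.append(f"visit {curr['visit']}: weight changed by {change:.1f} kg")
    return findings

for finding in cross_visit_findings(visits):
    print(finding)
```

A page-by-page review would show each of these three visits as internally consistent; only the longitudinal comparison flags the 54 kg swing and the out-of-sequence date.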

A second challenge to monitoring critical data is the use of “standard” reports that fail to focus on study-specific data and processes. This results in the use of blunt, lagging indicators to identify study-specific high-risk sites and processes—to disappointing effect.

Finally, data review has focused on internal consistency (i.e., if a data point falls outside the expected range, query it) but has not evaluated the reason(s) why the data were erroneous.

While we commit tremendous effort to determining how we will analyze trial data for efficacy, we, as an industry, need to spend the same level of effort on:

- How best to provide data for review and identify critical data findings specific to a research study or program. Many visualization tools provide standard data visualizations, but they do not provide the study-specific insights that are truly needed.

- How to teach monitors and data managers to become detectives rather than box checkers. They need to move from observing findings to determining why the events occurred and what methods are needed to fix them (a quality management approach).

In a recent presentation, we gave the participants a stack of CRF pages, similar to what monitors are expected to review. Team members had 5 minutes to review the pages and then 5 more minutes to review the data using our proprietary Subject Profile Analyzing Risk (SPAR). SPAR synthesizes data across data sets and across visits—specifically focused on the High Risk Data and Processes identified during Risk Assessment. The team immediately identified the errors using the SPAR. They were unable to do so just using the eCRF pages.

In a separate data example, we showed that critical data must be reviewed using several different tools and reports. Critical rating data must be reviewed using at least four different reports, each providing a different aspect of quality: (1) who performed the assessments (i.e., were they trained and given the appropriate delegation), (2) when the assessments were done, (3) whether the assessments required additional tools (e.g., ePRO, photographs), and (4) whether the ratings made sense over time. We were able to show that each aspect identified different errors important to the primary efficacy endpoint.
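The four aspects above can be sketched as four separate checks over the same rating data. This is an invented, simplified example, not the actual reports described in the presentation: the delegation log, record fields, and score threshold are all assumptions made for illustration.

```python
from datetime import date

# Hypothetical site delegation log: who is trained and delegated to rate.
delegation_log = {"Dr. Smith": {"trained": True, "delegated_rating": True}}

# Hypothetical rating records for one subject, in visit order.
ratings = [
    {"visit": 1, "date": date(2017, 3, 1),  "rater": "Dr. Smith", "score": 22, "photo_attached": True},
    {"visit": 2, "date": date(2017, 3, 29), "rater": "Dr. Jones", "score": 21, "photo_attached": True},
    {"visit": 3, "date": date(2017, 4, 26), "rater": "Dr. Smith", "score": 3,  "photo_attached": False},
]

def rating_quality_findings(ratings, delegation_log, max_score_drop=10):
    """Apply the four quality aspects: who, when, tools used, plausibility over time."""
    findings = []
    for r in ratings:
        # Aspect 1: who did the assessment (training and delegation).
        entry = delegation_log.get(r["rater"])
        if not entry or not (entry["trained"] and entry["delegated_rating"]):
            findings.append(f"visit {r['visit']}: rater {r['rater']} not trained/delegated")
        # Aspect 3: were the required supporting tools used (here, a photograph).
        if not r["photo_attached"]:
            findings.append(f"visit {r['visit']}: required photograph missing")
    for prev, curr in zip(ratings, ratings[1:]):
        # Aspect 2: when were the assessments done (dates must advance).
        if curr["date"] <= prev["date"]:
            findings.append(f"visit {curr['visit']}: assessment date out of sequence")
        # Aspect 4: do the ratings make sense over time.
        if prev["score"] - curr["score"] > max_score_drop:
            findings.append(
                f"visit {curr['visit']}: implausible score drop "
                f"({prev['score']} -> {curr['score']})"
            )
    return findings

for finding in rating_quality_findings(ratings, delegation_log):
    print(finding)
```

Note that each check catches a different error: an undelegated rater at visit 2, a missing photograph at visit 3, and an implausible score drop at visit 3. No single report would surface all three.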

In that presentation, we also provided examples of non-essential trial activities that did not add to the quality of the trial data but added weeks of additional, non-productive work for the study and site teams. These observations provide the basis for a comprehensive, efficient, study-specific, cost-effective approach to data review, one that can be implemented rapidly with a small, well-trained staff, saving significant costs while enhancing study quality.

Defining what data are important for evaluating your scientific findings, and how best to illustrate those findings, are essential steps toward successfully implementing RBM principles. As an industry, we need to spend the same amount of time on data presentations for clinical operations as we do to identify, review, and analyze data for submission.

The importance of training cannot be overstated, nor should training be classified as necessary only at study start-up. As an industry, we have trained our monitors to perform at the most basic levels of Bloom's Taxonomy of Learning Domains (see below). We need to move our monitors (just as we have moved children in school) to more advanced cognitive activities: from Remembering and Understanding to Analyzing and Evaluating.

Figure 1: Bloom’s Taxonomy of Cognitive Domains
Please join me in a FREE presentation: "Lessons Learned in Implementing RBM Data Review for Clinical Trials: What you see now may not be what you need to see."
It demonstrates the power of data presentation and how MANA RBM's custom, proprietary training methods can move study team members from Remembering to Evaluating data, resulting in enhanced data quality.