Good Clinical Practice Guide

Thread: Examples of Central (remote) and Statistical Monitoring Methodologies

  1. #1

    Examples of Central (remote) and Statistical Monitoring Methodologies

    This thread is to publish examples of central and statistical monitoring approaches that organisations have provided to MHRA as per the request in Monitoring FAQ questions 21 and 22. The examples are not intended to be definitive approaches. They are not endorsed or recommended by the MHRA. They have been reviewed by the MHRA and are provided to illustrate and share methodologies that are being used to support a risk based monitoring approach.

  2. #2
    Example 1 – CluePoints Intelligent Statistical Monitoring

    CluePoints' SMART™ engine is a statistical software solution that examines whether the data collected are consistent and, if not, pinpoints the investigator sites that differ substantially from the others involved in a trial. In this regard, it is powerful at detecting outlying sites, but would be limited in detecting a systematic issue affecting all sites in the trial, because then no site would be an outlier. It is a multivariate approach: analyses are performed on all available data, generating thousands of p-values as test results (a single p-value being the probability of an observed result arising by chance). The scoring algorithm summarises the information from these tests into a single indicator, the Data Inconsistency Score (DIS), that pinpoints atypical sites and data subsets. All the calculated individual p-values are used to compute a score per site and rank all sites accordingly: the more significant the DIS for a particular site, the more statistically different that site is. As the chance of identifying outliers increases with the number of sites in the study, the SMART™ engine also offers users an adjustable False Discovery Rate (FDR, %) that takes multiplicity into account. The FDR sets an upper limit on the probability of flagging a centre as an outlier when it is not one; the smaller the FDR, the more confident one can be that a flagged centre is truly an outlier. The output is displayed on charts that identify the outlying investigator sites, which can then be investigated further, perhaps triggering an audit or site visit. An example of its use is attached.
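    The general pattern described above — many per-site statistical tests, p-values combined into a per-site score, and a false-discovery-rate adjustment across sites — can be sketched in code. The sketch below is a minimal illustration of that pattern only, not CluePoints' proprietary SMART™ algorithm: the simulated data, the choice of a two-sample t-test per variable, and the use of Fisher's method as the score are all assumptions made for the example.

    ```python
    # Illustrative central statistical monitoring sketch (NOT the SMART(TM) algorithm).
    # Pattern: for each site and each variable, test whether the site's data differ
    # from the pooled data of all other sites; combine each site's p-values into one
    # score; flag outlying sites with a Benjamini-Hochberg FDR adjustment.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sites, n_vars, n_per_site = 12, 5, 40

    # Simulated trial data; site 0 is given a shifted mean on every variable
    # so that it behaves as an atypical ("outlier") site.
    data = rng.normal(0.0, 1.0, size=(n_sites, n_per_site, n_vars))
    data[0] += 1.5

    # One p-value per (site, variable): two-sample t-test, site vs. all other sites.
    pvals = np.empty((n_sites, n_vars))
    for s in range(n_sites):
        others = np.delete(data, s, axis=0).reshape(-1, n_vars)
        for v in range(n_vars):
            pvals[s, v] = stats.ttest_ind(data[s, :, v], others[:, v])[1]

    # Per-site score: Fisher's method combines each site's p-values into one
    # combined p-value; smaller means more inconsistent (analogous in spirit,
    # though not in detail, to a Data Inconsistency Score).
    site_p = np.array([stats.combine_pvalues(pvals[s], method="fisher")[1]
                       for s in range(n_sites)])

    def benjamini_hochberg(p, fdr=0.05):
        """Flag entries of p while controlling the false discovery rate."""
        m = len(p)
        order = np.argsort(p)
        passed = p[order] <= fdr * np.arange(1, m + 1) / m
        flagged = np.zeros(m, dtype=bool)
        if passed.any():
            flagged[order[: np.max(np.nonzero(passed)) + 1]] = True
        return flagged

    flagged = benjamini_hochberg(site_p, fdr=0.05)
    ranking = np.argsort(site_p)  # most atypical site first
    print("flagged sites:", np.nonzero(flagged)[0])
    ```

    Lowering the `fdr` argument mirrors the adjustable FDR described above: fewer sites are flagged, but with higher confidence that each flagged site is truly atypical.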

    References:

    L. Desmet, D. Venet, E. Doffagne, T. Burzykowski, C. Legrand and M. Buyse. Signal detection and power in central statistical monitoring of multicenter trials. Statist. Med. 2010; 00: 1-16.

    D. Venet, E. Doffagne, T. Burzykowski, F. Beckers, Y. Tellier, E. Genevois-Marlin, U. Becker, V. Bee, V. Wilson, C. Legrand and M. Buyse. A statistical approach to central monitoring of data quality in clinical trials. Clinical Trials 2012; 9: 705-713.


    Cluepoints Example 1.pdf
