Good Clinical Practice Guide

Thread: MHRA produced FAQs for monitoring

  1. #21
    20. What activities comprise “central monitoring” of a clinical trial and should they be documented?

    Central monitoring activities, focussing on the areas that matter as identified in the monitoring strategy, could be undertaken by numerous roles and across various departments, for example clinical operations, study management, data management, statistics, medical monitoring etc., with close co-operation where appropriate. It is recommended that where the sponsor is undertaking many trials, some generic central monitoring processes are contained in SOPs rather than trial-specific documents.

    Any relevant/important communication or contact by the sponsor relating to the conduct of the trial would be part of monitoring the trial. This oversight, be it telephone calls, emails or letters, should be documented and must(a) be retained; this would also demonstrate regular contact with the investigator(s).

    There will be many documents received from the investigator sites as per the monitoring procedures; these could include consent/eligibility documents, status reports (recruitment/withdrawals etc.) and self-assessment questionnaires/checklists. The sponsor should ensure that processes are in place to maintain the blinding of the trial if documentation that could potentially unblind the trial is to be handled in central monitoring activities, for example records sent from pharmacy. Any generated evidence that receipt and review of such information has taken place must(a) be retained; as such, this could remove the need to retain the actual copies of the documents received. Some non-commercial trials use a form of central monitoring that involves sending information from the investigator site to a central data centre. It is important that, if information that identifies a subject is sent for monitoring purposes (for example, a copy of the consent form), the subject has given explicit consent to this and is aware of who will have access to their data; it is recommended that this process is included in the protocol. A formal system should be in place at the receiving site to restrict access to confidential subject data in compliance with the Data Protection Act. Compliance with any local country-specific requirements will also be required.

    Additionally, the investigator site will submit the clinical data in CRFs, either using an eCRF or by faxing/sending paper CRFs. Any generated evidence of central monitoring activities regarding this data must(a) be retained; this could include reports generated from interrogating the database, for example looking at data submission timeliness, audit trails of the eCRF to examine times of completion, data query rates and response timeliness, SAE reporting rates, and comparisons/reconciliations with other databases (e.g. IVRS, central laboratories) and data from other sites. The investigator must(a) retain the original source document or a certified copy, and the sponsor should not have exclusive control of a source document.
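
    As a hedged illustration of the kind of database interrogation described above, the sketch below derives per-site CRF entry lag and data-query response times from a small hypothetical extract; the field names, dates and figures are assumptions for the example rather than a prescribed data model, and the output would form part of the retained central monitoring evidence.

        # Minimal sketch of a central monitoring report interrogating submitted
        # clinical data: per-site CRF entry lag and data-query response times.
        # Field names, dates and figures are hypothetical assumptions.
        from datetime import date
        from statistics import mean

        crf_records = [
            # (site, visit date, date the CRF data were entered)
            ("Site01", date(2013, 1, 7), date(2013, 1, 10)),
            ("Site01", date(2013, 1, 14), date(2013, 1, 15)),
            ("Site02", date(2013, 1, 8), date(2013, 2, 4)),
        ]
        query_records = [
            # (site, date query raised, date query resolved; None = still open)
            ("Site01", date(2013, 1, 20), date(2013, 1, 22)),
            ("Site02", date(2013, 1, 25), None),
        ]

        for site in sorted({r[0] for r in crf_records}):
            lags = [(entered - visit).days
                    for s, visit, entered in crf_records if s == site]
            resolved = [(done - raised).days
                        for s, raised, done in query_records if s == site and done is not None]
            open_q = sum(1 for s, _, done in query_records if s == site and done is None)
            mean_res = f"{mean(resolved):.1f} days" if resolved else "n/a"
            print(f"{site}: mean CRF entry lag {mean(lags):.1f} days, "
                  f"mean query resolution {mean_res}, open queries {open_q}")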

    The data validation activities, usually done by data management, would be expected to be documented and so must(a) be retained; this would include evidence of manual checks of CRFs and outputs of inconsistencies detected from data validation systems. It is recommended that the data validation activities are focussed on the data that is critical to the reliability of the trial results, as identified by the risk assessment, rather than excessive resource being spent on raising data queries whose resolution makes little or no impact on the quality of the trial, the safety of the subjects and the reliability of the results. This is similar to the approach taken for proportionate source data verification (SDV) (see FAQ 26).
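
    As a hedged sketch of data validation checks focussed on critical data, the example below raises queries only on the fields assumed to matter to the trial results (consent/randomisation ordering and the primary endpoint); the field names, plausible range and records are illustrative assumptions.

        # Minimal sketch of data validation checks focussed on critical data
        # (consent/randomisation ordering and the primary endpoint). The field
        # names, plausible range and records are hypothetical assumptions.
        from datetime import date

        records = [
            {"subject": "001", "consent": date(2013, 1, 3),
             "randomised": date(2013, 1, 5), "primary_endpoint": 42.0},
            {"subject": "002", "consent": date(2013, 1, 10),
             "randomised": date(2013, 1, 8), "primary_endpoint": None},
        ]

        def edit_checks(rec):
            """Return a list of data queries to raise for one CRF record."""
            issues = []
            if rec["randomised"] < rec["consent"]:
                issues.append("randomisation date precedes consent date")
            if rec["primary_endpoint"] is None:
                issues.append("primary endpoint value missing")
            elif not 0 <= rec["primary_endpoint"] <= 100:   # assumed plausible range
                issues.append("primary endpoint value out of range")
            return issues

        for rec in records:
            for issue in edit_checks(rec):
                # The printed output would be retained as evidence of the check.
                print(f"Subject {rec['subject']}: query - {issue}")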

    There should be a formal process for dealing with issues and data queries identified during central monitoring and data management activities, including an escalation process. Any generated evidence of the identification of the issue(s), review and discussion, and subsequent actions must(a) be retained. The monitoring strategy and procedures must(b) be followed and there should be documentary evidence of this.

    The use of central monitoring transfers some additional activities to the investigator site and may impact on site resources; however, the investigator and research team must(c) conduct the trial in accordance with the principles of GCP and have some specific responsibilities they must(d) undertake according to the legislation. Some of the additional activities needed for central monitoring may assist in ensuring that this occurs (e.g. completing a checklist for review of investigator site file contents).

    The MHRA is willing to publish examples here of central monitoring documentation (for example plans and output reports) to assist sponsors in developing processes and procedures. Sponsors who wish to provide examples for consideration for publication should contact the MHRA GCP Inspectorate: GCP.inspectors@mhra.gsi.gov.uk
    The examples are not intended to be definitive approaches. They are not endorsed or recommended by the MHRA.

    a. SI 2004/1031 (as amended) Regulation 31A (3)
    b. SI 2004/1031 (as amended) Regulation 28 (1) and (2), Schedule 1, Part 2, (4)
    c. SI 2004/1031 (as amended) Regulation 28 (1)(a)
    d. SI 2004/1031 (as amended) Regulations 12, 13, 14, 29 (a), 31A (7) & (8) and 32

    Version 1: 22 February 2013

  2. #22
    21. What is statistical monitoring?

    Statistical monitoring is an aspect of central monitoring. Sometimes this term is used in relation to a sequential trial design with respect to analysis for stopping rules, but that is not what is being considered here, which relates to monitoring the conduct of the trial. It is where the accumulating data, whether clinical data from the CRF (for example, SAE event rates) or performance data (for example, eCRF completion times obtained from the audit trail, data query levels/response timeliness and CRF return times), are examined using statistical approaches or modelling across the trial. This allows comparisons of sites to occur, and this trending, modelling and process control has the aim of identifying any unusual or extraordinary patterns/variance/distributions within the data and, in particular, whether any sites appear to be “outliers”. Some statistical monitoring can use multivariate methods, whereby the data from many variables are used simultaneously to identify outlying sites. The aim of the methodology is to use the information to decide where to target potential increased monitoring, such as on-site visits, telephone calls, training etc., and there may be predefined criteria or tolerance limits in the derived parameters which would trigger further monitoring activities or corrective/preventative actions. Outputs from the statistical monitoring usually take the form of graphs.
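
    As a hedged illustration (not an MHRA-endorsed method), the sketch below standardises a single hypothetical per-site metric (an SAE reporting rate) and flags sites falling outside an assumed tolerance limit; in practice the outputs would typically be graphical, multivariate methods could combine several metrics, and unusually low values (possible under-reporting) are of as much interest as high ones.

        # Minimal sketch of statistical monitoring across sites: each site's SAE
        # reporting rate is standardised against the other sites and flagged if
        # it falls outside a predefined tolerance limit (|z| > 2 here). The
        # metric, figures and tolerance limit are illustrative assumptions.
        from statistics import mean, stdev

        sae_rate_per_site = {   # SAEs reported per 100 subject-visits (hypothetical)
            "Site01": 4.1, "Site02": 3.8, "Site03": 0.2, "Site04": 4.5,
            "Site05": 3.9, "Site06": 14.0, "Site07": 4.0,
        }

        rates = list(sae_rate_per_site.values())
        mu, sd = mean(rates), stdev(rates)

        for site, rate in sorted(sae_rate_per_site.items()):
            z = (rate - mu) / sd
            flag = "  <- outside tolerance: consider targeted contact/visit" if abs(z) > 2 else ""
            print(f"{site}: SAE rate {rate:5.1f}  z = {z:+.2f}{flag}")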

    The use of all the data in this manner can potentially reveal issues that on-site monitoring might not detect, for example potential fraud. Statisticians have traditionally been involved in trial design and data analysis, but an expansion in use of central monitoring is likely to develop new roles in this area for statisticians. It will also increase the necessary interaction between monitoring, data management and others with statistics personnel.

    The MHRA is willing to publish examples of statistical monitoring documentation/reports to assist sponsors in developing processes and procedures. Sponsors who wish to provide examples for consideration for publication here should contact the MHRA GCP Inspectorate: GCP.inspectors@mhra.gsi.gov.uk
    The examples are not intended to be definitive approaches. They are not endorsed or recommended by the MHRA.
    Version 1: 22 February 2013

  3. #23
    22. Are there any expectations of which metrics or key performance indicators or methodologies should be used in central/statistical monitoring?

    It is understood that many organisations are developing quality metrics or key risk/performance indicators to assist in determining which investigator sites to visit; often this is part of the audit function rather than monitoring, but there does not appear to be a list of accepted or validated metrics for sponsors to use. The methodology would include ranking investigator sites with respect to the metrics to identify outlying sites. Some organisations are also using multivariate statistical methodologies enabling the use of several metrics simultaneously (a hedged sketch of such a ranking is given after the list below). It is recommended that the methods and metrics to be used are documented in the monitoring procedures. Some sponsors have used the following metrics:
    • Recruitment rate
    • Screen failure rates
    • CRF submission/completion times against the patient’s actual progress in the trial
    • Query rates
    • Time to query resolution vs number of active queries (site level)
    • SAEs reported
    • Numbers of missed or late visits/data
    • Number of subject withdrawals/dropouts
    • Numbers of protocol/GCP non-compliances recorded/reported
    • eCRF – audit trail information on completion times in relation to visits or expected timescales
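
    As a hedged sketch of ranking sites across several metrics simultaneously (referred to in the paragraph above), the example below standardises each hypothetical metric and sums the absolute z-scores per site; the metrics, figures and scoring rule are illustrative assumptions, and fuller multivariate approaches (e.g. Mahalanobis distance) would also account for correlation between metrics.

        # Minimal sketch of ranking sites across several metrics at once by
        # standardising each metric and summing the absolute z-scores per site.
        # The metrics and figures are hypothetical assumptions.
        from statistics import mean, stdev

        site_metrics = {  # site: (query rate per CRF page, days to resolve queries, % missed visits)
            "Site01": (0.10, 5.0, 2.0),
            "Site02": (0.12, 6.0, 3.0),
            "Site03": (0.45, 21.0, 1.0),
            "Site04": (0.09, 4.0, 12.0),
            "Site05": (0.11, 5.5, 2.5),
        }

        n_metrics = len(next(iter(site_metrics.values())))
        columns = [[values[i] for values in site_metrics.values()] for i in range(n_metrics)]
        means = [mean(col) for col in columns]
        sds = [stdev(col) for col in columns]

        scores = {site: sum(abs((values[i] - means[i]) / sds[i]) for i in range(n_metrics))
                  for site, values in site_metrics.items()}

        # Highest combined score first: candidates for targeted monitoring.
        for site, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
            print(f"{site}: combined outlier score {score:.2f}")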

    The MHRA is willing to publish further examples of metrics/methodologies being used to assist sponsors in developing processes and procedures and undertaking research. It is possible that organisations may be undertaking similar approaches already. Sponsors who wish to provide examples for consideration for publication here should contact the MHRA GCP Inspectorate.

    The MHRA encourages further research and publications into the use of key performance indicators and statistical methodology and the validation of their effectiveness in the identification of non-compliant sites. Sponsors undertaking such research are encouraged to contact the GCP Inspectorate.


    GCP.inspectors@mhra.gsi.gov.uk
    The examples are not intended to be definitive approaches. They are not endorsed or recommended by the MHRA.
    Version 1: 22 February 2013

  4. #24
    23. What are the benefits of conducting an on-site visit?

    Central monitoring may cover many of the tasks that a monitor would undertake at site, provided the appropriate and genuine documentation has been provided by the investigator. There are, however, some key benefits of undertaking an on-site visit, which include:
    • Reviewing patient medical records (e.g. when central remote electronic review by the sponsor is not possible) to conduct SDV, verify the existence of subjects, verify the existence and quality of source documents and detect unreported adverse events
    • Meeting, training and motivating the research team and the investigator
    • Interviewing staff face to face, which builds rapport and can determine the actual processes used at the site (and any issues associated with that) rather than only conducting a review of the outcome (documentation) that may not present the whole picture
    • Reviewing facilities and equipment
    • Establishing the role of the investigator in the trial conduct (e.g. monitoring the delegation of duties and the investigator’s involvement and oversight of the trial)
    • Providing an overall impression of the quality of the conduct of the trial at the site, which may enable action to be taken to improve quality at an earlier stage
    • Reviewing IMP storage areas and performing any necessary accountability checks on the actual IMP (this may only be a sample (initially), if considered appropriate in the risk assessment)
    • Reviewing investigators’ site file (Investigator’s TMF) and archive facilities
    • Mentoring new staff/investigators
    • Witnessing subject visits – e.g. consenting process

    If such considerations are identified as vulnerabilities in the trial risk assessment, this would impact on the decision to undertake on-site monitoring and make it more likely to be needed. For example, for a trial categorised as Type C, with little safety information in patients, on-site visits to check for adverse events in the notes are more important than in a trial categorised as Type A, where the IMP is well known and used in accordance with standard clinical practice. It should be noted that on-site monitoring does not necessarily need frequent regular visits (e.g. traditionally once every 4-8 weeks); the interval between visits could be set at an appropriate level based on the risk assessment. For example, the sites could be visited once, shortly after starting the trial, or on a fairly infrequent basis, with a more targeted approach to monitoring activities based on the outcomes seen at previous visits or from central monitoring.
    Version 1: 22 February 2013

  5. #25
    24. How should non-compliance be dealt with when it is identified?

    There should be a formal process to identify, assess and document any non-compliance, i.e. failure to comply with the protocol, GCP or the legislation, which is identified through the monitoring activities. This is to protect the rights and well-being of the trial subjects and the integrity of the trial results. The ability to do this is also necessary to comply with the requirements of serious breach reporting, as some non-compliances must(a) be reported to the MHRA.

    Actions could include increasing site monitoring intensity/frequency, visits to the site by sponsor senior management, conducting an audit, or holding or terminating recruitment at the site until the non-compliance issue is resolved (e.g. by re-training, additional resources etc.); such temporary halting of recruitment at a site may constitute an Urgent Safety Measure. The corrective and preventative actions to deal with the non-compliance should be documented, and they should be followed up, as per a formalised escalation process, to ensure that they are completed in a timely manner. In many MHRA GCP inspections it has been found that, where non-compliance issues have been identified at investigator sites by the monitor, there has been a failure to address them in a timely manner and to an appropriate resolution. There should be a mechanism to ensure that all non-compliance is documented such that it can be reviewed as part of the analysis of the data for impact on the trial results, with details provided in the clinical study report/publication.

    a. SI 2004/1031 (as amended) Regulation 29A
    Version 1: 22 February 2013

  6. #26
    25. How important is the accuracy of the clinical trial data?

    It is not the accuracy of the individual trial data that is important, but the reliability and robustness of the trial results. It is recommended that the management, monitoring and data management activities focus on the data and activities that are critical to the reliability of the trial results, for example the endpoint for the primary objective of the trial or key design aspects (e.g. randomisation). These would be identified during a risk assessment of the trial. It is recommended to aim for a high level of accuracy in the areas identified and potentially to accept some degree of error in other areas. Defining such acceptability in terms of tolerance limits is recommended.

    Concern may be raised by sponsors about the reliability of the data where no SDV is conducted, for example if the primary endpoint has not been verified, if there has been no review of patient notes to detect unreported SAEs, or if the data submitted in the CRF is found to be incorrect. These concerns arise from the traditional monitoring approach that all the data has to be accurate and that any error is unacceptable. The design of the trial can assist in reducing or mitigating the impact of missing or incorrect data. For example, the results of large blinded, randomised trials with high power are unlikely to be affected by increased variability/omissions in the data, particularly as the errors/omissions would not be differential on a treatment basis (biased). Small blinded and randomised trials may suffer from reduced power with increased data variability/omissions, and there is potential to increase the risk of a false negative result. Open trials are more at risk from bias, as errors and omissions could potentially be differential for the treatment groups.

    It is recommended that this issue is evaluated as part of the risk assessment to determine what level of SDV (and other monitoring checks) is needed to mitigate any concerns about the reliability of the trial results. The monitoring plan may take a conservative approach initially, then reduce the monitoring intensity if the concerns are not realised. The data accuracy and proper conduct of the trial can be influenced not only by the monitor detecting, and the investigator correcting, errors retrospectively (where possible), but also by prevention of such errors in the first place, for example by appropriate trial design, training, communication and systems that facilitate the conduct of the trial.
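
    As a hedged numerical sketch of the point about power and variability, the example below applies a normal-approximation power calculation for a two-arm comparison of means, showing that extra variability from data errors noticeably erodes the power of a small trial while barely affecting a large one; the treatment difference, standard deviations and sample sizes are illustrative assumptions.

        # Minimal sketch (normal approximation, two-sample comparison of means,
        # two-sided alpha = 0.05) of how extra variability from data errors
        # erodes the power of a small trial but barely affects a large one.
        from statistics import NormalDist

        def power(delta, sigma, n_per_arm, alpha=0.05):
            """Approximate power of a two-sample z-test for a difference in means."""
            z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
            z_effect = delta / (sigma * (2 / n_per_arm) ** 0.5)
            return NormalDist().cdf(z_effect - z_alpha)

        delta = 5.0                        # assumed true treatment difference
        for n in (50, 500):                # small vs large trial (subjects per arm)
            for sigma in (10.0, 12.0):     # planned SD vs SD inflated by data errors
                print(f"n/arm = {n:3d}, SD = {sigma:4.1f}: power = {power(delta, sigma, n):.2f}")
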
    Version 1: 22 February 2013

  7. #27
    26. Is it necessary to perform 100% source data verification (SDV)?

    It is recommended that any SDV is focussed on the data that matters to the reliability of the trial results; for example, the focus may be on consent, eligibility, data for outcome measures related to the primary objective, safety reporting, non-compliance or IMP accountability, rather than 100% SDV of all the data. Also, if the monitor is doing 100% SDV then they generally have little time for anything else, and important issues can be missed if the monitoring focus is this narrow. The key data could be identified during the risk assessment and in the trial protocol. It is also recommended to define what the source data is at a particular investigator site. For trials categorised as Type C, due to the exploratory nature of the trial and the uncertainty, it is likely that a higher level of SDV would be needed compared with a trial categorised as Type A, where the IMP is well known and used as per normal clinical practice. This illustrates the risk-based approach. There may be situations where the risk assessment indicates that no, or very limited, SDV is acceptable.

    It is recommended that the necessary SDV checks are contained in the protocol, SOPs or monitoring strategy documents for the trial. The sponsor may wish to share these planned SDV checks with the investigator, because if the investigator is informed of which data is of importance for the trial results, this may improve the quality of that data. Where there is not 100% SDV of the identified critical data, but a sampling mechanism is used, there should be a procedure for escalation if the verified sample reveals problems with data quality.
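
    As a hedged sketch of such a sampling mechanism with an escalation trigger, the example below selects a proportion of subjects for verification of assumed critical fields and escalates if the discrepancy rate exceeds an assumed tolerance limit; the sample fraction, fields, tolerance and findings are all illustrative assumptions rather than recommended values.

        # Minimal sketch of a sampling-based SDV plan with an escalation trigger:
        # a sample of subjects is selected for verification of the critical fields
        # and, if the discrepancy rate exceeds a predefined tolerance, SDV is
        # extended and the escalation is documented.
        import random

        subjects = [f"SUBJ-{i:03d}" for i in range(1, 41)]        # subjects enrolled at one site
        critical_fields = ["consent_date", "eligibility", "primary_endpoint"]

        random.seed(1)                                            # reproducible selection record
        sample = random.sample(subjects, k=max(1, len(subjects) // 10))   # ~10% of subjects

        # Hypothetical monitoring finding: one critical-field discrepancy recorded
        # against the first sampled subject during verification.
        discrepancies = {sample[0]: ["primary_endpoint"]}

        checked = len(sample) * len(critical_fields)
        found = sum(len(fields) for fields in discrepancies.values())
        rate = found / checked

        print(f"Verified {len(sample)} subjects: {found}/{checked} critical fields discrepant ({rate:.1%})")
        if rate > 0.02:                                           # assumed tolerance limit (2%)
            print("Tolerance exceeded: escalate - extend SDV to further subjects and document actions taken")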

    GCP inspectors may perform some SDV at investigator site inspections, but organisations should not use this as a rationale for undertaking 100% SDV. The inspectors will always consider the risk assessment (where available) and the plans and procedures which define the sponsor’s approach to SDV of the trial data. Inspectors accept that discrepancies may be found if they examine data where SDV is not planned to be undertaken (or where the planned SDV had not yet been undertaken at the time of the inspection, for example data collected since the previous monitoring SDV). Also, if the SDV plan is risk-based, focussing on potential discrepancies that would impact on the trial results, then, as the inspectors also use a risk-based approach to selecting data to check, inspection of data for which SDV is not planned is unlikely to occur anyway. If discrepancies are found in data that has been documented as verified by the monitor, and the selection of this data for SDV was risk-based, then this may result in a significant inspection finding, as there is obviously a potential impact on the trial results (which is why it was selected to be verified).

    It is likely that trials will continue to become increasingly electronically based, with the potential for data to be copied from source data electronically and transferred to the sponsor. This occurs already for laboratory data, and examples have been seen where patient electronic records have been accessed to obtain data. Such an increase in this activity is likely to reduce the need for source data verification, with more emphasis on computer system validation to ensure that the systems used function correctly and obtain the correct data.

    Finally, some source data can be accessed remotely and verified; for example, there are registries to confirm the date of and reason for death that could be used in long-term survival trials. Access to such information may be restricted to sponsors who are authorised to do this (for example, sponsors that are NHS Trusts), where subjects have explicitly consented and where the relevant subject personal data (e.g. NHS number) needed for such access has been provided and is kept confidential, in compliance with the Data Protection Act.
    Version 1: 22 February 2013

  8. #28
    27. How do I show that the monitoring strategy has been complied with?

    There will need to be documented evidence of the activities outlined in the monitoring strategy to support compliance. Whilst site visit reports are well-established monitoring evidence and take a similar format across organisations, activity reports from central and statistical monitoring are not established in the same way, so all the documentation and checks undertaken to demonstrate the planned central monitoring activities must(a) be retained. It is of particular importance that the documentation shows that any non-compliance issues were dealt with effectively and in a timely manner, including documentation of any escalation actions.

    a. SI 2004/1031 (as amended) Regulation 31A (3)
    Version 1: 22 February 2013

