Good Clinical Practice Guide

Thread: MHRA produced FAQs for monitoring

    24. How should non-compliance be dealt with when it is identified?

    There should be a formal process to identify, assess and document any non-compliance, i.e. failure to comply with the protocol, GCP or the legislation, that is identified through the monitoring activities. This is to protect the rights and well-being of the trial subjects and the integrity of the trial results. Such a process is also necessary to comply with the requirements for serious breach reporting, as some non-compliances must(a) be reported to the MHRA.

    Actions could include increasing site monitoring intensity/frequency, visits to the site by sponsor senior management, conducting an audit, or holding or terminating recruitment at the site until the non-compliance issue is resolved, e.g. by re-training or additional resources (such temporary halting of recruitment at a site may constitute an Urgent Safety Measure). The corrective and preventative actions taken to deal with the non-compliance should be documented and followed up, as per a formalised escalation process, to ensure that they are completed in a timely manner. Many MHRA GCP inspections have found that non-compliance issues identified at investigator sites by the monitor were not addressed in a timely manner or to an appropriate resolution. There should be a mechanism to ensure that all non-compliance is documented, so that it can be reviewed as part of the analysis of the data for impact on the trial results, with details provided in the clinical study report/publication.
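    As a minimal illustration of the follow-up and escalation principle above, a sponsor might track each documented non-compliance and its corrective/preventative action (CAPA) against a due date, flagging overdue open items for escalation. The field names and structure below are illustrative assumptions, not prescribed by the guidance.

```python
from datetime import date

def overdue_capas(capa_log, today):
    """Return open CAPA entries whose due date has passed.

    capa_log: list of dicts with illustrative keys
              "id", "status" ("open"/"closed") and "due" (datetime.date).
    Overdue open items are candidates for escalation under the
    sponsor's formalised escalation process.
    """
    return [c for c in capa_log
            if c["status"] == "open" and c["due"] < today]
```

    In practice such a log would live in the sponsor's quality system; the point is simply that follow-up to completion must be demonstrable, not left implicit.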

    a. SI 2004/1031 (as amended) Regulation 29A
    Version 1: 22 February 2013

    25. How important is the accuracy of the clinical trial data?

    It is not the accuracy of individual data points that matters most, but the reliability and robustness of the trial results. It is recommended that management, monitoring and data management activities focus on the data and activities that are critical to the reliability of the trial results, for example, the endpoint for the primary objective of the trial or key design aspects (e.g. randomisation). These would be identified during a risk assessment of the trial. It is recommended to aim for a high level of accuracy in these identified areas and potentially to accept some degree of error in other areas. It is also recommended to consider defining such acceptability in terms of tolerance limits.
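    The tolerance-limit idea can be sketched as a simple check: critical fields (e.g. the primary endpoint) are held to a tighter limit than non-critical ones. The field names and the specific limits below are illustrative assumptions, not values from the guidance; actual limits would come from the trial's risk assessment.

```python
# Illustrative tolerance limits: zero error accepted for critical
# fields, up to 5% discrepancy accepted elsewhere (assumed values).
CRITICAL_TOLERANCE = 0.0
NON_CRITICAL_TOLERANCE = 0.05

def error_rate(records, field):
    """Fraction of records whose CRF value disagrees with source."""
    mismatches = sum(1 for r in records
                     if r["crf"][field] != r["source"][field])
    return mismatches / len(records)

def within_tolerance(records, field, critical):
    """True if the field's discrepancy rate is within its tolerance."""
    limit = CRITICAL_TOLERANCE if critical else NON_CRITICAL_TOLERANCE
    return error_rate(records, field) <= limit
```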

    Sponsors may be concerned about the reliability of the data where no SDV is conducted, for example, if the primary endpoint has not been verified, if there has been no review of patient notes to detect unreported SAEs, or if data submitted in the CRF are found to be incorrect. These concerns arise from the traditional monitoring approach, in which all the data has to be accurate and any error is unacceptable.

    The design of the trial can reduce or mitigate the impact of missing or incorrect data. For example, the results of large, blinded, randomised trials with high power are unlikely to be affected by increased variability in, or omissions from, the data, particularly as the errors/omissions would not be differential between treatment groups (biased). Small blinded, randomised trials may suffer reduced power with increased data variability/omissions, with a potentially increased risk of a false negative result. Open trials are more at risk of bias, as errors and omissions could potentially differ between the treatment groups. It is recommended that this issue be evaluated as part of the risk assessment to determine what level of SDV (and other monitoring checks) is needed to mitigate any concerns about the reliability of the trial results. The monitoring plan may take a conservative approach initially and then reduce the monitoring intensity if the concerns are not realised. Data accuracy and proper conduct of the trial can be influenced not only by the monitor detecting, and the investigator correcting, errors retrospectively (where possible), but also by prevention of such errors in the first place, for example, through appropriate trial design, training, communication and systems that facilitate the conduct of the trial.

    26. Is it necessary to perform 100% source data verification (SDV)?

    It is recommended that any SDV is focussed on the data that matters to the reliability of the trial results; for example, focus may be on consent, eligibility, data for outcome measures related to the primary objective, safety reporting, non-compliance, or IMP accountability, rather than 100% SDV of all the data. Also, if the monitor is doing 100% SDV then they generally have little time for anything else, and important issues can be missed if the monitoring focus is this narrow. The key data could be identified during the risk assessment and from the trial protocol. It is also recommended to define what the source data is at a particular investigator site. For trials categorised as Type C, due to the exploratory nature of the trial and the uncertainty, it is likely that a higher level of SDV would be needed compared with a trial categorised as Type A, where the IMP is well known and used as per normal clinical practice. This illustrates the risk-based approach. There may be situations where no SDV, or very limited SDV, is deemed acceptable on the basis of the risk assessment. It is recommended that the necessary SDV checks are documented in the protocol, SOPs or monitoring strategy documents for the trial. The sponsor may wish to share these planned SDV checks with the investigator, because if the investigator is informed of which data is of importance for the trial results, this may improve the quality of this data. Where there is not 100% SDV of the identified critical data, but a sampling mechanism is used, there should be a procedure for escalation if the verified sample reveals problems with data quality.
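    The sampling-with-escalation mechanism described above can be sketched as follows: verify a random sample of the critical records and, if the sample's discrepancy rate exceeds a pre-defined trigger, escalate to verifying every record. The sample fraction and escalation trigger are illustrative assumptions; real values would be set in the monitoring strategy based on the risk assessment.

```python
import random

def sampled_sdv(records, verify, sample_fraction=0.2,
                escalation_trigger=0.1, seed=None):
    """Verify a random sample of records against source.

    verify: callable returning True if a record matches its source.
    If the sample's discrepancy rate exceeds escalation_trigger,
    escalate to full verification of all records.
    (Parameter defaults are illustrative assumptions.)
    """
    rng = random.Random(seed)
    n = max(1, int(len(records) * sample_fraction))
    sample = rng.sample(records, n)
    discrepancies = [r for r in sample if not verify(r)]
    if len(discrepancies) / n > escalation_trigger:
        # Escalation: the sample failed, so verify every record.
        discrepancies = [r for r in records if not verify(r)]
        return {"escalated": True, "discrepancies": discrepancies}
    return {"escalated": False, "discrepancies": discrepancies}
```

    The design choice here mirrors the guidance: sampling keeps monitoring effort proportionate, while the escalation rule ensures that a poor-quality sample triggers a fuller check rather than being silently accepted.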

    GCP inspectors may perform some SDV at investigator site inspections, but organisations should not use this as a rationale for undertaking 100% SDV. The inspectors will always consider the risk assessment (where available) and the plans and procedures that define the sponsor's approach to SDV of the trial data. Inspectors accept that discrepancies may be found if they examine data for which SDV was not planned (or for which the planned SDV had not yet been undertaken at the time of the inspection, for example data collected since the previous monitoring visit). Moreover, if the SDV plan is risk-based, focussing on potential discrepancies that would impact on the trial results, then, since inspectors also use a risk-based approach when selecting data to check, examination of data for which SDV is not planned is unlikely to occur anyway. If discrepancies are found in data documented as having been verified by the monitor, and the selection of this data for SDV was risk-based, this may result in a significant inspection finding, as there is clearly a potential impact on the trial results (which is why the data was selected for verification).

    It is likely that trials will continue to become increasingly electronically based, with the potential for data to be copied from electronic source data and transferred to the sponsor. This already occurs for laboratory data, and examples have been seen where patient electronic records have been accessed to obtain data. An increase in this activity is likely to reduce the need for source data verification, with more emphasis on computer system validation to ensure that the systems used function correctly and obtain the correct data.

    Finally, some source data can be accessed remotely and verified, for example, there are registries to confirm the date of and reason for death that could be used in long-term survival trials. Access to such information may be restricted to sponsors who are authorised to do this, where subjects have explicitly consented to this and where the relevant subject’s personal data (e.g. NHS number) that is needed for such access has been provided and is kept confidential (complying with the Data Protection Act), for example, sponsors who are NHS Trusts.

    27. How do I show that the monitoring strategy has been complied with?

    There will need to be documented evidence of the activities outlined in the monitoring strategy to support compliance. Whilst site visit reports are well-established monitoring evidence and take a similar format across organisations, activity reports from central and statistical monitoring are not established in the same way, so all the documentation and checks undertaken to demonstrate the planned central monitoring activities must(a) be retained. It is particularly important that the documentation shows that any non-compliance issues were dealt with effectively and in a timely manner, including documentation of any escalation actions.

    a. SI 2004/1031 (as amended) Regulation 31A (3)
