Assessing the quality of wider sources of data and evidence in our guidance on COVID-19

Guide for internal guidance developers and external collaborators

July 2020

Using wider sources of data and evidence in our work

In January 2020, we published a statement of intent signalling our ambition for the future use of wider sources of data and analytic methods, including sources commonly referred to as real-world data and evidence, in our products. The aim is to improve the way we synthesise research evidence, the efficiency of our processes and the relevance of our products by making use of:

  • the increasing amount and breadth of data available
  • new and efficient mechanisms for analysis
  • advances in the way information is labelled, linked and shared.

NICE has supported the NHS and social care to respond quickly to the challenges of the COVID-19 pandemic with rapid guidelines and evidence summaries. Published research findings from randomised controlled trials (RCTs) and summary statistics from meta-analyses of RCTs will continue to be the core evidence for NICE recommendations and advice. However, because the COVID-19 pandemic is new, the evidence base is limited, rarely includes data from RCTs, and leaves significant areas of uncertainty. This means it will be important to use wider sources of data and evidence appropriately to address these uncertainties when developing and updating our COVID-19 products.

To deliver the ambitions in the statement of intent, and ensure the quality and validity of data and evidence used to inform our guidance, our data and analytics team plans to establish a comprehensive methods and standards programme. This programme will develop a suite of detailed standards and methodological frameworks for conducting and assessing analyses of data to inform our work. More details are set out in a paper approved by our board in March 2020. 

Ahead of establishing the methods and standards programme, we have developed a pragmatic approach to assessing the quality of wider sources of data and evidence used to inform our COVID-19 work. This approach involves assessing risk of bias, reporting standards for data sources and analyses, and certainty of outcomes.

Internal guideline development teams in NICE can use this pragmatic approach, for example when updating our COVID-19 rapid guidelines. External collaborators directly commissioned by NICE to conduct analyses, and manufacturers and sponsors looking to provide wider sources of data and evidence to support our assessment of COVID-19 interventions, should also consider it.

We will review and update this pragmatic approach as we learn more about using wider sources of data and evidence. We will feed this learning into our methods and standards programme to ensure that we can use these wider sources across more of our work.

Using wider sources of data and evidence in our response to COVID-19

Our interim process and methods for guidelines developed in response to health and social care emergencies sets out the process for the surveillance and rapid update of our COVID-19 guidelines. During this process, wider sources of quantitative and qualitative evidence may be considered when they, for example:

  • complement existing evidence from RCTs, when available
  • address research questions that could be better answered using alternative study designs (such as safety issues, real-life effects or identifying rare complications)
  • provide early insight in the face of an emerging evidence base (such as cases describing the possibility of harm).

NICE and the National Institute for Health Research have together produced advice on clinical evidence generation for developers of medicinal products to prevent or treat COVID-19. It includes key considerations for observational data collection. Developers are advised that this data collection should not replace well-designed clinical trials. Rather, it should be used to address evidence gaps and areas of uncertainty, for example, to:

  • better characterise the clinical effectiveness of the technology when used in the NHS
  • obtain evidence on NHS clinical practice
  • collect clinical-effectiveness data on populations of patients not enrolled in clinical studies
  • obtain long-term follow-up of patients.

Assessing the quality of wider sources of data and evidence

Our interim process and methods for guidelines developed in response to health and social care emergencies sets out the approach to rapid evidence review for updating our rapid COVID-19 guidelines. This includes a critical appraisal of the included evidence, with a documented risk of bias judgement made using an appropriate checklist. Preferred checklists for different study designs are identified in appendix H of Developing NICE guidelines: the manual. These include the Newcastle-Ottawa scale for assessing the quality of non-randomised studies and the CASP checklist for appraising qualitative studies. The rapid evidence review also includes using GRADE, when appropriate, to present the certainty of the findings at outcome level.

Before our internal guidance developers or external collaborators appraise or carry out an analysis using a wider data source (such as a newly developed disease-specific registry), they should ensure that the provenance of the source is clearly understood, and do a quality assessment of the dataset.

The REQueST tool should be used to evaluate the suitability of registry data. The standards it outlines will also be useful when assessing the quality of other data sources. For example, the following information, extracted from the REQueST tool standards, should be available for all data sources (a simple completeness summary is sketched after this list):

  • a description of the governance structure and any relevant ethics approval
  • details of informed consent, or authorisation for managing data if there is no informed consent
  • details of standards used for organising, storing, managing and protecting the dataset
  • details of the data collection procedure, data validation methods, accuracy checks, routine completeness and coverage estimates
  • details of the data cleaning plan and the analytical plan for missing data.
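
As a purely illustrative example of what a routine completeness estimate and missing-data summary could look like in practice, the sketch below reports per-column completeness for a small, hypothetical dataset. The column names and values are assumptions made for illustration only; they are not taken from the REQueST tool or from any particular registry.

    # A minimal, illustrative completeness and missing-data summary.
    # Column names ("nhs_number", "admission_date", "covid_test_result")
    # are hypothetical and not drawn from any specific data source.
    import pandas as pd

    def completeness_summary(df: pd.DataFrame) -> pd.DataFrame:
        """For each column, report how many records are present and missing."""
        summary = pd.DataFrame({
            "n_records": len(df),
            "n_missing": df.isna().sum(),
        })
        summary["percent_complete"] = 100 * (1 - summary["n_missing"] / summary["n_records"])
        return summary

    # A small, made-up extract standing in for a registry dataset.
    records = pd.DataFrame({
        "nhs_number": ["A1", "A2", "A3", None],
        "admission_date": ["2020-04-01", None, "2020-04-03", "2020-04-05"],
        "covid_test_result": ["positive", "positive", None, "negative"],
    })
    print(completeness_summary(records))

A summary like this only covers completeness; coverage estimates, accuracy checks and the analytical plan for missing data still need to be documented alongside it.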

The prespecified analysis plan and the methods used to do the analysis should be clearly reported, so that the quality of the evidence can be assessed. A reporting checklist, such as RECORD, should be completed by anyone doing an analysis. Internal guidance developers could use this to identify whether important study elements have been reported. For example, the following items, extracted from the RECORD checklist, should be reported:

  • specific objectives, including any prespecified hypotheses
  • details of the setting, locations and relevant dates, including periods of recruitment, exposure, follow-up and data collection
  • details of population selection methods, including the codes used to identify subjects and to do statistical analysis, validation and data cleaning
  • a complete list of codes and algorithms used to classify exposures, outcomes, confounders and effect modifiers
  • a description of any efforts to address potential sources of bias
  • a description of all statistical methods, including methods to examine subgroups and interactions, and how any missing data were addressed
  • details of the extent to which the investigators had access to the database population used to create the study population
  • information on how to access any supplemental information such as the study protocol, raw data, or programming code.

Our data and analytics team can advise internal guidance developers and external collaborators on the appropriateness and quality of wider data sources, analytical methods and interpretation. This could include identifying elements of quality assessment that need to be adapted or considered in the context of COVID-19, for example:

  • Synthetic data (data artificially generated to replicate the statistical components of real-world data, without containing any identifiable information): use this with caution when validating COVID-19 datasets. When assessing the quality of evidence derived from such data, it is vital to understand the ground-truth data and the processes or algorithms used to generate the synthetic dataset (a simple illustration is sketched after this list).
  • Clinical coding of COVID-19 in computerised medical record systems in the UK: this has not been consistent since the start of the pandemic. Studies based on these data should clearly describe how the coding has been managed and accounted for.
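
As a purely illustrative sketch of how synthetic data can be generated to replicate the statistical components of a real dataset, the example below captures the means and covariance of a hypothetical extract and samples a synthetic dataset from them. The variables and the generation method are assumptions chosen for simplicity, not a NICE-endorsed approach; the point is that evidence derived from synthetic data cannot be assessed without knowing both the ground-truth data and the generation algorithm.

    # A deliberately simple illustration: replicate the means and covariance
    # of a hypothetical "ground-truth" extract. Real synthetic data pipelines
    # are usually far more sophisticated; this is not a recommended method.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=42)

    # Hypothetical ground-truth extract with two numeric variables.
    real = pd.DataFrame({
        "age": rng.normal(60, 15, size=500),
        "length_of_stay": rng.gamma(2.0, 4.0, size=500),
    })

    # Statistical components to be replicated.
    means = real.mean().to_numpy()
    cov = real.cov().to_numpy()

    # Synthetic dataset with the same means and covariance structure.
    synthetic = pd.DataFrame(
        rng.multivariate_normal(means, cov, size=500),
        columns=real.columns,
    )

    print(synthetic.mean(), synthetic.cov(), sep="\n")
    # The synthetic data reproduce the components they were built from, but
    # other features of the ground truth (for example, the skew of
    # length_of_stay) are lost, which is why the generation process must be
    # understood before such data are used for validation.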

As we respond to COVID-19, we may identify elements of quality assurance that are not included in existing appraisal or reporting checklists but are important to NICE when using wider sources of data and evidence to inform recommendations. We will feed these findings into the methods and standards programme.

Contact us

Our data and analytics team can advise internal guidance developers and external collaborators on:

  • the appropriateness and quality of wider data sources
  • analytical methods
  • interpretation.

For more information, contact DataAnalytics@nice.nhs.uk.