4 Developing review questions and planning the evidence review

At the start of guideline development, the key issues and draft questions listed in the scope should be translated into review questions and review protocols.

Review questions define the scope of the review and therefore must be clear and focused. They provide the framework for the design of the literature searches, inform the planning and process of the evidence review, and act as a guide for the development of recommendations by the committee.

This chapter describes how review questions are developed and agreed. It describes the different types of review question and provides examples. It also provides information on the different types of evidence and how to plan the evidence review. The best approach may vary depending on the topic. Options should be considered by the developer, and the chosen approach discussed and agreed with NICE staff with responsibility for quality assurance. The approach should be documented in the review protocol (see appendix I) and the guideline, together with the reasons for the choice.

4.1 Number of review questions

The number of review questions for each guideline depends on the topic and the breadth of the scope. However, it is important that the total number of review questions:

  • provides sufficient focus for the guideline, and covers all key areas outlined in the scope

  • can be covered in the time and with the resources available.

Review questions can vary considerably in terms of both the number of included studies and the complexity of the question and analyses. For example, a single review question might involve a complex comparison of several interventions with many primary studies included. At the other extreme, a review question might investigate the effects of a single intervention compared with a single comparator, with few or no primary studies meeting the inclusion criteria. The number of review questions for each guideline will therefore vary depending on the topic and its complexity.

4.2 Developing review questions from the scope

The review questions should cover all key areas specified in the scope but should not introduce new areas. They should build on the draft questions in the scope and usually contain more detail.

Review questions are usually drafted by the developer. They are then refined and agreed with the committee members. This enables the literature search to be planned efficiently. Sometimes the questions need refining once the evidence has been searched; such changes to review questions (with reasons) should be agreed with a member of NICE staff with a quality assurance role, and documented in the review protocol and evidence review.

4.3 Formulating and structuring different review questions

Review questions should be clear and focused. The exact structure of each question depends on what is being asked. The aims of questions will differ, but are likely to cover at least one of the following:

  • extent and nature of the issue as described in the scope

  • causal mechanisms or associations between factors or variables and the outcome of interest, including the epidemiology or aetiology of a disease or condition

  • interventions that work best in ideal circumstances and might work in specific circumstances or settings (the extent to which something works, how and why)

  • technologies or tests that work best to diagnose certain diseases or conditions

  • a relevant programme theory, theory of change, or mechanisms of action likely to explain behaviour or effects

  • views and experiences of people using services or people who may be affected by the recommendation, including how acceptable and accessible they find the intervention, and whether there are differences in people's values and preferences that might affect uptake of a recommended intervention

  • practitioners' or providers' views, experiences and working practices (including any factors hindering the implementation of the intervention and factors supporting implementation)

  • costs and resource use

  • potential for an intervention to do harm or have unintended consequences.

Conceptual or logic models can be useful when developing review questions.

When developing review questions, it is important to consider what information is needed for any planned economic modelling. This might include information about quality of life, rates of, and inequalities in, adverse effects and use of health and social care services. In addition, review questions often cover acceptability and accessibility of interventions, and experiences of practitioners or people using services and the public. The nature and type of review questions determines the type of evidence that is most suitable (Petticrew and Roberts 2003). There are examples of different types of review questions and the type of evidence that might best address them throughout this chapter. Developers should consider whether particular review questions might be addressed through analysis of primary data, based on an understanding of the evidence base and different sources available (see section 2.3).

Review questions about the effectiveness of an intervention

A helpful structured approach for developing questions about interventions is the PICO (population, intervention, comparator and outcome) framework (see box 4.1).

However, other frameworks exist (such as SPICE; setting, perspective, intervention, comparison, evaluation) and can be used as appropriate.

Box 4.1 Formulating a review question on the effectiveness of an intervention using the PICO framework

Population: Which population are we interested in? How best can it be described? Are there subgroups that need to be considered?

Intervention: Which intervention, treatment or approach should be examined?

Comparators: Are there alternatives to the intervention being examined? If so, what are they (for example, other interventions, standard active comparators, usual care or placebo)?

Outcome: Which outcomes should be considered to assess how well the intervention is working (including outcomes on both benefits and harms)? What is really important for people using services? Core outcome sets should be used if suitable based on quality and validity; one source is the COMET database. The Core Outcome Set Standards for Development (COS‑STAD) and Core Outcome Set Standards for Reporting (COS‑STAR) should be used to assess the suitability of identified core outcome sets.

For each review question, factors that may affect the outcomes and effectiveness of an intervention, including any wider social factors that may affect health and any health inequalities, should be considered. The setting for the question should also be specified if necessary. Outcomes (on both benefits and harms) and other factors that are important should be pre‑specified in the review protocol. In general, a maximum of 7 to 10 outcomes should be defined. Guidance on prioritising outcomes is provided by the GRADE working group.
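
To make this structure concrete, the sketch below shows one possible way of recording a PICO question and its pre-specified outcomes in machine-readable form. It is illustrative only: the field names and example values are hypothetical and do not come from any NICE template.

  from dataclasses import dataclass
  from typing import List

  @dataclass
  class PICOQuestion:
      """Illustrative record of a PICO review question (hypothetical structure)."""
      population: str
      intervention: str
      comparators: List[str]
      outcomes: List[str]  # pre-specified in the review protocol

      def __post_init__(self):
          # In general, a maximum of 7 to 10 outcomes should be defined.
          if len(self.outcomes) > 10:
              raise ValueError("Too many outcomes; prioritise (see GRADE guidance)")

  # Hypothetical example loosely based on the IBS question in box 4.2.
  question = PICOQuestion(
      population="People with irritable bowel syndrome (IBS)",
      intervention="Antimuscarinics or smooth muscle relaxants",
      comparators=["Placebo", "No treatment"],
      outcomes=["Long-term control of IBS symptoms", "Adverse events", "Quality of life"],
  )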

Box 4.2 Examples of review questions on the effectiveness of interventions

  • What types of mass-media intervention help prevent children and young people from taking up smoking? Are the interventions delaying rather than preventing the onset of smoking?

  • Which of the harm-reduction services offered by needle and syringe programmes (including advice and information on safer injecting, onsite vaccination services, and testing for hepatitis B and C and HIV) are effective in reducing blood-borne viruses and other infections among people who inject drugs?

  • What types of intervention and programme are effective in increasing physical activity levels among children under 8 – particularly those who are not active enough to meet the national recommendations for their age – or help to improve their core physical skills?

  • Does brief advice from GPs increase adult patients' physical activity levels?

  • What are the most effective school-based interventions for changing young people's attitudes to alcohol use?

  • For people with IBS (irritable bowel syndrome), are antimuscarinics or smooth muscle relaxants effective compared with placebo or no treatment for the long-term control of IBS symptoms? Which is the most effective antispasmodic?

  • Which first-line opioid maintenance treatments are effective and cost effective in relieving pain in patients with advanced and progressive disease who require strong opioids?

  • What are the most effective methods of care planning, focusing on improving outcomes for people with dementia and their carers?

  • What is the effectiveness and cost effectiveness of intermediate care and reablement for people living with dementia?

Review questions about pharmacological management will usually only include medicines with a UK marketing authorisation for some indication, based on regulatory assessment of safety and efficacy. Use of a medicine outside its licensed indication (off‑label use) may be considered in some circumstances; for example, if this use is common practice in the UK, if there is good evidence for this use, and there is no other medicine licensed for the indication (see also the section on recommendations on medicines, including off-label use of licensed medicines). Medicines with no UK marketing authorisation for any indication will not usually be considered in a guideline because there is no UK assessment of safety and efficacy to support their use.

A review question about the effectiveness of an intervention is usually best answered by a randomised controlled trial (RCT), because a well-conducted RCT is most likely to give an unbiased estimate of effects. More information (for example, information about long-term effects) may be obtained from other sources. Advice on finding data on the adverse effects of an intervention is available in the Cochrane handbook for systematic reviews of interventions and SuRe Info (Summarized Research in Information Retrieval for HTA) resource.
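
As a worked illustration of the effect estimate an RCT yields, the short Python sketch below calculates a risk ratio with a 95% confidence interval from the events observed in each arm of a hypothetical two-arm trial. All numbers are invented; this shows the arithmetic only and is not a NICE tool.

  import math

  # Hypothetical 2-arm trial: events and participants per arm (invented numbers).
  events_int, n_int = 30, 200   # intervention arm
  events_ctl, n_ctl = 50, 200   # control arm

  rr = (events_int / n_int) / (events_ctl / n_ctl)
  # Standard error of log(RR) for a 2x2 table.
  se = math.sqrt(1 / events_int - 1 / n_int + 1 / events_ctl - 1 / n_ctl)
  lower = math.exp(math.log(rr) - 1.96 * se)
  upper = math.exp(math.log(rr) + 1.96 * se)
  print(f"Risk ratio {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")  # 0.60 (0.40 to 0.90)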

RCTs provide the most valid evidence of the effects of interventions. However, such evidence may not always be available. In addition, for many health and social care interventions it can be difficult or unethical to assign populations to control and intervention groups (for example, for interventions which aim to change policy). In such cases, a non-randomised controlled trial might be a more appropriate way of assessing association or possible cause and effect. The Medical Research Council (MRC) has produced guidance on evaluating complex interventions (Craig et al. 2008) and using natural experiments to evaluate health interventions delivered at population level (Craig et al. 2011).

There are also circumstances in which an RCT is not needed to confirm the effectiveness of an intervention (for example, giving insulin to a person in a diabetic coma compared with not giving insulin, or reducing speed limits to 20 mph to reduce the severity of injuries from road traffic accidents). In such cases, there is sufficient certainty from non-RCT evidence that an important effect exists, but due consideration still needs to be given to the following:

  • whether an adverse outcome is likely if the person is not treated (evidence from, for example, studies of the natural history of a condition)

  • whether the intervention gives a large benefit or shows a clear dose–response gradient that is unlikely to be a result of bias (evidence from, for example, historically controlled studies)

  • whether the side effects of the intervention are acceptable (evidence from, for example, case series)

  • whether there is any alternative intervention

  • whether there is a convincing mechanism of action (such as a pathophysiological basis) for the intervention.

When review questions are about the effectiveness of interventions, additional types of evidence reviews may be needed to answer different aspects of the question. For example, additional evidence reviews might address the views of people using services or the communities where services are based, or barriers to use as reported by practitioners or providers. Sometimes, a review may use different sources of evidence or types of data (for example, a review may combine or map quantitative information on current practice with qualitative data [that is, a mixed-methods review]). A review on effectiveness may also include evidence of the intervention's mechanism of action, that is, evidence of how the intervention works. Some reviews may also include analysis of large, high-quality primary data sources (such as patient registries).

Review questions that consider implementation

Review questions on effectiveness may also consider implementation, for example, 'What systems and processes should be in place to increase shared decision-making?'

Review questions that consider cost effectiveness

For more information on review questions that consider cost effectiveness, see chapter 7.

Review questions about the accuracy of diagnostic tests

Review questions about diagnosis are concerned with the performance of a diagnostic test or test strategy. Diagnostic tests can include identification tools, physical examination, history‑taking, laboratory or pathological examination and imaging tests.

Broadly, review questions that can be asked about a diagnostic test are of 3 types:

  • questions about the diagnostic accuracy (or diagnostic yield) of a test or a number of tests individually against a comparator (the reference standard)

  • questions about the diagnostic accuracy (or diagnostic yield) of a test strategy (such as serial testing) against a comparator (the reference standard)

  • questions about the value of using the test.

In studies of the accuracy of a diagnostic test, the results of the test under study (the index test[s]) are compared with those of the best available test (the reference standard) in a sample of people. It is important to be clear when deciding on the question what the exact proposed use of the test is (for example, as an identification tool, an initial 'triage' test or after other tests).

The PICTO (population, index test, comparator, target condition and outcome) framework can be useful when formulating review questions about diagnostic test accuracy (see box 4.3). However, other frameworks (such as PPIRT; population, prior tests, index test, reference standard, target condition) can be used if helpful.

Box 4.3 Features of a well-formulated review question on diagnostic test accuracy using the PICTO framework

Population: To which populations would the test be applicable? How can they be best described? Are there subgroups that need to be considered?

Index test[s]: The test or test strategy being evaluated for accuracy.

Comparator/reference standard: The test with which the index test(s) is/are being compared, usually the reference standard (the test that is considered to be the best available method for identifying the presence or absence of the condition of interest – this may not be the one that is routinely used in practice).

Target condition: The disease, disease stage or subtype of disease that the index test(s) and the reference standard are being used to identify.

Outcome: The diagnostic accuracy of the test or test strategy for detecting the target condition. This is usually reported as test parameters, such as sensitivity, specificity, predictive values, likelihood ratios, or – when multiple thresholds are used – a receiver operating characteristic (ROC) curve. This should also include issues of importance to people having the test, such as acceptability.

A review question about diagnostic test accuracy is usually best answered by a cross-sectional study in which both the index test(s) and the reference standard are performed on the same sample of people. Cohort and case–control studies are also used to assess the accuracy of diagnostic tests, but these types of study design are more prone to bias (and often result in inflated estimates of diagnostic test accuracy). Further advice on the types of study to include in reviews of diagnostic test accuracy can be found in the Cochrane handbook for diagnostic test accuracy reviews.
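
As a worked illustration, all of the accuracy parameters listed in box 4.3 can be derived from the 2x2 cross-classification of index test results against the reference standard. The Python sketch below uses invented counts:

  # Hypothetical 2x2 table: index test versus reference standard (invented counts).
  tp, fp = 90, 30    # index test positive: true positives, false positives
  fn, tn = 10, 170   # index test negative: false negatives, true negatives

  sensitivity = tp / (tp + fn)                    # 0.90
  specificity = tn / (tn + fp)                    # 0.85
  ppv = tp / (tp + fp)                            # positive predictive value (depends on prevalence)
  npv = tn / (tn + fn)                            # negative predictive value (depends on prevalence)
  lr_positive = sensitivity / (1 - specificity)   # 6.0
  lr_negative = (1 - sensitivity) / specificity   # 0.12
  print(sensitivity, specificity, ppv, npv, lr_positive, lr_negative)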

Box 4.4 Examples of review questions on diagnostic test accuracy

What is the accuracy of imaging (MRI, CT scan, PET scan, X‑ray, ultrasonography) for diagnosing osteomyelitis compared with invasive bone biopsy?

What is the accuracy of D‑dimer assay for diagnosing deep vein thrombosis compared with compression ultrasonography?

In people suspected of having coronary artery disease, can multi-slice spiral CT of coronary arteries be used as a replacement for conventional invasive coronary angiography?

In patients suspected of having cow's milk allergy, should skin prick tests rather than an oral food challenge with cow's milk be used for diagnosis and management?

In adults receiving care in non-specialist settings, should serum or plasma cystatin C rather than serum creatinine concentration be used for diagnosing and managing renal impairment?

Although assessing test accuracy is important for establishing the usefulness of a diagnostic test, the value of a test lies in how useful it is in guiding treatment decisions or the provision of services, and ultimately in improving outcomes. 'Test and treat' studies, for example, compare outcomes for people who have a new diagnostic test (in combination with a management strategy) with outcomes of people who have the usual diagnostic test and management strategy. These types of study are not very common. If there is a trade‑off between costs, benefits and harms of the tests, a decision-analytic model may be useful (see Lord et al. 2006).
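
A decision-analytic model of this kind can be as simple as weighing the expected benefit of treating true positives against the expected harm of treating false positives. The Python sketch below is a deliberately minimal toy model; every parameter value is invented for illustration.

  def expected_net_benefit(prevalence, sensitivity, specificity, benefit_tp, harm_fp):
      """Expected net benefit per person of a test-and-treat strategy (toy model)."""
      true_positives = prevalence * sensitivity
      false_positives = (1 - prevalence) * (1 - specificity)
      return true_positives * benefit_tp - false_positives * harm_fp

  # Hypothetical comparison of the usual test with a new, more sensitive test.
  usual = expected_net_benefit(0.10, 0.80, 0.95, benefit_tp=1.0, harm_fp=0.2)
  new = expected_net_benefit(0.10, 0.92, 0.90, benefit_tp=1.0, harm_fp=0.2)
  print(f"Usual test {usual:.3f} vs new test {new:.3f}")  # higher is better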

Review questions aimed at establishing the value of a diagnostic test in practice can be structured in the same way as questions about interventions. The best study design is a 'test and treat' RCT. Review questions about the safety of a diagnostic test should be structured in the same way as questions about the safety of interventions.

Review questions about prognosis

Prognosis describes the likelihood of a particular outcome, such as disease progression, the development of higher levels of need, or length of survival after diagnosis or for a person with a particular set of risk markers. A prognosis is based on the characteristics of the person or user of services ('prognostic factors'). These prognostic factors may be disease specific (such as the presence or absence of a particular disease feature) or demographic (such as age or sex), and may also include the likely response to treatment or care and the presence of comorbidities. A prognostic factor does not need to be the cause of the outcome, but should be associated with (in other words, predictive of) that outcome.

Information about prognosis can be used within guidelines to:

  • classify people into risk categories (for example, cardiovascular risk or level of need) so that different interventions can be applied

  • define subgroups of populations that may respond differently to interventions

  • identify factors that can be used to adjust for case mix (for example, in investigations of heterogeneity)

  • help determine longer-term outcomes not captured within the timeframe of a trial (for example, for use in an economic model).

Review questions about prognosis address the likelihood of an outcome for a person or user of services from a population at risk for that outcome, based on the presence of a proposed prognostic factor.

Review questions about prognosis may be closely related to questions about aetiology (cause of a disease or need) if the outcome is viewed as the development of the disease or need based on a number of risk factors.

Box 4.5 Examples of review questions on prognosis

Are there factors related to the individual (characteristics either of the individual or of the act of self-harm) that may predict outcomes (including suicide, non-fatal repetition, other psychosocial outcomes) from self-harm?

Which people having neoadjuvant chemotherapy or chemoradiotherapy for rectal cancer do not need surgery?

A review question about prognosis is best answered using a prospective cohort study with multivariable analysis. Case–control studies and cross-sectional studies are not usually suitable for answering questions about prognosis because they do not estimate baseline risk, but give only an estimate of the likelihood of the outcome for people with and without the prognostic factor.

Review questions about clinical prediction models for individual prognosis or diagnosis

Clinical prediction models are developed to help healthcare professionals estimate the probability or risk that a specific disease or condition is present (diagnostic prediction models) or that a specific event will occur in the future (prognostic prediction models). These models are used to inform decision-making. They are usually developed using a multivariable prediction model – a mathematical equation that relates multiple predictors for a particular person to the probability of or risk for the presence (diagnosis) or future occurrence (prognosis) of a particular outcome. Other names for a prediction model include risk prediction model, predictive model, prognostic (or prediction) index or rule, and risk score.

Diagnostic prediction models can be used to inform who should be referred for further testing, whether treatment should be started directly, or to reassure patients that a serious cause for their symptoms is unlikely. Prognostic prediction models can be used for planning lifestyle or treatment decisions based on the risk for developing a particular outcome or state of health in a given period.

Clinical prediction model studies can be broadly categorised into those that develop models, those that validate models (with or without updating the model) and those that do both. Studies that report model development aim to derive a prediction model by selecting the relevant predictors and combining them statistically into a multivariable model. Logistic and Cox regression are most frequently used for short-term (for example, disease absent versus present, 30‑day mortality) outcomes and long-term (for example, 10‑year risk) outcomes, respectively. Studies may also focus on quantifying how much value a specific predictor (for example, a new predictor) adds to the model.

Quantifying the predictive ability of a model using the same data from which the model was developed (often referred to as apparent performance) tends to overestimate performance. Studies reporting the development of new prediction models should always include some form of validation to quantify any optimism in the predicted performance (for example, calibration and discrimination). There are 2 types of validation: internal validation and external validation. Internal validation uses only the original study sample with methods such as bootstrapping or cross-validation. External validation evaluates the performance of the model with data not used for model development. The data may be collected by the same investigators or other independent investigators, typically using the same predictor and outcome definitions and measurements, but sampled from a later period (temporal or narrow validation). If validation indicates poor performance, the model can be updated or adjusted on the basis of the validation data set. For more information on validating prediction models, see Steyerberg et al. 2001, 2003, 2009; Moons et al. 2012; Altman et al. 2009; and Justice et al. 1999.
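
To illustrate why internal validation matters, the Python sketch below (using scikit-learn on synthetic data) contrasts the apparent discrimination of a logistic model, assessed on its own development data, with a cross-validated estimate; the apparent figure is typically the more optimistic of the two. Bootstrapping is an alternative internal-validation method, and external validation would need independent data.

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.metrics import roc_auc_score
  from sklearn.model_selection import cross_val_score

  # Synthetic development data: 500 people, 5 candidate predictors.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(500, 5))
  y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

  # Apparent performance: discrimination assessed on the development data itself.
  model = LogisticRegression().fit(X, y)
  apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

  # Internal validation: 10-fold cross-validated discrimination.
  cv_auc = cross_val_score(LogisticRegression(), X, y, cv=10, scoring="roc_auc").mean()
  print(f"Apparent AUC {apparent_auc:.3f}; cross-validated AUC {cv_auc:.3f}")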

Well-known clinical prediction models include QCancer, GerdQ, Ottawa Ankle Rules, and the Alvarado Score for diagnosis; and for prognosis, QRISK2, QFracture, FRAX, EuroScore, Nottingham Prognostic Index, the Framingham Risk Score and the Simplified Acute Physiology Score.

For more information, see the TRIPOD statement and the TRIPOD statement: explanation and elaboration.

Although assessing predictive accuracy is important for establishing the usefulness of a clinical prediction model, the value of such a model lies in how useful it is in guiding treatment or management decisions, or the provision of services, and ultimately in improving outcomes. Review questions aimed at establishing the value of a clinical prediction model in practice (for example, comparing outcomes of people identified by a clinical prediction model with outcomes of people identified opportunistically, each in combination with a management strategy) can be structured in the same way as questions about interventions.

Box 4.6 Examples of review questions on clinical prediction models

Diagnostic prediction models

Which scoring tools for signs and symptoms (including Centor and FeverPAIN) are most accurate in predicting sore throat caused by group A beta-haemolytic streptococcus (GABHS) infection in primary care?

What are the accuracy, clinical utility and cost effectiveness of clinical prediction models/tools (clinical history, cardiovascular risk factors, physical examination) in evaluating stable chest pain of suspected cardiac origin?

Prognostic prediction models

What risk tool best identifies people with multimorbidity who are at risk of unplanned hospital admission?

What risk tool best identifies people with type 2 diabetes who are at risk of reduced life expectancy?

Which risk assessment tools are the most accurate in predicting the risk of fragility fracture in adults with osteoporosis or previous fragility fracture?

Review questions about views and experiences of people using or providing services, family members or carers and the public

Most review questions should ensure that the views and experiences of people using or providing services, family members or carers and the public are considered when deciding on the type of evidence review and the type of evidence that will best inform the question.

In some circumstances, specific questions should be formulated about the views and experience of people using services, family members or carers and the public. The views and experiences of those providing services may also be relevant. These views and experiences, which may vary for different populations, can cover a range of dimensions, including:

  • views and experiences of people using or providing services, family members or carers or the public on the effectiveness and acceptability of given interventions

  • preferences of people using services, family members or carers or the public for different treatment or service options, including the option of forgoing treatment or care

  • views and experiences of people using or providing services, family members or carers or the public on what constitutes a desired, appropriate or acceptable outcome.

Such questions should be clear and focused, directly relevant to the topic, and should address experiences of an intervention or approach that are considered important by people using or providing services, family members or carers or the public. Such questions can address a range of issues, including:

  • elements of care or a service that are of particular importance to people using or providing services

  • factors that encourage or discourage people from using interventions or services

  • the specific needs of certain groups of people using services, including those sharing the characteristics protected by the Equality Act (2010)

  • information and support needs specific to the topic

  • which outcomes reported in studies of interventions are most important to people using services, family members or carers or the public.

As for other types of review question, questions that are broad and lack focus (for example, 'What is the experience of living with condition X?') should be avoided.

NICE guidelines should not reiterate or re‑phrase recommendations from the NICE guidelines on patient experience in adult NHS services, service user experience in adult mental health, people's experience in adult social care services, or other NICE guidelines on the experience of people using services. However, whether there are specific aspects of views or experiences that need addressing for a topic should be considered during the scoping of every guideline. Specific aspects identified during scoping should be included in the scope if they are not covered by existing guidelines and are supported as a priority area. These are likely to be topic specific and should be well defined and focused. The PICo (population, phenomenon of interest, context) framework and the SPIDER (sample, phenomenon of interest, design, evaluation, research type) framework are examples of frameworks that can be used to structure review questions on the views or experiences of people using or providing services, family members or carers or the public.

Box 4.7 Examples of review questions on the views or experiences of people using or providing services, family members or carers or the public

What elements of care on the general ward are viewed as important by patients following their discharge from critical care areas?

How does culture affect the need for and content of information and support for bottle or breastfeeding?

What are the perceived risks and benefits of immunisation among parents, carers or young people? Is there a difference in perceived benefits and risks between groups whose children are partially immunised and those who have not been immunised?

What information and support should be offered to children with atopic eczema and their families and carers?

What are the views and experiences of health, social care and other practitioners about home-based intermediate care?

A review question about the views or experiences of people using or providing services, family members or carers or the public could be answered using qualitative studies or cross-sectional surveys (or both), although information on views and experiences is also becoming increasingly available as part of some intervention studies.

When there is a lack of evidence on issues important to people affected by the guideline (including families and carers, where appropriate), the developer should consider seeking information via a call for evidence (see section 5.5), or approaching experts who may have access to additional data sources, such as surveys of user views and experiences, to present as expert testimony (see section 3.5).

Exceptionally, when the information gap cannot be addressed in other ways, the developer may commission an additional consultation exercise with people affected by the guideline to obtain their views on specific aspects of the scope or issues raised by the committee, or to validate early draft recommendations before consultation with registered stakeholders. (For more information, see appendix B.) The developer should document the reasons, together with a proposal for the work, including possible methods and the anticipated costs. The proposal should be discussed and agreed with members of NICE staff with a quality assurance role, and approved by the centre director. Where the work is approved, the reasons and methods should be documented in the guideline.

Review questions about service delivery

Guidelines often cover areas of service delivery. These might include how the delivery of services could be improved, what the core components of services are, and how different components could be reconfigured.

Box 4.8 Examples of review questions on service delivery

In people with hip fracture what is the clinical and cost effectiveness of hospital-based multidisciplinary rehabilitation on the following outcomes: functional status, length of stay in secondary care, mortality, place of residence/discharge, hospital readmission and quality of life?

What is the clinical and cost effectiveness of surgeon seniority (consultant or equivalent) in reducing the incidence of mortality, the number of people requiring reoperation, and poor outcome in terms of mobility, length of stay, wound infection and dislocation?

What types of needle and syringe programmes (including their location and opening times) are effective and cost effective?

What regional or city level commissioning models, service models, systems and service structures are effective in:

  • reducing diagnostic delay for TB

  • improving TB contact tracing

  • improving TB treatment completion?

A review question about the effectiveness of service delivery models is usually best answered by an RCT. However, a wide variety of methodological approaches and study designs have been used, including observational (including real-world), experimental and qualitative evidence. Other types of question on service delivery are also likely to be answered using evidence from study types other than RCTs. For example, to determine whether an intervention will work for a particular subgroup or setting, we might want to know how the intervention works, which will require evidence of the relevant underlying mechanisms.

Depending on the type of review question, the PICO framework may be appropriate, but other frameworks can be used.

When a topic includes review questions on service delivery, approaches described in chapter 7 and appendix A may be used. Such methods should be agreed with NICE staff with responsibility for quality assurance and should be clearly documented in the guideline.

Review questions about epidemiology

Epidemiological reviews describe the problem under investigation and can be used to inform other review questions. For example, an epidemiological review of the incidence or prevalence of a condition would provide baseline data for further evidence synthesis; an epidemiological review of accidents would provide information on the most common accidents, as well as morbidity and mortality statistics and data on inequalities in the impact of accidents.
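
For a descriptive question about incidence, the calculation itself is simple once counts and person-time are available. The minimal Python sketch below (invented numbers) converts case counts and person-years into a rate per 100,000 person-years:

  # Hypothetical surveillance data (invented numbers).
  new_cases = 120
  person_years = 80_000

  incidence_rate = new_cases / person_years * 100_000
  print(f"{incidence_rate:.0f} new cases per 100,000 person-years")  # 150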

Box 4.9 Examples of review questions that might benefit from an epidemiological review

What are the patterns of physical activity among children from different populations and of different ages in England?

Which populations of children are least physically active and at which developmental stage are all children least physically active?

What is the incidence of Lyme disease in the UK?

The structure of the question and the type of evidence will depend on the aim of the review.

Another use of epidemiological reviews is to describe relationships between epidemiological factors and outcomes – a review on associations. If an epidemiological review has been carried out, information will have been gathered from observational studies on the nature of the problem. However, further analysis of this information – in the form of a review on associations – may be needed to establish the epidemiological factors associated with any positive or negative behaviours or outcomes.

Box 4.10 Examples of review questions that might benefit from a review on associations

What factors are associated with children's or young people's physical activity and how strong are the associations?

What physiological and aetiological factors are associated with coeliac disease?

What physical, environmental and sociological factors are associated with the higher prevalence of multiple sclerosis in European countries?

4.4 Evidence used to inform recommendations

In order to formulate recommendations, the guideline committee needs to consider a range of evidence about what works generally, why it works, and what might work (and how) in specific circumstances. The committee needs evidence from multiple sources, extracted for different purposes and by different methods.

Scientific evidence

Scientific evidence should be explicit, transparent and replicable. It can be context free or context sensitive. Context-free scientific evidence assumes that evidence can be independent of the observer and context. It can be derived from evidence reviews or meta-analyses of quantitative studies, individual studies or theoretical models. Context-sensitive scientific evidence looks at what works and how well in real‑life situations. It includes information on attitudes, implementation, organisational capacity, forecasting, economics and ethics. It is mainly derived using social science and behavioural research methods, including quantitative and qualitative research studies, surveys, theories, cost-effectiveness analyses and mapping reviews. Sometimes, it is derived using the same techniques as context-free scientific evidence. Context-sensitive evidence can be used to complement context-free evidence, and so can provide the basis for more specific and practical recommendations. It can be used to:

  • supplement evidence on effectiveness (for example, to look at how factors such as occupation, educational attainment and income influence effectiveness)

  • inform the development and refinement of logic models (see section 2.3) and causal pathways (for example, to explain what factors predict teenage parenthood)

  • provide information about the characteristics of the population (including social circumstances and the physical environment) and about the process of implementation

  • describe psychological processes and behaviour change.

Quantitative studies may be the primary source of evidence to address review questions on:

  • the effectiveness of interventions or services (including information on what works, for whom and under which circumstances)

  • measures of association between factors and outcomes

  • variations in delivery and implementation for different groups, populations or settings

  • resources and costs of interventions or services.

Examples of the types of review questions that are addressed using quantitative evidence include:

  • How well do different interventions work (for example, does this vary according to age, severity of disease)?

  • What other factors affect how well an intervention works?

  • How much resource does an intervention need to be delivered effectively and does this differ depending on location?

Scientific evidence can include both quantitative and qualitative evidence. Sometimes, qualitative studies may be the primary source of evidence to address review questions on:

  • the experiences of people using services, family members or carers or practitioners (including information on what works, for whom and under which circumstances)

  • the views of people using services, family members or carers, the public or practitioners

  • opportunities for and factors hindering improvement of services (including issues of access or acceptability for people using services or providers)

  • variations in delivery and implementation for different groups, populations or settings

  • factors that may help or hinder implementation

  • social context and the social construction and representation of health and illness

  • background on context, from the point of view of users, stakeholders, practitioners, commissioners or the public

  • theories of, or reasons for, associations between interventions and outcomes.

Examples of the types of review questions that could be addressed using qualitative evidence include:

  • How do different groups of practitioners, people using services or stakeholders perceive the issue (for example, does this vary according to profession, age, gender or family origin)?

  • What social and cultural beliefs, attitudes or practices might affect this issue?

  • How do different groups perceive the intervention or available options? What are their preferences?

  • What approaches are used in practice? How effective are they in the views of different groups of practitioners, people using services or stakeholders?

  • What is a desired, appropriate or acceptable outcome for people using services? What outcomes are important to them? What do practitioner, service user or stakeholder groups perceive to be the factors that may help or hinder change in this area?

  • What do people affected by the guideline think about current or proposed practice?

  • Why do people make the choices they do or behave in the way that they do?

  • How is a public health issue represented in the media and popular culture?

Quantitative and qualitative information can also be used to supplement logic models (see section 2.3). They can also be combined in a single review (mixed methods) when appropriate (for example, to address review questions about factors that help or hinder implementation or to assess why an intervention does or does not work).

Examples of questions for which qualitative evidence might supplement quantitative evidence include:

  • How acceptable is the intervention to people using services or practitioners?

  • How accessible is the intervention or service to different groups of people using services? What factors affect its accessibility?

  • Does the mode or organisation of delivery (including the type of relevant practitioner, the setting and language) affect user perceptions?

Existing systematic reviews

Reviews of quantitative or qualitative studies (secondary evidence) often already exist (for example, those developed by internationally recognised producers of systematic reviews such as Cochrane, the Campbell Collaboration and the Joanna Briggs Institute, among others). Existing reviews may include systematic reviews (with or without a meta-analysis or individual patient data analysis) and non-systematic literature reviews and meta-analyses. Well-conducted systematic reviews may be of particular value as sources of evidence (see appendix H for checklists to assess risk of bias or quality of studies when developing guidelines). Some reviews may be more useful as background information or as additional sources of potentially relevant primary studies. This is because they may:

  • not cover inclusion and exclusion criteria relevant to the guideline topic and its parameters (for example, comparable research questions, relevant outcomes, settings, population groups or time periods)

  • group together different outcome or study types

  • include data that are difficult or impossible to separate appropriately

  • not provide enough data to develop recommendations (for example, some reviews do not provide sufficient detail on specific interventions, making it necessary to refer to the primary studies).

Conversely, some high-quality systematic reviews may provide enhanced data not available in the primary studies. For example, authors of the review may have contacted the authors of the primary studies or other related bodies in order to include additional relevant data in their review, or may have undertaken additional analyses (such as individual patient data analyses). In addition, if high-quality reviews are in progress (protocol published) at the time of development of the guideline, the developer may choose to contact the authors for permission to access pre‑publication data for inclusion in the guideline (see section 5.5).

Systematic reviews can also be useful when developing the scope and when defining review questions, outcomes and outcome measures for the guideline evidence reviews. The discussion section of a systematic review can also help to identify some of the limitations or difficulties associated with a topic, for example, through a critical appraisal of the limitations of the evidence base. The information specialists may also wish to consider the search strategies of high-quality systematic reviews. These can provide useful search approaches for capturing different key concepts. They can also provide potentially useful search terms and combinations of terms, which have been carefully tailored for a range of databases.

High‑quality systematic reviews that are directly applicable to the guideline review question can be used as a source of data, particularly for complex organisational, behavioural and population level questions.

When considering using results from an existing high-quality review, due account should be taken of the following:

  • Whether the parameters (for example, research question, PICO, inclusion and exclusion criteria) of the review are sufficiently similar to the review protocol of the guideline review question. If so, a search should be undertaken for primary studies published after the search date covered by the existing review.

  • Whether the use of existing high-quality reviews will be sufficient to address the guideline review question if the evidence base for the guideline topic is very large.

Colloquial evidence

'Colloquial evidence' can complement scientific evidence or provide missing information on context. It can come from expert testimony (see section 3.5), from members of the committee, from a reference group of people using services (see section 3.2) or from comments from registered stakeholders (see section 10.1). Colloquial evidence includes evidence about values (including political judgement), practical considerations (such as resources, professional experience or expertise and habits or traditions, the experience of people using services) and the interests of specific groups (views of lobbyists and pressure groups).

An example of colloquial evidence is expert testimony. Sometimes oral or written evidence from outside the committee is needed for developing recommendations, if limited primary research is available or more information on current practice is needed to inform the committee's decision-making. Inclusion criteria for oral or written evidence specify the population and interventions for each review question, to allow filtering and selection of oral and written evidence submitted to the committee.

Other evidence

Depending on the nature of the guideline topic and the review question, other sources of relevant evidence, such as reports, audits and service evaluations, may be included. This should be agreed with NICE staff with responsibility for quality assurance before proceeding. The quality, reliability and applicability of the evidence are assessed according to standard processes (see appendix H).

See also chapter 8 on linking and using evidence from non-NICE guidance.

4.5 Planning the evidence review

For each guideline evidence review, a review protocol is prepared that outlines the background, the objectives and the planned methods. This protocol will explain how the review is to be carried out and will help the reviewer to plan and think through the different stages. In addition, the review protocol should make it possible for the review to be repeated by others at a later date. A protocol should also make it clear how equality issues have been considered in planning the review work, if appropriate.

Structure of the review protocol

The protocol should describe any differences from the methods described in this manual (chapters 5 to 7), rather than duplicating the methodology stated here. It should include the components outlined in appendix I.

When a guideline is updating a published guideline, the protocol from the published guideline, if available, should be used as the starting point for how the review question will be addressed. Information gathered during surveillance and scoping of the guideline should also be added. This might include new interventions and comparators, and extension of the population.

Process for developing the review protocol

The review protocol should be drafted by the developer, with input from the guideline committee, after the review question has been agreed and before starting the evidence review. It should then be reviewed and approved by NICE staff with responsibility for quality assurance.

All review protocols should be registered on the PROSPERO database before the completion of data extraction. The review protocol, principal search strategy (see sections 5.2 and 5.4) and a version of the economic plan (see section 7.5) are published on the NICE website at least 6 weeks before the draft guideline goes out for consultation. Any changes made to a protocol in the course of guideline development should be agreed with NICE staff with responsibility for quality assurance and should be described and updated on the PROSPERO database.

4.6 References and further reading

Allmark P, Baxter S, Goyder E et al. (2013) Assessing the health benefits of advice services: using research evidence and logic model methods to explore complex pathways. Health and Social Care in the Community 21: 59–68

Altman DG, Vergouwe Y, Royston P et al. (2009) Prognosis and prognostic research: validating a prognostic model. BMJ 338: b605

Cargo M, Harris J, Pantoja T et al. (2017) Cochrane Qualitative and Implementation Methods Group Guidance Series Paper 3: Methods for assessing evidence on intervention implementation. Journal of Clinical Epidemiology doi: 10.1016/j.jclinepi.2017.11.028.

Cargo M, Harris J, Pantoja T et al. (2018) Cochrane Qualitative and Implementation Methods Group Guidance Series Paper 4: Methods for integrating qualitative and implementation evidence within intervention effectiveness reviews. Journal of Clinical Epidemiology 97: 59–69

Centre for Reviews and Dissemination (2009) Systematic reviews: CRD's guidance for undertaking reviews in health care. Centre for Reviews and Dissemination, University of York

Cochrane Diagnostic Test Accuracy Working Group (2008) Cochrane handbook for diagnostic test accuracy reviews. (Updated December 2013). The Cochrane Collaboration

Collins GS, Reitsma JB, Altman DG et al. (2015) Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. Annals of Internal Medicine 162: 55–63

Craig P, Dieppe P, McIntyre S et al. on behalf of the MRC (2008) Developing and evaluating complex interventions: new guidance. London: Medical Research Council

Craig P, Cooper C, Gunnell D et al. on behalf of the MRC (2011) Using natural experiments to evaluate population health interventions: guidance for producers and users of evidence. London: Medical Research Council

Flemming K, Booth A, Hannes K et al. (2018) Cochrane Qualitative and Implementation Methods Group Guidance Series Paper 6: Reporting guidelines for qualitative, implementation and process evaluation evidence syntheses. Journal of Clinical Epidemiology 97: 79–85

Harden A, Garcia J, Oliver S et al. (2004) Applying systematic review methods to studies of people's views: an example from public health research. Journal of Epidemiology and Community Health 58: 794–800

Harris JL, Booth A, Cargo M et al. (2018) Cochrane Qualitative and Implementation Methods Group Guidance Series Paper 2: Methods for question formulation, searching and protocol development for qualitative evidence synthesis. Journal of Clinical Epidemiology 97: 39–48

Higgins JPT, Green S, editors (2008) Cochrane handbook for systematic reviews of interventions, version 5.1.0 (updated March 2011). The Cochrane Collaboration

Justice AC, Covinsky KE, Berlin JA (1999) Assessing the generalizability of prognostic information. Annals of Internal Medicine 130: 515–24

Kelly MP, Swann C, Morgan A et al. (2002) Methodological problems in constructing the evidence base in public health. London: Health Development Agency

Kelly MP, Moore TA (2012) The judgement process in evidence-based medicine and health technology assessment. Social Theory and Health 10:1–19

Kirkham JJ, Gorst S, Altman DG et al. (2016) Core Outcome Set–STAndards for Reporting: the COS-STAR statement. PLoS Medicine 13: e1002148

Kirkham JJ, Davis K, Altman DG et al. (2017) Core Outcome Set–STAndards for Development: the COS-STAD recommendations. PLoS Medicine 14: e1002447

Lomas J, Culyer T, McCutcheon C et al. (2005) Conceptualizing and combining evidence for health system guidance: final report. Ottawa: Canadian Health Services Research Foundation

Lord SJ, Irwig L, Simes RJ (2006) When is measuring sensitivity and specificity sufficient to evaluate a diagnostic test, and when do we need randomized trials? Annals of Internal Medicine 144: 850–5

Moons KG, Kengne AP, Grobbee DE et al. (2012) Risk prediction models: II. External validation, model updating, and impact assessment. Heart 98: 691–8

Moons KGM, Altman DG, Reitsma JB et al. (2015) Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): explanation and elaboration. Annals of Internal Medicine 162: W1–W73

Moore GF, Audrey S, Barker M et al. (2015) Process evaluation of complex interventions: Medical Research Council guidance. BMJ 350: h1258

Muir Gray JM (1996) Evidence-based healthcare. London: Churchill Livingstone

Noyes J, Booth A, Cargo M et al. (2018) Cochrane Qualitative and Implementation Methods Group Guidance Series Paper 1: Introduction. Journal of Clinical Epidemiology 97: 35–38

Ogilvie D, Hamilton V, Egan M et al. (2005) Systematic reviews of health effects of social interventions: 1. Finding the evidence: how far should you go? Journal of Epidemiology and Community Health 59: 804–8

Ogilvie D, Egan M, Hamilton V et al. (2005) Systematic reviews of health effects of social interventions: 2. Best available evidence: how low should you go? Journal of Epidemiology and Community Health 59: 886–92

Petticrew M (2003) Why certain systematic reviews reach uncertain conclusions. BMJ 326: 756–8

Petticrew M, Roberts H (2003) Evidence, hierarchies, and typologies: horses for courses. Journal of Epidemiology and Community Health 57: 527–9

Popay J, Rogers A, Williams G (1998) Rationale and standards for the systematic review of qualitative literature in health services research. Qualitative Health Research 8: 341–51

Richardson WS, Wilson MS, Nishikawa J et al. (1995) The well-built clinical question: a key to evidence-based decisions. American College of Physicians Journal Club 123: A12–3

Rychetnik L, Frommer M, Hawe P et al. (2002) Criteria for evaluating evidence on public health interventions. Journal of Epidemiology and Community Health 56: 119

Steyerberg EW (2009) Clinical prediction models: a practical approach to development, validation, and updating. Springer

Steyerberg EW, Harrell FE, Borsboom GJJM et al. (2001) Internal validation of predictive models: efficiency of some procedures for logistic regression analysis. Journal of Clinical Epidemiology 54: 774–81

Steyerberg EW, Bleeker SE, Moll HA et al. (2003) Internal and external validation of predictive models: a simulation study of bias and precision in small samples. Journal of Clinical Epidemiology 56: 441–7

Summarized Research in Information Retrieval for HTA (SuRe Info) [online; accessed 13 September 2017]

Tannahill A (2008) Beyond evidence – to ethics: a decision making framework for health promotion, public health and health improvement. Health Promotion International 23: 380–90

Victora C, Habicht J, Bryce J (2004) Evidence-based public health: moving beyond randomized trials. American Journal of Public Health 94: 400–5