4 Developing review questions and planning the evidence review

At the start of guideline development, the key issues and questions listed in the scope may need to be translated into review questions.

Review questions define the boundaries of the review and therefore must be clear and focused. They are the framework for the design of the literature searches, inform the planning and process of the evidence review, and act as a guide for the development of recommendations by the Committee.

This chapter describes how review questions are developed and agreed. It describes the different types of review question and provides examples. It also provides information on the different types of evidence and how to plan the evidence review. The best approach may vary depending on the topic. Options should be considered by the Developer, and the chosen approach discussed and agreed with NICE staff with responsibility for quality assurance. The approach should be documented in the review protocol (see table 4.1) and the guideline, together with the rationale for the choice.

4.1 Number of review questions

The number of review questions for each guideline depends on the topic and the breadth of the scope. However, it is important that the total number of questions:

  • is manageable

  • can be covered in the time and with the resources available

  • provides sufficient focus for the guideline, and covers all areas outlined in the scope.

Review questions can vary considerably in terms of both the number of included studies and the complexity of the question and analyses. For example, a single review question might involve a complex comparison of several interventions with many primary studies included. At the other extreme, a review question might address the effects of a single intervention and there may be few primary studies meeting the inclusion criteria. The number of review questions for each guideline will therefore vary depending on the topic and its complexity.

4.2 Developing review questions from the scope

The review questions should cover all areas specified in the scope but should not introduce new areas. They will often build on the key questions in the scope and usually contain more detail.

Review questions are usually drafted by the Developer. They may then be refined and agreed with people with specialist knowledge and experience (for example, the Committee members). This enables the literature search to be planned efficiently. Sometimes the questions need refining once the evidence has been searched; such changes to review questions should be agreed with a member of NICE staff with a quality assurance role, and documented in the evidence review.

4.3 Formulating and structuring different review questions

When developing review questions, it is important to consider what information is needed for any planned economic modelling. This might include information about quality of life, rates of (and inequalities in) adverse effects, and use of health and social care services. In addition, review questions often cover acceptability and accessibility of interventions, and experiences of practitioners or people using services and the public. The nature and type of review questions determine the type of evidence review and the type of evidence that is most suitable (for example, intervention studies or qualitative data); both need careful consideration (Petticrew and Roberts 2003). The process for developing a review question is the same whatever the nature and type of the question.

Review questions should be clear and focused. The exact structure of each question depends on what is being asked. The aims of questions will differ, but are likely to cover at least one of the following:

  • extent and nature of the issue as described in the scope

  • factors, causal mechanisms and the role of the various vectors

  • interventions that work in ideal circumstances and might work in specific circumstances or settings (the extent to which something works, how and why)

  • a relevant programme theory, theory of change, or mechanisms of action likely to explain behaviour or effects

  • views and experiences of people using services or people who may be affected by the recommendation, including how acceptable and accessible they find the intervention

  • practitioners' or providers' views, experiences and working practices (including any factors hindering the implementation of the intervention and factors supporting implementation)

  • costs and resource use

  • potential for an intervention to do harm or have unintended consequences.

If a conceptual map or logic model has been developed, it can be useful when developing review questions (see appendix A).

When review questions are about the effectiveness of interventions, additional types of evidence review may be needed to answer other aspects or aims of the question. For example, additional evidence reviews might address the views of people using services or the communities where services are based, or barriers to use as reported by practitioners or providers. Sometimes, a review may use different sources of evidence or types of data (for example, a review may combine current practice or map quantitative information with qualitative data).

There are examples of different types of review questions and the type of evidence that might best address them throughout this chapter.

Review questions about the effectiveness of an intervention

A helpful structured approach for developing questions about interventions is the PICO (population, intervention, comparator and outcome) framework (see box 4.1).

However, other frameworks exist (such as SPICE; setting, perspective, intervention, comparison, evaluation) and can be used as appropriate.

Box 4.1 Formulating a review question on the effectiveness of an intervention using the PICO framework

Population: Which population are we interested in? How best can it be described? Are there subgroups that need to be considered?

Intervention: Which intervention, treatment or approach should be used?

Comparators: Are there alternative(s) to the intervention being considered? If so, what are these (for example, other interventions, standard active comparators, usual care or placebo)?

Outcome: Which outcomes should be considered to assess how well the intervention is working? What is really important for people using services? Core outcome sets may be used where appropriate; one source is the COMET database.

For each review question, factors that may affect the outcomes and effectiveness of an intervention, including any wider social factors that may affect health and any health inequalities, should be considered. The setting for the question should also be specified if necessary. To help with this, outcomes and other factors that are important should be listed in the review protocol. In general, a maximum of 7–10 outcomes should be defined.

Box 4.2 Examples of review questions on the effectiveness of interventions

  • What types of mass‑media intervention help prevent children and young people from taking up smoking? Are the interventions delaying rather than preventing the onset of smoking?

  • Which of the harm‑reduction services offered by needle and syringe programmes (including advice and information on safer injecting, onsite vaccination services, and testing for hepatitis B and C and HIV) are effective in reducing blood‑borne viruses and other infections among people who inject drugs?

  • What types of intervention and programme are effective in increasing physical activity levels among children under 8 – particularly those who are not active enough to meet the national recommendations for their age – or help to improve their core physical skills?

  • Does brief advice from GPs increase adult patients' physical activity levels?

  • What are the most effective school‑based interventions for changing young people's attitudes to alcohol use?

  • For people with IBS (irritable bowel syndrome), are antimuscarinics or smooth muscle relaxants effective compared with placebo or no treatment for the long‑term control of IBS symptoms? Which is the most effective antispasmodic?

  • Which first‑line opioid maintenance treatments are effective and cost effective in relieving pain in patients with advanced and progressive disease who require strong opioids?

  • What reporting and learning systems are effective and cost effective in reducing medicines‑related patient safety incidents, compared with usual care?

Review questions about pharmacological management will usually only include medicines with a UK marketing authorisation for some indication, based on regulatory assessment of safety and efficacy. Use of a medicine outside its licensed indication (off‑label use) may be considered in some circumstances; for example, if this use is common practice in the UK, if there is good evidence for this use, or if there is no other medicine licensed for the indication (see also the section on recommendations on medicines, including off-label use of licensed medicines). Medicines with no UK marketing authorisation for any indication will not usually be considered in a guideline because there is no UK assessment of safety and efficacy to support their use.

A review question about the effectiveness of an intervention is usually best answered by a randomised controlled trial (RCT), because a well‑conducted RCT is most likely to give an unbiased estimate of effects. More information (for example, information about long‑term effects) may be obtained from other sources. Advice on finding data on the adverse effects of an intervention is available in the Cochrane handbook for systematic reviews of interventions.

RCTs provide the most valid evidence of the effects of interventions. However, such evidence may not always be available. In addition, for many health and social care interventions it can be difficult or unethical to assign populations to control and intervention groups (for example, for interventions which aim to change policy). In such cases, a non-randomised controlled trial might be a more appropriate way of establishing cause and effect. The Medical Research Council (MRC) has produced guidance on evaluating complex interventions (Craig et al. 2008) and using natural experiments to evaluate population health interventions (Craig et al. 2011).
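Why randomisation matters can be shown with a small worked example using invented numbers: when a confounder makes both receiving the treatment and experiencing the outcome more likely, a naive observational comparison can even reverse the apparent direction of effect, whereas randomisation balances the confounder across groups:

```python
# Hypothetical model: a confounder C raises both the chance of receiving
# treatment T and the risk of the outcome Y. True causal effect of T is -0.10.
p_c1 = 0.5                       # prevalence of confounder C
p_t1_given_c = {0: 0.2, 1: 0.8}  # treated more often when C is present


def p_y(t: int, c: int) -> float:
    """True outcome risk: treatment lowers risk by 0.10."""
    return 0.3 + 0.2 * c - 0.1 * t


def observed_risk(t: int) -> float:
    """Outcome risk among people observed with treatment status t."""
    # weight each confounder stratum by how likely it is within group t
    w = {c: (p_t1_given_c[c] if t == 1 else 1 - p_t1_given_c[c])
            * (p_c1 if c == 1 else 1 - p_c1)
         for c in (0, 1)}
    total = sum(w.values())
    return sum(w[c] / total * p_y(t, c) for c in (0, 1))


naive = observed_risk(1) - observed_risk(0)       # confounded comparison
randomised = sum((p_c1 if c else 1 - p_c1) * (p_y(1, c) - p_y(0, c))
                 for c in (0, 1))                 # C balanced by randomisation

print(round(naive, 3))       # 0.02  -> treatment looks harmful
print(round(randomised, 3))  # -0.1  -> true protective effect
```

With these invented numbers the observational comparison suggests the treatment is harmful when it is in fact protective; a natural experiment or non-randomised design must find other ways to control this kind of confounding.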

There are also circumstances in which an RCT is not needed to confirm the effectiveness of an intervention (for example, giving insulin to a person in a diabetic coma compared with not giving insulin, or reducing speed limits to 20 mph to reduce the severity of injuries from road traffic accidents). In such cases, non‑RCT evidence gives sufficient certainty that an important effect exists, but due consideration needs to be given to the following:

  • whether an adverse outcome is likely if the person is not treated (evidence from, for example, studies of the natural history of a condition)

  • if the intervention gives a large benefit or shows a clear dose–response gradient that is unlikely to be a result of bias (evidence from, for example, historically controlled studies)

  • whether the side effects of the intervention are acceptable (evidence from, for example, case series)

  • if there is no alternative intervention

  • if there is a convincing pathophysiological basis for the intervention.

Review questions about cost effectiveness

For more information on review questions about cost effectiveness, see chapter 7.

Review questions about the accuracy of diagnostic tests

Review questions about diagnosis are concerned with the performance of a diagnostic test or test strategy. Diagnostic tests can include identification tools, physical examination, history‑taking, laboratory or pathological examination and imaging tests.

Broadly, review questions that can be asked about a diagnostic test are of 3 types:

  • questions about the diagnostic accuracy of a test or a number of tests individually against a comparator (the reference standard)

  • questions about the diagnostic accuracy of a test strategy (such as serial testing) against a comparator (the reference standard)

  • questions about the value of using the test.

In studies of the accuracy of a diagnostic test, the results of the test under study (the index test[s]) are compared with those of the best available test (the reference standard) in a sample of people. It is important to be clear when deciding on the question what the exact proposed use of the test is (for example, as an identification tool, an initial 'triage' test or after other tests).

The PICO framework can be useful when formulating review questions about diagnostic test accuracy (see box 4.3). However other frameworks (such as PPIRT; population, prior tests, index test, reference standard, target condition) can be used if helpful.

Box 4.3 Features of a well‑formulated review question on diagnostic test accuracy using the PICO framework

Population: To which populations would the test be applicable? How can they be best described? Are there subgroups that need to be considered?

Intervention (index test[s]): The test or test strategy being evaluated.

Comparator: The test with which the index test(s) is/are being compared, usually the reference standard (the test that is considered to be the best available method for identifying the presence or absence of the condition of interest – this may not be the one that is routinely used in practice).

Target condition: The disease, disease stage or subtype of disease that the index test(s) and the reference standard are being used to identify.

Outcome: The diagnostic accuracy of the test or test strategy for detecting the target condition. This is usually reported as test parameters, such as sensitivity, specificity, predictive values, likelihood ratios, or – when multiple thresholds are used – a receiver operating characteristic (ROC) curve. This should also include issues of importance to people having the test, such as acceptability.

A review question about diagnostic test accuracy is usually best answered by a cross-sectional study in which both the index test(s) and the reference standard are performed on the same sample of people. Case–control studies are also used to assess the accuracy of diagnostic tests, but this type of study design is more prone to bias (and often results in inflated estimates of diagnostic test accuracy). Further advice on the types of study to include in reviews of diagnostic test accuracy can be found in the Cochrane handbook for diagnostic test accuracy reviews.

Box 4.4 Examples of review questions on diagnostic test accuracy

In children and young people under 16 years of age with a petechial rash, can non‑specific laboratory tests (C‑reactive protein, white blood cell count, blood gases) help to confirm or refute the diagnosis of meningococcal disease?

What are the most appropriate methods/instruments for case identification of conduct disorders in children and young people?

Although assessing test accuracy is important for establishing the usefulness of a diagnostic test, the value of a test lies in how useful it is in guiding treatment decisions or the provision of services, and ultimately in improving outcomes. 'Test and treat' studies compare outcomes for people who have a new diagnostic test (in combination with a management strategy) with outcomes of people who have the usual diagnostic test and management strategy. These types of study are not very common. If there is a trade‑off between costs, benefits and harms of the tests, a decision-analytic model may be useful (see Lord et al. 2006).
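The trade-off that a decision-analytic model formalises can be sketched with invented numbers for prevalence, test accuracy, and the benefit and harm of treatment. A real model (as in Lord et al. 2006) would also include costs, test harms and downstream outcomes; this sketch only compares expected net benefit per person across three simple strategies:

```python
def expected_net_benefit(prevalence: float, sens: float, spec: float,
                         benefit: float, harm: float) -> dict[str, float]:
    """Expected net benefit per person of three simple strategies.
    benefit = gain from treating someone with the condition;
    harm = loss from treating someone without it. All values invented."""
    p, q = prevalence, 1 - prevalence
    return {
        "treat_all": p * benefit - q * harm,
        "treat_none": 0.0,
        # treat only people who test positive: true positives gain,
        # false positives are harmed, false negatives miss the benefit
        "test_and_treat": p * sens * benefit - q * (1 - spec) * harm,
    }


strategies = expected_net_benefit(
    prevalence=0.2, sens=0.9, spec=0.8, benefit=1.0, harm=0.2)
best = max(strategies, key=strategies.get)
print(best, round(strategies[best], 3))  # test_and_treat 0.148
```

With different invented inputs (for example, a very high prevalence or a harmless treatment) "treat all" can dominate, which is exactly the kind of threshold question a full decision-analytic model is built to answer.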

Review questions aimed at establishing the value of a diagnostic test in practice can be structured in the same way as questions about interventions. The best study design is an RCT. Review questions about the safety of a diagnostic test should be structured in the same way as questions about the safety of interventions.

Review questions about prognosis

Prognosis describes the likelihood of a particular outcome, such as disease progression, the development of higher levels of need, or length of survival, after diagnosis or for a person with a particular set of risk markers. A prognosis is based on the characteristics of the person or user of services ('prognostic factors'). These prognostic factors may be disease specific (such as the presence or absence of a particular disease feature) or demographic (such as age or sex), and may also include the likely response to treatment or care and the presence of comorbidities. A prognostic factor does not need to be the cause of the outcome, but should be associated with (in other words, predictive of) that outcome.

Information about prognosis can be used within guidelines to:

  • classify people into risk categories (for example, cardiovascular risk or level of need) so that different interventions can be applied

  • define subgroups of populations that may respond differently to interventions

  • identify factors that can be used to adjust for case mix (for example, in investigations of heterogeneity)

  • help determine longer‑term outcomes not captured within the timeframe of a trial (for example, for use in an economic model).

Review questions about prognosis address the likelihood of an outcome for a person or user of services from a population at risk for that outcome, based on the presence of a proposed prognostic factor.

Review questions about prognosis may be closely related to questions about aetiology (cause of a disease or need) if the outcome is viewed as the development of the disease or need based on a number of risk factors. They may also be closely related to questions about interventions if one of the prognostic factors is treatment. However, questions about interventions are usually better addressed by controlling for prognostic factors.

Box 4.5 Examples of review questions on prognosis

Are there factors related to the individual (characteristics either of the individual or of the act of self‑harm) that predict outcome (including suicide, non‑fatal repetition, other psychosocial outcomes)?

For people who are opioid dependent, are there particular groups that are more likely to benefit from detoxification?

A review question about prognosis is best answered using a prospective cohort study with multivariate analysis. Case–control studies are not usually suitable for answering questions about prognosis because they do not estimate baseline risk, but give only an estimate of the likelihood of the outcome for people with and without the prognostic factor.
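The point about baseline risk can be made concrete with invented counts. A prospective cohort observes the outcome rate in people with and without the prognostic factor, so absolute risks and a risk ratio can be calculated; a case–control study samples on outcome status, so only an odds ratio is recoverable:

```python
# Invented cohort: 100 people with the prognostic factor and 100 without,
# followed prospectively for the outcome.
cohort = {"factor": {"event": 30, "no_event": 70},
          "no_factor": {"event": 10, "no_event": 90}}


def risk(group: str) -> float:
    """Absolute outcome risk in a cohort group."""
    g = cohort[group]
    return g["event"] / (g["event"] + g["no_event"])


baseline_risk = risk("no_factor")               # only a cohort gives this
risk_ratio = risk("factor") / risk("no_factor")

# A case-control study drawn from the same population yields only the
# odds ratio, because the ratio of events to non-events in the sample is
# set by the investigator rather than observed.
odds_ratio = (30 / 70) / (10 / 90)

print(baseline_risk, round(risk_ratio, 2), round(odds_ratio, 2))
# 0.1 3.0 3.86
```

Note that the odds ratio (3.86) overstates the risk ratio (3.0) here; the two only approximate each other when the outcome is rare.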

Review questions about views and experiences of people using or providing services, family members or carers and the public

Most review questions should ensure that the views and experiences of people using or providing services, family members or carers and the public are considered when deciding on the type of evidence review, the type of evidence, and how these views will be sought.

In some circumstances, specific questions should be formulated about the views and experience of people using services, family members or carers and the public to ensure that the question is person‑centred. The views and experiences of those providing services may also be relevant. These views and experiences, which may vary for different populations, can cover a range of dimensions, including:

  • views and experiences of people using or providing services, family members or carers or the public on the effectiveness and acceptability of given interventions

  • preferences of people using services, family members or carers or the public for different treatment or service options, including the option of foregoing treatment or care

  • views and experiences of people using or providing services, family members or carers or the public on what constitutes a desired, appropriate or acceptable outcome.

Such questions should be clear and focused, directly relevant to the topic, and should address experiences of an intervention or approach that are considered important by people using or providing services, family members or carers or the public. Such questions can address a range of issues, including:

  • information and support needs specific to the topic

  • elements of care or a service that are of particular importance to people using or providing services

  • factors that encourage or discourage people from using interventions or services

  • the specific needs of certain groups of people using services, including those sharing the characteristics protected by the Equality Act (2010)

  • which outcomes reported in studies of interventions are most important to people using services, family members or carers or the public.

As for other types of review question, questions that are broad and lack focus (for example, 'What is the experience of living with condition X?') should be avoided.

NICE guidelines should not reiterate or re‑phrase recommendations from the NICE guideline on patient experience in adult NHS services, the NICE guideline on service user experience in adult mental health or other NICE guidelines on the experience of people using services. However, whether there are specific aspects of views or experiences that need addressing for a topic should be considered during the scoping of every guideline. Specific aspects identified during scoping should be included in the scope if they are not covered by existing guidelines and are supported as a priority area. These are likely to be topic specific and should be well defined and focused.

Box 4.6 Examples of review questions on the views or experiences of people using or providing services, family members or carers or the public

What information and support should be offered to children with atopic eczema and their families and carers?

What elements of care on the general ward are viewed as important by patients following their discharge from critical care areas?

How does culture affect the need for and content of information and support for bottle or breastfeeding?

What are the perceived risks and benefits of immunisation among parents, carers or young people? Is there a difference in perceived benefits and risks between groups whose children are partially immunised and those who have not been immunised?

A review question about the views or experiences of people using or providing services, family members or carers or the public is likely to be best answered using qualitative studies and cross-sectional surveys, although information on views and experiences is also becoming increasingly available as part of wider intervention studies.

When there is a lack of evidence on issues important to people affected by the guideline (including families and carers, where appropriate), the Developer should consider seeking information via a targeted call for evidence (see section 5.5), or approaching key stakeholders who may have access to additional data sources, such as surveys of user views and experiences, to present as expert testimony (see section 3.5).

Exceptionally, when the information gap cannot be addressed in other ways, the Developer may commission a consultation exercise with people affected by the guideline to obtain their views on specific aspects of the scope or issues raised by the Committee, or to validate early draft recommendations before consultation with registered stakeholders. (For more information, see the section on fieldwork with practitioners and targeted consultation with people using services and appendix B.) The Developer should document the rationale, together with a proposal for the work, including possible methods and the anticipated costs. The proposal should be discussed and agreed with members of NICE staff with a quality assurance role, and approved by the Centre Director. Where the work is approved, the rationale and methods should be documented in the guideline.

Review questions about service delivery

Guidelines often cover areas of service delivery. These might include questions about how the delivery of services could be improved.

Box 4.7 Examples of review questions on service delivery

In people with hip fracture, what is the clinical and cost effectiveness of hospital‑based multidisciplinary rehabilitation on functional status, length of stay in secondary care, mortality, place of residence/discharge, hospital readmission and quality of life?

What is the clinical and cost effectiveness of surgeon seniority (consultant or equivalent) in reducing the incidence of mortality, the number of people requiring reoperation, and poor outcome in terms of mobility, length of stay, wound infection and dislocation?

What types of needle and syringe programmes (including their location and opening times) are effective and cost effective?

How can access to immunisations be increased?

What regional or city level commissioning models, service models, systems and service structures are effective in:

  • reducing diagnostic delay for TB?

  • improving TB contact tracing?

  • improving TB treatment completion?

A review question about the effectiveness of service delivery models is usually best answered by an RCT. However, a wide variety of methodological approaches and study designs have been used. Other types of questions on service delivery are also likely to be answered using evidence from study types other than RCTs.

Depending on the type of review questions, the PICO framework may be appropriate but other frameworks can be used.

When a topic includes review questions on service delivery, approaches described in NICE's Interim methods guide for developing service guidance may be used. Such methods should be agreed with NICE and should be clearly documented in the final guideline.

Review questions about epidemiology

Epidemiological reviews describe the problem under investigation and can be used to inform other review questions. For example, an epidemiological review of accidents would provide information on the most common accidents, as well as morbidity and mortality statistics, and data on inequalities in the impact of accidents.

Examples of review questions that might benefit from an epidemiological review include:

  • What are the patterns of physical activity among children from different populations and of different ages in England?

  • Which populations of children are least physically active and at which developmental stage are all children least physically active?

  • What effect does physical activity have on children's health and other outcomes in the short and long term?

The structure of the question and the type of evidence will depend on the aim of the review.

Another use of epidemiological reviews is to describe relationships between epidemiological factors and outcomes – a correlates review. If an epidemiological review has been carried out, information will have been gathered from observational studies on the nature of the problem. However, further analysis of this information – in the form of a correlates review – may be needed to establish the epidemiological factors associated with any positive or negative behaviours or outcomes.

Examples of review questions that might benefit from a correlates review include:

  • What factors are associated with children's or young people's physical activity and how strong are the associations?

  • What are the factors that encourage or discourage people from taking part in physical activity?

  • How do the factors that encourage or discourage people from taking part differ for the least active subpopulations and age groups?

Review questions about the implementation of recommendations

Review questions on how best to implement recommendations may be considered appropriate for some topics.

The type of review question depends on the issue but is likely to fit into 1 of the types described above (for example, 'What is the effectiveness of an intervention to increase a practitioner's awareness of a specific condition?' is an example of an intervention question and would be addressed using the same methods as any other intervention question). The question 'What are the views of practitioners who provide this service?' would be addressed using the same methods as those used to address questions about views and experiences of people using services.

When deciding if review questions about implementation are appropriate for a guideline, current practice should be considered to identify areas of inappropriate variation in which recommendations about implementation would be of value.

4.4 Evidence used to inform recommendations

In order to formulate recommendations, the guideline Committee needs to consider a range of evidence about what works generally, why it works, and what might work (and how) in specific circumstances. The Committee needs evidence from multiple sources, extracted for different purposes and by different methods.

Scientific evidence

Scientific evidence is explicit, transparent and replicable. It can be context free or context sensitive. Context‑free scientific evidence assumes that evidence can be independent of the observer and context. It can be derived from evidence reviews or meta‑analyses of quantitative studies, individual studies or theoretical models. Context‑sensitive scientific evidence looks at what works and how well in real‑life situations. It includes information on attitudes, implementation, organisational capacity, forecasting, economics and ethics. It is mainly derived using social science and behavioural research methods, including quantitative and qualitative research studies, surveys, theories, cost‑effectiveness analyses and mapping reviews. Sometimes, it is derived using the same techniques as context‑free scientific evidence. Context‑sensitive evidence can be used to complement context‑free evidence, and so provide the basis for more specific and practical recommendations. It can be used to:

  • supplement evidence on effectiveness (for example, to look at how factors such as occupation, educational attainment and income influence effectiveness)

  • inform the development of logic models (see section 2.3 and appendix A) and causal pathways (for example, to explain what factors predict teenage parenthood)

  • provide information about the characteristics of the population (including social circumstances and the physical environment) and about the process of implementation

  • describe psychological processes and behaviour change.

Quantitative studies may be the primary source of evidence to address review questions on:

  • the effectiveness of interventions or services (including information on what works, for whom and under which circumstances)

  • measures of association between factors and outcomes

  • variations in delivery and implementation for different groups, populations or settings

  • resources and costs of interventions or services.

Examples of the types of review questions that are addressed using quantitative evidence include:

  • How well do different interventions work (for example, does this vary according to age, severity of disease)?

  • What other factors affect how well an intervention works?

  • What resources are needed to deliver an intervention effectively, and does this differ depending on location?

Scientific evidence need not be quantitative information alone.

Qualitative studies may be the primary source of evidence to address review questions on:

  • the experiences of people using services, family members or carers or practitioners (including information on what works, for whom and under which circumstances)

  • the views of people using services, family members or carers, the public or practitioners

  • opportunities for and factors hindering improvement (including issues of access or acceptability for people using services or providers)

  • variations in delivery and implementation for different groups, populations or settings

  • factors that may help or hinder implementation

  • social context and the social construction and representation of health and illness

  • background on context, from the point of view of an observer (and not necessarily that of a person using services or a practitioner)

  • theories of, or reasons for, associations between interventions and outcomes.

Examples of the types of review questions that are addressed using qualitative evidence include:

  • How do different groups of practitioners, people using services or stakeholders perceive the issue (for example, does this vary according to profession, age, gender or family origin)?

  • What social and cultural beliefs, attitudes or practices might affect this issue?

  • How do different groups perceive the intervention or available options? What are their preferences?

  • What approaches are used in practice? How effective are they in the views of different groups of practitioners, people using services or stakeholders?

  • What is a desired, appropriate or acceptable outcome for people using services? What outcomes are important to them? What do practitioner, service user or stakeholder groups perceive to be the factors that may help or hinder change in this area?

  • What do people affected by the guideline think about current or proposed practice?

  • Why do people make the choices they do or behave in the way that they do?

  • How is a public health issue represented in the media and popular culture?

Quantitative and qualitative information can also be used to supplement logic models (see section 2.3 and appendix A). They can also be combined in a single review (mixed methods) when appropriate (for example, to address review questions about factors that help or hinder implementation or to assess why an intervention does or does not work).

Examples of questions for which qualitative evidence might supplement quantitative evidence include:

  • How acceptable is the intervention to people using services or practitioners?

  • How accessible is the intervention or service to different groups of people using services? What factors affect its accessibility?

  • Does the mode or organisation of delivery (including the type of relevant practitioner, the setting and language) affect user perceptions?

Often reviews of quantitative or qualitative studies (secondary evidence) already exist. Existing reviews may include systematic reviews (with or without a meta‑analysis or individual patient data analysis) and non‑systematic literature reviews and meta‑analyses. Well‑conducted systematic reviews (such as Cochrane intervention and diagnostic test accuracy reviews) may be of particular value as sources of evidence. Some reviews may be more useful as background information or as additional sources of potentially relevant primary studies. This is because they may:

  • not cover inclusion and exclusion criteria relevant to the guideline topic and its parameters (for example, comparable research questions, relevant outcomes, settings, population groups or time periods)

  • group together different outcome or study types

  • include data that are difficult or impossible to separate appropriately

  • not provide enough data to develop recommendations (for example, some reviews do not provide sufficient detail on specific interventions, making it necessary to refer to the primary studies).

Conversely, some high‑quality systematic reviews, such as Cochrane reviews, may provide enhanced data not available in the primary studies. For example, authors of the review may have contacted the authors of the primary studies or other related bodies in order to include additional relevant data in their review, or an individual patient data analysis may have been conducted. In addition, if high‑quality reviews are in progress (protocol published) at the time of development of the guideline, the Developer may choose to contact the authors for permission to access pre‑publication data for inclusion in the guideline (see section 5.5).

Reviews can also be useful when developing the scope and when defining review questions, outcomes and outcome measures for the guideline evidence reviews. The discussion section of a review can also help to identify some of the limitations or difficulties associated with a topic, for example, through a critical appraisal of the state of the evidence base. The information specialists may also wish to consider the search strategies of high‑quality systematic reviews. These can provide useful search approaches for capturing different key concepts. They can also provide potentially useful search terms and combinations of terms, which have been carefully tailored for a range of databases.

High‑quality reviews that are directly applicable to the guideline review question can be used as a source of effectiveness data, particularly for complex organisational, behavioural and population level questions.

When considering using results from an existing high‑quality review, due account should be taken of the following:

  • Whether the parameters (for example, research question, inclusion and exclusion criteria) of the review are sufficiently similar to those of the guideline topic to answer 1 or more specific review questions. If so, a search should be undertaken for primary studies published after the search date covered by the existing review.

  • Whether the use of existing high‑quality reviews will be sufficient to address the guideline review question if the evidence base for the guideline topic is very large.

Colloquial evidence

'Colloquial evidence' can complement scientific evidence or provide missing information on context. It can come from expert testimony (see section 3.5), from members of the Committee, from a reference group of people using services (see section 3.2 and appendix B) or from comments from registered stakeholders (see section 10.1). Colloquial evidence includes evidence about values (including political judgement), practical considerations (such as resources, professional experience or expertise and habits or traditions, the experience of people using services) and the interests of specific groups (views of lobbyists and pressure groups).

An example of colloquial evidence is expert testimony. Sometimes oral or written evidence from outside the Committee is needed for developing recommendations, if limited primary research is available or more information on current practice is needed to inform the Committee's decision‑making. Inclusion criteria for oral or written evidence specify the population and interventions for each review question, to allow filtering and selection of oral and written evidence submitted to the Committee.

Other evidence

Depending on the nature of the guideline and the topic, other sources of relevant evidence such as reports, audits, and standard operating procedures may be included. The reasonableness and rigour of the process used to develop the evidence is assessed as well as the relevance of the evidence to the topic under consideration.

See also chapter 8 on linking and using other guidance.

4.5 Planning the evidence review

For each guideline evidence review, a review protocol is prepared that outlines the background, the objectives and the planned methods. This protocol will explain how the review is to be carried out and will help the reviewer to plan and think through the different stages. In addition, the review protocol should make it possible for the review to be repeated by others at a later date. A protocol should also make it clear how equality issues have been considered in planning the review work, if appropriate.

Structure of the review protocol

The protocol should describe any differences from the methods described in this manual (chapters 5–7), rather than duplicating the methodology stated here. It should include the components outlined in table 4.1.

When a guideline updates a published guideline, the protocol from the published guideline, if available, should be used as the basis for outlining how the review question will be addressed. Information gathered during the formal check of the need to update the guideline should also be added. This might include new interventions and comparators, and extension of the population.

Table 4.1 Components of the review protocol

Component

Description

Review question(s)

The review question(s)

Context and objectives

Short description; for example, 'To estimate the effectiveness and cost effectiveness of…' or 'To estimate the acceptability of…'

Searches

To include:

  • sources to be searched (see chapter 5)

  • plans to use any supplementary search techniques, when known at the protocol development stage, and the rationale for their use (see section 5.4)

  • limits to be applied to the search (see section 5.4)

Types of study to be included

Inclusion and exclusion criteria, based on the 'ideal' study designs to be included, and the study designs to be included if the 'ideal' study designs are not available. If a decision is made to include only 'ideal' study designs, this should also be documented here

Participants/population

Inclusion and exclusion criteria, based on the structured framework used (for example, PICO or SPICE), such as setting or age

Intervention(s), exposure(s)

Inclusion and exclusion criteria, based on the intervention, treatment, exposure or approach that will be included

Comparator(s)/control

Inclusion and exclusion criteria, based on the alternative(s) to the intervention being considered

Outcome(s)

Inclusion and exclusion criteria, based on the outcomes that will be considered

Data extraction and quality assessment

Brief details of:

  • data extraction

  • how the quality assessment and applicability will be presented (by whole study, or by outcome using GRADE)

  • any deviations from the methods and processes described in this manual

Strategy for data synthesis

Brief details of the proposed approach to data synthesis and analysis, and details of any alternative analysis to be undertaken if the planned analysis is not possible

Analysis of subgroups or subsets

Brief details of any subgroups that will be considered (for example, population or intervention types)

Any other information or criteria for inclusion/exclusion

For example, the equality issues that will be considered when reviewing the evidence, based on the equality impact assessment conducted during scoping of the guideline
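For teams that keep protocols in a structured, machine‑readable form, the components in table 4.1 can be sketched as a single record. The following Python sketch is purely illustrative: the field names, class name and example values are hypothetical assumptions, not a format prescribed by NICE or this manual.

```python
from dataclasses import dataclass, field

# Hypothetical structured record mirroring the components of table 4.1.
# Field names and example values are illustrative only.
@dataclass
class ReviewProtocol:
    review_question: str
    context_and_objectives: str
    searches: list[str]                 # sources, supplementary techniques, limits (chapter 5)
    study_types: list[str]              # 'ideal' designs first, fallback designs after
    population: str                     # inclusion/exclusion criteria (e.g. PICO 'P')
    interventions: list[str]            # PICO 'I'
    comparators: list[str]              # PICO 'C'
    outcomes: list[str]                 # PICO 'O'
    data_extraction_and_quality: str    # e.g. quality assessed by outcome using GRADE
    synthesis_strategy: str             # planned analysis plus any alternative analysis
    subgroups: list[str] = field(default_factory=list)
    other_criteria: str = ""            # e.g. equality issues from the scoping assessment

# Example instance with placeholder content.
protocol = ReviewProtocol(
    review_question="How effective is intervention X for population Y?",
    context_and_objectives="To estimate the effectiveness and cost effectiveness of X.",
    searches=["MEDLINE", "Embase", "Cochrane CENTRAL"],
    study_types=["randomised controlled trials", "controlled before-and-after studies"],
    population="Adults with condition Z",
    interventions=["intervention X"],
    comparators=["usual care"],
    outcomes=["mortality", "quality of life", "adverse events"],
    data_extraction_and_quality="Quality and applicability assessed by outcome using GRADE",
    synthesis_strategy="Pairwise meta-analysis; narrative synthesis if pooling is not possible",
    subgroups=["age bands", "disease severity"],
    other_criteria="Equality issues identified during scoping",
)

print(protocol.review_question)
```

A record like this makes it straightforward to check that every component of table 4.1 has been addressed before the protocol is submitted for quality assurance.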

Process for developing the review protocol

The review protocol should be produced by the evidence review team after the review question has been agreed and before starting the evidence review. It should then be reviewed and approved by NICE staff with responsibility for quality assurance.

The review protocol, principal search strategy (see section 5.4) and a version of the economic plan (see section 7.5) are published on the NICE website at least 6 weeks before the release of the draft guideline. Any changes made to a protocol in the course of guideline development should be agreed with NICE staff with responsibility for quality assurance and should be described. Consideration should be given to registering review protocols on the PROSPERO or SRDR databases.

4.6 References and further reading

Centre for Reviews and Dissemination (2009) Systematic reviews: CRD's guidance for undertaking reviews in health care. Centre for Reviews and Dissemination, University of York

Cochrane Diagnostic Test Accuracy Working Group (2008) Cochrane handbook for diagnostic test accuracy reviews. (Updated December 2013). The Cochrane Collaboration

Craig P, Dieppe P, McIntyre S et al. on behalf of the MRC (2008) Developing and evaluating complex interventions: new guidance. London: Medical Research Council

Craig P, Cooper C, Gunnell D et al. on behalf of the MRC (2011) Using natural experiments to evaluate population health interventions: guidance for producers and users of evidence. London: Medical Research Council

Harden A, Garcia J, Oliver S et al. (2004) Applying systematic review methods to studies of people's views: an example from public health research. Journal of Epidemiology and Community Health 58: 794–800

Higgins JPT, Green S, editors (2008) Cochrane handbook for systematic reviews of interventions, version 5.1.0 (updated March 2011). The Cochrane Collaboration

Kelly MP, Swann C, Morgan A et al. (2002) Methodological problems in constructing the evidence base in public health. London: Health Development Agency

Kelly MP, Moore TA (2012) The judgement process in evidence-based medicine and health technology assessment. Social Theory and Health 10:1–19

Lomas J, Culyer T, McCutcheon C et al. (2005) Conceptualizing and combining evidence for health system guidance: final report. Ottawa: Canadian Health Services Research Foundation

Lord SJ, Irwig L, Simes RJ (2006) When is measuring sensitivity and specificity sufficient to evaluate a diagnostic test, and when do we need randomized trials? Annals of Internal Medicine 144: 850–5

Muir Gray JM (1996) Evidence-based healthcare. London: Churchill Livingstone

Ogilvie D, Hamilton V, Egan M et al. (2005) Systematic reviews of health effects of social interventions: 1. Finding the evidence: how far should you go? Journal of Epidemiology and Community Health 59: 804–8

Ogilvie D, Egan M, Hamilton V et al. (2005) Systematic reviews of health effects of social interventions: 2. Best available evidence: how low should you go? Journal of Epidemiology and Community Health 59: 886–92

Petticrew M (2003) Why certain systematic reviews reach uncertain conclusions. British Medical Journal 326: 756–8

Petticrew M, Roberts H (2003) Evidence, hierarchies, and typologies: horses for courses. Journal of Epidemiology and Community Health 57: 527–9

Popay J, Rogers A, Williams G (1998) Rationale and standards for the systematic review of qualitative literature in health services research. Qualitative Health Research 8: 341–51

Popay J, editor (2006) Moving beyond effectiveness in evidence synthesis: methodological issues in the synthesis of diverse sources of evidence. London: National Institute for Health and Clinical Excellence

Richardson WS, Wilson MS, Nishikawa J et al. (1995) The well‑built clinical question: a key to evidence‑based decisions. American College of Physicians Journal Club 123: A12–3

Rychetnik L, Frommer M, Hawe P et al. (2002) Criteria for evaluating evidence on public health interventions. Journal of Epidemiology and Community Health 56: 119

Tannahill A (2008) Beyond evidence – to ethics: a decision making framework for health promotion, public health and health improvement. Health Promotion International 23: 380–90

Victora C, Habicht J, Bryce J (2004) Evidence-based public health: moving beyond randomized trials. American Journal of Public Health 94: 400–5

Woolgar S (1988) Science: the very idea. London: Routledge