7 Incorporating economic evaluation

7.1 Introduction

This chapter describes the role of economics in developing NICE guidelines, and suggests possible approaches to use when considering economic evidence. It also sets out the principles for conducting new economic modelling if there is insufficient published evidence that can be used to assess the cost effectiveness of key interventions, services or programmes.

It should be noted that significant methodological developments in this area are anticipated, and this manual will be updated in response to these. Developments in methodology for considering the economic aspects of delivering services will also be taken into account.

7.2 The role of economics in guideline development

Economic evaluation compares the costs and consequences of alternative courses of action. Formally assessing the cost effectiveness of an intervention, service or programme can help decision-makers ensure that maximum gain is achieved from limited resources. If resources are used for interventions or services that are not cost effective, the population as a whole gains fewer benefits.

It is particularly important for committee members to understand that economic analysis is not only about estimating the resource consequences of a guideline recommendation: it is concerned with evaluating the costs of alternative courses of action in relation to their benefits (including benefits to quality of life) and harms. NICE's principles on social value judgements usually take precedence over economic considerations.

Guideline recommendations should be based on the balance between the estimated costs of the interventions or services and their expected benefits compared with an alternative (that is, their 'cost effectiveness'). In general, the committee should be increasingly certain of the cost effectiveness of a recommendation as the cost of implementation increases.

Common types of health economic analysis are summarised in box 7.1.

Box 7.1 Types of economic analysis

  • Cost-minimisation analysis: a determination of the least costly among alternative interventions that are assumed to produce equivalent outcomes

  • Cost-effectiveness analysis (CEA): a comparison of costs in monetary units with outcomes in quantitative non-monetary units (for example, reduced mortality or morbidity)

  • Cost–utility analysis (CUA): a form of cost-effectiveness analysis that compares costs in monetary units with outcomes in terms of their utility, usually to the patient, measured in quality-adjusted life years (QALYs)

  • Cost–consequence analysis: a form of cost-effectiveness analysis that presents costs and outcomes in discrete categories, without aggregating or weighting them

  • Cost–benefit analysis (CBA): a comparison of costs and benefits, both of which are quantified in common monetary terms

The committee may require more robust evidence on the effectiveness and cost effectiveness of recommendations that are expected to have a substantial impact on resources. Economic analysis must be done if there is no robust evidence of cost effectiveness to support these recommendations, and any remaining uncertainties must be offset by a compelling argument in favour of the recommendation. However, the cost impact or savings potential of a recommendation should not be the sole reason for the committee's decision.

Resource impact is considered in terms of the additional cost or saving above that of current practice for each of the first 5 years of implementing the guideline. Resource impact is defined as substantial if:

  • implementing a single guideline recommendation in England costs more than £1 million per year or

  • implementing the whole guideline in England costs more than £5 million per year.

The aim is to ensure that the guideline does not introduce a cost pressure into the health and social care system unless the committee is convinced of the benefits and cost effectiveness of the recommendations.
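
Purely as an illustration of the thresholds above, a minimal sketch of the 'substantial resource impact' check; the recommendation names and cost figures are hypothetical.

```python
# Illustrative check of whether an estimated resource impact is 'substantial',
# using the thresholds stated above. All cost figures are hypothetical.

PER_RECOMMENDATION_THRESHOLD = 1_000_000  # £ per year, single recommendation, England
WHOLE_GUIDELINE_THRESHOLD = 5_000_000     # £ per year, whole guideline, England

# Hypothetical annual net costs (£) of implementing each recommendation in England;
# negative values are savings.
recommendation_costs = {"rec_1": 250_000, "rec_2": 1_400_000, "rec_3": -300_000}

substantial_recommendations = [
    rec for rec, cost in recommendation_costs.items()
    if cost > PER_RECOMMENDATION_THRESHOLD
]
guideline_total = sum(recommendation_costs.values())

print("Recommendations exceeding £1m/year:", substantial_recommendations)
print("Whole-guideline impact exceeds £5m/year:",
      guideline_total > WHOLE_GUIDELINE_THRESHOLD)
```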

Defining the priorities for economic evaluation should start during scoping of the guideline, and should continue when the review questions are being developed. Questions on economic issues mirror the review questions on effectiveness, but with a focus on cost effectiveness. Health economic input in guidelines typically involves 2 stages. The first is a literature review of published economic evidence to determine whether the review questions set out in the scope have already been assessed by economic evaluations. Reviews of economic evidence identify, present and appraise data from studies of cost effectiveness. They may be considered as part of each review question undertaken for a guideline. If existing economic evidence is inadequate or inconclusive for 1 or more review questions, then the second stage may involve a variety of economic modelling approaches such as adapting existing economic models or building new models from existing data.

Reviews of economic evidence and any economic modelling are quality assured by the developer and a member of NICE staff with responsibility for quality assurance. The nature of the quality assurance will depend on the type of economic evaluation, but will consider the evaluation in terms of the appropriate reference case and be based on a methodology checklist (for example, those in appendix H).

7.3 The reference case

A guideline may consider a range of interventions, commissioned by various organisations and resulting in different types of benefits (outcomes). It is crucial that reviews of economic evidence, and economic evaluations undertaken to inform guideline development, adopt a consistent approach appropriate to the type of interventions assessed. The 'reference case' specifies the methods considered consistent with the objective of maximising benefits from limited resources. NICE is interested in benefits to patients (for interventions with health outcomes in NHS and personal social services [PSS] settings), to individuals and community groups (for interventions with health and non-health outcomes in public sector settings) and to people using services and their carers (for interventions with a social care focus).

Choosing the most appropriate reference case depends on whether or not the interventions undergoing evaluation:

  • are commissioned by the NHS and PSS alone or by any other public sector body

  • focus on social care outcomes.

The reference case should be agreed for each decision problem (relevant to a review question), set out briefly in the scope and detailed in the economic plan. A guideline may use a different reference case for different decision problems if appropriate (for example, if a guideline reviews interventions with non‑health- and/or social care-related outcomes). This should be agreed with NICE before any economic evaluation is conducted.

Table 7.1 summarises the reference case according to the interventions being evaluated.

Table 7.1 Summary of the reference case

Each element of assessment is shown below for the three reference cases: interventions funded by the NHS and PSS with health outcomes ('NHS and PSS'); interventions funded by the public sector with health and non-health outcomes ('public sector'); and interventions funded by the public sector with a social care focus ('social care').

  • Defining the decision problem (all reference cases): the scope developed by NICE.

  • Comparator:
    – NHS and PSS: interventions routinely used in the NHS, including those regarded as current best practice
    – Public sector: interventions routinely used in the public sector, including those regarded as best practice
    – Social care: interventions routinely delivered by the public and non-public social care sector1

  • Perspective on costs:
    – NHS and PSS: NHS and PSS; for PSS, include only care that is funded by the NHS (such as 'continuing healthcare' or 'funded nursing care')
    – Public sector and social care: public sector, often reducing to local government; a societal perspective (where appropriate); other perspectives (where appropriate), for example an employer perspective

  • Perspective on outcomes:
    – NHS and PSS: all direct health effects, whether for people using services or, when relevant, other people (principally family members and/or informal carers)
    – Public sector: all health effects on individuals; for local government and other settings, where appropriate, non-health effects may also be included
    – Social care: effects on people for whom services are delivered (people using services and/or carers)

  • Type of economic evaluation:
    – NHS and PSS: cost–utility analysis
    – Public sector and social care: cost–utility analysis (base case); cost-effectiveness analysis; cost–consequences analysis; cost–benefit analysis; cost-minimisation analysis

  • Synthesis of evidence on outcomes (all reference cases): based on a systematic review.

  • Time horizon (all reference cases): long enough to reflect all important differences in costs or outcomes between the interventions being compared.

  • Measuring and valuing health effects (all reference cases): QALYs2; the EQ‑5D3 is the preferred measure of health-related quality of life in adults.

  • Measure of non-health effects:
    – NHS and PSS: not applicable
    – Public sector: where appropriate, to be decided on a case-by-case basis
    – Social care: capability or social care-related quality of life measures where an intervention results in both health and capability or social care outcomes

  • Source of data for measurement of quality of life (all reference cases): reported directly by people using services and/or carers.

  • Source of preference data for valuation of changes in health-related quality of life (all reference cases): a representative sample of the UK population.

  • Discounting (all reference cases): the same annual rate for both costs and health effects (currently 3.5%). Sensitivity analyses using rates of 1.5% for both costs and health effects may be presented alongside the reference-case analysis, particularly for public health interventions. In certain cases, cost-effectiveness analyses are very sensitive to the discount rate used; in this circumstance, analyses that use a non-reference-case discount rate for costs and outcomes may be considered.

  • Equity considerations – QALYs (all reference cases): a QALY has the same weight regardless of the other characteristics of the people receiving the health benefit.

  • Equity considerations – other (all reference cases): equity considerations relevant to specific topics, and how these were addressed in the economic evaluation, must be reported.

  • Evidence on resource use and costs (all reference cases): costs should relate to the perspective used and should be valued using the prices relevant to that perspective. Costs borne by people using services and the value of unpaid care may also be included if they contribute to outcomes.

1 Social care costs are the costs of interventions that have been commissioned or paid for, in full or in part, by non-NHS organisations.

2 A QALY is a measure of health effects based on patient-reported changes in health-related quality of life; it combines quantity of life and health-related quality of life into a single measure of health gain.

3 See NICE position statement on the EQ-5D-5L

Abbreviations: PSS, personal social services; QALY, quality-adjusted life year.
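
As an illustration of the discounting element in table 7.1, a minimal sketch applying the 3.5% reference-case rate and the 1.5% sensitivity rate to a hypothetical stream of costs and QALYs; all figures are invented.

```python
# Minimal discounting sketch: present value of hypothetical future costs and QALYs
# at the reference-case rate (3.5%) and the sensitivity-analysis rate (1.5%).

def discounted_total(values_by_year, annual_rate):
    """Sum a stream of yearly values, discounting year t by 1 / (1 + rate) ** t."""
    return sum(v / (1 + annual_rate) ** t for t, v in enumerate(values_by_year))

# Hypothetical annual costs (£) and QALYs over a 5-year horizon (year 0 undiscounted)
costs = [10_000, 2_000, 2_000, 2_000, 2_000]
qalys = [0.70, 0.72, 0.71, 0.69, 0.68]

for rate in (0.035, 0.015):
    print(f"rate {rate:.1%}: "
          f"discounted costs £{discounted_total(costs, rate):,.0f}, "
          f"discounted QALYs {discounted_total(qalys, rate):.3f}")
```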

Interventions funded by the NHS and PSS with health outcomes

For decision problems where the intervention evaluated is solely commissioned by the NHS and does not have a clear focus on non-health outcomes, the reference case for 'interventions funded by the NHS and PSS with health outcomes' should be chosen.

More details on methods of economic evaluation for interventions with health outcomes in NHS and PSS settings can be found in NICE's guide to the methods of technology appraisal 2013. This includes a reference case, which specifies the methods considered by NICE to be the most appropriate for analysis when developing technology appraisal guidance. The reference case is consistent with the NHS objective of maximising health gain from limited resources.

All relevant NHS and PSS costs that change as a result of an intervention should be taken into account. Important non-NHS and PSS costs should also be identified and considered for inclusion in sensitivity analysis, or to aid decision-making. These may include costs to other central government departments and local government. Service recommendations are likely to have additional costs, which include implementation costs not usually included in the analysis and costs to other government budgets, such as social care. Implementation costs should be included in a sensitivity analysis, where relevant, while costs to other government budgets can be presented in a separate analysis to the base case.

Productivity costs and costs borne by people using services and carers that are not reimbursed by the NHS or PSS should usually be excluded from any analyses (see the guide to the methods of technology appraisal 2013). That is, a societal perspective will not normally be used.

Interventions funded by the public sector with health and non-health outcomes

For decision problems where the interventions evaluated are commissioned in full or in part by non‑NHS public sector and other bodies, the reference case for 'interventions funded by the public sector with health and non-health outcomes' should be chosen. For the base-case analysis, a cost–utility analysis should be done using a cost per QALY (quality-adjusted life year) where possible.

This reference case may be most appropriate for public health interventions paid for by an arm of government, and would consider all the costs of implementing the intervention, and changes to downstream costs. In some cases, the downstream costs are negative, and refer to cost savings. For example, an intervention such as increasing physical activity, whose effects may include preventing type 2 diabetes, may be paid for by local government, but may result in cost savings to the NHS in the form of fewer or delayed cases of diabetes. A public sector cost perspective would aggregate all these costs and cost savings. A narrower local government cost perspective would consider only the cost of implementation, whereas an NHS cost perspective would consider only the cost savings. When examining interventions that are not paid for by an arm of government (such as workplace interventions), the perspective on costs should be discussed and agreed with NICE staff with responsibility for quality assurance.
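
To make the difference between perspectives concrete, the sketch below uses invented figures for a programme of the kind described above: local government pays for delivery and the NHS accrues downstream savings. Only the public sector perspective nets the two together.

```python
# Hypothetical annual costs (£) for a physical activity programme.
# Negative values are cost savings. All figures are illustrative only.
costs_by_payer = {
    "local_government": 500_000,   # cost of delivering the programme
    "nhs": -350_000,               # downstream savings from fewer or delayed diabetes cases
}

local_government_perspective = costs_by_payer["local_government"]
nhs_perspective = costs_by_payer["nhs"]
public_sector_perspective = sum(costs_by_payer.values())  # aggregates costs and savings

print(f"Local government perspective: £{local_government_perspective:,}")
print(f"NHS perspective:              £{nhs_perspective:,}")
print(f"Public sector perspective:    £{public_sector_perspective:,}")
```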

Productivity costs should usually be excluded from both the reference-case and non-reference-case analyses; exceptions (for example, when evaluating interventions in the workplace) can only be made with the agreement of NICE staff with a quality assurance role.

For public health interventions, all direct health effects for people using services or, when relevant, other people such as family members and/or informal carers will be included. Non-health effects may also be included. When required, the perspective will be widened to include sectors that do not bear the cost of an intervention, but receive some kind of benefit from it.

Interventions with a social care focus

For decision problems where the interventions evaluated have a clear focus on social care outcomes, the reference case on 'interventions with a social care focus' should be chosen. For the base-case analysis, a cost–utility analysis should be done using a cost per QALY approach where possible.

Public sector funding of social care for individual service users is subject to eligibility criteria based on a needs assessment and a financial assessment (means test). Therefore users of social care may have to fund, or partly fund, their own care. A public sector perspective on costs should still be adopted, but should consider different scenarios of funding.

A public sector perspective is likely to be a local authority perspective for many social care interventions, but downstream costs that affect other public sector bodies may be taken into account where relevant, especially if they are a direct consequence of the primary aim of the intervention. When individuals may pay a contribution towards their social care, 2 further perspectives may also be pertinent: a societal perspective (which takes account of changes to the amount that individuals and private firms pay towards the cost of care, on top of the public sector contributions) and an individual perspective (which accounts for changes in individual payments only). The value of unpaid care may also be included in sensitivity analysis, or to aid decision-making. The value of unpaid care should be set at the market value of paid care. Productivity costs should usually be excluded from both the reference-case and non-reference-case analyses; exceptions can only be made with the agreement of NICE staff with responsibility for quality assurance.

For social care interventions, the usual perspective on outcomes will be all effects on people for whom services are delivered including, when relevant, family members and/or informal carers. When required, the perspective may be widened to include sectors that do not bear the cost of an intervention, but receive some kind of benefit from it.

Other perspectives

Other perspectives (for example, employers) may also be used to capture significant costs and effects that are material to the interventions. If other perspectives are used, this should be agreed with NICE staff with responsibility for quality assurance before use.

7.4 Reviewing economic evaluations

Identifying and examining published economic evidence that is relevant to the review questions is an important component of guideline development. The general approach to reviewing economic evaluations should be systematic, focused and pragmatic. The principal search strategy (see section 5.4), including search strategies for health economic evidence, should be posted on the NICE website 6 weeks before consultation on the draft guideline.

Searching for economic evidence

The approach to searching for economic evidence should be systematic. The strategies and criteria used should be stated explicitly in the guideline and applied consistently.

The advice in chapter 5 about identifying the evidence may be relevant to the systematic search for economic evaluations. The types of searches that might be needed are described below.

Initial scoping search to identify economic evaluations

A scoping search may be performed to look for economic evaluations relevant to current practice in the UK and therefore likely to be relevant to decision-making by the committee (see chapter 3). This should cover areas likely to be included in the scope (see chapter 2).

Economic databases (see appendix G) should be searched using the population terms used in the evidence review. Other databases relevant to the topic and likely to include relevant economic evaluations should also be searched using the population terms with a published economics search filter (see section 5.4). At the initial scoping stage, it may be efficient to limit any searches of databases that are sources for the NHS economic evaluation database (NHS EED) to studies indexed after December 2014, when the searches to identify studies for NHS EED ceased.

Economic evaluations of social care interventions may be published in journals that are not identified through standard searches. Pragmatic searches based on references of key articles and contacting authors should be considered for identifying relevant papers.

Further systematic search to identify economic evaluations

For some review questions a full systematic search, covering all appropriate sources (appendix G), should be performed to identify all relevant economic evaluations. There are several methods for identifying economic evaluations and the developer should choose the appropriate method and record the reasons for the choice in the search protocol.

  • All relevant review questions could be covered by a single search using the population search terms, combined with a search filter where appropriate, to identify economic evaluations and health-state utility data.

  • Another approach may be to use the search strategies derived from the review question(s), combined with search filters to identify economic evaluations and health-state utility data. If using this approach, it may be necessary to adapt strategies in some databases to ensure adequate sensitivity (Wood et al. 2017).

  • Another option is to identify economic evaluations and quality-of-life data alongside screening for evidence for effectiveness. Further guidance on searching for economic evaluations is available from SuRe Info.

Selecting relevant economic evaluations

The process for sifting and selecting economic evaluations for assessment is essentially the same as for effectiveness studies (see section 6.1). It should be targeted to identify the papers that are most relevant to current UK practice and therefore likely to inform the committee's decision-making.

Inclusion criteria for sifting and selecting papers for each review should specify populations and interventions relevant to the review question. They should also specify:

  • An appropriate date range, because older studies may reflect outdated practices.

  • The country or setting, because studies conducted in other countries might not be relevant to the UK. In some cases it may be appropriate to limit consideration to the UK or countries with similar healthcare systems.

The review should also usually focus on economic evaluations that compare both the costs and consequences of the alternative interventions under consideration. Cost–utility, cost–benefit, cost-effectiveness, cost-minimisation or cost–consequences analyses (see box 7.1) can be considered, depending on what the committee deems to be the most relevant perspective and likely outcomes for the question. Non-comparative costing studies, 'burden of disease' studies and 'cost of illness' studies should usually be excluded, although some non-comparative study types (such as econometric, efficiency, simulation, micro-costing and resource use, and time-series studies) may be included for some service delivery questions. Sometimes the published economic evidence is extremely sparse. In such cases, the inclusion criteria for studies may be broadened. The decision to do this is taken by the developer in consultation with NICE staff with responsibility for quality assurance and, when appropriate, with the committee or its chair.

Assessing the quality of economic evaluations

All economic evaluations relevant to the guideline should be appraised using the methodology checklists (see appendix H). These should be used to appraise published economic evaluations, as well as unpublished papers, such as studies submitted by registered stakeholders and academic papers that are not yet published. The same criteria should be applied to any new economic evaluations conducted for the guideline (see section 7.6).

Exclusion of economic evaluations will depend on the applicability of evidence to the NICE decision-making context (usually the reference case), the amount of higher-quality evidence and the degree of certainty about the cost effectiveness of an intervention (when all the evidence is considered as a whole). Lower-quality studies are more likely to be excluded when cost effectiveness (or lack of it) can be reliably established without them.

Sometimes reported sensitivity analyses indicate whether the results of an evaluation or study are robust despite methodological limitations. If there is no sensitivity analysis, judgement is needed to assess whether a limitation would be likely to change the results and conclusions. If necessary, the health technology assessment checklist for decision-analytic models (Philips et al. 2004) may also be used to give a more detailed assessment of the methodological quality of economic evaluations and modelling studies. Judgements made, and reasons for these judgements, should be recorded in the guideline.

Summarising and presenting results for economic evaluations

Cost-effectiveness or net benefit estimates from published or unpublished studies, or from original economic evaluations conducted for the guideline, should be presented in the guideline, for example, using an 'economic evidence profile' (see appendix H). This should include relevant economic information (applicability, limitations, costs, effects, cost-effectiveness and/or net benefit estimates as appropriate). It should be explicitly stated if economic information is not available or if it is not thought to be relevant to the review question.

A short evidence statement that summarises the key features of the evidence on cost effectiveness should be included in the evidence review.

7.5 Prioritising questions for further economic analysis

If a high-quality economic analysis that addresses a key issue and is relevant to current practice has already been published, then further modelling may not be needed. However, often the economic literature is not sufficiently robust or applicable. Original economic analyses should only be performed if an existing analysis cannot easily be adapted to answer the question.

Economic plans

The full economic plan initially identifies key areas of the scope as priorities for further economic analysis and outlines proposed methods for addressing review questions about cost effectiveness. The full economic plan may be modified during development of the guideline; for example, as evidence is reviewed, it may become apparent that further economic evaluation is not needed for some areas that were initially prioritised. A version of the economic plan setting out the questions prioritised for further economic analysis, the population, the interventions and the type of economic analysis is published on the NICE website at least 6 weeks before the guideline goes out for consultation (see section 4.5). The reasons for the final choice of priorities for economic analysis should be explained in the guideline.

Discussion of the economic plan with the committee early in guideline development is essential to ensure that:

  • the most important questions are selected for economic analysis

  • the methodological approach is appropriate (including the reference case)

  • all important effects and resource costs are included

  • effects and outcomes relating to a broader societal perspective are included if relevant

  • additional effects and outcomes not related to health or social care are included if they are relevant

  • economic evidence is available to support recommendations that are likely to lead to substantial costs.

The number and complexity of new analyses depends on the priority areas and the information needed for decision-making by the committee. Selection of questions for further economic analysis, including modelling, should be based on systematic consideration of the potential value of economic analysis across all key issues.

Economic analysis is potentially useful for any question in which an intervention, service or programme is compared with another. It may also be appropriate in comparing different combinations or sequences of interventions, as well as individual components of the service or intervention. However, the broad scope of some guidelines means that it may not be practical to conduct original economic analysis for every component.

The decision about whether to carry out an economic analysis therefore depends on:

  • the potential overall expected benefit and resource implications of an intervention both for individual people and the population as a whole

  • the degree of uncertainty in the economic evidence review and the likelihood that economic analysis will clarify matters.

Economic modelling may not be warranted if:

  • It is not possible to estimate cost effectiveness. However, in this case, a 'scenario' or 'threshold' analysis may be useful.

  • The intervention has no likelihood of being cost saving and its harms outweigh its benefits.

  • The published evidence of cost effectiveness is so reliable that further economic analysis is not needed.

  • The benefits sufficiently outweigh the costs (that is, it is obvious that the intervention is cost effective) or the costs sufficiently outweigh the benefits (that is, it is obvious that the intervention is not cost effective).

  • An intervention has very small costs, very small benefits and very small budget impact.

7.6 Approaches to original economic evaluation

General principles

Regardless of the methodological approach taken, the general principles described below should be observed. Any variation from these principles should be described and justified in the guideline. The decision problem should be clearly stated. This should include a definition and justification of the interventions or programmes being assessed and the relevant groups using services (including carers).

Developing conceptual models linked to topic areas or review questions may help the health economist to decide what key information is needed for developing effectiveness and cost-effectiveness analyses (see chapter 2 for details). Models developed for public health and service delivery topics are likely to relate to several review questions, so most recommendations will be underpinned by some form of modelled analysis.

The choice of model structure is a key aspect of the design-oriented conceptual model. Brennan's taxonomy of model structures (Brennan et al. 2006) should be considered for guidance on which types of models may be appropriate to the decision problem.

Even if a fully modelled analysis is not possible, there may be value in the process of model development, because this will help to structure committee discussions. For example, a model might be able to demonstrate how a change in service will affect demand for a downstream service or intervention.

For service delivery questions, the key challenge is linking changes in service to a health benefit. This poses a challenge when conducting health economic analyses, and finding high-quality evidence of effectiveness will also be difficult. Modelling using scenario analysis is usually needed to generate the health effects used within the health economic analyses. Because of the considerable resource and health impact of any recommendations on service delivery, their cost effectiveness must be considered, either analytically or qualitatively (see appendix A).

Economic analysis should include comparison of all relevant alternatives for specified groups of people affected by the intervention or using services. Any differences between the review question(s) and the economic analysis should be clearly acknowledged, justified, approved by the committee and explained in the guideline. The interventions or services included in the analysis should be described in enough detail to allow stakeholders to understand exactly what is being assessed. This is particularly important when calculating the cost effectiveness of services.

An economic analysis should be underpinned by the best-quality evidence. The evidence should be based on and be consistent with that identified for the relevant review question. If expert opinion is used to derive information used in the economic analysis, this should be clearly stated and justified in the guideline.

The structure of any economic model should be discussed and agreed with the committee early in guideline development. The reasons for the structure of the model should be clear. Potential alternatives should be identified and considered for use in sensitivity analysis. If existing economic models are being used, or are informing a new analysis, the way these models are adapted or used should be clear.

For service delivery questions, any analysis will need to consider resource constraints. These might be monetary, but might also be resources such as staff, beds, equipment and so on. However, affordability should not be the sole consideration for service recommendations; the impact of any proposed changes on quality of care needs to be considered.

Before presenting final results to a committee for decision-making, all economic evaluations should undergo rigorous quality assessment and validation to assess inputs, identify logical, mathematical and computational errors, and review the plausibility of outputs. The HM Treasury's review of quality assurance of government models (2013) provides guidance on developing the environment and processes required to promote effective quality assurance. This process should be documented.

Quality assurance of an economic evaluation may take various forms at different stages in development, as detailed in the HM Treasury Aqua Book (2015). It can range from basic steps that should always occur (such as disciplined version control, extensive testing by developers of their own model, and independent testing by a colleague with the necessary technical knowledge) to external testing by an independent third party and an independent analytical audit of all data and methods used. For developer health economists testing their own evaluation, or those of others ('model busting'), useful and practical validation methods include the following (illustrated in the sketch after this list):

  • 1‑way and n‑way sensitivity analyses, including null values and extreme values (Krahn et al. 1997)

  • ensuring that the model results can be explained, for example, the logic and reason underlying the effect of a particular scenario analysis on results

  • ensuring that predictions of intermediate endpoints (for example, event rate counts) and final endpoints (for example, undiscounted life expectancy) are plausible, including comparison with source materials.
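
The sketch below illustrates how checks of this kind might be automated for a deliberately trivial, hypothetical model; the function and all values are invented and do not represent any particular guideline model.

```python
# Illustrative 'model busting' checks on a deliberately simple, hypothetical model:
# null and extreme parameter values should produce results that can be explained.

def toy_model(treatment_effect, treatment_cost, baseline_qalys=10.0, baseline_cost=5_000.0):
    """Toy model: the treatment adds `treatment_effect` QALYs and `treatment_cost` in £."""
    return baseline_cost + treatment_cost, baseline_qalys + treatment_effect

baseline = toy_model(treatment_effect=0.0, treatment_cost=0.0)

# Null-value check: a treatment with no effect and no cost must reproduce the baseline.
assert toy_model(0.0, 0.0) == baseline, "null inputs should not change the results"

# Extreme-value check: implausibly large inputs should still behave predictably.
extreme_cost, extreme_qalys = toy_model(treatment_effect=50.0, treatment_cost=1_000_000.0)
assert extreme_cost > baseline[0] and extreme_qalys > baseline[1]

# 1-way sensitivity check: results should move in the expected direction.
low_cost, _ = toy_model(0.5, 1_000.0)
high_cost, _ = toy_model(0.5, 20_000.0)
assert high_cost > low_cost, "increasing treatment cost should increase total cost"

print("Null, extreme and 1-way sensitivity checks behaved as expected.")
```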

The results of any analyses conducted to demonstrate external validity should be reported. However, relevant data should not be omitted just to facilitate external validation (for example, excluding trials so that they can be used for subsequent validation).

Conventions on reporting economic evaluations should be followed (see Drummond et al. 1996 and Husereau et al. 2013) to ensure that reporting of methods and results is transparent. For time horizons that extend beyond 10 years, it may be useful to report discounted costs and effects for the short (1–3 years) and medium (5–10 years) term. The following results should be presented where available and relevant:

  • endpoints from the analysis, such as life years gained, number of events and survival

  • disaggregated costs

  • total and incremental costs and effects for all options.

When comparing multiple mutually exclusive options, a fully incremental approach should be adopted that compares the interventions sequentially in rank order of cost or outcome, with each strategy compared with the next non-dominated alternative in the ranking. Comparisons with a common baseline intervention should not be used for decision-making, although the baseline intervention should be included in the incremental analysis if it reflects a relevant option.
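
A minimal sketch of the fully incremental approach, using invented strategies, costs and QALYs: options are ranked by cost, strictly and extendedly dominated options are removed, and ICERs are calculated against the next non-dominated alternative in the ranking.

```python
# Minimal fully incremental analysis sketch with hypothetical strategies.

strategies = [  # (name, total cost in £, total QALYs) – illustrative values only
    ("no intervention", 1_000, 5.00),
    ("option A",        4_000, 5.20),
    ("option B",        3_500, 5.10),
    ("option C",        9_000, 5.15),
]

# Rank by cost (ties broken by QALYs, highest first) and drop strictly dominated
# options: anything at least as costly as a previous option but no more effective.
ranked = sorted(strategies, key=lambda s: (s[1], -s[2]))
frontier = []
for name, cost, qalys in ranked:
    if frontier and qalys <= frontier[-1][2]:
        continue  # strictly dominated
    frontier.append((name, cost, qalys))

# Remove extendedly dominated options: any option whose ICER against the previous
# option exceeds the ICER of the next option against it.
pruned = True
while pruned:
    pruned = False
    for i in range(1, len(frontier) - 1):
        icer_here = (frontier[i][1] - frontier[i - 1][1]) / (frontier[i][2] - frontier[i - 1][2])
        icer_next = (frontier[i + 1][1] - frontier[i][1]) / (frontier[i + 1][2] - frontier[i][2])
        if icer_here > icer_next:
            del frontier[i]
            pruned = True
            break

# ICERs against the next non-dominated alternative in the ranking.
for (prev_name, prev_cost, prev_qalys), (name, cost, qalys) in zip(frontier, frontier[1:]):
    icer = (cost - prev_cost) / (qalys - prev_qalys)
    print(f"{name} vs {prev_name}: ICER = £{icer:,.0f} per QALY gained")
```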

Any comparisons of interventions in an economic model that are not based on head-to-head trials should be carefully evaluated for between-study heterogeneity, and the potential for modifiers of treatment effect should be explored. Limitations should be noted and clearly discussed in the guideline.

Any economic models developed for the guideline are made available to registered stakeholders during consultation on the guideline. These models should be fully executable and clearly presented.

Different approaches to economic analysis

There are different approaches to economic analysis (see box 7.1 for examples). If economic analysis is needed, the most appropriate approach should be considered early during the development of a guideline, and reflect the content of the guideline scope.

Cost–utility analysis is a form of cost-effectiveness analysis that uses utility as a common outcome. It considers people's quality of life and the length of life they will gain as a result of an intervention or a programme. The health effects are expressed as QALYs, an outcome that can be compared between different populations and disease areas. Costs of resources, and their valuation, should be related to the prices relevant to the sector.
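
A minimal illustration of the calculation, assuming invented utility weights, durations and costs (discounting is omitted for brevity): QALYs are utility weights multiplied by the time spent in each health state, and the incremental cost is divided by the incremental QALYs.

```python
# Illustrative cost–utility calculation with hypothetical inputs.
# QALYs = utility weight (0 = dead, 1 = full health) x years spent in that state.

def qalys(states):
    """states: list of (utility_weight, years) pairs."""
    return sum(utility * years for utility, years in states)

# Hypothetical health-state profiles over a 10-year horizon (undiscounted for simplicity).
qalys_with_intervention = qalys([(0.85, 6), (0.70, 4)])   # 5.10 + 2.80 = 7.90 QALYs
qalys_with_comparator   = qalys([(0.75, 5), (0.60, 5)])   # 3.75 + 3.00 = 6.75 QALYs

incremental_cost = 12_000 - 4_000          # £ (intervention minus comparator)
incremental_qalys = qalys_with_intervention - qalys_with_comparator

print(f"Incremental cost per QALY gained: £{incremental_cost / incremental_qalys:,.0f}")
```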

If a cost–utility analysis is not possible (for example, when outcomes cannot be expressed using a utility measure such as the QALY), a cost–consequences analysis may be considered. Cost–consequences analysis can consider all the relevant health and non-health effects of an intervention across different sectors and reports them without aggregation. A cost–consequences analysis that includes most or all of the potential outcomes of an intervention will be more useful than an analysis that only reports 1 or 2 outcomes.

A cost–consequences analysis is useful when different outcomes cannot be incorporated into an index measure. It is helpful to produce a table that summarises all the costs and outcomes and enables the options to be considered in a concise and consistent manner. Outcomes that can be monetised are quantified and presented in monetary terms. Some effects may be quantified but cannot readily be put into monetary form (for more details see the Department for Transport's Transport Analysis Guidance [TAG] unit 2.11). Some effects cannot readily be quantified (such as reductions in the degree of bullying or discrimination) and should be considered by decision-making committees as part of a cost–consequences analysis alongside effects that can be quantified.

All effects (even if they cannot be quantified) and costs of an intervention are considered when deciding which interventions represent the best value. Effectively, cost–consequences analysis provides a 'balance sheet' of outcomes that decision-makers can weigh up against the costs of an intervention (including related future costs).

If, for example, a commissioner wants to ensure the maximum health gain for the whole population, they might prioritise the incremental cost per QALY gained. But if reducing health inequalities is the priority, they might focus on interventions that work best for the most disadvantaged groups, even if they are more costly and could reduce the health gain achieved in the population as a whole.

Cost-effectiveness analysis uses a measure of outcome (a life year saved, a death averted, a patient-year free of symptoms) and assesses the cost per unit of achieving this outcome by different means. The outcome is only quantified, not separately valued; the analysis therefore takes no view on whether the outcome is worth the cost, focusing only on the cost of achieving units of outcome by different methods.

Cost-minimisation analysis is the simplest form of economic analysis. It can be used when the health effects of an intervention are the same as those of the status quo, and when there are no other criteria for deciding whether the intervention should be recommended. For example, cost-minimisation analysis could be used to decide whether a doctor or nurse should give routine injections when both are found to be equally effective at giving injections (on average). In cost-minimisation analysis, an intervention is cost effective only if its net cost is lower than that of the status quo. The disadvantage of cost-minimisation analysis is that the health effects of an intervention often cannot be considered equal to those of the status quo.

Cost–benefit analysis considers health and non-health effects but converts them into monetary values, which can then be aggregated. Once this has been done, 'decision rules' are used to decide which interventions to undertake. Several metrics are available for reporting the results of cost–benefit analysis. Two commonly used metrics are the 'benefit–cost ratio' (BCR) and the 'net present value' (NPV) – see the Department for Transport's Transport Analysis Guidance (TAG) Unit A1.1 for more information.
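
A minimal sketch of these two metrics, using invented, already-monetised cost and benefit streams and an illustrative discount rate: the NPV is discounted benefits minus discounted costs, and the BCR is their ratio.

```python
# Illustrative net present value (NPV) and benefit-cost ratio (BCR) calculation.
# Streams are hypothetical, already expressed in £ per year; year 0 is undiscounted.

def present_value(stream, rate):
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

costs    = [200_000, 50_000, 50_000, 50_000]   # £ per year
benefits = [0, 120_000, 140_000, 150_000]      # monetised benefits, £ per year
rate = 0.035                                   # illustrative discount rate

pv_costs = present_value(costs, rate)
pv_benefits = present_value(benefits, rate)

print(f"NPV = £{pv_benefits - pv_costs:,.0f}")   # positive NPV favours the intervention
print(f"BCR = {pv_benefits / pv_costs:.2f}")     # BCR above 1 favours the intervention
```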

Cost–utility analysis is required routinely by NICE for the economic evaluation of health-related interventions, programmes and services, for several reasons:

  • When used in conjunction with an NHS and PSS perspective, it provides a single yardstick or 'currency' for measuring the impact of interventions. It also allows interventions to be compared so that resources may be allocated more efficiently.

  • Where possible, NICE programmes use a common method of cost-effectiveness analysis that allows comparisons between programmes.

However, because local government is largely responsible for implementing public health and wellbeing programmes and for commissioning social care, NICE has broadened its approach for the appraisal of interventions in these areas. Local government is responsible not only for the health of individuals and communities, but also for their overall welfare. The tools used for economic evaluation must reflect a wider remit than health and allow greater local variation. The nature of the evidence and that of the outcomes being measured may place more emphasis on cost–consequences analysis and cost–benefit analysis for interventions in these areas.

The type of economic analysis that should be considered is informed by the setting specified in the scope of the guideline, and the extent to which the effects resulting from the intervention extend beyond health.

There is often a trade‑off between the range of new analyses that can be conducted and the complexity of each piece of analysis. Simple methods may be used if these can provide the committee with enough information on which to base a decision. For example, if an intervention is associated with better effectiveness and fewer adverse effects than its comparator, then an estimate of cost may be all that is needed. Or a simple decision tree may provide a sufficiently reliable estimate of cost effectiveness. In other situations a more complex approach, such as Markov modelling or discrete event simulation, may be warranted.
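
As an illustration of the more complex end of that spectrum, the sketch below is a deliberately simple three-state Markov cohort model; the states, transition probabilities, costs and utilities are all invented. A simple decision tree would instead take a single expected value over branch probabilities.

```python
# Minimal three-state Markov cohort model (stable, progressed, dead) with
# hypothetical transition probabilities, costs and utilities; annual cycles.

ANNUAL_DISCOUNT = 0.035
N_CYCLES = 20

# Transition probabilities per cycle: outer key is the current state,
# inner keys are the possible next states (each row sums to 1).
TRANSITIONS = {
    "stable":     {"stable": 0.85, "progressed": 0.10, "dead": 0.05},
    "progressed": {"stable": 0.00, "progressed": 0.80, "dead": 0.20},
    "dead":       {"stable": 0.00, "progressed": 0.00, "dead": 1.00},
}
STATE_COST = {"stable": 1_000, "progressed": 5_000, "dead": 0}      # £ per cycle
STATE_UTILITY = {"stable": 0.80, "progressed": 0.55, "dead": 0.0}   # utility weights

cohort = {"stable": 1.0, "progressed": 0.0, "dead": 0.0}            # everyone starts stable
total_cost = total_qalys = 0.0

for cycle in range(N_CYCLES):
    discount = 1 / (1 + ANNUAL_DISCOUNT) ** cycle
    total_cost += discount * sum(cohort[s] * STATE_COST[s] for s in cohort)
    total_qalys += discount * sum(cohort[s] * STATE_UTILITY[s] for s in cohort)
    # Redistribute the cohort according to the transition probabilities.
    cohort = {
        next_state: sum(cohort[s] * TRANSITIONS[s][next_state] for s in cohort)
        for next_state in cohort
    }

print(f"Expected discounted cost per person:  £{total_cost:,.0f}")
print(f"Expected discounted QALYs per person: {total_qalys:.2f}")
```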

Measuring and valuing effects for health interventions

The measurement of changes in health-related quality of life should be reported directly from people using services (or their carers). The value placed on health-related quality of life of people using services (or their carers) should be based on a valuation of public preferences elicited from a representative sample of the UK population, using a choice-based valuation method such as the time trade‑off or standard gamble. The QALY is the measure of health effects preferred by NICE, and the EQ‑5D is NICE's preferred instrument to measure health-related quality of life in adults.

For some economic analyses, a flexible approach may be needed, reflecting the nature of effects delivered by different interventions or programmes. If health effects are relevant, the EQ‑5D‑based QALY should be used. When EQ‑5D data are not available from the relevant clinical studies included in the clinical evidence review, EQ‑5D data can be sourced from the literature. The methods used for identifying the data should be systematic and transparent. The justification for choosing a particular data set should be clearly explained. When more than 1 plausible set of EQ‑5D data is available, sensitivity analyses should be carried out to show the impact of the alternative utility values.

When EQ‑5D data are not available, published mapped EQ‑5D data should be used, or EQ‑5D values may be estimated by mapping other health-related quality-of-life measures or health-related effects observed in the relevant studies to the EQ‑5D. The mapping function chosen should be based on data sets containing both health-related quality-of-life measures. The statistical properties of the mapping function should be fully described, its choice justified, and it should be adequately demonstrated how well the function fits the data. Sensitivity analyses exploring the effect of the choice of mapping algorithm on the outputs should be presented.
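
The sketch below illustrates only the mechanics of mapping: an ordinary least-squares fit from a hypothetical disease-specific score to EQ‑5D index values, estimated on a fabricated data set containing both measures. It is not a published or validated mapping function; in practice, the reporting and justification requirements described above apply.

```python
# Illustrative mapping sketch only: ordinary least squares from a hypothetical
# disease-specific score (0-100) to EQ-5D index values, fitted on a made-up
# data set containing both measures. Not a published or validated mapping function.
import numpy as np

disease_score = np.array([20, 35, 50, 60, 75, 90], dtype=float)   # hypothetical scores
eq5d_index    = np.array([0.45, 0.55, 0.68, 0.72, 0.81, 0.90])    # paired EQ-5D values

slope, intercept = np.polyfit(disease_score, eq5d_index, deg=1)   # simple linear mapping
predicted = intercept + slope * disease_score
ss_res = np.sum((eq5d_index - predicted) ** 2)
ss_tot = np.sum((eq5d_index - eq5d_index.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"EQ-5D index ~ {intercept:.3f} + {slope:.4f} x score (R-squared = {r_squared:.2f})")
print("Mapped utility for a score of 65:", round(intercept + slope * 65, 3))
```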

In some circumstances, EQ‑5D data may not be the most appropriate or may not be available. Qualitative empirical evidence on the lack of content validity for the EQ‑5D should be provided, demonstrating that key dimensions of health are missing. This should be supported by evidence that shows that EQ‑5D performs poorly on tests of construct validity and responsiveness in a particular patient group. This evidence should be derived from a synthesis of peer-reviewed literature. In these circumstances, alternative health-related quality of life measures may be used and must be accompanied by a carefully detailed account of the methods used to generate the data, their validity, and how these methods affect the utility values.

When necessary, consideration should be given to alternative standardised and validated preference-based measures of health-related quality of life that have been designed specifically for use in children. The standard version of the EQ‑5D has not been designed for use in children. An alternative version for children aged 7 to 12 years is available, but a validated UK valuation set is not yet available.

As outlined in NICE's guide to the methods of technology appraisal 2013 and an accompanying position statement on use of the EQ-5D-5L valuation set (2017), the EQ‑5D 5‑level (5L) valuation set is not currently recommended for use by NICE. Guideline developers should:

  • Use the 3L valuation set for reference-case analyses, where available.

  • If data are available to allow mapping of EQ‑5D‑5L data to 3L, use the mapping function developed by van Hout et al. (2012) when several mapping functions are available (Hernandez Alava et al. 2017), for consistency with the current guide to the methods of technology appraisal.

The QALY remains the most suitable measure for assessing the impact of services, because it can incorporate effects from extension to life and experience of care. It can also include the trade‑offs of benefits and adverse events. However, if linking effects to a QALY gain is not possible, links to a clinically relevant or a related outcome should be considered. Outcomes should be optimised for the lowest resource use. The link (either direct or indirect) of any surrogate outcome, such as a process outcome (for example, bed days), to a clinical outcome needs to be justified. However, when QALYs are not used, issues such as trade‑offs between different beneficial and harmful effects need to be considered.

Measuring and valuing effects for non-health interventions

For some decision problems (such as for interventions with a social care focus), the intended outcomes of interventions are broader than improvements in health status. Here broader, preference-weighted measures of outcomes, based on specific instruments, may be more appropriate. For example, social care quality-of-life measures are being developed and NICE will consider using 'social care QALYs' if validated, such as the ASCOT (Adult Social Care Outcome Toolkit) set of instruments used by the Department of Health and Social Care in the Adult Social Care Outcomes Framework indicator on social care-related quality of life.

Similarly, depending on the topic, and on the intended effects of the interventions and programmes, the economic analysis may also consider effects in terms of capability and wellbeing. For capability effects, use of the ICECAP‑O (Investigating Choice Experiments for the Preferences of Older People [ICEpop] CAPability measure for Older people) or ICECAP‑A (ICEpop CAPability measure for Adults) instruments may be considered by NICE when developing methodology in the future. If an intervention is associated with both health- and non-health-related effects, it may be helpful to present these elements separately.

Economic analysis for interventions funded by the NHS and PSS with health outcomes

Economic analyses conducted for decisions about interventions with health outcomes funded by the NHS and PSS should usually follow the reference case in table 7.1 described in NICE's guide to the methods of technology appraisal 2013. Advice on how to follow approaches described in NICE's guide to the methods of technology appraisal 2013 is provided by the technical support documents developed by NICE's Decision Support Unit. Departures from the reference case may sometimes be appropriate; for example, when there are not enough data to estimate QALYs gained. Any such departures must be agreed with members of NICE staff with a quality assurance role and highlighted in the guideline with reasons given.

Economic analysis for interventions funded by the public sector with health and non-health outcomes

The usual perspective for the economic analysis of public health interventions is that of the public sector. This may be simplified to a local government perspective if few costs and effects apply to other government agencies.

Whenever there are multiple outcomes, a cost–consequences analysis is usually needed, and the committee weighs up the changes to the various outcomes against the changes in costs in an open and transparent manner. However, for the base-case analysis, a cost–utility analysis should be undertaken using a cost per QALY approach where possible.

A societal perspective may be used, and will usually be carried out using cost–benefit analysis. When a societal perspective is used, it must be agreed with NICE staff with responsibility for quality assurance and highlighted in the guideline with reasons given.

Economic analysis for interventions with a social care focus

For social care interventions, the perspective on outcomes should be all effects on people for whom services are delivered (people using services and/or carers). Effects on people using services and carers (whether expressed in terms of health effects, social care quality of life, capability or wellbeing) are the intended outcomes of social care interventions and programmes. Although holistic effects on people using services, their families and carers may represent the ideal perspective on outcomes, a pragmatic and flexible approach is needed to address different perspectives, recognising that improved outcomes for people using services and carers may not always coincide.

Whenever there are multiple outcomes, a cost–consequences analysis is usually needed, and the committee weighs up the changes to the various outcomes against the changes in costs in an open and transparent manner. However, for the base-case analysis, a cost–utility analysis should be undertaken using a cost per QALY approach where possible.

Any economic model should take account of the proportion of care that is publicly funded or self-funded. Scenario analysis may also be useful to take account of any known differences between local authorities in terms of how they apply eligibility criteria. Scenario analysis should also be considered if the cost of social care varies depending on whether it is paid for by local authorities or by individual service users; the value of unpaid care should also be taken into account where appropriate.

It is envisaged that creating clear, transparent decision rules about which costs should be considered, and for which interventions and outcomes, will be particularly difficult for social care. These rules should be discussed with the committee and an approach agreed before any economic analysis is undertaken.

Identification and selection of model inputs

An economic analysis uses decision-analytic techniques with outcome, cost and utility data from the best available published sources.

The reference case across all perspectives (table 7.1) states that evidence on effects should be obtained from a systematic review. Some inputs, such as costs, may have standard sources that are appropriate, such as national list prices or a national audit, but for others appropriate data will need to be sourced.

Additional searches may be needed; for example, if searches for evidence on effects do not provide the information needed for economic modelling. Additional information may be needed on:

  • disease prognosis

  • the relationship between short- and long-term outcomes

  • quality of life

  • adverse events

  • resource use or costs.

Although it is desirable to conduct systematic literature reviews for all such inputs, this is time-consuming and other pragmatic options for identifying inputs may be used. Informal searches should aim to satisfy the principle of 'saturation' (that is, to 'identify the breadth of information needs relevant to a model and sufficient information such that further efforts to identify more information would add nothing to the analysis'; Kaltenthaler et al. 2011). Studies identified in the review of evidence on effects should be scrutinised for other relevant data, and attention should be paid to the sources of parameters in analyses included in the systematic review of published economic evaluations. Alternatives could include asking committee members and other experts for suitable evidence or eliciting their opinions, for example, using formal consensus methods such as the Delphi technique or the nominal-group technique. If a systematic review is not possible, transparent processes for identifying model inputs should be reported; the internal quality and external validity of each potential data source should be assessed and their selection justified. If more than 1 suitable source of evidence is found, consideration should be given to synthesis and/or exploration of alternative values in sensitivity analyses. Further guidance on searching and selecting evidence for key model inputs is available from Kaltenthaler et al. (2011) and Paisley (2016).

Data from registries and audits may be used to inform both estimates of effectiveness and any modelling, particularly for service delivery questions. To obtain such data, it may be necessary to negotiate access with the organisations and individuals that hold the data, or to ask them to provide a summary for inclusion in the guidance if published reports are insufficient. Any processes used for accessing data will need to be reported in the health economic plan and in the guideline. Given the difficulties that organisations may have in extracting audit data, such requests should be focused and targeted: for example, identifying a specific audit and requesting results from the previous 3 years.

For some questions, there may be good reason to believe that relevant and useful information exists outside of literature databases or validated national data sources. Examples include ongoing research, a relatively new intervention and studies that have been published only as abstracts. Typically, the method for requesting information from stakeholders is through a call for evidence (see section 5.5).

For some guidelines, econometric studies provide a supplementary source of evidence and data for bespoke economic models. For these studies, the database 'Econlit' should be searched as a minimum.

Some information on unit costs may be found in the Personal Social Services Research Unit report on unit costs of health and social care or the Department of Health's reference costs (provider perspective). Information on resource impact costings can be found in NICE's methods guide on resource impact assessment. Some information about public services may be better obtained from national statistics or databases, rather than from published studies. Philips et al. (2004) provide a useful guide to searching for data for use in economic models.

In cases where current costs are not available, costs from previous years should be uprated to current prices using inflation indices appropriate to the cost perspective, such as the hospital and community health services (HCHS) index and the PSS pay and prices index, available from the PSSRU report on unit costs of health and social care, or the Office for National Statistics (ONS) consumer price index.

Wherever possible, costs relevant to the UK healthcare system should be used. However, in cases where only costs from other countries are available these should be converted to Pounds Sterling using an exchange rate from an appropriate and current source (such as HM Revenue and Customs or Organisation for Economic Co-operation and Development).
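
A minimal sketch of the two adjustments described above, using placeholder index values and an illustrative exchange rate; in practice the figures would be taken from the sources named (the PSSRU or ONS indices, and HMRC or OECD exchange rates).

```python
# Illustrative cost adjustments: uprating a past-year cost with a price index,
# and converting a foreign-currency cost to pounds sterling.
# Index values and the exchange rate are placeholders, not real published figures.

def uprate(cost, index_source_year, index_current_year):
    """Adjust a cost from its source year to current prices using an inflation index."""
    return cost * index_current_year / index_source_year

cost_2015 = 250.0                     # £, reported in a 2015 study
index_2015, index_now = 100.0, 112.5  # hypothetical index values (e.g. HCHS or PSS index)
print(f"Uprated cost: £{uprate(cost_2015, index_2015, index_now):.2f}")

cost_eur = 300.0                      # cost reported in euros
gbp_per_eur = 0.86                    # illustrative exchange rate (HMRC or OECD source)
print(f"Converted cost: £{cost_eur * gbp_per_eur:.2f}")
```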

As outlined in NICE's guide to the methods of technology appraisal 2013, the public list prices for technologies (for example, medicines or medical devices) should be used in the reference-case analysis. When there are nationally available price reductions (for example, for medicines procured for use in secondary care through contracts negotiated by the NHS Commercial Medicines Unit), the reduced price should be used in the reference-case analysis to best reflect the price relevant to the NHS. The Commercial Medicines Unit publishes information on the prices paid for some generic medicines by NHS trusts through its Electronic Market Information Tool (eMIT), focusing on medicines in the 'National Generics Programme Framework' for England. Analyses based on price reductions for the NHS will be considered only when the reduced prices are transparent and can be consistently available across the NHS, and when the period for which the specified price is available is guaranteed.

When a reduced price is available through a patient access scheme that has been agreed with the Department of Health and Social Care, the analyses should include the costs associated with the scheme. If the price is not listed on eMIT, then the current price listed on the British National Formulary (BNF) should be used. For medicines that are predominantly dispensed in the community, prices should be based on the Drug Tariff.

In the absence of a published list price and a price agreed by a national institution (as may be the case for some devices), an alternative price may be considered, provided that it is nationally and publicly available. If no other information is available on costs, local costs obtained from the committee may be used.

Preference-based quality-of-life data are often needed for economic models. Many of the available search filters are highly sensitive: although they identify relevant studies, they also retrieve a large amount of irrelevant material. An initial broad literature search for quality-of-life data may be a good option, but the amount of information identified may be unmanageable (depending on the key issue being addressed). It may be more appropriate and manageable to incorporate a quality-of-life search filter when performing additional searches for key issues of high economic priority. When searching bibliographic databases for health-state utility values, the specific techniques outlined in Ara et al. (2017), Golder et al. (2005) and Papaioannou et al. (2011) may be useful, and search filters have been developed that may improve retrieval performance (Arber et al. 2017). The approach to identifying quality-of-life data should be guided by the health economist at an early stage of guideline development so that the information specialist can adopt an appropriate search strategy. Useful resources for identifying utility data for economic modelling include dedicated registries of health-state utility values, such as ScHARRHUD and the Tufts CEA Registry, and the technical support documents developed by NICE's Decision Support Unit.

Exploring uncertainty

The committee should discuss any potential bias and limitations of economic models. Sensitivity analysis should be used to explore the impact that potential sources of bias and uncertainty could have on model results.

Deterministic sensitivity analysis should be used to explore key assumptions used in the modelling. This should test whether and how the model results change under alternative, plausible scenarios. Common examples of when deterministic sensitivity analysis could be conducted are:

  • when there is uncertainty about the most appropriate assumption to use for extrapolation of costs and effects beyond the trial follow‑up period

  • when there is uncertainty about how the pathway of care is most appropriately represented in the analysis

  • when there may be economies of scale (for example, when appraising diagnostic technologies)

  • for infectious disease transmission models.

Deterministic sensitivity analysis should also be used to test any bias resulting from the data sources selected for key model inputs.
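For illustration only, the following sketch reruns a deliberately simplified toy model under alternative scenarios; the model structure, scenarios and all values are invented and stand in for the guideline's own economic model.

```python
# A minimal illustrative sketch only: deterministic scenario analysis around a
# toy model. toy_model, the scenarios and all values are invented; a real
# analysis would rerun the guideline's own economic model under each scenario.
def toy_model(time_horizon_years, one_off_cost, annual_monitoring_cost, annual_qaly_gain):
    """Return (incremental cost, incremental QALYs) over the chosen time horizon."""
    incremental_cost = one_off_cost + annual_monitoring_cost * time_horizon_years
    incremental_qalys = annual_qaly_gain * time_horizon_years
    return incremental_cost, incremental_qalys

base = dict(time_horizon_years=10, one_off_cost=3000, annual_monitoring_cost=150, annual_qaly_gain=0.08)
scenarios = {
    "base case (10-year extrapolation)": base,
    "shorter extrapolation (5 years)": {**base, "time_horizon_years": 5},
    "smaller annual QALY gain": {**base, "annual_qaly_gain": 0.05},
}

for name, params in scenarios.items():
    cost, qalys = toy_model(**params)
    print(f"{name}: ICER = £{cost / qalys:,.0f} per QALY gained")
```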

Probabilistic sensitivity analysis can be used to account for uncertainty arising from imprecision in model inputs, and its use will often be specified in the health economic plan. It allows the uncertainty associated with all inputs to be reflected in the results simultaneously. In non-linear decision models, where outputs are the result of a multiplicative function (for example, in Markov models), probabilistic methods also provide the best estimates of mean costs and outcomes. The choice of distributions used should be justified, for example in relation to the type of parameter and the method of its estimation. The results of probabilistic sensitivity analysis may be presented as scatter plots or confidence ellipses, with the option of including cost-effectiveness acceptability curves and frontiers.
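For illustration only, the sketch below shows the mechanics of a simple probabilistic sensitivity analysis using invented distributions for two uncertain inputs; in a real analysis the full model would be rerun for every sampled parameter set and the choice of distributions justified for each input.

```python
# A minimal illustrative sketch only: probabilistic sensitivity analysis with
# invented distributions for incremental costs and QALYs.
import numpy as np

rng = np.random.default_rng(seed=42)
n_samples = 10_000

incremental_cost = rng.gamma(shape=25, scale=600, size=n_samples)   # skewed, non-negative costs
incremental_qalys = rng.normal(loc=0.8, scale=0.3, size=n_samples)  # incremental QALYs

# Probabilistic estimates of mean costs and outcomes, and the resulting ICER
icer = incremental_cost.mean() / incremental_qalys.mean()
print(f"Mean incremental cost £{incremental_cost.mean():,.0f}, "
      f"mean incremental QALYs {incremental_qalys.mean():.2f}, ICER £{icer:,.0f}")

# Points for a cost-effectiveness acceptability curve: the probability that the
# intervention is cost effective at each willingness-to-pay threshold
for threshold in (20_000, 30_000):
    net_benefit = threshold * incremental_qalys - incremental_cost
    print(f"P(cost effective at £{threshold:,} per QALY) = {(net_benefit > 0).mean():.2f}")
```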

When a probabilistic sensitivity analysis is carried out, a value of information analysis may be considered to indicate whether more research is necessary, either before recommending an intervention or in conjunction with a recommendation. The circumstances in which a value of information analysis should be considered will depend on whether more information is likely to be available soon and whether this information is likely to influence the decision to recommend the intervention.
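For illustration only, the per-person expected value of perfect information (EVPI) can be computed from probabilistic outputs as sketched below; the distributions and threshold are invented placeholders for a model's own net-benefit outputs.

```python
# A minimal illustrative sketch only: per-person EVPI for a two-strategy
# comparison at an assumed £20,000 per QALY threshold, using invented samples.
import numpy as np

rng = np.random.default_rng(seed=7)
n_samples, threshold = 10_000, 20_000

incremental_cost = rng.gamma(shape=25, scale=600, size=n_samples)
incremental_qalys = rng.normal(loc=0.8, scale=0.3, size=n_samples)

net_benefit_new = threshold * incremental_qalys - incremental_cost  # vs current practice
net_benefit_current = np.zeros(n_samples)                           # current practice as reference

# EVPI = expected net benefit if uncertainty were fully resolved before deciding,
# minus the expected net benefit of the best decision under current uncertainty
with_perfect_information = np.maximum(net_benefit_new, net_benefit_current).mean()
best_under_uncertainty = max(net_benefit_new.mean(), net_benefit_current.mean())
print(f"EVPI per person: £{with_perfect_information - best_under_uncertainty:,.0f}")
```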

When probabilistic methods are unsuitable, the impact of parameter uncertainty should be thoroughly explored using deterministic sensitivity analysis, and the decision not to use probabilistic methods should be justified in the guideline.

Structural assumptions, and the inclusion or exclusion of particular data sources, may also be incorporated into probabilistic sensitivity analysis. In this case, the method used to select the distributions should be outlined in the guideline (Jackson et al. 2011).

Discounting

Cost-effectiveness results should reflect the present value of the stream of costs and benefits accruing over the time horizon of the analysis. For the reference case, the same annual discount rate should be used for both costs and benefits. NICE considers that it is usually appropriate to discount costs and health effects at the same annual rate of 3.5%.

Sensitivity analyses using 1.5% as an alternative rate for both costs and health effects may be presented alongside the reference-case analysis, particularly for public health guidance. When treatment restores people who would otherwise die or have a very severely impaired life to full or near full health, and when this is sustained over a very long period (normally at least 30 years), cost-effectiveness analyses are very sensitive to the discount rate used. In these circumstances, analyses that use a non-reference-case discount rate for costs and outcomes may be considered. A discount rate of 1.5% for costs and benefits may be considered by the committee if, on the basis of the evidence presented, it is highly likely that the long-term health benefits will be achieved. However, the committee will need to be satisfied that the recommendation does not commit the funder to significant irrecoverable costs.
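For illustration only, the discounting arithmetic is sketched below with invented yearly values and an assumed convention that discounting starts in year 1.

```python
# A minimal illustrative sketch only: discounting a stream of future QALYs (or
# costs) to present value at the reference-case rate of 3.5%, with 1.5% shown
# as a sensitivity. The yearly values are invented.
def present_value(yearly_values, annual_rate):
    """Discount yearly values (year 1 onwards) to present value."""
    return sum(value / (1 + annual_rate) ** year
               for year, value in enumerate(yearly_values, start=1))

yearly_qalys = [0.8] * 10  # e.g. 0.8 QALYs gained in each of the next 10 years
print(f"Discounted at 3.5%: {present_value(yearly_qalys, 0.035):.2f} QALYs")
print(f"Discounted at 1.5%: {present_value(yearly_qalys, 0.015):.2f} QALYs")
```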

Subgroup analysis

The relevance of subgroup analysis to decision-making should be discussed with the committee. When appropriate, economic analyses should estimate the cost effectiveness of an intervention in each subgroup.

Local considerations

For service delivery questions, cost-effectiveness analyses may need to account for local factors, such as the expected number of procedures and the availability of staff and equipment at different times of the day, week and year. Service delivery models may need to incorporate the fact that each local provider may be starting from a different baseline of identified factors (for example, the number of consultants available at weekends). It is therefore important that these factors are identified and considered by the committee. Where possible, results obtained from the analysis should include both the national average and identified local scenarios to ensure that service delivery recommendations are robust to local variation.

Service failures

Service designs under consideration might result in occasional service failure – that is, the service does not operate as planned. For example, a service for treating myocardial infarction may have fewer places where people can be treated at weekends than on weekdays because of reduced staffing. More people will therefore need to travel by ambulance, and journey times will be longer. Given the limited number of ambulances, a small proportion of journeys may be delayed, with consequences for both costs and QALYs. Such possible service failures should be taken into account in effectiveness and economic modelling. In effect, analyses should incorporate the 'side effects' of service designs.

Service demand

Introducing a new service or increasing capacity will often result in an increase in demand. This could mean that a service does not achieve the predicted effectiveness because there is more demand than was planned for. This should be addressed either in the analysis or in considerations.

Equity considerations

NICE's economic evaluation of healthcare and public health interventions does not include equity weighting – a QALY has the same weight for all population groups.

It is important to recognise that care provision, specifically social care, may be means tested, and that this affects the economic perspective in terms of who bears costs – the public sector or the person using services or their family. Economic evaluation should reflect the intentions of the system. Equity considerations relevant to specific topics, and how these were addressed in economic evaluation, must be reported.

7.7 Using economic evidence to formulate guideline recommendations

For an economic analysis to be useful, it must inform the guideline recommendations. The committee should discuss cost effectiveness in parallel with general effectiveness when formulating recommendations (see chapter 9).

Within the context of NICE's principles on social value judgements, the committee should be encouraged to consider recommendations that:

  • increase effectiveness at an acceptable level of increased cost or

  • are less effective than current practice, but free up sufficient resources that can be re‑invested in public sector care or services to increase the welfare of the population receiving care.

The committee's interpretations and discussions should be clearly presented in the guideline. This should include a discussion of potential sources of bias and uncertainty. It should also include the results of sensitivity analyses in the consideration of uncertainty, as well as any additional considerations that are thought to be relevant. It should be explicitly stated if economic evidence is not available, or if it is not thought to be relevant to the question.

Recommendations for interventions informed by cost–utility analysis

If there is strong evidence that an intervention dominates the alternatives (that is, it is both more effective and less costly), it should normally be recommended. However, if 1 intervention is more effective but also more costly than another, then the incremental cost-effectiveness ratio (ICER) should be considered.

Health effects

The cost per QALY gained should be calculated as the difference in mean cost divided by the difference in mean QALYs for 1 intervention compared with the other.
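For illustration only, with invented figures:

```python
# A minimal illustrative sketch only: the cost per QALY gained as the
# difference in mean costs divided by the difference in mean QALYs.
mean_cost_new, mean_cost_comparator = 12_400.00, 9_100.00
mean_qalys_new, mean_qalys_comparator = 6.9, 6.7

icer = (mean_cost_new - mean_cost_comparator) / (mean_qalys_new - mean_qalys_comparator)
print(f"ICER: £{icer:,.0f} per QALY gained")  # £3,300 / 0.2 QALYs = £16,500 per QALY gained
```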

If 1 intervention appears to be more effective than another, the committee has to decide whether it represents reasonable 'value for money' as indicated by the relevant ICER. In doing so, the committee should also refer to NICE's principles on social value judgements (also see below).

'NICE has never identified an ICER above which interventions should not be recommended and below which they should. However, in general, interventions with an ICER of less than £20,000 per QALY gained are considered to be cost effective. Where advisory bodies consider that particular interventions with an ICER of less than £20,000 per QALY gained should not be provided by the NHS they should provide explicit reasons (for example, that there are significant limitations to the generalisability of the evidence for effectiveness). Above a most plausible ICER of £20,000 per QALY gained, judgements about the acceptability of the intervention as an effective use of NHS resources will specifically take account of the following factors.

  • The degree of certainty around the ICER. In particular, advisory bodies will be more cautious about recommending a technology when they are less certain about the ICERs presented in the cost-effectiveness analysis.

  • The presence of strong reasons indicating that the assessment of the change in the quality of life has been inadequately captured, and may therefore misrepresent, the health gain.

  • When the intervention is an innovation that adds demonstrable and distinct substantial benefits that may not have been adequately captured in the measurement of health gain.

As the ICER of an intervention increases in the £20,000 to £30,000 range, an advisory body's judgement about its acceptability as an effective use of NHS resources should make explicit reference to the relevant factors considered above. Above a most plausible ICER of £30,000 per QALY gained, advisory bodies will need to make an increasingly stronger case for supporting the intervention as an effective use of NHS resources with respect to the factors considered above.'

When assessing the cost effectiveness of competing courses of action, the committee should not give particular priority to any intervention or approach that is currently offered. If 'current practice', compared with an alternative approach, generates an ICER above the level that would normally be considered cost effective, the case for continuing to invest in it should be considered as carefully, and against the same level of evidence, as an investment decision. The committee should be mindful of whether the intervention consumes more resources than the value it contributes, judged against NICE's cost per QALY threshold.

Equity considerations

In the reference case, an additional QALY should receive the same weight regardless of any other characteristics of the people receiving the health benefit.

The estimation of QALYs, as defined in the reference case, implies a particular position regarding the comparison of health gained between individuals. Therefore, in the reference case, an additional QALY is of equal value regardless of other characteristics of the individuals, such as their socio-demographic characteristics, their age, or their level of health. The guideline committee has discretion to consider a different equity position, and may do so in certain circumstances and when instructed by the NICE Board (see below).

End of life considerations

In the reference case, the committee will regard all QALYs as being of equal weight. However, the committee can accept analysis that explores a QALY weighting that is different from that of the reference case when an intervention concerns a 'life-extending treatment at the end of life'.

For a 'life-extending treatment at the end of life', all of the following criteria must be met:

  • the treatment is indicated for patients with a short life expectancy, normally less than 24 months and

  • there is enough evidence to indicate that the treatment has the prospect of offering an extension to life, normally of a mean value of at least an additional 3 months, compared with current NHS treatment.

In addition, the committee will need to be satisfied that:

  • the estimates of the extension to life are sufficiently robust and can be shown or reasonably inferred from either progression-free survival or overall survival (taking account of trials in which crossover has occurred and has been accounted for in the effectiveness review) and

  • the assumptions used in the reference case economic modelling are plausible, objective and robust.

When the conditions described above are met, the committee should consider:

  • the impact of giving greater weight to QALYs achieved in the later stages of terminal diseases, using the assumption that the extended survival period is experienced at the full quality of life anticipated for a healthy person of the same age and

  • the magnitude of the additional weight that would need to be assigned to the QALY benefits in this patient group for the cost effectiveness of the technology to fall within the normal range of maximum acceptable ICERs, with a maximum weight of 1.7.
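For illustration only, the sketch below shows how the weight implied by the second bullet might be calculated and checked against the maximum of 1.7; all figures are invented.

```python
# A minimal illustrative sketch only: the QALY weight that would be needed for
# an end-of-life treatment's ICER to fall to an assumed £30,000 per QALY
# maximum acceptable ICER, checked against the maximum permissible weight of 1.7.
incremental_cost = 48_000.00
incremental_qalys = 1.2           # unweighted QALYs gained at the end of life
max_acceptable_icer = 30_000.00

unweighted_icer = incremental_cost / incremental_qalys
required_weight = incremental_cost / (incremental_qalys * max_acceptable_icer)

print(f"Unweighted ICER: £{unweighted_icer:,.0f} per QALY gained")
print(f"Weight needed to reach £{max_acceptable_icer:,.0f} per QALY: {required_weight:.2f}")
print(f"Within the maximum weight of 1.7: {required_weight <= 1.7}")
```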

Non-health effects

Outside the health sector, it is more difficult to judge whether the benefits accruing to non-health sectors represent a cost-effective use of resources, but it may be possible to undertake cost–utility analysis based on measures of social care-related quality of life. The committee should take into account the factors it considers most appropriate when making decisions about recommendations. These could include non-health-related outcomes that are valued by the rest of the public sector, including social care. It is possible that over time, and as the methodology develops (including the establishment of recognised standard measures of utility for social care), more formal methods for assessing cost effectiveness outside the health sector will become available.

Recommendations for interventions informed by cost–benefit analysis

When considering cost–benefit analysis, the committee should be aware that an aggregate of individual 'willingness to pay' (WTP) is likely to exceed public-sector WTP, sometimes by a considerable margin. If a conversion factor has been used to estimate public-sector WTP from an aggregate of individual WTP, the committee should take this into account. In the absence of a conversion factor, the committee should consider the possible discrepancy in WTP when making recommendations that rely on a cost–benefit analysis.

The committee should also attempt to determine whether any adjustment should be made to convert 'ability‑to‑pay' estimates into those that prioritise on the basis of need and the ability of an intervention to meet that need.

The committee should not recommend interventions with an estimated negative net present value (NPV) unless other factors such as social value judgements are likely to outweigh the costs. Given a choice of interventions with positive NPVs, committees should prefer the intervention that maximises the NPV, unless other objectives override the economic loss incurred by choosing an intervention that does not maximise NPV.
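For illustration only, the sketch below shows an NPV calculation with invented cash flows and an assumed 3.5% annual discount rate.

```python
# A minimal illustrative sketch only: the net present value (NPV) of an
# intervention's monetised benefits minus costs, discounted at an assumed
# 3.5% a year. All cash flows are invented; year 0 is the set-up year.
def net_present_value(net_flows_by_year, annual_rate=0.035):
    """Discount yearly (benefits minus costs) to present value and sum them."""
    return sum(flow / (1 + annual_rate) ** year
               for year, flow in enumerate(net_flows_by_year))

# An up-front cost of £500,000 followed by 5 years of net monetised benefit
flows = [-500_000, 90_000, 120_000, 120_000, 120_000, 120_000]
print(f"NPV: £{net_present_value(flows):,.0f}")
```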

Care must be taken with published cost–benefit analyses to ensure that the value of all relevant health and non-health effects has been included. Older cost–benefit analyses, in particular, often consist only of initial costs (called 'costs') and subsequent cost savings (called 'benefits'), and fail to include monetised health effects and all relevant non-health effects.

Recommendations for interventions informed by cost–consequences analysis

The committee should ensure that, where possible, the different sets of consequences do not double count costs or effects. The way that the sets of consequences have been implicitly weighted should be recorded as openly, transparently and accurately as possible. Cost–consequences analysis then requires the decision-maker to decide which interventions represent the best value using a systematic and transparent process. Various tools, such as multi-criteria decision analysis (MCDA), are available to support this part of the process, although attention needs to be given to any weightings used, particularly with reference to the NICE reference case and NICE's principles on social value judgements.
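For illustration only, a simple additive MCDA is sketched below; the criteria, weights and scores are invented, and any weights used in practice would need to be justified and reported transparently.

```python
# A minimal illustrative sketch only: an additive multi-criteria decision
# analysis (MCDA) in which each option's criterion scores are weighted and
# summed. Criteria, weights and scores are invented.
criterion_weights = {"health gain": 0.5, "cost impact": 0.3, "equity impact": 0.2}

option_scores = {
    "option A": {"health gain": 7, "cost impact": 4, "equity impact": 6},
    "option B": {"health gain": 5, "cost impact": 8, "equity impact": 7},
}

for option, scores in option_scores.items():
    weighted_total = sum(weight * scores[criterion]
                         for criterion, weight in criterion_weights.items())
    print(f"{option}: weighted score {weighted_total:.1f}")
```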

Recommendations for interventions informed by cost-effectiveness analysis

If there is strong evidence that an intervention dominates the alternatives (that is, it is both more effective and less costly), it should normally be recommended. However, if 1 intervention is more effective but also more costly than another, then the ICER should be considered. If 1 intervention appears to be more effective than another, the committee has to decide whether it represents reasonable 'value for money' as indicated by the relevant ICER.

The committee should use an established ICER threshold (see the section on recommendations informed by cost–utility analysis). In the absence of an established threshold, the committee should judge what it considers would represent reasonable 'value for money', as indicated by the relevant ICER.

The committee should take account of NICE's principles on social value judgements when making its decisions.

Recommendations for interventions informed by cost-minimisation analysis

Cost minimisation can be used when the difference in effects between an intervention and its comparator is known to be small and the cost difference is large (for example, whether doctors or nurses should give routine injections). If it cannot be assumed from prior knowledge that the difference in effects is sufficiently small, ideally the difference should be determined by an equivalence trial, which usually requires a larger sample than a trial to determine superiority or non-inferiority. For this reason, cost-minimisation analysis is only applicable in a relatively small number of cases.

Recommendations when there is no economic evidence

When no relevant published studies are found, and a new economic analysis is not prioritised, the committee should make a qualitative judgement about cost effectiveness by considering potential differences in resource use and cost between the options alongside the results of the review of evidence of effectiveness. This may include considering information about unit costs, which should be presented in the guideline. The committee's considerations when assessing cost effectiveness in the absence of evidence should be explained in the guideline.

Further considerations

Decisions about whether to recommend interventions should not be based on cost effectiveness alone. The committee should also take into account other factors, such as the need to prevent discrimination and to promote equity. The committee should consider trade‑offs between efficient and equitable allocations of resources. These factors should be explained in the guideline.

7.8 References and further reading

Al-Janabi H, van Exel J, Brouwer W et al. (2016) A framework to include family health spillovers in economic evaluation. Medical Decision Making 36: 176–86

Anderson R (2010) Systematic reviews of economic evaluations: utility or futility? Health Economics 19: 350–64

Ara RM, Brazier J, Peasgood T et al. (2017) The identification, review and synthesis of health state utility values from the literature. Pharmacoeconomics 35 (Suppl 1): 43–55

Arber M, Garcia S, Veale T et al. (2016) Performance of search filters to identify health state utility studies. Value in Health 19: A390–1

Arber M, Garcia S, Veale T et al. (2017) Performance of Ovid MEDLINE search filters to identify health state utility studies. International Journal of Technology Assessment in Health Care 33: 472–80

Brennan A, Chick SE, Davies R (2006) A taxonomy of model structures for economic evaluation of health technologies. Health Economics 15: 1295–1310

Briggs A, Claxton K, Sculpher M (2006) Decision modelling for health economic evaluation. Oxford: Oxford University Press

Caro JJ, Möller J (2016) Advantages and disadvantages of discrete-event simulation for health economic analyses. Expert Review of Pharmacoeconomics and Outcomes Research 16: 327–9

Centre for Reviews and Dissemination (2007) NHS economic evaluation database handbook [online]

Chiou CF, Hay JW, Wallace JF et al. (2003) Development and validation of a grading system for the quality of cost-effectiveness studies. Medical Care 41: 32–44

Cooper NJ, Sutton AJ, Ades AE et al. (2007) Use of evidence in economic decision models: practical issues and methodological challenges. Health Economics 16: 1277–86

Department of Energy and Climate Change (2014) Quality assurance: guidance for models.

Drummond MF, Jefferson TO (1996) Guidelines for authors and peer reviewers of economic submissions to the BMJ. British Medical Journal 313: 275–83

Drummond MF, McGuire A (2001) Economic evaluation in health care: merging theory with practice. Oxford: Oxford University Press

Drummond MF, Sculpher MJ, Claxton K et al. (2015) Methods for the economic evaluation of health care programmes, 4th edition. Oxford: Oxford University Press

Eccles M, Mason J (2001) How to develop cost-conscious guidelines. Health Technology Assessment 5: 1–69

Evers SMAA, Goossens M, de Vet H et al. (2005) Criteria list for assessment of methodological quality of economic evaluations: consensus on health economic criteria. International Journal of Technology Assessment in Health Care 21: 240–5

Golder S, Glanville J, Ginnelly L (2005) Populating decision-analytic models: the feasibility and efficiency of database searching for individual parameters. International Journal of Technology Assessment in Health Care 21: 305–11

Hernandez Alava M, Wailoo A, Pudney S (2017) Methods for mapping between the EQ-5D-5L and the 3L. NICE Decision Support Unit report [online; accessed 7 September 2018]

HM Treasury (2015) The Aqua Book: guidance on producing quality analysis for government. [online; accessed 3 September 2018]

HM Treasury (2013) Review of quality assurance of government analytical models: final report. [online; accessed 3 September 2018]

Husereau D, Drummond M, Petrou S et al. (2013) Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. British Medical Journal 346: f1049

Jackson CH, Bojke L, Thompson G et al. (2011) A framework for addressing structural uncertainty in decision models. Medical Decision Making 31: 662–74

Kaltenthaler E, Tappenden P, Paisley S (2011) NICE DSU Technical support document 13: identifying and reviewing evidence to inform the conceptualisation and population of cost-effectiveness models. [online; accessed 3 September 2018]

Krahn MD, Naglie G, Naimark D et al. (1997) Primer on medical decision analysis: Part 4 – analyzing the model and interpreting the results. Medical Decision Making 17: 142–51

Longworth L, Rowen D (2011) NICE DSU Technical support document 10: the use of mapping methods to estimate health state utility values. [online; accessed 3 September 2018]

National Audit Office (2016) Framework to review models. [online; accessed 3 September 2018]

NHS Centre for Reviews and Dissemination (2001) Improving access to cost-effectiveness information for health care decision making: the NHS Economic Evaluation Database. CRD report number 6, 2nd edition. York: NHS Centre for Reviews and Dissemination, University of York.

National Institute for Health and Clinical Excellence (2012) Social care guidance development methodology workshop December 2011: report on group discussions

National Institute for Health and Care Excellence (2017) Assessing resource impact process manual: guidelines

NICE Decision Support Unit (2011) Technical support document series [accessed 3 September 2018]

Paisley S (2016) Identification of evidence for key parameters in decision-analytic models of cost-effectiveness: a description of sources and a recommended minimum search requirement. Pharmacoeconomics 34: 597–608

Papaioannou D, Brazier JE, Paisley S (2011) NICE DSU Technical support document 9: the identification, review and synthesis of health state utility values from the literature. [online; accessed 3 September 2018]

Philips Z, Ginnelly L, Sculpher M et al. (2004) Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technology Assessment 8: 1–158

Raftery J, editor (1999–2001) Economics notes series. British Medical Journal [accessed 3 September 2018]

Van Hout B, Janssen M, Feng Y et al. (2012) Interim scoring for the EQ‑5D‑5L: mapping the EQ‑5D‑5L to EQ‑5D‑3L value sets. Value in Health 15: 708–15

Wanless D (2004) Securing good health for the whole population. London: HM Treasury

Wood H, Arber M, Isojarvi J et al. (2017) Sources used to find studies for systematic review of economic evaluations. Presentation at the HTAi Annual Meeting. Rome, Italy, June 17 to 21 2017