9 The evidence
The evidence statements from 3 reviews are provided by external contractors (see Supporting evidence).
This section lists how the evidence statements and the expert papers link to the relevant recommendations. It also sets out a brief summary of findings from the economic analysis and the fieldwork.
The evidence statements are short summaries of evidence from a review, report or paper (provided by an expert in the topic area). Each statement has a short code indicating which document the evidence has come from. The letters in the code refer to the type of document the statement is from; the numbers refer to the document number and the number of the evidence statement within that document.
Evidence statement number 1.1 indicates that the linked statement is numbered 1 in review 1. Evidence statement number 2.1.3 indicates that the linked statement is numbered 1.3 in review 2. Evidence statement number 3.3.4 indicates that the linked statement is numbered 3.4 in review 3. EP1 indicates that expert paper 1 is linked to a recommendation.
The reviews, expert papers and economic analysis are available online. Where a recommendation is not directly taken from the evidence statements, but is inferred from the evidence, this is indicated by IDE (inference derived from the evidence).
Recommendation 1: EP3, EP4, EP6–9, EP13
Recommendation 2: evidence statements 3.1.1, 3.1.2, 3.1.3, EP10, EP11
Recommendation 3: EP4–6, EP9, EP13
Recommendation 4: EP1, EP3–5, EP14
Recommendation 5: EP1–3, EP5, EP14
Recommendation 6: EP1–3
Recommendation 7: evidence statements 1.2, 1.4, 1.6, 1.7, 1.9, 1.10–1.21, 2.1.8, 2.3.7, 2.3.11, 2.3.13, 2.3.17, 2.4.4, 2.4.5, 2.4.8, 2.5.5, 2.5.6, 2.5.7, 2.5.9, 2.5.11, 2.5.15, 3.3.3, 3.3.4, 3.3.6, 3.3.7, EP14
Recommendation 8: EP1–3, EP5, EP6, EP9
Recommendation 9: EP10–12
Recommendation 10: evidence statements 1.2, 1.4, 1.6, 1.7, 1.9, 1.10–1.21, 2.4.4, 2.4.5, 2.5.5, 2.5.6, 3.3.3, 3.3.4, 3.3.6, 3.3.7, EP14
Recommendation 11: EP5, EP10–12
Recommendation 12: evidence statements 3.1.1–3.1.3, 3.2.1, 3.2.2, 3.3.1–3.3.9, EP5, EP10–12
Recommendation 13: EP5, EP10–12
Recommendation 14: EP5, EP10–12
Recommendation 15: EP1–3
Recommendation 16: EP1–3, EP14
Recommendation 17: IDE
Review 1 identified 79 interventions dealing with 6 behaviours: smoking, diet, physical activity, alcohol, sexual health and multiple health targets. All interventions fall well below the accepted £20,000–£30,000 cost per quality-adjusted life year (QALY) threshold. However, sensitivity analyses suggest that some may have incremental cost-effectiveness ratios (ICERs) above this threshold. In this review, sexual health interventions were the least cost effective, but no other characteristics or behaviour change techniques were related to cost-effectiveness estimates.
Review 2 identified 251 interventions across the 6 behaviours, of which 102 provided cost–utility estimates (£/QALY). Using the upper estimate and lower threshold (the most cautious approach), 85% of interventions were identified as cost effective. Using the lower estimates, smoking cessation interventions were significantly more cost effective than interventions targeting multiple behaviours.
Across all interventions, those targeting the general population had better cost–utility results and were more likely to be cost effective than those aimed at vulnerable populations. Regression analyses across, as well as within, behaviours suggest there is little or no consistent association between the presence of an individual behaviour change technique (or cluster of behaviour change techniques) and an intervention being cost effective.
The authors of the reviews state that the findings need to be interpreted cautiously given:
the different search strategies for reviews 1 (based on interventions already assessed by NICE as cost effective) and 2 (based on the search strategy used for evidence review 2)
reliance on incomplete information in published papers
heterogeneity in economic analyses
lack of consensus for a definition of 'choice architecture'
bias in reporting of study findings.
Fieldwork aimed to test the relevance, usefulness and feasibility of putting the recommendations into practice. The PDG considered the findings when developing the final recommendations. For details, see Behaviour change (partial update of PH6) – fieldwork report.
Fieldwork participants who are involved in, or support, behaviour change activities were fairly positive about the recommendations and their potential to help change an individual's behaviour. This included practitioners, commissioners, service providers, health and wellbeing board members, national bodies (from both the public and private sectors), royal colleges and academics.
Many participants welcomed the emphasis on evidence-based investment, but were keen to ensure this should not stifle innovation or narrow the options available to commissioners.
They described the commissioning recommendations as bold and ambitious. While recognising that they may be difficult to implement in the current climate, participants did not believe the approach should be diluted.
There were some concerns that independent evaluation may be viewed as unaffordable. Participants suggested that evaluation should be built into the original design or service specification to ensure it takes place.
Fieldwork participants did not think the recommendations offered a new approach, but they agreed that the measures had not been implemented universally. They believed wider, more systematic implementation would be achieved if there was a clearer definition of the techniques and training requirements for staff and commissioners.