Process and methods
- Service delivery
- Primary data analytics
- Chapter 2 – the scope
- Chapter 3 – decision-making committees
- Chapter 4 – developing review questions and planning the evidence review
- Chapter 5 – identifying the evidence
- Chapter 6 – reviewing the evidence
- Chapter 7 – economic evaluation
- Chapter 8 – linking to other guidance
- Chapter 9 – writing the guideline
- Chapter 10 – validation
- Chapter 12 – implementation
- Chapter 13 – surveillance
- Chapter 14 – updating guidelines
October 2018: Major changes from the 2014 guidelines manual are shown below.
Methods for developing recommendations on service delivery have been incorporated into the manual, with information added to the chapters on scoping, searching, evidence submission and economics. We have added a new appendix (appendix A) with detailed advice on developing review questions in this area.
NICE is currently exploring the place of primary data analytics in our work and further advice will be shared as this develops.
We encourage developers to list areas where evidence is lacking and details of stakeholders who might provide information in a call for evidence or who might identify expert witnesses.
We are clear that guidelines don't usually cover issues that are the responsibility of other bodies, such as the Department of Health and Social Care, NHS England or Public Health England.
We remind developers that guidelines don't usually cover training requirements. However, recommendations may cover the need for specific knowledge and skills for a particular aspect of care.
We encourage developers to think about other related NICE guidance in development and promote cross-representation across committees when topics are closely related.
We include advice for developers about seeking expert testimony from children and vulnerable groups, including use of video recording and giving testimony in private session.
We have made changes to ensure consistency with the updated code of practice for declaring and dealing with conflicts of interest. We have also clarified the involvement of tobacco companies as respondents rather than stakeholders.
We indicate that core outcome sets (the COMET database is one source) should be used where their quality and validity make them suitable, and we give standards for assessing that suitability.
We include review questions that assess diagnostic prediction models and prognostic prediction models and link to external sources of further advice.
We have included a standard template for review protocols as an appendix (appendix I). Registration of the review protocol on the PROSPERO database is now mandatory.
We have included new sources, tools and approaches to searching, as well as a new prompt for identifying MHRA safety information.
We now recommend GRADE as the preferred approach to quality assessment for all guidelines, including those covering public health and social care topics. We recommend GRADE-CERQual for qualitative evidence.
We now have preferred 'checklists' for assessing the quality of the evidence (see appendix H). Use of any other checklist should be agreed in advance with NICE staff with a quality assurance role.
We now indicate that an agreed proportion of papers (no less than 10%) should be sifted by 2 analysts, because duplicate sifting of all papers is time consuming and there are other ways of ensuring that relevant papers aren't missed. We have also included details of using a machine learning algorithm for priority screening.
We include advice on the minimum outputs and reporting standards for network meta-analyses (see appendix K) and how these apply to developing NICE guidelines.
For base-case analysis, we recommend a cost–utility analysis using cost per QALY. This will allow more consistent decision-making about costs across guidelines.
We have clarified that the same levels of evidence and considerations should be used for disinvestment and investment decisions.
We have added information on end-of-life criteria, in line with technology appraisal methods.
We advise linking to technology appraisal recommendations in the NICE Pathway rather than incorporating them verbatim in a guideline.
We have removed the details on updating technology appraisals within a guideline and have added a link to the policy from the Department of Health and Social Care.
We include advice for developers on what to do when similar review questions are covered in other guidelines. Options include linking to the recommendations in the other guideline, using the evidence review to make new recommendations or doing a new systematic review.
We have simplified the advice on writing guidelines; a separate writing guide with more details and examples will be published.
The section on supporting shared decision-making has been clarified, and includes information on when a separate decision aid could be produced. The writing guide includes more detail on summarising evidence in the guideline to support a professional's discussion with the person making the decision.
We have added new advice on what to do when recommendations are made for the use of CE-marked devices outside their instructions for use. This includes standard footnote wording.
We have clarified advice on recommendations on the off-label use of medicines.
We have defined the types of 'additional consultation' that can inform development. More information about these changes is in appendix B.
Information on how we work and the tools we produce has been updated. A new section on how we work with other organisations, including endorsing resources, has also been added.
We highlight the role of the new Guideline Recommendations Implementation Panel.
We have focused the process on event-driven checks of guidelines as well as a standard check every 5 years.
We plan themed surveillance of guidelines covering similar populations or settings to ensure that the process is efficient.
We have revised the process for considering whether to remove a guideline from the static list.
We have indicated that we may refresh some recommendations following an event-driven or standard check.
We have added information about the quality assurance of the surveillance process.