Process and methods
11 The Committee's assessment of efficacy and safety evidence
This section describes how the Committee weighs the evidence presented to it. In particular, it explores specific factors underpinning the Committee's consideration of efficacy (section 11.4) and safety (section 11.5). This section also describes how evidence and commentary received as part of the consultation process are considered by the Committee when producing its final recommendations.
The Committee makes recommendations about the procedure on the basis of the evidence relating to its efficacy and safety. Both efficacy and safety can be affected by certain variables about which published evidence provides little or no helpful information. For example, the individual operator and the different devices used to do procedures are often important in this context.
The outcomes of many procedures are influenced by the training, experience and aptitude of the operator. This applies particularly to procedures that need great technical skill, such as complex laparoscopic operations. Many procedures are said to have a 'learning curve'; this can affect outcomes in published series used as evidence, as well as the outcomes for clinicians who start doing new procedures.
Specialist advisers are a valuable source of advice about procedures that present technical challenges or for which special training is desirable. These considerations may influence the Committee's recommendations about the procedure, and are often translated into recommendations about training.
Some procedures need to be carried out with a particular device or involve implanting a device. This introduces important variables that need to be taken into account in NICE guidance:
Evidence may only be available for a particular device or devices, even though others may be in use.
New devices may be introduced into the market at any time during the development of the guidance, or after it has been published.
The technology of devices may advance rapidly. This means that the efficacy and safety outcomes reported in the published literature may not accord with current practice using more technologically advanced devices, and continuing technological progress may alter outcomes again.
The Committee makes recommendations based on the available evidence, while bearing in mind that it is evaluating the procedure rather than a specific device. The guidance may refer to the potentially important influence of different devices on the safety or efficacy of the procedure, or to rapid technological developments described by the specialist advisers, companies or other sources.
Comparison of a procedure's efficacy with that of established procedures is appropriate when they are used to treat the same condition and there are well established alternatives. This also applies to safety: the frequency and severity of complications of any established procedure are used as a benchmark against which the complications associated with a new procedure are judged.
The relevance to the Committee's decision of comparative efficacy varies, depending on what other procedures or treatments are in use for the condition. Typical scenarios are:
There are a number of different established procedures. Judgements about efficacy are based on an overview of the available evidence on the efficacy of the established procedures, but there is no need for any specific comparisons.
The procedure is intended to replace a single, well‑established procedure. Comparative evidence is needed to show that the new procedure is at least as efficacious as the existing one (also taking into account other advantages that the new procedure may have for patients).
The procedure is an addition to an established one, intended to enhance efficacy. Comparative evidence is needed to show that adding the new procedure to the established one increases efficacy.
No procedure or treatment exists for the condition, or those that are used do not have proven efficacy. Comparative efficacy against other active treatments cannot be considered; instead, any comparison must be against the natural history of the condition and/or sham (placebo).
Comparison of efficacy is straightforward when randomised studies comparing established and new procedures are available. The aim of such comparison is to ensure that a new procedure works at least as well as established treatments; evidence of superior efficacy is neither necessary nor usually expected. A new procedure may have other advantages, such as being less invasive or allowing faster recovery. The most important aspect of any comparison of the safety profile of the new procedure with that of established procedures is to ensure that the new procedure is not less safe.
Often, however, direct comparisons are not available, and judgements about the efficacy and safety of a new or established procedure need to be made indirectly or on the basis of the opinions of specialist advisers.
Comparison can be particularly difficult when published data about an established procedure are limited. For some common and well‑established procedures, there is little evidence on their efficacy for certain indications, or on their safety profile, particularly about the incidence of uncommon but serious complications.
The Committee gives precedence to outcome measures directly relevant to patients and their quality of life when making decisions relating to efficacy.
The Committee considers the nature of benefits, their magnitude, the ways in which they can be assessed and their duration. All these criteria need to be considered in the context of the natural history of the condition being treated or investigated, and compared with outcomes after established treatment options. There also needs to be evidence of sufficient benefit to justify subjecting a patient to a procedure and its risks. Minor improvements in outcome measures that do not seem to translate into real clinical improvements will not support a decision that a procedure is efficacious.
Evidence of improved survival, reduced morbidity or improved quality of life carries more weight in decision‑making than surrogate outcomes (such as those shown by imaging or biochemical markers). The Committee may identify outcome measures for the procedure that it considers to be particularly informative and suggest these for future research and audit.
IPAC often considers evidence from single‑arm studies such as case reports and case series. Occasionally, the Committee may decide that more information is needed from studies that compare an active treatment against a sham procedure or standard treatment. Then, guidance may recommend that comparative studies are done.
When NICE develops guidance on a diagnostic procedure it is important to ensure that the assessment encompasses the value to patients of the diagnostic information generated by the procedure. The programme does not have the remit or methods to evaluate subsequent treatment in the management pathway, which may be influenced by the results of a diagnostic test. However, to arrive at a reasonable view of the efficacy of the diagnostic test used in the procedure, the Committee takes into account whether it can reasonably be considered to change clinical decision‑making and subsequent management in a way that is likely to benefit patients.
The scientific literature for diagnostic tests consists largely of studies of analytical and clinical validity. Evidence on the impact of diagnostic technologies on final patient outcomes (clinical utility) is generally limited. To conduct an assessment for interventional procedures guidance, NICE seeks specialist advice on the clinical utility of the diagnostic procedure so it can provide information on whether the diagnostic procedure can plausibly inform clinical decision‑making and so benefit patients. The Committee considers analytical and clinical validity data on the diagnostic procedure only in the context of advice that it has plausible clinical utility.
Evidence of benefit in the short term is almost always important. A procedure that does not provide benefit in the short term is unlikely to be considered efficacious. For some procedures, evidence of short‑term efficacy may be the only requirement. For example, for a new procedure to treat an acute illness, the expectation of long‑term benefit is implicit once the condition has been treated and the patient has recovered.
Assessing the durability of benefits can be a problem for procedures that have not been used long enough to allow lengthy follow‑up studies, which can mean that the evidence on long‑term efficacy is small in quantity or of poor quality. Examples of procedures that must have durable results to be considered efficacious are insertion of prosthetic joint components, procedures to relieve urinary or faecal incontinence, and procedures intended to cure cancers.
No procedure is completely safe; all interventions are associated with risks. Decisions relating to safety need to be made in the context of the natural history of the condition being treated or investigated, and the alternative treatments available.
It is important to point out the difference between a recommendation based on the Committee's assessment that the evidence on safety is adequate and the concept that a procedure is safe. If the Committee considers that the evidence on safety is adequate in quantity and quality, this means that there were sufficient data to inform a decision about safety. A procedure may nevertheless be associated with significant risks of serious complications; the judgement is that enough is known about those complications and their frequency to construct recommendations for the procedure's use.
When assessing safety, both the seriousness and frequency of adverse events are considered. A low risk of very damaging complications is generally considered to be a more significant safety issue than a high risk of minor complications. Most importantly, patients (or their parents or carers, when appropriate) should be informed and should understand the risks when offered the procedure. This always means telling them the known risks, and it may also mean telling them that there is uncertainty about the frequency of complications – in particular uncommon and serious ones. This consideration informs the Committee's recommendations on consent.
The number of reported cases considered adequate to make or support a decision relating to the safety of a procedure is influenced by:
the natural history of the condition
the prevalence of the condition
the expectation of likely adverse events.
For a procedure that is used to treat a rare but rapidly fatal condition, safety data based on only a few reported cases may be considered adequate. In contrast, if a procedure is for a common condition that is not a serious threat to health, and theoretical concerns have been raised about a possible uncommon but serious complication, very large numbers of well‑reported cases may be needed to adequately assess its safety.
Decisions relating to safety are strongly influenced by the completeness with which adverse events appear to have been reported in the available studies and case series. Some studies make clear that safety outcomes have not been reported at all, whereas other studies present complications in great detail (to the extent that some of these outcomes may be judged as expected sequelae of the procedure). Particular difficulties arise in making decisions about safety when:
studies do not report any adverse events but fail to make clear whether none occurred, or whether events were simply not recorded or reported
specialist advisers refer to specific theoretical complications as matters for concern (and even cite anecdotal complications known to them), but there are no reports of these complications in the published literature
the frequency of adverse events varies markedly between studies
several different devices may be used for the procedure.
In making decisions relating to safety, the Committee generally adopts a proportionate risk‑averse approach: when studies vary, it prefers to give weight to higher complication rates and to advice that raises concerns, rather than to lower complication rates and more optimistic advice. The Committee also takes into account the quality of the evidence base, because variation in safety findings between studies may be related to study quality. A precautionary approach is especially important when considering procedures for long‑term conditions with a good overall prognosis.
The Committee takes account of the impact of complications on patients' quality of life, informed by advice from both patients and specialists. Lay members of the Committee in particular are able to make contributions on this matter.
Short‑term safety is always important. It includes complications (morbidity and mortality) during the procedure and shortly afterwards. Interventional specialties commonly use the first 30 days after the procedure as the interval for 'postoperative complications' in reported series.
Some procedures pose risks of adverse events that only become apparent in the longer term. The likelihood of these occurring may either be suggested by the nature of the procedure (for example, insertion of a prosthesis) or raised by specialist advisers on the basis of their experience. Lack of long‑term safety data is a frequent problem. If there is uncertainty or concern about long‑term safety in the context of the severity of the condition being treated, the Committee may decide that the safety data are altogether inadequate. If the risk of delayed adverse events is only theoretical or sufficiently remote, the decision may be simply to advise reporting of these if and when they occur, to inform future practice.