3 Approach to evidence generation

3.1 Evidence gaps and ongoing studies

Some technologies have ongoing studies that may address the evidence gaps. Wellmind Health, SelfBack and getUBetter are doing comparative studies, which will complete within the evidence generation period.

Table 1 summarises the evidence gaps and the existing evidence for each technology. Information about evidence status is derived from the external assessment group's report. Evidence not meeting the scope and inclusion criteria is not included.

Table 1 Summary of the evidence gaps and ongoing studies

| Technology | Musculoskeletal disability and quality of life | Adherence | Healthcare resource use | Placement of technology in the clinical pathway |
| --- | --- | --- | --- | --- |
| getUBetter (getUbetter) | No relevant evidence identified; ongoing study | No relevant evidence identified; ongoing study | No relevant evidence identified; ongoing study | No relevant evidence identified |
| Hinge Health (Hinge Health) | Limited information available | Limited information available | No relevant evidence identified | No relevant evidence identified |
| Kaia (Kaia Health) | Limited information available | Limited information available | Limited information available | No relevant evidence identified |
| Pathway through Pain (Wellmind Health) | No relevant evidence identified | No relevant evidence identified; ongoing study | No relevant evidence identified; ongoing study | No relevant evidence identified |
| SelfBack (SelfBack Consortium) | Limited information available | Limited information available | No relevant evidence identified; ongoing study | Limited information available |

3.2 Data sources

This topic is likely to need primary data collection for certain outcomes. Information is also needed from primary and secondary care services. Data collection can be supported using routinely collected data.

Local or regional data collections, such as the subnational secure data environments (SDEs), could be used to collect data to address the evidence gaps. SDEs are data storage and access platforms that bring together many sources of data, such as from primary and secondary care, to enable research and analysis. Subnational SDEs are designed to be agile and can be adapted to suit the needs of new projects, for example, to help collect pain scores and quality-of-life data. The data environment of an SDE supports linkage, which may give researchers a more comprehensive view of medical history, diagnoses, treatments and outcomes related to low back pain, as well as resource use. The West Midlands subnational SDE specialises in collecting data about musculoskeletal health and may be particularly well suited to this topic.

Some data may be generated through the technologies themselves, such as data on starting treatment and engagement outcomes. This data can be integrated with other data collected.

The quality and coverage of real-world data collections are of key importance when they are used to generate evidence. NICE's real-world evidence framework provides detailed guidance on assessing the suitability of a real-world data source to answer research questions. Active monitoring and follow-up through a central coordinating point is an effective and viable way of ensuring good-quality data with high coverage.

3.3 Evidence collection plan

A mixed methods approach is suggested to address the evidence gaps, for example, a prospective cohort or a before and after study in combination with a qualitative survey. Such a study should include people with low back pain who would be expected to be offered one of the technologies in the real world. It should compare the use of the technologies alongside standard care with standard care alone and include an embedded qualitative study.

The technologies may be applied at different points in the care pathway, for example, primary care or alongside specialist services such as physiotherapy. Depending on the setting, different experimental designs may offer greater robustness:

  • When there is limited variation and well-defined standard care across services, a prospective comparative cohort study is suggested.

  • When there is considerable variation between services, a before and after study is suggested.

Data collection should follow a predefined protocol. Quality assurance processes should be put in place to ensure the integrity and consistency of data collection. See NICE's real-world evidence framework, which provides guidance on the planning, conduct and reporting of real-world evidence studies. It also provides best practice principles for robustly designing real-world evidence studies when assessing comparative treatment effects.

Prospective comparative cohort study

In this type of study, data should be collected from healthcare services where the technology is offered for low back pain and compared with data from similar services where the technology is not offered. It is important that people in both types of service are included and followed up from the point at which they would be offered the technology. This should be in line with the intended use of the technology in the clinical pathway.

High-quality data on patient characteristics is needed to correct for any important differences between comparison groups, for example, using propensity score methods. Important confounding factors should be identified with input from clinical experts during protocol development.
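As an illustration only, the sketch below shows one way propensity score methods could be used to balance baseline characteristics between the technology and standard care cohorts, here using inverse probability of treatment weighting in Python. The column names, confounders and numbers are assumptions for illustration and are not part of this plan.

```python
# A minimal sketch (illustrative only) of inverse probability of treatment weighting.
# Column names such as "uses_technology" are assumptions, not part of the NICE plan.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_weights(df: pd.DataFrame, confounders: list, group_col: str = "uses_technology") -> pd.Series:
    """Estimate propensity scores and return inverse probability of treatment weights."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[confounders], df[group_col])
    propensity = model.predict_proba(df[confounders])[:, 1]
    # People offered the technology are weighted by 1/p, comparators by 1/(1 - p)
    treated = df[group_col]
    return treated / propensity + (1 - treated) / (1 - propensity)

# Hypothetical baseline data with age, sex and comorbidity count as confounders
cohort = pd.DataFrame({
    "age": [34, 51, 47, 62, 29, 55],
    "sex_male": [1, 0, 1, 0, 0, 1],
    "comorbidities": [0, 2, 1, 3, 0, 1],
    "uses_technology": [1, 0, 1, 0, 1, 0],
})
cohort["ipw"] = ipw_weights(cohort, ["age", "sex_male", "comorbidities"])
print(cohort)
```

Weighted (or matched) outcome comparisons could then be made, with clinical experts advising on which confounders matter most during protocol development.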

Before and after study

In a before and after study, data should be collected and compared before and after implementing the intervention in the same setting. This is to ensure that service performance and configuration are accounted for in the final outcomes and to reduce the risk of bias.

In this study design, after an enrolment period, data should be collected for people having standard care in the period before the technology is implemented. The data collection period should be long enough to ensure that there is sufficient follow-up data for the standard care group. The digital technology should then be implemented in the service. Data should then be collected from people having care through the technologies.

Companies should provide clear descriptions of the services and settings in which the study is done. Such a study could be done at a single centre or, ideally, replicated across multiple centres. This would show how the technology can be implemented across a range of services and so be representative of the variety in the NHS. Outcomes may reflect other changes that occur over time in the population, unrelated to the interventions. Additional robustness could be achieved by collecting data in a centre that has not implemented the technology but is as similar as possible (in terms of clinical practice and patient characteristics) to a site in which the technology is being used. This could control for changes over time that might have occurred anyway. As described above, it is important that participants are followed up from the point at which they would be offered the technology, and high-quality data on patient characteristics is also needed to correct for any important differences between comparison groups.
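To illustrate how a concurrent comparator site could adjust for background change over time, the sketch below works through a simple difference-in-differences calculation, which is one common way to operationalise this design. The scores are invented, illustrative numbers, not study data.

```python
# Illustrative difference-in-differences sketch with invented numbers (not study data).
# Mean improvement in MSK-HQ score per person, before and after the implementation date.
implementing_site = {"before": 4.0, "after": 7.5}  # site that adopts the technology
comparator_site = {"before": 4.2, "after": 5.1}    # similar site that does not adopt it

change_implementing = implementing_site["after"] - implementing_site["before"]  # 3.5
change_comparator = comparator_site["after"] - comparator_site["before"]        # about 0.9
estimated_effect = change_implementing - change_comparator                      # about 2.6

# The comparator site's change estimates what would have happened anyway (the secular
# trend), so subtracting it isolates the change attributable to introducing the technology.
print(f"Estimated effect adjusted for secular trend: {estimated_effect:.1f}")
```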

Qualitative survey

Feedback should be collected through a survey or structured interviews with people who have low back pain using the technologies.

The robustness of survey results depends on:

  • comprehensive distribution across people who are eligible

  • the sample of respondents being representative of the population of potential users.

3.4 Data to be collected

Prospective comparative cohort or before and after study

The following information should be collected (an illustrative per-participant record structure is sketched after this list):

  • eligibility criteria (for example, the indication for referral) and point of starting follow up, which should be consistent between comparison groups

  • type of low back pain at baseline and how this was defined (that is, acute or chronic; for example, chronic back pain may be defined as symptoms lasting 3 months or longer)

  • type of healthcare professional referring the person with back pain to the technology

  • MSK‑HQ at baseline, 30 days, and 6 and 12 months

  • ideally, EQ‑5D‑5L at baseline, 30 days, and 6 and 12 months

  • use of the technology including:

    • number of people offered the technology

    • number and proportion referred

    • number and proportion who started using the technology

    • engagement at 30 days and over time

    • number and proportion who stopped by 30 days and over time

    • reasons why people stopped using the technologies (for example, whether engagement stopped because of improvements in symptoms, lack of improvement or other reasons)

  • information on healthcare resource use, collected at 30 days, and 6 and 12 months:

    • number of GP appointments per patient

    • number of physiotherapist appointments per patient

    • number of visits to the emergency department per patient

  • additional prespecified resource use outcomes, which could focus on areas in which the technologies are expected to have the most benefit, for example:

    • number of occupational therapist appointments

    • number of cognitive behavioural therapy (CBT) or acceptance and commitment therapy (ACT) sessions

    • number of secondary care appointments

    • number of people starting or stopping pharmacological treatment

    • number of people with imaging referrals

    • surgical referrals or surgical intent

  • patient characteristics at baseline, including:

    • important confounders such as age, sex, comorbidities and concomitant interventions

    • other characteristics that may be related to likelihood of choosing to access the technology, for example, socioeconomic status, language or ethnicity

  • the number and type of adverse events reported while using the technology.
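As a purely illustrative aid to protocol development, the sketch below shows one possible per-participant record structure covering the items listed above. The field names, types and follow-up labels are assumptions for illustration, not a NICE data specification.

```python
# A purely illustrative per-participant record; field names are assumptions, not a NICE specification.
from dataclasses import dataclass, field
from typing import Optional

SCORE_POINTS = ("baseline", "day_30", "month_6", "month_12")      # MSK-HQ and EQ-5D-5L
RESOURCE_POINTS = ("day_30", "month_6", "month_12")               # healthcare resource use

@dataclass
class ParticipantRecord:
    participant_id: str
    pain_type: str                      # "acute" or "chronic" (for example, symptoms for 3 months or longer)
    referring_professional: str         # for example, "GP" or "physiotherapist"
    offered_technology: bool = False
    started_technology: bool = False
    stopped_by_day_30: Optional[bool] = None
    stop_reason: Optional[str] = None   # for example, symptom improvement or lack of improvement
    msk_hq: dict = field(default_factory=lambda: {p: None for p in SCORE_POINTS})
    eq_5d_5l: dict = field(default_factory=lambda: {p: None for p in SCORE_POINTS})
    gp_appointments: dict = field(default_factory=lambda: {p: None for p in RESOURCE_POINTS})
    physio_appointments: dict = field(default_factory=lambda: {p: None for p in RESOURCE_POINTS})
    emergency_visits: dict = field(default_factory=lambda: {p: None for p in RESOURCE_POINTS})
    age: Optional[int] = None
    sex: Optional[str] = None
    comorbidities: list = field(default_factory=list)
    adverse_events: list = field(default_factory=list)

# Example record for one hypothetical participant
record = ParticipantRecord(participant_id="P001", pain_type="chronic", referring_professional="GP")
record.msk_hq["baseline"] = 32
```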

Qualitative survey study

Outcomes to be collected from people with the condition:

  • Perceptions from people with back pain about whether the technologies help alleviate pain and help them to return to normal daily activity

  • Satisfaction, engagement and accessibility of the technology, including barriers encountered by people using and continuing to use the technologies

Information about the technologies:

  • Information about how the technologies were developed

  • Information about how people are referred to the technology and at what point in their clinical pathway

  • Information about any updates to the technologies

See the NICE evidence standards framework for digital health technologies.

3.5 Evidence generation period

This will be about 3 years to allow for setting up, implementation, data collection, analysis and reporting.