Appendix

Contents

Data tables

Table 1: Overview of the Barnes et al. (2015) study

Table 2: Summary of results from the Barnes et al. (2015) study

Table 3: Overview of the Morris et al. (2014) study

Table 4: Summary of results from the Morris et al. (2014) study

Table 1 Overview of the Barnes et al. (2015) study

Objectives/hypotheses: To assess the speed and accuracy of calculations using the Mersey Burns app in comparison with a Lund and Browder paper chart when a burn is assessed by medical students and clinicians.

Study design: Validation study.

Setting: Simulated clinical environment.

Intervention and comparator:

Intervention: Mersey Burns app (version not stated).

Comparator: Lund and Browder paper chart and manual fluid calculation (Parkland formula used in the student study but method not stated for the clinician study).

Inclusion/exclusion criteria: Not applicable.

Primary outcomes: Speed and accuracy of total body surface area (TBSA) and fluid calculations, and user satisfaction.

Methods: Two studies were conducted; the clinician study (first study) was used to inform the design of the student study (second study).

Clinician study: Clinicians were shown a photograph of a child with a burn injury and were asked to calculate TBSA and devise a fluid resuscitation and maintenance fluid protocol. A standard paper chart to estimate TBSA was provided. Four of the plastic surgery staff assessed the same burn with the Mersey Burns app. Statistical tests: t tests and analysis of variance.

Student study: Students were given a 1‑hour lecture on burns management and fluid resuscitation involving demonstrations of the Lund and Browder chart and the Mersey Burns app. Students were then presented with a prosthetic simulation of a mixed burn injury and asked to calculate the TBSA and a fluid resuscitation protocol using both the Lund and Browder chart with a calculator and the Mersey Burns app. Fluid calculations based on the TBSA calculated by each student were manually checked by 2 authors. Preference and ease of use were also assessed. The order of the app and the chart was randomised. Statistical tests: chi‑squared and Student's t tests. (The Parkland formula used for the manual fluid calculation is sketched after this table.)

Participants:

Clinician pilot study: 10 plastic surgery consultants and specialist trainees, and 10 emergency doctors.

Student study: 42 senior undergraduate medical students (University of Liverpool) with no previous experience of burns management.

Results:

Clinician study: No significant difference between the app and the paper chart in the calculated TBSA, fluid rate or fluid requirement. Significant difference in variance between the app and the paper chart for total fluid (p<0.05) and background fluid (p<0.0001), with the paper chart showing greater variance. 40% of clinicians were uncertain how to calculate background fluid requirements in children and did not attempt to do so; these responses were not included in the variance calculations.

Student study: No significant difference between the app and the paper chart for the TBSA calculation. Time to completion was significantly faster with the app. Fluid calculations for the first 8 hours and the following 16 hours were correct in 100% of cases using the app, compared with 62% (8‑hour fluids) and 64% (16‑hour fluids) using the paper chart. The total fluid volume calculated was correct in 100% of cases using the app and in 81% of cases using the paper chart. Students favoured the app in the following categories: preference in an emergency setting, confidence in output, accuracy, speed, ease of calculation, overall use (p<0.0001) and shading (p=0.0007).

Conclusions: The Mersey Burns app, when used by medical students with no previous experience of burns management, facilitated quicker and more accurate calculations than the Lund and Browder chart with manual fluid calculation. Students preferred the app.

Abbreviations: CI, confidence interval; TBSA, total body surface area.
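
For reference, the manual comparator in the student study used the Parkland formula with a calculator. The sketch below shows that arithmetic under the commonly used assumptions (4 ml/kg/%TBSA over the first 24 hours, with half of the volume given in the first 8 hours and half over the following 16 hours); the function and parameter names are illustrative and are not taken from the study materials or from the Mersey Burns app.

    # Illustrative Parkland calculation (assumed 4 ml/kg/%TBSA formulation);
    # names are hypothetical and not taken from the study or the app.
    def parkland_fluids(weight_kg: float, tbsa_percent: float,
                        ml_per_kg_per_percent: float = 4.0) -> dict:
        """Return the 24-hour resuscitation volume (ml) and its 8-/16-hour split."""
        total_24h = ml_per_kg_per_percent * weight_kg * tbsa_percent
        first_8h = total_24h / 2   # half of the volume over the first 8 hours
        next_16h = total_24h / 2   # remaining half over the following 16 hours
        return {
            "total_24h_ml": total_24h,
            "first_8h_ml": first_8h,
            "first_8h_rate_ml_per_h": first_8h / 8,
            "next_16h_ml": next_16h,
            "next_16h_rate_ml_per_h": next_16h / 16,
        }

    # Example: a 70 kg adult with a 20% TBSA burn -> 5,600 ml over 24 hours,
    # 2,800 ml (350 ml/h) in the first 8 hours and 2,800 ml (175 ml/h) thereafter.
    print(parkland_fluids(70, 20))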

Table 2 Summary of results from the Barnes et al. (2015) study

Primary outcome: TBSA percentage calculation (%, mean±SD)

Mersey Burns app: clinician study 15.4±1.58 (range 13.2 to 17.0); student study 17.53±5.56 (range 12.4 to 38.5).

Lund and Browder paper chart: clinician study 17.4±3.56 (range 13.5 to 26.8); student study 17.52±5.45 (range 11.5 to 38.0).

Analysis: clinician study, no significant difference (p‑value not reported); student study, p=0.7 (no significant difference).

Selected secondary outcomes:

Cases of correct total fluid calculations when compared with a manual check by the study authors

Mersey Burns app: student study 100%.

Lund and Browder paper chart: student study 81% (34/42).

Clinician study: not reported.

Analysis: student study, 0.17 (95% CI 0.05 to 0.28); the clinician study showed a lower variance in fluid calculations using the app, p<0.05.

Accuracy of fluid rate calculation

Clinician study: not reported.

Mersey Burns app: student study, 100% for the first 8 hours and the following 16 hours.

Lund and Browder paper chart: student study, first 8 hours 62% (26/42), 0.33 (95% CI 0.17 to 0.49); following 16 hours 64% (27/42), 0.33 (95% CI 0.18 to 0.48).

Analysis: clinician study, no significant difference in calculation or variance; student study, first 8 hours p=0.0002, following 16 hours p<0.0001.

Time to completion of calculations (minutes, mean±SD)

Mersey Burns app: 4.6±1.217 (range 3 to 7).

Lund and Browder paper chart: 11.7±2.775 (range 6 to 17).

Analysis: mean difference 7.133 (95% CI 6.09 to 8.18).

Accuracy of calculations

Student study: calculations were more likely to be accurate with the app (p<0.001).

Preferences

Clinician study: not applicable.

Student study: students favoured the app in the following categories: preference in an emergency setting, confidence in output, accuracy, speed, ease of calculation, overall use (p<0.0001) and shading (p=0.0007).

Abbreviations: CI, confidence interval; TBSA, total body surface area.

Table 3 Overview of the Morris et al. (2014) study

Objectives/hypotheses: To compare the accuracy and perceived usability of 2 smartphone apps and a general‑purpose electronic calculator for calculating fluid requirements.

Study design: Validation study.

Setting: Simulated clinical environment. Participants were recruited from November 2012 to February 2013.

Intervention and comparator:

Intervention: Mersey Burns app, version not stated (CE marked by MHRA).

Intervention: uBurn app, version not stated (not licensed for clinical use when the study was conducted).

Comparator: general‑purpose electronic calculator for calculating fluid requirements using the Parkland formula.

Inclusion/exclusion criteria: Not applicable.

Primary outcomes: Speed and accuracy of fluid requirement calculations, ease of use for each method, and preference.

Methods: Bespoke software randomly generated simulated clinical data, randomly allocated the sequence of calculation methods, recorded participants' responses and response times, and calculated error magnitude. Participants calculated fluid requirements for 9 scenarios (3 for each method: calculator, uBurn and Mersey Burns), rated ease of use on a VAS, ranked their preference and made written comments. Data were analysed using ANOVA, Tukey's HSD test, a chi‑squared test to assess the impact of age, and qualitative methods for the free‑text responses. (An illustrative sketch of this study workflow is given after this table.)

Participants: 34 participants of various clinical grades from a regional burns centre: consultant surgeons (5), consultant anaesthetists (2), SpR plastic surgery (8), SHO plastic surgery (12), SHO anaesthetics (1) and nurses (6). All participants had previous experience of performing calculations using the Parkland formula; 82.4% (n=28) routinely used a calculator for determining fluid requirements.

Results: There was no significant difference in the incidence or magnitude of errors. Both apps were significantly faster than the calculator but not significantly different from each other. All methods showed a learning effect (p<0.001). The calculator was rated easiest to use, with a mean score (SD) of 12.3 (2.1), followed by Mersey Burns with 11.8 (2.7) and uBurn with 11.3 (2.7); these differences were not significant. Preference ranking followed the same pattern, with mean rankings (SD) of 1.85 (0.17), 1.94 (0.74) and 2.18 (0.90) for the calculator, Mersey Burns and uBurn respectively (not significant at p=0.05).

Conclusions: Both the uBurn and Mersey Burns apps were faster than the general‑purpose calculator, though this is unlikely to be of clinical significance in practice. All 3 methods demonstrated similar rates and magnitudes of error, and similar evidence of a learning effect. Both apps were deemed to be appropriate methods to aid estimation of fluid requirements for adult burns.

Abbreviations: ANOVA, analysis of variance; HSD, honestly significant difference; SHO, senior house officer; SpR, specialist registrar; VAS, visual analogue scale.
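
As an illustration of the workflow described in the methods above (randomly generated scenarios, a randomised method order, timed responses scored for error magnitude), the sketch below shows one minimal, hypothetical implementation. The scenario ranges, the 4 ml/kg/%TBSA reference value and all names are assumptions for illustration only, not details taken from Morris et al. (2014) or their bespoke software.

    # Hypothetical sketch of the study workflow; not the authors' software.
    import random
    import time

    METHODS = ["calculator", "uBurn", "Mersey Burns"]

    def generate_scenario(rng: random.Random) -> dict:
        """Simulated patient data; weight and %TBSA ranges are assumed."""
        return {"weight_kg": rng.randint(50, 110), "tbsa_percent": rng.randint(10, 50)}

    def reference_total_24h_ml(scenario: dict) -> float:
        """Reference Parkland volume (assumed 4 ml/kg/%TBSA) used to score answers."""
        return 4.0 * scenario["weight_kg"] * scenario["tbsa_percent"]

    def run_participant(rng: random.Random, get_response) -> list:
        """3 scenarios per method, method order randomised; record time and error magnitude."""
        results = []
        for method in rng.sample(METHODS, k=len(METHODS)):
            for _ in range(3):
                scenario = generate_scenario(rng)
                start = time.monotonic()
                answer_ml = get_response(method, scenario)  # participant's calculated volume
                elapsed_s = time.monotonic() - start
                error_ml = abs(answer_ml - reference_total_24h_ml(scenario))
                results.append({"method": method, "seconds": elapsed_s, "error_ml": error_ml})
        return results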

Table 4 Summary of results from the Morris et al. (2014) study

Primary outcome: response time (seconds, mean±SD)

Mersey Burns app: 69.0±35.6.

uBurn app: 71.7±42.9.

Calculator method: 86.7±50.7.

Analysis: p=0.006 (ANOVA). Tukey's HSD test found the calculator to be significantly slower than both uBurn (p=0.013) and Mersey Burns (p=0.017); the difference between the 2 apps was not significant.

Selected secondary outcomes:

Propensity for error

Mersey Burns app: 9.8%.

uBurn app: 7.8%.

Calculator method: 16.7%.

Analysis: p=0.065. There was no evidence of age or gender affecting the results.

Learning effect

There was strong evidence of learning across all 3 methods with response time falling dramatically with repeated attempts (p<0.001).

Preference: score (mean±SD)

Mersey Burns app: 11.8±2.7.

uBurn app: 11.3±2.7.

Calculator method: 12.3±2.1.

Analysis: measured on a VAS ranging from 'very difficult' to 'very easy'; differences were not statistically significant.

Preference: ranking (mean±SD)

Mersey Burns app: 1.94±0.74.

uBurn app: 2.18±0.90.

Calculator method: 1.85±0.17.

Analysis: differences were not statistically significant.

Qualitative analysis

Summary of the strengths and weaknesses of the uBurn app

Strengths

  • Allows patient weight to be entered in 1 kg increments.

  • Pre‑hospital fluid taken into account.

  • The entire calculation was shown on 1 page, so there was no need to navigate back and forth.

  • Emphasised rate of fluid administration rather than total volume.

  • Data can be entered more quickly with a numeric keypad than with a slider/wheel.

Weaknesses

  • Episode of data loss when a tab was accidentally pressed.

  • Does not emphasise importance of excluding erythema in assessment.

  • Does not allow for variations of the original Parkland formula, for example 3 ml/kg/%TBSA.

  • Slider interface made data entry slow and 'fiddly'.

  • Option for multiple units of measurement (kg, lbs, minutes or hours) increased complexity and the possibility of error.

  • Does not emphasise that the app and formulae are only guidelines.

Summary of the strengths and weaknesses of the Mersey Burns app

Strengths

  • Interface was more intuitive and easier to use overall.

  • Option to estimate TBSA by drawing on touch screen.

Weaknesses

  • No option to account for pre‑hospital fluids.

  • Navigating between pages was needed during a calculation.

  • Weight increments of 5 kg could affect accuracy.

  • Appeared to erroneously display the formula as 2 ml/kg instead of 2 ml/kg/%TBSA.

Abbreviations: ANOVA, analysis of variance; HSD, honestly significant difference; TBSA, total body surface area; VAS, visual analogue scale.