Process and methods

Appendix 2 Checklists

Please note that these checklists have not been previously used in guideline development.

1.1 Checklist: cost–benefit analysis (CBA) studies

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

1.1 Is there a well-defined question?

1.2 Is there a comprehensive description of alternatives?

1.3 Was one of the alternatives designated as the comparator against which the intervention was evaluated?

1.4 Is the perspective stated?

1.5 Are all important and relevant costs and outcomes for each alternative identified?

Check whether the study considers only money costs, with the 'benefits' being savings of future money costs.

1.6 Has the effectiveness of the intervention been established?

1.7 Are costs and outcomes measured accurately?

1.8 Are costs and outcomes valued credibly?

1.9 Have all important and relevant costs and outcomes for each alternative been quantified in money terms?

If not, state which items were not quantified, and the likely extent of their importance in terms of influencing the benefit/cost ratio.

1.10 Are costs and outcomes adjusted for differential timing?

1.11 Has at least 1 of net present value, benefit/cost ratio and payback period been estimated?

1.12 Were any assumptions of materiality made?

1.13 Were all assumptions reasonable in the circumstances in which they were made, and were they justified?

1.14 Were sensitivity analyses conducted to investigate uncertainty in estimates of cost or benefits?

1.15 To what extent do study results include all issues of concern to users?

1.16 Are the results generalisable to the setting of interest in the review?

  • Country differences.

  • Question of interest differs from the CBA question being reviewed.

1.17 Overall assessment: Minor limitations / Potentially serious limitations / Very serious limitations

Other comments:

Notes on Checklist: cost–benefit analysis (CBA) studies

Definition:

Cost–benefit analysis is one of the tools used to carry out an economic evaluation. The costs and benefits are measured using the same monetary units (for example, pounds sterling) to see whether the benefits exceed the costs.

Source:

Adapted from Methods for the development of NICE public health guidance (third edition, 2012).
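The discounting asked about in item 1.10 and the summary measures named in item 1.11 (net present value, benefit/cost ratio and payback period) can be illustrated with a short sketch. The cash flows and the 3.5% discount rate below are hypothetical, chosen only to show how the measures relate to one another; they are not drawn from NICE methods or from any study.

```python
# Illustrative sketch only: all cash flows and the discount rate are hypothetical.
def discount(values, rate):
    """Present value of a series of yearly amounts, with year 0 undiscounted."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

def appraise(costs, benefits, rate=0.035):
    """Return net present value, benefit/cost ratio and payback period."""
    pv_costs = discount(costs, rate)
    pv_benefits = discount(benefits, rate)
    npv = pv_benefits - pv_costs          # benefits exceed costs if npv > 0
    bcr = pv_benefits / pv_costs          # benefits exceed costs if bcr > 1
    # Payback period: first year in which the cumulative (undiscounted) net
    # benefit becomes non-negative; None if it never does.
    cumulative, payback = 0.0, None
    for year, (c, b) in enumerate(zip(costs, benefits)):
        cumulative += b - c
        if payback is None and cumulative >= 0:
            payback = year
    return npv, bcr, payback

# A hypothetical intervention costing 1,000 up front, with benefits arriving later:
npv, bcr, payback = appraise(costs=[1000, 100, 100], benefits=[0, 600, 700])
```

With these invented figures the net present value is positive and the benefit/cost ratio exceeds 1, so the hypothetical benefits exceed the costs once both are discounted to present values; the payback period is 2 years.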

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Overall assessment:

The overall methodological study quality of the economic evaluation should be classified as 1 of the following:

  • Minor limitations: The study meets all quality criteria, or fails to meet 1 or more quality criteria but this is unlikely to change the conclusions about cost benefit.

  • Potentially serious limitations: The study fails to meet 1 or more quality criteria, and this could change the conclusions about cost benefit.

  • Very serious limitations: The study fails to meet 1 or more quality criteria, and this is highly likely to change the conclusions about cost benefit. Such studies should usually be excluded from further consideration.

1.2 Checklist: cost–consequence analysis (CCA) studies

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

1.1 Is there a well-defined question?

1.2 Is there a comprehensive description of alternatives?

1.3 Was one of the alternatives designated as the comparator against which the intervention was evaluated?

1.4 Is the perspective stated?

1.5 Who determined the set of outcomes that were collected to act as consequences?

1.6 Are all important and relevant costs and outcomes for each alternative identified?

1.7 Has effectiveness been established?

1.8 Are costs and outcomes measured accurately?

1.9 Are costs and outcomes valued credibly?

1.10 Have all important and relevant costs and outcomes for each alternative been quantified?

  • If not, state which items were not quantified.

  • Were they still used in the CCA and how were they used?

1.11 Are all costs and outcomes adjusted for differential timing?

1.12 Were any assumptions of materiality made to restrict the number of consequences considered?

1.13 Was any analysis of correlation between consequences carried out to help control for double counting?

1.14 Was there any indication of the relative importance of the different consequences by a suggested weighting of them?

1.15 Were there any theoretical relationships between consequences that could have been taken into account in determining weights?

1.16 Were the consequences considered one by one to see if a decision could be made based on a single consequence?

1.17 Were the consequences considered in subgroups of all the consequences in the analysis to see if a decision could be made based on a particular subgroup?

1.18 Was an MCDA (multiple criteria decision analysis) or other published method of aggregation of consequences attempted?

1.19 Were all assumptions reasonable in the circumstances in which they were made, and were they justified?

1.20 Were sensitivity analyses conducted to investigate uncertainty in estimates of cost or benefits?

1.21 How far do study results include all issues of concern to users?

1.22 Are the results generalisable to the setting of interest in the review?

  • Country differences.

  • Question of interest differs from the CCA question being reviewed.

1.23 Overall assessment: Minor limitations / Potentially serious limitations / Very serious limitations

Other comments:

Notes on Checklist: cost–consequence analysis (CCA) studies

Definition:

Cost–consequence analysis is one of the tools used to carry out an economic evaluation. This compares the costs (such as treatment and hospital care) and the consequences (such as health outcomes) of a test or treatment with those of a suitable alternative. Unlike cost–benefit analysis or cost-effectiveness analysis, it does not attempt to summarise outcomes in a single measure (like the quality-adjusted life year) or in financial terms. Instead, outcomes are shown in their natural units (some of which may be monetary) and it is left to decision-makers to determine whether, overall, the treatment is worth carrying out.
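Items 1.14 and 1.18 ask whether consequences were weighted and whether an aggregation method such as MCDA was attempted. A minimal weighted-sum sketch, with entirely hypothetical consequence names, scores (normalised to 0–1) and weights, shows what such an aggregation involves; formal MCDA methods elicit the weights systematically rather than assuming them.

```python
# Illustrative sketch only: consequence names, scores and weights are hypothetical.
def weighted_score(consequences, weights):
    """Aggregate normalised consequence scores (0-1) using weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * score for name, score in consequences.items())

weights  = {"symptom relief": 0.5, "adverse events avoided": 0.3, "cost saving": 0.2}
option_a = {"symptom relief": 0.8, "adverse events avoided": 0.4, "cost saving": 0.2}
option_b = {"symptom relief": 0.5, "adverse events avoided": 0.7, "cost saving": 0.6}

score_a = weighted_score(option_a, weights)  # 0.5*0.8 + 0.3*0.4 + 0.2*0.2 = 0.56
score_b = weighted_score(option_b, weights)  # 0.5*0.5 + 0.3*0.7 + 0.2*0.6 = 0.58
```

Note that the ranking of the 2 options depends entirely on the chosen weights, which is why item 1.14 asks whether any weighting of the consequences was suggested and justified.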

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Overall assessment:

The overall methodological study quality of the economic evaluation should be classified as 1 of the following:

  • Minor limitations: The study meets all quality criteria, or fails to meet 1 or more quality criteria but this is unlikely to change the conclusions about cost consequences.

  • Potentially serious limitations: The study fails to meet 1 or more quality criteria, and this could change the conclusions about cost consequences.

  • Very serious limitations: The study fails to meet 1 or more quality criteria, and this is highly likely to change the conclusions about cost consequences. Such studies should usually be excluded from further consideration.

1.3 Checklist: audit

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

1 Objectives

1.1 Are the objectives of the audit clearly stated?

1.2 The clinical audit topic reflects a local service, speciality or national priority which merits evaluation and where care could be improved or refined through clinical audit

2 Design

2.1 The clinical audit measures against standards

2.2 The clinical audit standards are based upon the best available evidence

2.3 The clinical audit standards are referenced to their source

2.4 The clinical audit standards are expressed in a form that enables measurement

2.5 The patient group to whom the clinical audit standards apply is clearly defined

2.6 The clinical audit standards take full account of patient priorities and patient-defined outcomes

2.7 The timetable for the clinical audit is described, including timescales for completion and re-audit where necessary

3 Methodology

3.1 The methodology and data collection process is described in detail

3.2 Systematic consideration is given to ethics, data confidentiality and consent issues, and Caldicott principles are applied

3.3 The methods used in the audit are recorded so that re-audit can be undertaken later in the clinical audit cycle

3.4 If a sample of the population was audited, the method for sampling is that which is best suited to measuring performance against the standards and is as scientifically reliable as possible

3.5 Is the sample size sufficient to generate meaningful results?

3.6 When necessary, the sample allows for adjustment for case mix

3.7 The clinical audit uses pre-existing data sets where possible

3.8 The data collection tool(s) and process have been validated

3.9 The data collection process aims to ensure complete capture of data

4 Analysis

4.1 Data are analysed, and feedback of the results is given so that momentum of the clinical audit is maintained in line with the agreed timetable

4.2 Results of the clinical audit are presented in the most appropriate manner for each potential audience to ensure that the audit results stimulate and support action planning

4.3 The results are communicated effectively to all key stakeholders, including patients

5 Sustaining improvement

5.1 The topic is re-audited to complete the clinical audit cycle if necessary

5.2 Where recommended action has not been achieved in full, the topic is re-audited at agreed intervals

5.3 The results of re-audit are recorded and disseminated appropriately, including to patients

Notes on Checklist: audit

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

Clinical audit is a quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change. Aspects of the structure, process and outcome of care are selected and systematically evaluated against explicit criteria. Where indicated, changes are implemented at an individual, team or service level and further monitoring is used to confirm improvement in healthcare delivery. (This definition appears in Principles for best practice in clinical audit (2002) and was endorsed by NICE.)

An audit is an examination or review that establishes the extent to which a condition, process or performance conforms to predetermined standards or criteria. It is an assessment or review of any aspect of healthcare to determine its quality; audits may be carried out on the provision of care, compliance with regulations, community response or completeness of records. (From: Porta M (2008) A dictionary of epidemiology [fifth edition]. Oxford: Oxford University Press.)

1.4 Checklist: surveys

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

1 Objectives

1.1 Are the objectives of the study clearly stated?

2 Design

2.1 Is the research design clearly specified and appropriate for the research aims?

2.2 Is there a clear description of context?

2.3 If an existing tool was used, are references to the original work provided?

2.4 If a new tool was used, have its reliability and validity been reported?

2.5 Is there a clear description of the survey population and the sample frame used to identify this population?

2.6 Do the authors provide a description of how representative the sample is of the underlying population?

2.7 Did the subjects represent the full spectrum of the population of interest?

2.8 Is the study large enough to achieve its objectives? Have sample size estimates been performed?

2.9 Were all subjects accounted for?

2.10 Were all appropriate outcomes considered?

2.11 Has ethical approval been obtained if appropriate?

2.12 What measures were taken to contact non-responders?

2.13 What was the response rate?

3 Measurement and observation

3.1 Is it clear what was measured, how it was measured and what the outcomes were?

3.2 Are the measurements valid?

3.3 Are the measurements reliable?

3.4 Are the measurements reproducible?

4 Presentation of results

4.1 Are the basic data adequately described?

4.2 Are the results presented clearly, objectively and in sufficient detail to enable readers to make their own judgement?

4.3 Are the results internally consistent, i.e. do the numbers add up properly?

5 Analysis

5.1 Are the data suitable for analysis?

5.2 Is there a clear description of the methods of data collection and analysis?

5.3 Are the methods appropriate for the data?

5.4 Are any statistics correctly performed and interpreted?

5.5 Is the method for calculating response rate provided?

5.6 Are the methods for handling missing data provided?

5.7 Is information given on how non-respondents differ from respondents?

6 Discussion

6.1 Are the results discussed in relation to existing knowledge on the subject and study objectives?

6.2 Are the limitations of the study (taking into account potential sources of bias) stated?

6.3 Can the results be generalised?

6.4 Have attempts been made to establish 'reliability' and 'validity' of analysis (appropriate to methodology)?

7 Interpretation

7.1 Are the authors' conclusions justified by the data? Do the researchers display enough data to support their interpretations and conclusions?

Notes for Checklist: surveys

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

A survey is a data collection tool used to gather information about individuals. Surveys are commonly used in clinical research to collect self-report data from study participants. A survey may focus on factual information about individuals, or it may aim to collect the opinions of the survey takers. A population survey may be conducted by face-to-face inquiry, self-completed questionnaires, telephone, postal service or in some other way. (From: Porta M (2008) A dictionary of epidemiology [fifth edition]. Oxford: Oxford University Press.)

Sources:
  • BestBETs critical appraisal worksheet 'Survey (including pre-test probabilities)'

  • Personal communication from Dr Susan Kirk (School of Nursing, Midwifery and Social Work, The University of Manchester) and Michelle Maden (Edge Hill University Library and Information Resources Centre).

  • Bennett C, Khangura S, Brehaut JC et al. (2011) Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Medicine 8: e1001069

  • Crombie IK (1996) The pocket guide to critical appraisal: a handbook for healthcare professionals. London: BMJ Publishing

1.5 Checklist: studies of national, regional or local reports, assessments or evaluations

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

1 Authority

1.1 Does the report identify who is responsible for the intellectual content?

1.2 Are they reputable?

2 Accuracy

2.1 Does the item have a clearly stated aim or brief?

2.2 Does it have a stated methodology?

2.3 Has it been peer-reviewed?

2.4 Has it been edited by a reputable authority?

3 Coverage

3.1 Are any limits clearly stated?

4 Objectivity

4.1 Is the author's standpoint clear?

4.2 Does the work seem to be balanced in presentation?

5 Date

5.1 Does the item have a clearly stated date related to content?

6 Significance

6.1 Is the item meaningful?

6.2 Does it add context?

6.3 Does it strengthen or refute a current position?

6.4 Would the research area be lesser without it?

Other comments:

Notes for Checklist: studies of national, regional or local reports, assessments or evaluations

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

The Fourth International Conference on Grey Literature held in Washington, DC, in October 1999 defined grey literature as: 'that which is produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers'.

Source:

The AACODS checklist (adapted by NICE).

1.6 Checklist: longitudinal studies

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

1.1 Are the objectives of the study clearly stated?

1.2 Was the study ethical?

2 Sampling

2.1 Were all members of the cohort entered at the beginning?

2.2 Did the sampling scheme allow a representative sample?

3 Participation

3.1 Was loss to follow-up low (that is, less than 20%)?

3.2 Was completion rate on individual items of the assessment instrument high?

4 Measurement

4.1 Were valid measures of disease (case definition) and risks used?

4.2 Were the data gathered using the best-accepted techniques (for example, trained telephone interviewers or examiners, or a mailed questionnaire)?

4.3 Were the data tested for accuracy and reliability?

4.4 Are the age/sex distributions similar?

4.5 Is there evidence of any systematic differences in prevalence or trends in disease between this group and the patients being considered?

4.6 Is there evidence of any systematic differences in important environmental, behavioural or healthcare access factors between this group and the patients being considered?

Other comments:

Notes for Checklist: longitudinal studies

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

In a longitudinal study, subjects are followed over time with continuous or repeated monitoring of risk factors or health outcomes, or both. Such investigations vary enormously in their size and complexity. At one extreme a large population may be studied over decades. For example, the longitudinal study of the Office of Population Censuses and Surveys prospectively follows a 1% sample of the British population that was initially identified at the 1971 census. Outcomes such as mortality and incidence of cancer have been related to employment status, housing and other variables measured at successive censuses. At the other extreme, some longitudinal studies follow up relatively small groups for a few days or weeks. Thus, firemen acutely exposed to noxious fumes might be monitored to identify any immediate effects. (From: Epidemiology for the uninitiated [fourth edition]. London: BMJ.)

1.7 Checklist: cross-sectional studies

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

1 Objectives

1.1 Are the objectives of the study clearly stated?

2 Design

2.1 Is the research design clearly specified and appropriate for the research aims?

2.2 Were the subjects recruited in an acceptable way?

2.3 Was the sample representative of a defined population?

3 Measurement and observation

3.1 Is it clear what was measured, how it was measured and what the outcomes were?

3.2 Are the measurements valid?

3.3 Was the setting for data collection justified?

3.4 Were all important outcomes/results considered?

4 Analysis

4.1 Are tables/graphs adequately labelled and understandable?

4.2 Are the authors' choice and use of statistical methods appropriate, if employed?

4.3 Is there an in-depth description of the analysis process?

4.4 Are sufficient data presented to support the findings?

5 Discussion

5.1 Are the results discussed in relation to existing knowledge on the subject and study objectives?

5.2 Can the results be generalised?

Notes for Checklist: cross-sectional studies

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

A cross-sectional study is a study that examines the relationship between diseases (or other health-related characteristics) and other variables of interest as they exist in a defined population at one particular time. The presence or absence of a disease and the presence or absence of the other variables are determined in each member of the study population or in a representative sample at one particular time. The relationship between a variable and the disease can be examined (1) in terms of the prevalence of the disease in different population subgroups defined according to the presence or absence of the variables and (2) in terms of the presence or absence of the variables in people with the disease compared with those without the disease. Note that disease prevalence rather than incidence is normally recorded in a cross-sectional study. The temporal sequence of cause and effect cannot necessarily be determined in a cross-sectional study. (Adapted from: Porta M (2008) A dictionary of epidemiology [fifth edition]. Oxford: Oxford University Press.)

Sources:
  • Cardiff University

  • Wordpress

1.8 Checklist: secondary data studies

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

Screening questions

1.1 Does the study address a clearly focused issue?

1.2 Is a good case made for the approach that the authors have taken?

1.3 Is there a direct comparison (for example, service configurations or models) that provides an additional frame of reference?

Methods

1.4 Were those involved in collection of data also involved in delivering a service to the user group?

1.5 Were the methods used for selecting the users appropriate and clearly described?

Results

1.6 Was the data collection instrument/method reliable?

1.7 What was the response rate and how representative was the sample under study?

1.8 Are the results complete and have they been analysed in an easily interpretable way?

1.9 Are any limitations in the methodology (that might have influenced results) identified and discussed?

1.10 Are the conclusions based on an honest and objective interpretation of the results?

Interpretation

1.11 Can the results be applied to other service users?

Other comments:

Notes for Checklist: secondary data studies

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

Secondary data are data that have been already collected by and readily available from other sources.

1.1 Does the study address a clearly focused issue?

An issue can be 'focused' in terms of:

  • the population (user group) studied

  • the intervention (service or facility) provided

  • the outcomes (quantifiable or qualitative) measured.

1.2 Is a good case made for the approach that the authors have taken?

Do the authors state how they identified the problem and provide a justification for why they have chosen to examine it? Do they state in what way their chosen methodology is appropriate to the question?

Consider, too, whether the study:

  • refers to previous work that has looked at the same user group

  • refers to previous work that has looked at the same service or facility

  • utilises a methodology or data collection instruments that have been used in previous user studies.

1.3 Is there a direct comparison (for example, service configurations or models) that provides an additional frame of reference?

This may be either external or internal; for example, contrast with, or similarity to:

  • other studies

  • other user groups within the study

  • the same group at different geographical locations or at a different time period.

1.4 Were those involved in collection of data also involved in delivering a service to the user group?

It may not always be possible to separate researchers from service deliverers, but consider whether the service deliverers' perspective has been acknowledged explicitly and to what extent the questions in the user study have been generated elsewhere (for example, a previously trialled or validated instrument or from a focus group).

1.5 Were the methods used for selecting the users appropriate and clearly described?

Type of sample: Is it a convenience sample? Were participants self-selecting? Were key informants identified? Is it a randomly selected sample? Is it a comprehensive census or survey?

Size of sample: Has a sample size calculation been undertaken?

Representativeness of sample: Was the planned sample of users representative of all users (actual and eligible) who might be included in the study? Do the demographics of the sample (such as age, sex, staff grade, location) accurately reflect the demographics of the total population? Are any interests or motivations behind participation clearly identified? Are non-users included in the sampling frame?

1.6 Was the data collection instrument/method reliable?

If there is a questionnaire, survey form or interview schedule, do the authors include it in their report? Do they refer to where a full copy might be found? Has the data collection instrument been used before? Have the authors adapted an existing questionnaire and, if so, have they used it appropriately?

1.7 What was the response rate and how representative was the sample under study?

Consider not only the actual percentage of responses but also whether any specific subgroups were either over-represented or under-represented. Are reasons for non-response discussed? Have non-users been included in the analysis of responses?

1.8 Are the results complete and have they been analysed in an easily interpretable way?

Consider choices involved in analysis and in presentation. Have all variables identified earlier in the study been analysed? If not, why not?

1.9 Are any limitations in the methodology (that might have influenced results) identified and discussed?

Consider whether the authors give a clear picture of how the study might best be done. Would it be possible for you to replicate the study from the information given? Is there enough detail of any data collection instrument for you to reproduce it?

1.10 Are the conclusions based on an honest and objective interpretation of the results?

Do the authors base their conclusions on findings from their experimental data? Can you be sure that they are not presenting their data merely to substantiate some preconceived ideas?

1.11 Can the results be applied to other service users?

The burden of proof is on you to identify any ways in which your local population might differ from that in the study.

1.9 Checklist: grey literature

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes / Partly / No / Unclear / NA

Comments

Authority

Identifying who is responsible for the intellectual content.

Individual author:

  • Associated with a reputable organisation?

  • Professional qualifications or considerable experience?

  • Produced/published other work (grey/black) in the field?

  • Recognised expert, identified in other sources?

  • Cited by others? (use Google Scholar as a quick check)

  • Higher degree student under 'expert' supervision?

Organisation or group:

  • Is the organisation reputable? (e.g. WHO)

  • Is the organisation an authority in the field?

Does the item have a detailed reference list or bibliography?

Accuracy

Does the item have a clearly stated aim or brief?

Does the item meet its aims?

Does the item have a stated methodology?

Has the item been peer reviewed?

Has the item been edited by a reputable authority?

Is the item supported by authoritative, documented references or credible sources?

Is the item representative of work in the field?

If no, is it a valid counterbalance?

Is any data collection explicit and appropriate for the research?

If the item is secondary material (e.g. a policy brief of a technical report), does it provide an accurate, unbiased interpretation or analysis of the original document?

Coverage

Are any limits to the item clearly stated?

Objectivity

Is the author's standpoint clear?

Does the work seem to be balanced in presentation?

Date

Does the item have a clearly stated date related to content?

If no date is given but one can be accurately ascertained, is there a valid reason for its absence?

Has key contemporary material been included in the bibliography?

Significance

Is the item meaningful (i.e. does it incorporate feasibility, utility and relevance)?

Does it add context?

Does it enrich or add something unique to the research?

Does it strengthen or refute a current position?

Would the research area be lesser without it?

Is it integral, representative, typical?

Does it have impact (in the sense of influencing the work or behaviour of others)?

Notes for Checklist: grey literature

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

The Fourth International Conference on Grey Literature held in Washington, DC, in October 1999 defined grey literature as: 'that which is produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers.' [sic]

Grey literature includes theses or dissertations (reviewed by examiners who are subject specialists); conference papers (often peer-reviewed or presented by those with specialist knowledge) and various types of reports from those working in the field. All of these fall into the category of 'expert opinion'.

Sources:

AACODS: archived at the Flinders Academic Commons.

Coverage:

All items have parameters that define their content coverage. These limits might mean that a work refers to a particular population group, or that it excluded certain types of publication. A report could be designed to answer a particular question, or be based on statistics from a particular survey.

Objectivity:

It is important to identify bias, particularly if it is unstated or unacknowledged.

Date:

For the item to inform your research, it needs to have a date that confirms relevance. The absence of an easily discernible date is a strong concern.

Significance:

This is a value judgment of the item, in the context of the relevant research area.

1.10 Checklist: systematic reviews (non-randomised controlled trials)

Study identification

Include author, title, reference, year of publication

Guidance topic:

Question no:

Checklist completed by:

Yes/ Partly/ No/Unclear /NA

Comments

Reporting of background

1.1 Reporting of background should include:

  • definition of problem

  • hypothesis statement

  • description of study outcome(s)

  • type of exposure or intervention used

  • type of study designs used

  • study population

Reporting of search strategy

1.2 Reporting of search strategy should include:

  • qualifications of searchers (e.g. librarians and investigators)

  • search strategy, including time period included in the synthesis and keywords

  • effort to include all available studies, including contact with authors

  • databases and registries searched

  • use of hand searching (e.g. reference lists of obtained articles)

  • list of citations located and those excluded, including justification

  • method of addressing articles published in languages other than English

  • method of handling abstracts and unpublished studies

  • description of any contact with authors

Reporting of methods

1.3 Reporting of methods should include:

  • description of relevance or appropriateness of studies assembled for assessing the hypotheses to be tested

  • rationale for the selection and coding of data (e.g. sound clinical principles or convenience)

  • documentation of how data were classified and coded (e.g. multiple raters, blinding, and inter-rater reliability)

  • assessment of confounding (e.g. comparability of cases and controls in studies if appropriate)

  • assessment of study quality, including blinding of quality assessors; stratification or regression on possible predictors of study results

  • assessment of heterogeneity

  • description of statistical methods (e.g. complete description of fixed or random effects models, justification of whether the chosen models account for predictors of study results, dose–response models, or cumulative meta-analysis) in sufficient detail to be replicated

  • provision of appropriate tables and graphics

Reporting of results

1.4 Reporting of results should include:

  • graphic summarising individual study estimates and overall estimate

  • table giving descriptive information for each study included

  • results of sensitivity testing (e.g. subgroup analysis)

  • indication of statistical uncertainty of findings

Reporting of discussion

1.5 Reporting of discussion should include:

  • quantitative assessment of bias (e.g. publication bias)

  • justification for exclusion (e.g. exclusion of non-English-language citations)

  • assessment of quality of included studies

Reporting of conclusions

1.6 Reporting of conclusions should include:

  • consideration of alternative explanations for observed results

  • generalisation of the conclusions (i.e. appropriate for the data presented and within the domain of the literature review)

  • recommendations for future research

  • disclosure of funding source

Other comments:

Notes for Checklist: systematic reviews (non-randomised controlled trials)

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

A non-randomised controlled trial is an experimental study in which people are allocated to different interventions using methods that are not random.

A systematic review uses explicit and systematic methods to identify, appraise and summarise the literature according to predetermined criteria. If the methods and criteria used to do this are not described or are not sufficiently detailed, it is not possible to make a thorough evaluation of the quality of the review.

Source:

From: Stroup DF, Berlin JA, Morton SC et al. (2000) Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 283: 2008–12

If this checklist is not considered appropriate, the NICE checklist for systematic reviews and meta-analyses (appendix B of 'The guidelines manual') can be used.

1.11 Checklist: mixed-methods reviews

For a mixed-methods study, use section 1 for appraising the qualitative component, the appropriate section (2, 3 or 4) for the quantitative component, and section 5 for the mixed-methods component.

Study identification

Include author, title, reference, year of publication

Guideline topic:

Question no:

Checklist completed by:

Yes/ Partly/ No /Unclear /NA

Comments

Section 1 – qualitative studies

1.1. Are the sources of qualitative data (archives, documents, informants, observations) relevant to address the research question?

Consider whether (a) the selection of the participants is clear, and appropriate to collect relevant and rich data; and (b) reasons why certain potential participants chose not to participate are explained

1.2. Is the process for analysing qualitative data relevant to address the research question?

Consider whether (a) the method of data collection is clear (in-depth interviews and/or group interviews, and/or observations and/or documentary sources); (b) the form of the data is clear (tape recording, video material, and/or field notes, for instance); (c) changes are explained when methods are altered during the study; and (d) the qualitative data analysis addresses the question

1.3. Is appropriate consideration given to how findings relate to the context, such as the setting, in which the data were collected?

Consider whether the study context, and how findings relate to the context or characteristics of the context, are explained (how findings are influenced by or influence the context). 'For example, a researcher wishing to observe care in an acute hospital around the clock may not be able to study more than one hospital. Here, it is essential to take care to describe the context and particulars of the case [the hospital] and to flag up for the reader the similarities and differences between the case and other settings of the same type' (Mays and Pope, 1995[a])

1.4. Is appropriate consideration given to how findings relate to researchers' influence; for example, through their interactions with participants?

Consider whether (a) researchers critically explain how findings relate to their perspective, role and interactions with participants (how the research process is influenced by or influences the researcher); (b) the researcher's role is influential at all stages (formulation of a research question, data collection, data analysis and interpretation of findings); and (c) researchers explain their reaction to critical events that occurred during the study

Section 2 – quantitative studies (randomised controlled trials)

2.1. Is there a clear description of the randomisation (or an appropriate sequence generation)?

In a randomised controlled trial, the allocation of a participant (or a data collection unit, e.g. a school) into the intervention or control group is based solely on chance, and researchers describe how the randomisation schedule is generated. A simple statement, such as 'we randomly allocated' or 'using a randomised design', is insufficient.

Simple randomisation is defined as allocation of participants to groups by chance, following a predetermined plan or sequence. Usually it is achieved by referring to a published list of random numbers, or to a list of random assignments generated by a computer.

Sequence generation: The rule for allocating interventions to participants must be specified, based on some chance (random) process. Researchers should provide sufficient detail to allow readers' appraisal of whether it produces comparable groups. Examples include blocked randomisation (to ensure particular allocation ratios to the intervention groups), stratified randomisation (randomisation performed separately within strata) and minimisation (to make small groups closely similar with respect to several characteristics).
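The procedures described above can be sketched in code. This is an illustrative sketch only (the function names and the two-arm design are assumptions, not part of the checklist): it shows simple randomisation from a computer-generated random sequence, and blocked randomisation, which shuffles allocations within fixed-size blocks so that the allocation ratio stays balanced throughout recruitment.

```python
import random


def simple_randomisation(n_participants, groups=("intervention", "control"), seed=None):
    """Allocate each participant to a group purely by chance."""
    rng = random.Random(seed)
    return [rng.choice(groups) for _ in range(n_participants)]


def blocked_randomisation(n_participants, block_size=4,
                          groups=("intervention", "control"), seed=None):
    """Allocate in shuffled blocks so each full block has an equal
    number of allocations per group (here a 1:1 ratio)."""
    rng = random.Random(seed)
    per_group = block_size // len(groups)
    schedule = []
    while len(schedule) < n_participants:
        block = [g for g in groups for _ in range(per_group)]
        rng.shuffle(block)  # randomise order within the block
        schedule.extend(block)
    return schedule[:n_participants]
```

With simple randomisation the groups can drift out of balance by chance; the blocked variant guarantees equal group sizes after every complete block, which is why trial reports often state the block size used.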

2.2. Is there a clear description of the allocation concealment (or blinding when applicable)?

Allocation concealment protects the assignment sequence until allocation. For example, researchers and participants are unaware of the assignment sequence up to the point of allocation; group assignment is concealed in opaque envelopes until allocation.

Blinding protects the assignment sequence after allocation. For example, researchers and/or participants are unaware of the group a participant is allocated to during the course of the study.

2.3. Are there complete outcome data (80% or above)?

For example, almost all the participants contributed to almost all measures.

2.4. Is there low withdrawal/drop-out (below 20%)?

For example, almost all the participants completed the study.

Section 3 – quantitative studies (including non-randomised controlled trial, cohort study, case–control study, cross-sectional study)

3.1. Are participants (organisations) recruited in a way that minimises selection bias?

At the recruitment stage:

  • for cohort studies, consider whether the exposed (or with intervention) and non-exposed (or without intervention) groups are recruited from the same population

  • for case–control studies, consider whether the same inclusion and exclusion criteria were applied to cases and controls, and whether recruitment was done independently of the intervention or exposure status

  • for cross-sectional analytical studies, consider whether the sample is representative of the population.

3.2. Are measurements appropriate (clear origin, or validity known, or standard instrument; and absence of contamination between groups when appropriate) regarding the exposure/intervention and outcomes?

At the data collection stage:

  • consider whether (a) the variables are clearly defined and accurately measured; (b) the measurements are justified and appropriate for answering the research question; and (c) the measurements reflect what they are supposed to measure.

  • for non-randomised controlled trials, the intervention is assigned by researchers, and so consider whether contamination was absent or present.

  • consider whether the control group may be indirectly exposed to the intervention through family or community relationships.

3.3. In the groups being compared (exposed versus non-exposed; with intervention versus without; cases versus controls), are the participants comparable, or do researchers take into account (control for) the difference between these groups?

At the data analysis stage:

  • for cohort, case–control and cross-sectional studies, consider whether (a) the most important factors are taken into account in the analysis; (b) a table lists key demographic information comparing both groups, and there are no obvious dissimilarities between groups that may account for any differences in outcomes, or dissimilarities are taken into account in the analysis.

3.4. Are there complete outcome data (80% or above), and, when applicable, an acceptable response rate (60% or above), or an acceptable follow-up rate for cohort studies (depending on the duration of follow-up)?
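The numeric thresholds used in questions 2.3, 2.4, 3.4 and 4.4 are simple proportions. A minimal sketch of how a reviewer might check a study against the 80% complete-outcome-data and 60% response-rate rules of thumb (the function name and its inputs are illustrative assumptions, not part of the checklist):

```python
def meets_data_thresholds(n_eligible, n_responded, n_with_outcome):
    """Check a study against the checklist's rule-of-thumb thresholds:
    acceptable response rate (60% or above of those eligible) and
    complete outcome data (80% or above of those who took part)."""
    response_rate = n_responded / n_eligible
    outcome_rate = n_with_outcome / n_responded
    return {
        "response_rate": response_rate,
        "acceptable_response": response_rate >= 0.60,
        "outcome_rate": outcome_rate,
        "complete_outcome_data": outcome_rate >= 0.80,
    }
```

Note that the checklist treats these figures as guides, not hard cut-offs: for cohort studies an acceptable follow-up rate depends on the duration of follow-up, and the response rate is not pertinent for case series or case reports.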

Section 4 – quantitative descriptive studies (including incidence or prevalence study without comparison group, case series or case report)

4.1. Is the sampling strategy relevant to address the quantitative research question (quantitative aspect of the mixed-methods question)?

Consider whether (a) the source of the sample is relevant to the population under study; (b) when appropriate, there is a standard procedure for sampling, and the sample size is justified (using power calculation, for instance)

4.2. Is the sample representative of the population under study?

Consider whether (a) inclusion and exclusion criteria are explained; and (b) reasons why certain eligible individuals chose not to participate are explained

4.3. Are measurements appropriate (clear origin, or validity known, or standard instrument)?

Consider whether (a) the variables are clearly defined and accurately measured; (b) measurements are justified and appropriate for answering the research question; and (c) the measurements reflect what they are supposed to measure

4.4. Is there an acceptable response rate (60% or above)?

The response rate is not pertinent for case series and case reports (for example, there is no expectation that a case series would include all patients in a similar situation)

Section 5 – mixed methods 1 (including sequential explanatory design, sequential exploratory design, triangulation design and embedded design)

5.1. Is the mixed-methods research design relevant to address the qualitative and quantitative research questions (or objectives), or the qualitative and quantitative aspects of the mixed-methods question?

For example, the rationale for integrating qualitative and quantitative methods to answer the research question is explained

5.2. Is the integration of qualitative and quantitative data (or results) relevant to address the research question?

For example, there is evidence that data gathered by both research methods were brought together to form a complete picture and answer the research question; the authors explain when integration occurred (during data collection and analysis and/or during the interpretation of qualitative and quantitative results); they explain how integration occurred and who participated in it

5.3. Is appropriate consideration given to the limitations associated with this integration, such as the divergence of qualitative and quantitative data (or results)?

1 Mixed-methods study designs

A. Sequential explanatory design: The quantitative component is followed by the qualitative. The purpose is to explain quantitative results using qualitative findings. For example, the quantitative results guide the selection of qualitative data sources and data collection, and the qualitative findings contribute to the interpretation of quantitative results.

B. Sequential exploratory design: The qualitative component is followed by the quantitative. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). For example, the qualitative findings inform the quantitative data collection, and the quantitative results allow a generalisation of the qualitative findings.

C. Triangulation design: The qualitative and quantitative components are concomitant. The purpose is to examine the same phenomenon by interpreting qualitative and quantitative results (bringing data analysis together at the interpretation stage), or by integrating qualitative and quantitative datasets (for example, data on same cases), or by transforming data (for example, quantisation of qualitative data).

D. Embedded design: The qualitative and quantitative components are concomitant. The purpose is to support a qualitative study with a quantitative substudy (measures), or to better understand a specific issue of a quantitative study using a qualitative substudy (for example, the efficacy or the implementation of an intervention based on the views of participants).

Key references: Creswell and Plano Clark (2007)[b]; O'Cathain (2010)[c].

[a] Mays N, Pope C (1995) Qualitative research: rigour and qualitative research. BMJ 311: 109–12.

[b] Creswell JW, Plano Clark VL (2007) Designing and conducting mixed methods research. Thousand Oaks, CA: Sage Publications.

[c] O'Cathain A (2010) Assessing the quality of mixed methods research: toward a comprehensive framework. In: Tashakkori A, Teddlie C (editors), Handbook of mixed methods research, 2nd edition, pp. 531–55. Thousand Oaks, CA: Sage Publications.

Notes for Checklist: mixed-methods reviews

For all questions:
  • answer 'yes' if the study fully meets the criterion

  • answer 'partly' if the study largely meets the criterion but differs in some important respect

  • answer 'no' if the study deviates substantively from the criterion

  • answer 'unclear' if the report provides insufficient information to judge whether the study complies with the criterion

  • answer 'NA (not applicable)' if the criterion is not relevant in a particular instance.

For 'partly' or 'no' responses, use the comments column to explain how the study deviates from the criterion.

Definition:

Mixed-methods reviews evaluate studies that employ qualitative, quantitative and mixed methodology.

Sources: