Appraise
Is the evidence appropriate for my setting? How do I know I can trust the evidence?
Having found evidence relevant to your project (see Step 2, Accessing Evidence), the next step is to establish whether it is trustworthy and applicable. This means looking at how the evidence was produced so you can judge its quality and decide whether the findings are appropriate to your setting. This is called critical appraisal, and it involves asking some simple questions such as:
- What study design and methods were used?
- What are the main findings/results?
- Are the results relevant locally?
Critical appraisal tools
There are many tools available to help you break down the steps of critical appraisal. You can search for these online, but a few suggestions are given here:
- A set of widely used and practical tools from the Critical Appraisal Skills Programme for looking at different types of research evidence
- A tool for appraising surveys from the Centre for Evidence Based Management
- The AACODS checklist for appraising grey literature from Monash University
Appraisal questions for quantitative studies
- PEOPLE
- Who were the participants? Were they selected from a database or other source that might exclude some people? Do the authors explain how many people were included? If a sample size calculation is reported, was it met? If not, does the sample size seem adequate?
- If participants were split into two or more groups (as in a trial), were the groups similar at the outset of the study in terms of numbers and characteristics such as age, gender or health status?
- MEASUREMENTS
- Is there scope for bias in the measurements used? Were outcome measures subjective (e.g. relying on self-reported symptoms) or objective (e.g. using a recognised scale such as the EQ-5D quality of life measure)? Apply the same questions to any exposure measurements (e.g. in a cohort or case-control study). If a questionnaire was used, was it an existing tool? Were any new tools piloted before use? Were any important outcomes missing?
- Was a baseline measurement taken before the intervention (if applicable)? Was the time between measurements long enough to detect a change and to see whether it lasted? Was the drop-out rate reasonable, or does it suggest a problem with the intervention or methodology? Does drop-out reduce the sample size significantly?
- RESULTS
- What was the main result? Did the authors consider the influence of variables such as age, gender or ethnicity on the outcomes? If applicable, are there statistics indicating how likely it is that the results are due to chance (look for p values below 0.05)? Is a confidence interval reported, and is it narrow? This is the range of values within which we are 95% confident the ‘true’ value for the population lies: a narrow interval gives more confidence, while a wide interval gives less, because the ‘true’ value could be quite different from the one observed. If the interval includes zero (or the value of no difference), the result is not statistically significant and should be treated with caution. A worked example of interpreting a confidence interval is given after this list.
- TRUST
- From the above, are you satisfied that the sample is comparable to your population and that the measurements are robust? Do you trust the study’s findings? For certain decisions you may want stronger evidence, but no decision should rest on a single piece of evidence!
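Worked example of a confidence interval (the numbers here are hypothetical, invented purely for illustration): suppose a trial reports that an intervention reduced systolic blood pressure by an average of 5 mmHg, with a standard error of 1.5 mmHg. An approximate 95% confidence interval is the estimate plus or minus 1.96 times the standard error:
5 ± (1.96 × 1.5) ≈ 2.1 to 7.9 mmHg
The whole interval lies above zero, so the result is statistically significant, although the true effect could plausibly be as small as about 2 mmHg or as large as about 8 mmHg. If the interval had instead been −1 to 11 mmHg, it would include zero (no difference) and the result would not be statistically significant.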
Appraisal questions for qualitative studies
- PEOPLE
- Who were the participants? Were the right people included, i.e. those with characteristics or experience relevant to the topic of the study? Do the authors explain how many people were included? Is this adequate for a qualitative study? (Remember, a qualitative study does not aim to be statistically representative.)
- METHODS
- How was information gathered from participants? Did the method suit the topic, i.e. was it better suited to individual or group discussion, and could other methods have been used? Was the process transparent (are the questions shown)? Were the discussions recorded in some way to aid analysis? Did more than one person analyse the data?
- RESULTS
- What were the main findings? Were these agreed among the researchers? Were they discussed with the participants? Are quotes cited, and do they seem to match the interpretation?
- TRUST
- From the above, is the study credible? Do the findings seem plausible? Do you trust the study? Should it inform your thinking?
Want some help?
If you are less confident in your critical appraisal skills (or lack the time to appraise evidence yourself), you may prefer to seek specialist help. Healthcare librarians are skilled in critical appraisal and may be able to offer help. See here for details of local libraries.
Critical appraisal training
If you wish to develop your skills in critical thinking and reading, a selection of online training resources is given here:
- For the complete beginner, this short workbook produced by UWE, Bristol takes you through the process of appraising health information
- In this learning resource from the University of Manchester you can find out more about different types of study design and what to look for in a critical appraisal
- This interactive tool produced by the University of Glasgow guides you through a series of appraisal questions to help you interpret a published health research article.
Next step
Once you’ve found relevant evidence and appraised it, you will need to think about how to apply it to your context, the next step in the cycle.