3. Appraise

Is the evidence appropriate for my setting? How do I know I can trust the evidence?

Having found evidence relevant to your project (see accessing evidence), the next step towards using it is to establish whether it can be trusted.

This means looking at how the evidence was produced, to help you judge its quality and decide whether the findings are appropriate to your setting. This process is known as critical appraisal and involves asking some simple questions of the evidence, such as:

  • What study design and methods were used?
  • What are the main findings/results?
  • Are the results relevant locally?

Below you will find further guidance on tools that can be used to structure the appraisal process, and on the specific questions to ask.

Using your own judgement is key when looking critically at evidence and reaching a decision about its quality. Whether evidence is ‘good enough’ will depend on the decision you hope it will inform; the judgement may not be clear cut, and there is often no single ‘right’ answer.

Whatever your role or level of experience, it is important to look critically at evidence rather than take it at face value.

Critical appraisal tools

There are many tools available to help you break down the steps of critical appraisal. You can search for these online, but a few suggestions are:

Even if you don’t have time for a complete appraisal, applying some simple questions to help you look at the details of a study can be better than nothing. Below are two sets of simple questions you might use – one for quantitative evidence and one for qualitative evidence, each with four areas to focus on.

For quantitative evidence

Look at details such as:

People

  1. Who were the participants? Were they selected from a database or other source that might exclude some people?
  2. Do the authors explain the number of people included? If so, does it meet the stated sample size requirement? If not, does the sample size seem good enough? (A rough way to check a stated requirement is sketched after this list.)
  3. If participants were split into two or more groups (as in a trial), were the groups similar at the outset of the study in terms of numbers and characteristics such as age, gender or health status?
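As a rough, illustrative check on a stated sample size requirement, the sketch below uses the statsmodels library’s standard power calculation for a two-group comparison. The effect size, significance level and power figures are conventional assumed values chosen for illustration, not figures from any particular study.

```python
# Illustrative sample-size check for a two-group (independent samples)
# comparison, using statsmodels. All inputs are assumed, conventional values.
import math
from statsmodels.stats.power import TTestIndPower

# Participants needed per group to detect a medium effect (Cohen's d = 0.5)
# at a 5% significance level with 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required per group: {math.ceil(n_per_group)}")  # roughly 64 per group
```

If the number of participants actually analysed falls well below a figure like this, treat the study’s findings, positive or negative, with extra caution.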

Measurements

  1. Is there scope for bias in the measurements used?
  2. Were outcome measures subjective, such as relying on self-reported symptoms, or objective, such as a recognised scale (e.g. the EQ-5D quality of life measure)? Apply the same questions to any exposure measurements (e.g. in a cohort or case-control study).
  3. If a questionnaire was used, was this an existing tool?
  4. Were any new tools piloted before use?
  5. Were any important outcomes missing?
  6. Was there a baseline measured before the intervention (if applicable)?
  7. Was the time between measurements long enough to see a change, and to see whether it lasted?
  8. Was the rate of drop-out from the study reasonable, or does it suggest a problem with the intervention or methodology?
  9. Does drop-out significantly reduce the sample size? (For example, if 100 people were recruited per group but 30 dropped out of one group, the 70 remaining may fall below the stated sample size requirement.)

Results

  1. What was the main result?
  2. Did the authors consider the influence of variables such as age, gender or ethnicity on the outcomes?
  3. If applicable, are there statistics indicating how likely it is that the results are due to chance (look for p values < 0.05)?
  4. Is there a confidence interval, and is it narrow? This is the range of values which we are 95% confident contains the ‘true’ value for the population. A narrow interval gives more confidence; a wide interval gives less, because the ‘true’ value could be quite different from the one observed. If the interval is close to or crosses zero (i.e. no difference), be cautious about accepting the results. (A worked example follows this list.)
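To make the confidence interval idea concrete, here is a minimal sketch using invented summary figures (the means, standard deviations and group sizes do not come from any real study) of how a 95% confidence interval for a difference between two group means is formed and read.

```python
# Minimal sketch: forming and reading a 95% confidence interval for a
# difference in means. All figures are invented for illustration.
import math

mean_a, sd_a, n_a = 4.2, 2.0, 50   # hypothetical intervention group
mean_b, sd_b, n_b = 3.1, 2.2, 50   # hypothetical control group

diff = mean_a - mean_b                         # observed difference
se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)  # standard error of the difference

# Approximate 95% interval using the normal approximation (z = 1.96).
lower, upper = diff - 1.96 * se, diff + 1.96 * se
print(f"Difference = {diff:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```

With these invented figures the interval is roughly 0.28 to 1.92: it excludes zero and is fairly narrow, so a study reporting it would give some confidence in a real difference. An interval that was wide, or that crossed zero, would not.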

Trust

  • From the above, are you satisfied that the sample is comparable to your population and that the measurements are robust? Do you trust the study’s findings? You may want stronger evidence for certain decisions, but no decision should rest on a single piece of evidence!

For qualitative evidence

Look at details such as:

People

  1. Who were the participants?
  2. Were the right people included, i.e. those with characteristics or experience relating to the topic of the study?
  3. Do the authors explain the number of people included?
  4. Is this adequate for a qualitative study? (Remember, qualitative research does not aim to be statistically representative.)

Methods

  1. How was information gathered from participants?
  2. Did this method suit the topic being discussed? Was it better suited to individual or group discussion, and could other methods have been used?
  3. Was this process transparent (are the questions shown)?
  4. Was it recorded in some way to aid the analysis process?
  5. Did more than one person help analyse the data?

Results

  1. What were the main findings?
  2. Were these agreed among the researchers?
  3. Were these discussed with the participants?
  4. Are quotes cited?
  5. Do the quotes seem to match the interpretation?

Trust

  1. From the above, is the study credible?
  2. Do the findings seem plausible?
  3. Do you trust the study?
  4. Should it inform your thinking?

It should be possible to answer all of these questions by reading the article carefully. They are intended to prompt reflection on how the evidence was produced, and you will need to reach your own decision about how important each detail is. Remember, no evidence is perfect! The aim is to decide whether it is good enough.

Want some help?

If you are less confident in your critical appraisal skills (or lack the time to carry out an appraisal), you may prefer to seek specialist help. Healthcare librarians are skilled in critical appraisal and may be able to offer help. See details of local libraries.

Critical appraisal training

If you wish to develop your skills in critical thinking and reading, a selection of online training resources is given here:

The following reading may also be helpful:

Next step

Once you’ve found relevant evidence and appraised it, you will need to think about how to apply it to your context, the next step in the cycle.

Also see the FAQs
