This section outlines the key components of step 4, which involves the development of evidence-based audit criteria and the review of current practice against the evidence-based criteria by conducting a clinical audit.

Evidence-based audit criteria

Clinical audit can be defined as “a quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change” (Hart 2002, p. 1).

It is important to highlight that audit is a systematic process, not an ad hoc one. An audit proposal or plan should be clear and easy for others to read, so that they can assess its purpose, quality and outcomes.

Clinical audit is a critical process; it asks, “what is happening here?” It forms the basic process in continuous quality assessment (Pearson, Field & Jordan 2009). Audit evaluates current practice. It is not a form of competency assessment, performance appraisal or a disciplinary process. However, if clinical audit is implemented without effective communication and the involvement of clinicians, they may perceive it as a negative process. If audit is seen as a management tool that will be used to punish staff, the likelihood that it will be used effectively to improve the quality of care is diminished. The ultimate aim of audit is to improve outcomes, whatever these end points are (for example, reduced levels of pain, infection or spiritual distress).

Audit can have a number of goals (Pearson, Field & Jordan 2009):

  • It can broadly address components of clinical effectiveness in the ongoing goal of improving the quality of health care.
  • It can provide the means whereby clinical units and organisations can assess and compare their work with established guidance. This may also be useful for benchmarking within or across organisations.
  • It can promote self-assessment in practitioners, which can provide professional and practice development as well as add to an overall quality agenda.

An important part of implementing evidence into practice is the ability to collect data related to clinical activities via the process of clinical audit, and develop a standardised work plan incorporating problem identification, action planning and action taking. Each audit criterion needs to be generated from evidence within the literature and, at the very least, have an evidence summary supporting it (Pearson, Field & Jordan 2009).

Clinical audits should have a number of attributes. They should:

  • Be professionally led
  • Be seen as an educational process, even if that is only represented by raising awareness
  • Form part of routine clinical practice, which is where it is most effective
  • Be based on the setting of standards
  • Generate results that can be used to improve patient outcomes
  • Involve management in process and outcome

If the above is what audit should be, the following is what audit should not be:

  • A system for ensuring that staff in training are making satisfactory progress; this is probably a function of organisations and professional educational bodies, as well as the individual
  • A performance appraisal of posts in organisational terms, such as monitoring the quantity of activity or timekeeping
  • A disciplinary mechanism if results show less than optimum care
  • Research that is concerned with establishing new knowledge, although the distinction is not always clear cut, with a large grey area of overlap between the two
  • An assessment of need, which may be an outcome from the audit.

Development of JBI evidence-based audit criteria

Audit criteria in this context can be considered “well-defined standards set on the principles of evidence-based healthcare” (Esposito & Canton 2014, p. 250). Teamwork in the development of audit criteria is essential to the long-term success and sustainability of an implementation project. This stage requires expert skills in searching subject matter databases and critically appraising research publications. At JBI, we undertake this complex and time-consuming process for the busy clinician. JBI audit criteria are evidence-based standards of care intended to assist the evaluation of current practice against best practice. Audit criteria are developed from topics related to policy or practice and are derived from the recommendations made within an Evidence Summary.

Audit methods


Audit sampling can be as rigorous as that used for any research or can be based on convenience. There are no hard and fast rules, but each method of sampling and of determining sample size has benefits and disadvantages.

Firstly, consider if you have adequate time, money and support within the audit schedule to examine a large population, such as every patient in the health centre or every person admitted with chest pain. Secondly, consider if the topic is one that may be significant enough to warrant ongoing audit, such as outcomes post-surgery, length of stay or diagnosis-related group data. Thirdly, know what other participants and key figures consider appropriate. If this leads to the decision to choose a sample, a form of random selection is generally considered appropriate.

With random sampling methods, each member of the population has the same probability of being included. This significantly reduces the risk of selection bias; hence, the results have greater validity. Conversely, with non-probability (convenience) sampling, the probability of any individual being selected is unknown; hence, the representativeness of the sample is also unknown.
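The difference between the two approaches can be sketched in a few lines of Python. This is a purely hypothetical illustration: the record IDs, population size and sample size are invented for the example.

```python
import random

# Hypothetical sampling frame: IDs of every patient record eligible for the audit
population = [f"record-{n:03d}" for n in range(1, 201)]  # 200 records

# Simple random sampling: each record has the same probability of selection
random.seed(42)  # fixed seed only so the sample is reproducible for checking
random_sample = random.sample(population, k=30)

# Convenience sampling: e.g. simply taking the first 30 records to hand --
# quick, but the representativeness of the sample is unknown
convenience_sample = population[:30]
```

Note that `random.sample` draws without replacement, so no record appears twice; a convenience sample of, say, the first 30 admissions could over-represent one ward or one time of year.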

Bias occurs when the results of an audit are influenced by the sampling methods used, outside influences or even the perspective of the people involved. Time-related bias may occur if, for example, the topic involves assessment of older adults with flu-like symptoms and the audit is conducted in summer (i.e. a time of year when the condition being observed is less prevalent). Equally, retrospective audits, which review past occurrences, may lack relevance to current practice. It is important to take time to review the methods to be undertaken in the project and consider whether bias is being introduced.

Data collection methods


Observation

Observation allows the collection of data as events occur. Debate over the degree of involvement (e.g. direct or indirect observation, participatory or non-participatory observation) continues (Anguera et al. 2018; Bergold & Thomas 2019). The argument is based around the benefits of objectivity versus understanding. It may be that one method better suits your question or is more appropriate. For example, if you are auditing the provision of care in your own clinical area, you may feel obliged to participate in activities that you are actively auditing. There are no hard and fast rules, but you should be transparent about your involvement, as it may influence the results.


Questionnaires

Questionnaires are notoriously difficult to perfect but can be useful for obtaining feedback from patients and others. The way a questionnaire is constructed can direct the answer or limit the scope of responses. For example, a yes/no response to the question “Did nurses respond to your requests for analgesia promptly?” is clearly black or white, but the word “promptly” may be interpreted in different ways, particularly as the actions of many analgesics are time-related. In addition, nurses may not always respond in the same way, depending on how the word “respond” is interpreted.

A more sensitive question might include the use of a scale, such as always, mostly, often, rarely or never. A question with a scale avoids prompting the answer, but interpretation of certain words, for example “mostly”, could raise questions of exactly how often that is. Questionnaires that are mailed tend to get low response rates (less than 50% and often as low as 20 or 30%); therefore, follow-up is usually required. However, questionnaires continue to be widely used because of their functionality and versatility.
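As a purely illustrative example of the arithmetic involved (the counts below are invented), the response rate for a mailed questionnaire and the spread of answers across a scale might be tallied like this:

```python
from collections import Counter

# Hypothetical audit questionnaire: 100 forms mailed, 28 returned
forms_mailed = 100
answers = (["always"] * 10 + ["mostly"] * 8 + ["often"] * 5
           + ["rarely"] * 4 + ["never"] * 1)

response_rate = len(answers) / forms_mailed  # 28/100 = 0.28, in the 20-30% band
tally = Counter(answers)  # distribution of responses across the scale points
```

A response rate this low is exactly why follow-up of non-responders is usually required before the results can be trusted.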


Interviews

Interviews can elicit detailed descriptions and can be of benefit for sensitive topics, but they rely on the participants’ ability to express their views. Issues to consider include interviewer skill, reliability when using more than one interviewer, the place and time of interviews, and the time needed to conduct them. In addition, the issue of power has to be addressed. The interviewer is often in a privileged position that allows him or her to influence the responses. For example, if a nurse on the ward conducts the audit interviews with patients, would patients be truthful about their care, particularly if their pain relief rarely arrived promptly?

Document review

Document review (e.g. of case notes or records) sounds easy, but when collecting data it can be difficult to record a clear response to each question, and amendments to the data collection form may have to be made.

Pilot the methods for data collection

A pilot test with a small sample will provide evidence that the correct information is being collected. A pilot indicates whether a data collection form is ambiguous, too complex or missing the mark altogether. Pilot testing takes time, but saves time and money in the long run.

Go to Step 5: Implement changes to practice using GRiP

2020 © Joanna Briggs Institute. All Rights Reserved