Expert Insights


9th January 2012


Every business conducts audits, whether they realise it or not. Even the confirmed micro-manager who spends their working day interfering with and critiquing their subordinates’ work is auditing, in a fashion. However, that well-known example demonstrates what is often wrong with auditing – it is not based on a pre-agreed standard of performance.

This frequent omission highlights the difference between an audit and an inspection. If the purpose of the exercise is for a person or organisation external to a process to pass judgement on the effectiveness of that process, then that is an inspection. An inspection process may claim to be relevant to the quality development of the inspected service, but that is not really the case. An inspection has few, if any, of the attributes required for the process to be developmental: inclusion of the staff concerned and an empowering approach; a reporting format which promotes continuous improvement; and, underpinning both, a general objective of being part of a process of improvement rather than a snapshot judgement.

An audit, by contrast, embraces each of these elements, and is an essential part of a continuous improvement system. But first, let us identify what your author calls “false audits”: processes which call themselves audits but are nothing of the sort.

The “I know what I am looking for” auditor.  In this process, the auditor or auditing organisation does not tell the audit subject in any detail what they are looking for.  To the audit subject, this is a meaningless external process with no relationship to their working environment; they can learn nothing from it.

The stylised report audit. Some so-called auditors produce reports which are essentially meaningless to the subject.  The reports are templated, incomplete and impersonal, making it difficult or impossible for the reader to match what they are reading to their own working environment.

The verbal feedback audit. Verbal feedback is usually very interesting at the time, and many audit subjects enjoy it. However, it is not effectively recorded, and soon fades or is open to misinterpretation. Verbal feedback is valuable, but only as an adjunct to full written feedback.

The surprise audit. The justification for the surprise audit is that it reduces window dressing, which all of us do for a planned audit. However, this is in many ways a cover for a process which is not rigorous. The window dressing problem, which is admittedly real, can easily be overcome by adjustment of the standards and tracking of records. The expected standards for a planned audit may be set higher than for a surprise audit, to compensate for window dressing. As a health authority inspector once said to your author, “If you can’t get it pretty well perfect when you know I am coming, what do you think I have to assume is going on when I am not looking?” When conducting a planned audit, the auditor needs to be skilled enough to assess when historical records do not support the picture being presented. Surprise audits are often a cover for the fact that the assessor/auditor does not have the investigative skills to find the true situation. Surprise audits are in fact inspections, not audits.

So, after criticising many audits, what are the elements of a sound auditing process?  These include:

  1. Sound objectives. The objective of an audit should be to provide information for improvement, and everyone involved should know and understand that.  Any suspicion that the audit is for some other reason (such as gaining the manager a bonus, or simply “checking up on us”) reduces its potential, and often destroys it.
  2. Known parameters. Any audit should be conducted against a fully revealed set of criteria, and a known scope, both of which have been the subject of extensive training for all staff involved, with checks on their understanding.  Judging anyone’s performance against a standard they did not know existed is a recipe for a management disaster.
  3. Involvement. Audits should involve the audit subjects; the audit subjects should be involved in the data gathering process, and fully aware of which processes and structures are being looked at to complement their prior knowledge of the standards.
  4. Output oriented. The audit process, and the auditor, should make it clear from action and communication that the object of the audit is the output of the organisation i.e. the customer’s view of quality.  Any auditing of process and structures not obviously related to customers should be clearly explained and identified as an essential part of the overall process for satisfying customers.
  5. Inclusive. The audit process should be designed to encourage a contribution from all staff and customers, without any one section dominating or intimidating any other.   Auditing also should not be confined to only part of an organisation, which may indicate to other parts that they have no contribution to make and are not valued.
  6. Judgement of sustainability and consistency. An audit should not be a snapshot.  The auditor should test and judge the ability of the organisation to sustain the results that they see on the day.  A wonderful result on the day set against a run-of-the-mill background should result in a judgement of “run of the mill”, with a rider to point to what can be achieved as an encouragement to better things.  The reverse is also true; a bad day set against a good background performance should not override positive feedback.
  7. Trained auditor. Effective auditing requires very good people skills.  It is not a job for the grumpy, detail-obsessed middle manager for whom the organisation cannot find an otherwise useful job.  A detail-obsessed person is likely to lose sight of the overall objective (see 1), become bogged down in that detail, and cause friction with the audit subjects.  An auditor needs to tread the fine line between being personable enough to gain people’s confidence and extract the information needed, and independent enough to resist the inevitable attempts to suck them in.  An auditor needs to be critical and not easily fooled, in a quiet way, and needs the ability to drill down into an area where they have concerns.  A practical audit, if it is not to be, and be seen as, a bureaucratic imposition, should avoid checking everything in full detail, but be designed to examine mission-critical areas, look for weaknesses, and drill down into detail only when required.
  8. Appropriate auditor. In a continuous improvement system, the first line audit is carried out by the process team themselves, or one of them.  The audit starts with self-audit – in that way it becomes part of the process itself, and not an external imposition.  The job of the second line auditor, who will not be a team member, is to first of all assess if the first line audit is taking place, and meets the organisation’s auditing and continuous improvement standards.  The second line auditor may be the departmental manager, or overall manager.
  9. Team approach. This is not always possible in the small organisation, but all audits benefit from being carried out by at least two people.  Teams see more, and reach better conclusions.
  10. Not a tick list. Many audits are pages and pages of tick lists, which are not audits but inspections.  If you wanted to design an audit system which immediately turns everyone off the process, and is the antithesis of Point 1, you would design a tick list.  Tick lists are usually so voluminous and complex that the user is driven inevitably towards the implicit objective, which is to tick everything as either present or done.  That is the opposite of what an audit is supposed to do.  Effective audits are designed to cause comments, not ticks.  A good audit outcome is a list of comments and ideas for improvement, since those are the tools that people naturally use when they feel the need to improve their working processes and outcomes.  The QCS audit system was specifically designed to avoid ticking as far as possible, to encourage the user to think about a whole process or outcome and develop an intelligent assessment of it, and then to make notes about how it might be improved, even if it meets the current standard.

One technique to be considered, which militates against the occasional tendency of managers to try and talk up the performance of their own area, is peer review.  In peer review, the manager from one service will lead or be part of a team carrying out an audit on a parallel service.  For instance, the manager of one domiciliary care office might work with the manager of another office to audit that second office.  This is, from personal experience, a very effective way of damping out cover-ups, standardising processes throughout an organisation, and promoting team working.  However, from experience, be advised to build peer review on top of an already working team approach, and do not use it to try and promote team working where it does not already exist.  Peer review is a useful consequence and promoter of team working, not a method of developing it.

Note that this article has said nothing about the detailed criteria you audit against.  That is up to the individual service, and will usually be a mix of statutory requirements, customer requirements and the culture of the organisation, and as such is peculiar to each organisation.  But the principles of good, and bad, auditing apply to every service, and in fact to every organisation.
