I recently contributed to a thread on the subject of EFQM RADAR on the CQI LinkedIn discussion forum, and that got me writing this reflective post. I have always found the EFQM Model in general a bit of an enigma. My previous posts on EFQM themes get a lot of visits, and it certainly has its fans. What I tend not to find, though, is any of my customers (and I work with some pretty impressive people) actually using it in anger to any extent. I myself worked extensively with the model from about 1994 until around 2003, when it finally ground me down. The EFQM Model, taken lock, stock and barrel, only really works well from an academic perspective. It talks a good game, but in practical terms it takes more than it gives.
That said, it does have redeeming features (again recounted in this earlier post), one of which is the RADAR assessment framework. RADAR can easily be disaggregated from the EFQM Model and used in isolation, either to evaluate, review or even help design a business process or function.
So what exactly is RADAR?
For those of you who don’t know, the EFQM assessment process uses a numerical scoring system. It’s not an exact science, of course, but it does help assessors rank and rate comparative data between companies and processes, and maintain a consistent approach. At the moment that scoring system is based on the “RADAR” Scorecard (see below).
The EFQM RADAR assessment approach uses a type of balanced scorecard to encourage the assessor (or auditor, for that matter) to ask a series of key questions, namely:
R. To what extent are results used to set targets for process performance?
A. To what extent is a clear approach (procedures for example) defined and understood?
D. To what extent is the approach deployed (i.e. does everyone follow the approach or is deployment patchy)?
A. To what extent is the process assessed (i.e. measurement that asks the question “is it working the way it should”)?
R. To what extent is the process reviewed (i.e. a review of whether the overall approach is still relevant and suitable)?
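To make the scorecard idea concrete, here is a minimal sketch of how the five questions might be captured and rolled up into a single figure. The class name, the 0–100 scale and the flat average are my own illustrative assumptions, not the official EFQM weighting or scoring rules.

```python
# A minimal RADAR-style scorecard sketch, assuming a 0-100 score per
# element and an unweighted average. Illustrative only; the real EFQM
# scoring matrix weights and sub-divides the elements differently.
from dataclasses import dataclass, fields


@dataclass
class RadarScorecard:
    results: int      # R: are results used to set targets?
    approach: int     # A: is a clear approach defined and understood?
    deployment: int   # D: is the approach followed everywhere, or patchy?
    assessment: int   # A: is the process measured ("is it working")?
    review: int       # R: is the overall approach still relevant?

    def overall(self) -> float:
        # Unweighted mean across the five elements.
        scores = [getattr(self, f.name) for f in fields(self)]
        return sum(scores) / len(scores)


process = RadarScorecard(results=70, approach=60, deployment=40,
                         assessment=50, review=30)
print(process.overall())  # prints 50.0; weak deployment and review drag it down
```

Even this toy version shows the point of the framework: a strong approach on paper scores poorly overall if deployment and review lag behind it.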
Strengths of the RADAR approach
The key strengths of the RADAR approach (in my opinion) are as follows:
- It forces the user to evaluate the big, joined up picture, not just elements in isolation
- It forces the user to evaluate the flow of the PDCA cycle through a process
- It actually STARTS (the first “R”) by asking “to what extent does the process achieve its desired results?”
This final point, for me, is the key. The starting point for the evaluation of the process is to establish to what extent the process is achieving its aims. Whether it is EFFECTIVE, in other words. I could suggest that in some other evaluative frameworks (quality auditing, for example) the effectiveness of the process is often relegated to the status of an afterthought, and sometimes not even that. Conformance, in short, is king. The subsequent stages of the evaluation (the remaining “ADAR”) more or less follow a recognisable PDCA cycle to establish, assuming the process is found to be effective, whether that effectiveness is controlled and therefore sustainable. I penned a post a couple of years ago suggesting ways RADAR could be incorporated into the quality auditing process.
Now, I am not saying RADAR is the answer we have all been looking for. All I am saying is that it could work well for some. Personally, I like the thought process. I think it works.