
Joining the conversation about EPR usability

Nervecentre CEO, Paul Volkaerts, explains why the topic of EPR usability is critically important and why usability is about much more than system implementation and training.

In 2022, NHS England facilitated the most comprehensive survey of EPR usability ever conducted in the NHS, using the International KLAS methodology and supported by the excellent team at Ethical Healthcare. EPR usability is such an important subject because EPRs that are not well adopted will not deliver the safety and operational efficiencies that justify their high price tags. Ensuring an EPR is well used is perhaps the single most important objective of a deployment.

The team at Ethical Healthcare are doing great work to bring focus to this important subject through white papers and social media, leading with the core message that “focusing on implementation is more important than focusing on choice of supplier”. Whilst there are truths behind this message, like any simplification it leaves out important nuances that also need to be considered. I want to thank the team at Ethical for inviting industry to participate in a conversation around usability.

Does supplier choice really not contribute to EPR usability?

The argument that supplier choice does not contribute towards usability is based upon the statistical spread of usability scores that each supplier attained. Because most suppliers attained a mixture of good and poor scores, the argument follows that implementation is more important. This is reinforced by the KLAS evidence about the percentage impact of training on usability scores relative to supplier choice.

But this does not wholly pass the common-sense test for me. I use lots of software: sometimes I need no training and can become an advanced user both quickly and self-sufficiently; other times, I need training to do the basics and never get past that stage.

There are statistical outliers in the KLAS report that show that supplier choice does indeed matter. One supplier scored only negative scores; you presumably would not conclude that, through focused implementation and training, you could achieve high usability with that supplier.

Nervecentre scored only positive scores, including 4 of the top 8 scores. These statistical outliers have largely been omitted from the published analysis – probably because of marketing sensitivities – but that omission affects the conclusions drawn.

Does your usability score always reflect your implementation effort?

Whilst everyone would acknowledge that good training and implementation are beneficial, I find this message an oversimplification that places too much of the responsibility onto trust implementation teams and absolves suppliers of theirs.

The job of suppliers is to build products that are intuitive, easy to implement and easy to learn. The implementation effort required to deploy an EPR and attain good usability scores is not the same for all suppliers – it cannot possibly be – and that feels like a key factor that is not being discussed.

I strongly suspect that there is no linearity at all between the usability scores achieved and the time and money spent on implementation and training, so the general assumption that trusts with high usability must have done a better job of implementation feels unsatisfying. I’m sure there is truth in it somewhere, but as with all analysis, the jump from presenting evidence to “therefore we conclude that…” is the part that most often falls down. Further study is required to know exactly what went well for the trusts that scored well. For instance, I would like to see how long ago their EPR was deployed. Was there a big bang or a slow rollout of features? Understanding how these factors impact usability is essential to drawing correct conclusions.

Is it realistic to increase implementation effort?

We are entering a period of high-volume EPR deployment, where NHS England requires 25-50 trusts to deploy full EPRs over the next two years. This is an order of magnitude greater than the volume of deployments over the last two years, but the number of experienced project managers, trainers, and subject matter experts available will not grow at the required rate. So how are we going to increase implementation and training effort?  

The only way to achieve this is to examine EPR deployment methodologies – to look for ways that allow trusts to deploy safely with fewer resources. I have been involved in a number of workshops about deploying more quickly or reducing the negative impact of a big-bang deployment, but the answer nearly always requires more people – usually more support and training (once, of course, the basics of good senior and clinical engagement are in place).

There appears to be limited desire to discuss different approaches – a feeling that we just need to “keep calm and carry on with big-bang deployments” and eventually we will get to the end of the pain. But if we do that, the pain could last for the next decade. As we head into a high volume of EPR deployments, now is exactly when that conversation is needed – to explore working smarter instead of throwing more people at the problem.

Some examples of working smarter are:

  • Classroom training versus ward-based training. Classroom training is expensive in terms of backfill, logistics and trainers, and yet it is much less effective than training delivered on the ward. What would it take to train on the wards without needing any backfill whilst users train? This is the Nervecentre approach.
  • Big bang versus progressive rollouts. Nervecentre heavily promotes progressive rollouts, partly because they can be achieved with an order of magnitude fewer resources and still achieve better results in the short and medium term. Going live with an EPMA is hard; why would you go live with a PAS on the same day?  It removes all of your ability to cope when problems arise. You can’t always do everything progressively, but the choice is never binary either.

The Nervecentre way

It is hard to see the results of the KLAS survey as anything other than a cry for help from users. There were some good results but too many poor results. I can’t help but feel that clinicians were not voting for more training but voting for more intuitive systems.

Nervecentre achieves strong usability scores using a tried and tested methodology that involves:

  • Building software that is intuitive and easy to learn, and
  • Rolling it out progressively to reduce the cognitive impact of the initial go-live

And none of that means the programme has to take longer. It is the opposite: you get to enjoy benefits much sooner. And if the whole EPR deployment does take longer because of resource issues, the cost of that slippage is bearable because you are not slipping an entire programme.

I was really proud of the Nervecentre usability scores. We had 4 of the top 8 scores, and all of our scores were in the top third. We will be interested to see the next usability survey when we will have more trusts with a full EPR deployed!
