NHS healthcare professionals who are directing analytics projects and programmes, as well as IT managers in the NHS, cannot help but be engaged in arguments over how far it is desirable to use software algorithms in work previously carried out by people.
Such decision-support software is also deployed in sectors other than healthcare, but nowhere are its merits and drawbacks more keenly contested than in healthcare.
In February 2015, the British Medical Association (BMA) complained that England’s 111 telephone advice service, which was launched in March 2013 and relies mainly on staff using decision-support software, was referring too many callers to GPs and to hospital accident and emergency departments.
The service uses NHS Pathways as its decision-support software, which is also used by English ambulance services to assess 999 callers. The NHS system is managed by the Health and Social Care Information Centre.
The BMA said a comparison of October 2014 with a year previously showed that 111 referred 186% more callers to GPs and 192% more to A&E. The 111 service replaced NHS Direct, a national helpline that had made greater use of clinical staff, and used a system procured from AXA Assistance in 2000.
Charlotte Jones, the BMA’s GP lead on unscheduled care, says the reliance on algorithms is part of the problem. “Computer-based algorithms, by their very nature, have to be relatively risk-averse and take the safest option,” she says. “However, they are not always applicable to the clinical setting or, if they are, they don’t allow for subtleties in symptoms, and symptoms don’t always fall neatly into boxes.
“So the computer algorithms that call handlers have to follow don’t allow handlers to move away from them when common sense or your own individual knowledge calls for it.”
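The scripted, risk-averse triage Jones describes can be pictured as a decision tree that escalates whenever it is unsure. The sketch below is purely illustrative – the questions, dispositions and escalation rules are invented for this example and are not NHS Pathways content:

```python
# Illustrative sketch of a risk-averse triage script (NOT NHS Pathways).
# Any red-flag answer that is 'yes' -- or simply unanswered -- escalates
# to the safest disposition, which is exactly the behaviour Jones describes.

def triage(answers):
    """answers: dict of question id -> True/False (missing = unsure)."""
    # Invented red-flag questions: 'yes' or unsure escalates immediately.
    red_flags = ["chest_pain", "severe_bleeding", "unconscious"]
    for q in red_flags:
        if answers.get(q) is not False:  # True or missing/unsure -> escalate
            return "ambulance"
    # Lower-acuity branch still errs towards clinician contact.
    if answers.get("high_fever") or answers.get("persistent_vomiting"):
        return "see_gp_today"
    return "self_care_advice"

calls = [
    {},  # caller too unsure to answer anything
    {"chest_pain": False, "severe_bleeding": False,
     "unconscious": False, "high_fever": True},
]
for answers in calls:
    print(triage(answers))
```

Note how the handler has no route around the tree: a caller who cannot answer a red-flag question is sent an ambulance, which is the over-referral pattern the BMA complains about.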
Janette Turner is senior research fellow at the University of Sheffield and director of the medical care research unit at the university's School of Health and Related Research. She agrees that any kind of phone service is limited by the fact that it cannot diagnose.
“You need a clinician face-to-face to make a diagnosis, to look at people and do tests,” she says. “This is about assessing the level of urgency and the level of care. The question is how well algorithms can assess, compared with clinically trained staff.”
Turner and her colleagues assessed pilots of NHS 111 for the Department of Health in 2012. There are problems in comparing 111 with NHS Direct, because the previous service did not handle out-of-hours calls to GP surgeries. “People who called NHS Direct were calling because they weren’t sure what they should do,” she says, whereas many 111 callers have already decided that they want to see a GP.
According to NHS England data, 29% of 111 call time was provided by clinically trained staff in December 2014. “It is true that a far bigger proportion of 111 calls are handled by non-clinical staff,” says Turner.
But 111 tends to put callers through to its clinical staff on the same call or have them ring back in a matter of minutes, whereas NHS Direct often took several hours to do this – and the old service also had a reputation of being over-cautious.
“It wasn’t known as NHS Redirect by the ambulance service for nothing,” says Turner.
However, the Sheffield research on the 111 pilots did show that it resulted in a 3% increase in ambulance call-outs – enough to make a significant impact – although it did not find a significant impact on visits to A&E departments.
Turner agrees with the BMA's Jones that any algorithm-based system is likely to err on the side of caution – partly because those designing the systems will be wary of taking risks, and partly because they lack the information available to someone in the same room as a patient.
But Turner thinks it makes sense to use algorithm-based systems to assess callers initially, because some will want basic advice and others will have straightforward problems. She points out that ambulance services have used non-clinically trained staff equipped with decision-support software for two decades – the same NHS Pathways system used by 111 – although they only have to decide what level of urgency to attach to a call.
The question is not whether to use such software, but what proportion of the work needs to involve clinically trained staff.
Jones argues that the English 111 service needs to increase that proportion – and notes that it is doing so. “The call handlers in 111 are given 12 weeks' training and they are not clinical staff,” she says. “There are clinical staff in some of the centres to help them, and that is increasing. Indeed, they are looking at putting GPs and more nurses in to support the decision-making of the individual call handlers when they feel the computer software needs to be overridden.
“What we need to do is ensure that any software used, while having to be safe, is also appropriate for patients, and not leading to potentially unnecessary harm with additional inappropriate extra tests, causing anxiety or inappropriately reassuring people.
“That is where the tension comes in. That clinical judgement, clinical knowledge and experience that develops over a long period of time means you can use that experience for managing individuals.”
The belief that software should support, rather than replace, clinically trained staff is shared by Mateja Jamnik, a senior lecturer at the Computer Laboratory at the University of Cambridge and an expert in artificial intelligence. She says that, as clinical decision-support systems improve, they are likely to need less input from non-clinically trained staff, such as call handlers.
“However, the expert knowledge provided by clinical staff that the callers may be referred to will, as far as I can see, remain a crucial part of the service,” she adds. “But these clinicians will be supported collaboratively by expert systems.”
Replacing clinicians with algorithms completely would be fraught with technical and ethical issues, says Jamnik. “For example, who takes the responsibility for a wrong or harmful decision by a computer program? While we have significant evidence that, in some cases, software can be more reliable than humans to make crucial decisions, I think that, for the foreseeable future, these systems will be designed for and used in collaboration with, and support of, clinicians.
“They will make the role of clinicians much more efficient, and allow them, in addition, to consider new dimensions coming from biomedicine that we were never able to use before.”
Jamnik says using software to support clinicians has already produced some significant case studies of improvements in patient safety. For example, a study in a Boston emergency department, which introduced a decision-support system to help clinicians prescribe a particular drug or course of treatment, saw errors decline by 55%.
In the UK, University Hospitals Birmingham NHS Foundation Trust has reduced error rates in prescribing through its Prescribing Information and Communications System (PICS), which uses the trust’s agreed procedures and policies to advise staff.
For example, if a clinician orders a high level of a certain drug, PICS can query this. The user can override the software, but that override action is recorded. Such events are recorded to monitor how staff work, but also to adjust policies and train new doctors. The trust has licensed the software to other parts of the NHS.
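The query-then-audited-override pattern the trust describes can be sketched as follows. This is a hypothetical illustration, not the actual PICS code; the drug names, dose limits and function signature are all invented:

```python
# Hypothetical sketch of a dose check with an audited override
# (illustrative only -- not the actual PICS implementation).

audit_log = []  # override events kept for monitoring and policy review

# Invented reference limits (mg) for illustration.
MAX_DOSE_MG = {"drugA": 500, "drugB": 20}

def prescribe(user, drug, dose_mg, override=False):
    limit = MAX_DOSE_MG.get(drug)
    if limit is not None and dose_mg > limit:
        if not override:
            # Query the prescription rather than silently accepting it.
            return f"query: {dose_mg} mg exceeds {limit} mg limit for {drug}"
        # Overrides are allowed, but always recorded for later review.
        audit_log.append({"user": user, "drug": drug, "dose_mg": dose_mg})
    return "accepted"

print(prescribe("dr_smith", "drugA", 750))                 # queried
print(prescribe("dr_smith", "drugA", 750, override=True))  # accepted, logged
```

The design point is that the software never blocks the clinician outright: the human keeps the final say, and the audit trail is what feeds back into policies and training.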
“I see the future in the hands of human experts, but heavily supported and helped with expert systems that are becoming more accurate all the time,” says Jamnik, particularly given progress in analysing data on patients with several medical conditions, as well as work on personalised medicines.
“Clinicians cannot, in such complex patient cases, reliably take all the relevant facets into consideration,” she says. “Sometimes, electronic health records and clinical evidence need to be combined, and computers can effectively address this using statistical and machine learning techniques.”
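One simple way to combine a finding from a patient's record with population-level clinical evidence is a Bayesian update. The toy sketch below uses invented numbers and is far simpler than the statistical and machine learning techniques Jamnik refers to:

```python
# Toy Bayesian update: combine a base rate drawn from clinical evidence
# with a finding observed in a patient's record (all numbers invented).

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive finding) via Bayes' rule."""
    p_pos = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_pos

# Base rate 2%; finding present in 80% of cases and 10% of non-cases.
# Even a strong finding only lifts the probability to about 14%,
# the kind of base-rate effect humans routinely misjudge.
print(round(posterior(0.02, 0.8, 0.1), 3))
```

Scaling this idea up to many interacting conditions and record fields is what makes multi-morbidity cases hard for unaided clinicians and tractable for statistical software.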
Software can also avoid errors that people are prone to making, such as overestimating the likelihood of events that happen more frequently, she says. “A decision-support system is less biased and therefore provides valid help to a human expert.”
The University of Sheffield’s Turner says there is also potential for NHS staff in the community to use clinical decision-support systems to help them do more. Some 80%-90% of 999 calls are not life-threatening emergencies, she says, and the paramedics dealing with such call-outs could set up more appropriate treatment as part of their visit.
“To enable them to do that, there is probably scope for hand-held devices with support software on them,” Turner says.
For example, for older people who have suffered a less serious fall, the best treatment is for them to be visited by a specialist falls team that can help them make changes that allow them to stay in their own home. This is a better option than taking the patient to A&E, where they could face a long wait and risk infections – and it is also cheaper.
While some workers have seen, and will see, their jobs replaced by IT, skilled healthcare professionals look more likely to find software supporting them to become more accurate and efficient.
Given that the ageing UK population and more expensive treatments are increasing demand for NHS services more quickly than economic growth can support – as in most developed countries – the medics of the future seem destined to be cyborgs rather than robots.
Source: http://www.computerweekly.com/feature/NHS-111-shows-how-medical-diagnosis-can-be-computerised