AI in the NHS: what do health professionals need to know?

Can artificial intelligence help bring about personalised and preventative healthcare? AI expert Jessica Morley sums up the four key considerations for NHS professionals

Medical practice has always been data-driven in the sense that it is primarily about identifying cause and effect – whether that be the cause of an illness or the effectiveness of a treatment, writes Jessica Morley.

In recent years, the ‘datafication’ of many aspects of life, and the digitisation of medical images and health records, have dramatically increased the amount of data available for these purposes. This has led to a resurgence of interest in the use of artificial intelligence (AI) for healthcare.

Proponents of the data revolution for healthcare suggest that AI can be used to complete a wide range of tasks: from diagnosis to risk stratification, and from reducing medical errors to increasing productivity. By far the greatest excitement among policymakers, however, stems from the possibilities of precision medicine.

Unlocking the potential of precision medicine

Precision medicine aims to ensure that the right treatment is delivered to the right patient at the right time by taking into account an individual’s multi-omic data alongside their lifestyle, environment, and medical and genetic history. AI is seen as a key technology in bringing about this transformation.

NHS leaders who support this line of research argue that increasing our use of AI in this manner will enable a shift from reactive medicine, initiated after a patient has become ill, to a more preventative model of care. Thus, incorporation of AI has the potential to completely change the way we deliver healthcare in the NHS and – as pointed out in the Topol Review – this has implications for the healthcare workforce.

So, what do healthcare professionals need to know?

1. What AI is (and what it isn’t)

AI is an umbrella term and, despite its current popularity, is often used without being defined. In medicine, the best definition is still the original: “the science of making machines do things that would require intelligence if done by people”.

The ‘science’ here refers to the range of techniques that can be used. For example, decision tree techniques have been used to diagnose breast cancer tumours, support vector machine techniques have been used to classify genes, and ensemble learning methods have been used to predict outcomes for cancer patients.
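To make the first of those techniques concrete, here is a minimal sketch of a decision tree classifying tumours as malignant or benign, using Python’s scikit-learn library and its bundled Wisconsin breast cancer dataset. It is illustrative only, not a representation of any deployed NHS system:

```python
# Minimal sketch: a decision tree classifying breast cancer tumours as
# malignant or benign, using scikit-learn's bundled Wisconsin dataset.
# Illustrative only - not a clinical tool.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0
)

# A shallow tree: each split is a human-readable rule on a tumour feature
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```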

From this perspective, AI represents a “growing resource of interactive, autonomous, and often self-learning agency” that can be drawn upon when needed. It is not about creating autonomous clinicians to replace the existing human workforce. There are countless aspects of healthcare that require skills that are uniquely human.

2. Datasets on their own are fairly meaningless

Datasets, and even advanced AI techniques, alone are not sufficient to inform medical practice. Data must be transformed before it can be useful; hence the popularity of the ‘DIKW’ pyramid: data → information → knowledge → wisdom.
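As a loose illustration of that climb, consider a few lines of Python working on made-up wearable readings; the values and the 100 bpm threshold are invented purely for the example:

```python
# Illustrative sketch of the DIKW climb with made-up wearable data:
# raw numbers only become useful once transformed and interpreted.
readings = [72, 75, 71, 118, 122, 119, 74]  # data: raw heart rates (bpm)

# Information: structure and summarise the raw values
resting = [r for r in readings if r < 100]
elevated = [r for r in readings if r >= 100]
mean_resting = sum(resting) / len(resting)

# Knowledge: interpret the information against a (hypothetical) threshold
sustained_elevation = len(elevated) >= 3

print(f"Mean resting rate: {mean_resting:.0f} bpm")
if sustained_elevation:
    # Wisdom - deciding whether and how to act - stays with the clinician
    print("Flag for clinical review")
```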

Transforming data in this way requires tremendous time and skill, and creates additional work – including requiring healthcare providers to make complex judgements, such as whether or not data provided by patients (for example, from their Fitbit) is clinically relevant.

However, by far the biggest challenge comes from the way that AI makes use of health data, fundamentally transforming the role that it plays in the care pathway by interchangeably linking patients and their data so that “patients are their genetic profiles, latest blood results, personal information, allergies etc”. This link means that ‘data protection’ encompasses more than data security, consent and anonymisation.

Like other forms of medical intervention, AI can cause real harm and its use requires genuine ethical consideration. With this in mind, healthcare professionals should make themselves aware of policies and standards (for example, the NHS Code of Conduct for data-driven health and care technology) to ensure the safe and ethical development and use of AI for healthcare, and be ready to ensure these standards are enforced.

3. AI isn’t always right, and isn’t right for every task

AI and algorithms are surrounded by a considerable amount of mythology: not least the unfounded belief that ‘the algorithm is always right’. Algorithms are socio-technical constructs, however. They are enmeshed in the context in which they were created – meaning that an error by an AI’s human developers, or deployment in a context different to the one it was designed for, is likely to generate incorrect outputs, as the sketch below shows.
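Here is a minimal sketch of that ‘context shift’ in Python. The two patient cohorts and the single risk score are simulated purely for illustration; the point is that a model fitted in one setting can be systematically wrong in another:

```python
# Sketch of context shift: a model that performs well in the setting it was
# trained in degrades when deployed in another. All data is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_cohort(true_threshold, n=500):
    # Patients labelled 'high risk' above a threshold that varies by setting
    scores = rng.normal(0.0, 1.0, size=(n, 1))
    labels = (scores[:, 0] > true_threshold).astype(int)
    return scores, labels

# Fit the model in its original context (hypothetical threshold of 0.0)...
X_dev, y_dev = make_cohort(true_threshold=0.0)
model = LogisticRegression().fit(X_dev, y_dev)

# ...then deploy it where the clinically meaningful threshold differs
X_new, y_new = make_cohort(true_threshold=1.0)
print(f"Accuracy in original context: {accuracy_score(y_dev, model.predict(X_dev)):.2f}")
print(f"Accuracy after context shift: {accuracy_score(y_new, model.predict(X_new)):.2f}")
```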

It is therefore very important that healthcare professionals feel able to question the suggestions made by AI systems in clinical practice. Consequently, basic computer and data science skills must be included in medical training programmes to educate clinicians about the inherent fallibility of AI systems.

Crucially, this ability to question should not only extend to the ‘decision’ or ‘prediction’ made by an AI system, but also to the logic of deploying AI at all. Artificial intelligence is a powerful technology, but it is not a silver bullet. Some tasks will remain better completed by a human. As Ezio Di Nucci writes in Should we be afraid of medical AI?: “It is paramount that we properly distinguish between different kinds and levels of task that we may or may not legitimately delegate to machine learning algorithms.”

4. Your contribution is vital

This short overview might make it seem like AI is going to increase the workload for an already overstretched workforce, rather than decrease it, but that need not be the case.

Indeed, although many who champion AI for healthcare make it seem as if it is already a panacea for the issues facing the NHS, in truth most of the AI solutions discussed in the research literature and in the press are not yet executable on the frontline.

There are numerous reasons for this disconnect between research and reality – both technical and social. But perhaps the biggest is that, too often, the technical community and the medical community have operated in silos. Thus, the most important thing that healthcare professionals need to know about AI and precision medicine is that, if the opportunities are to be capitalised on and the risks minimised, they will need to collaborate with AI researchers to help steer research. This will help to ensure that AI is used to tackle some of healthcare’s biggest challenges – not just for the sake of it.

Jessica Morley is policy lead at the University of Oxford’s Evidence-Based Medicine DataLab and AI subject matter expert at NHSX

Please note: This article is for informational or educational purposes and is not a substitute for professional medical advice.