| 5 September 2024
Artificial intelligence (AI) continues to impact our lives in new ways every single day. We now rely on AI in a variety of areas of life and work as organisations look to make services quicker and more effective, and healthcare is no different.
This article explores the ways in which AI is already being utilised within the sector, the benefits and challenges of these technologies, and what the future may hold for the healthcare sector.
Current use of AI in healthcare
AI’s ability to store, process and analyse data, and to perform tasks accurately at speed, is already transforming healthcare. It streamlines workloads and increases the amount of time healthcare professionals can dedicate to patient care.
Early detection and diagnosis, treatment and research are just a few examples of areas where the use of AI is having a positive impact.
Early detection and diagnosis
AI is helping to detect and diagnose life-threatening illnesses with impressive accuracy, improving medical services. One example is in breast cancer units, where the NHS is currently using a deep learning AI tool to screen for the disease. Mammography intelligent assessment, or Mia™, has been designed to act as the second reader in the cancer screening workflow. By using AI, clinicians are freed up to spend more time with patients, pressure on radiologists is reduced, and more patients are seen more quickly, meaning cancers can be detected earlier.
A further example can be seen in the NHS response to the Covid-19 pandemic, where the National Covid-19 Chest Imaging Database (NCCID) used AI to help detect and diagnose the condition. 40,000 CT scans, MRIs and X-rays from more than 10,000 patients were collated, and AI then used this data to help diagnose potential sufferers of the disease far more quickly. As a result, clinicians were able to introduce earlier medical interventions, reducing the risk of further complications.
Treatment
AI can also improve the treatment of patients by working through data efficiently, enabling enhanced disease management and better coordinated care plans, and helping patients comply with long-term treatment programmes. The use of robots has also been revolutionary, with machines able to carry out operations such as bladder replacement surgery and hysteromyoma resection. This reduces the stress on individuals and increases the number of operations that can be carried out, so patients can be treated more quickly.
Research
On average, it costs a company $359 million to develop a new drug from research lab to patient, typically taking around 12 years, with only five in 5,000 candidate drugs ever making it to human testing and just one of those five being approved for human use. However, AI is vastly improving this process, streamlining drug discovery and development so that time and costs can be significantly cut. The Pfizer vaccine for Covid-19 is one example: thanks to AI, researchers were able to analyse patient data following a clinical trial in just 22 hours, a process which usually takes 30 days.
The impact of AI on research, treatment and detection means more diseases can be cured, and cured much more quickly. This brings huge benefits such as lower fatality rates, an improved service for patients and reduced costs in areas such as research.
Challenges of AI
Despite the current use of AI and the incredible work it can carry out, it does have its limitations. These can be seen in areas such as data privacy and security, trust and transparency, and the implications for the workforce.
Data privacy
AI systems often rely on large amounts of personal health data. However, these practices have the potential to cause significant issues around data privacy, informed consent and patient autonomy.
In 2017, the UK Information Commissioner’s Office found that the Royal Free NHS Foundation Trust had breached the UK Data Protection Act 1998 by providing around 1.6 million partial patient records to Google DeepMind (an AI laboratory), without patients being informed about the processing of their data.
Another example can be seen in a 2018 study that analysed data sets from the National Health and Nutrition Examination Survey. It found that an algorithm could re-identify 85.6% of adults and 69.8% of children in a physical activity cohort study, despite the supposed removal of protected health information identifiers.
Due to AI’s reliance on varied data sets and patient data sharing, violations of privacy and misuse of personal information could become ever harder to manage as AI grows.
Security
The storing of such data also puts AI systems at risk of being exploited for malicious purposes, such as cyberattacks. Hospital technologies can be infected with software viruses, malware and worms that put patients’ privacy and health at risk, with corrupted data or infected algorithms producing incorrect and unsafe treatment recommendations. AI systems are particularly vulnerable to manipulation: a small, carefully chosen change to how inputs are presented can completely change a system’s output, for example causing it to classify a mole as malignant with 100% confidence.
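To give a sense of how fragile this can be, the sketch below shows a toy linear classifier whose verdict flips after a small, targeted nudge to its input. The weights and input values are invented for illustration; real adversarial attacks work on far more complex models, but the principle is the same.

```python
import numpy as np

# Toy linear "classifier": logistic score above 0.5 means "malignant".
# The weights stand in for a trained model (hypothetical values).
w = np.array([2.0, -1.0, 0.5])

def predict(x):
    """Return (label, confidence) from a logistic score."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return ("malignant" if p > 0.5 else "benign"), p

x = np.array([-0.5, 1.0, 0.2])       # an input the model calls benign
label, p = predict(x)

# Adversarial step: nudge each input feature slightly in the direction
# that most increases the score (the sign of the weights).
eps = 0.6
x_adv = x + eps * np.sign(w)
label_adv, p_adv = predict(x_adv)    # the verdict flips to malignant
```

The perturbation here is bounded by `eps` per feature, so to a casual observer the two inputs look almost identical, yet the classification changes entirely.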
The problems of data privacy and security could lead to a general mistrust in the use of AI. Patients could be opposed to utilising AI if their privacy and autonomy are compromised. Furthermore, medics may feel uncomfortable fully trusting and deploying the solutions provided if in theory AI could be corrupted via cyberattacks and present incorrect information.
Transparency
The potential for mistrust is also evident in the area of transparency. Many AI algorithms are virtually impossible to interpret or explain, and this lack of explainable results can make medical professionals cautious about trusting and implementing AI. Transparency also poses an issue for patients. If an individual is diagnosed with a disease such as cancer, they are likely to want to know the reasoning, or be shown evidence, behind the diagnosis. However, deep learning algorithms, and even professionals familiar with the field, can struggle to provide such answers.
The lack of explanation and transparency is also problematic when mistakes are made. AI systems will occasionally fail in patient diagnosis and treatment due to issues such as ‘model drift’, where a system deployed into production years ago starts to show signs of performance decay, making increasingly unstable predictions over time. These mistakes can have several causes; for example, the AI could have been built on data sets and parameters that are no longer valid, such as pre-Covid versus post-Covid data.
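In practice, drift of this kind is often caught by monitoring whether incoming data still looks like the data the model was trained on. Below is a minimal sketch using the population stability index (PSI), a common drift score; the threshold and the synthetic data are illustrative only.

```python
import numpy as np

def psi(reference, live, bins=10):
    """Population Stability Index: compares the distribution of live
    inputs against the training-era baseline. Higher means more drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Avoid log(0) for empty bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # data the model was trained on
drifted = rng.normal(0.8, 1.2, 5_000)    # today's incoming data has shifted

# A common rule of thumb: PSI above ~0.25 signals significant drift,
# prompting review or retraining of the model.
drift_score = psi(baseline, drifted)
```

A monitoring job running a check like this on each feature would flag, say, a pre-Covid model receiving post-Covid data long before its mistakes accumulate in the clinic.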
When these issues do occur, there can be difficulties in establishing accountability. If a patient is given the wrong diagnosis, can we hold a machine accountable?
Implications for the workforce
With the introduction of AI, individuals and organisations will have to reskill and develop their abilities in order to understand and use the new technology effectively. This restructuring of organisations and retraining of the workforce could, however, prove difficult. It takes considerable time and money for organisations to restructure and for individuals to retrain to the point where they can use new skills competently and independently. Given that data and digital skills are already a challenge across the public sector, the implementation of AI could be limited and difficult to apply broadly across the sector. Furthermore, AI depends on the humans who develop and deploy it. Until the workforce has these capabilities, AI will struggle to be the complete revolutionary force it potentially promises to be.
The future
AI is already playing a huge role within healthcare, changing areas such as diagnosis, treatment and research for the better and helping provide more efficient services. The issue going forward is therefore not its utility, but how it will be adopted in daily practice given the challenges above, and how those challenges will be addressed.
With regards to data privacy and security, setting up risk controls and governance frameworks, and investing in compliance and training, could all increase protection and help maintain confidentiality, minimising the threat of personal data violations and cyberattacks. In terms of the implications for the workforce, companies could modify their long-term workforce plans, skills matrices and job role designs to allow individuals to adapt effectively and build their competence with new AI technology. Transparency and accountability may be more difficult to address; however, a person-centred approach, where individuals remain at the forefront of decision making, may be one way to reduce such issues.
The rate at which these challenges are addressed by the solutions highlighted may limit the speed at which AI is implemented, as opposed to the time it takes for the technologies to mature. However, once these challenges are overcome, AI use could be extensive within healthcare, working alongside healthcare professionals in their efforts to increase patient care.