Blog: AI and the HRA

Last updated on 24 Sep 2024

Over the past few years, we have seen a rapid rise in the use of artificial intelligence (AI).

Our emails are now prefiltered to help us find the most important messages and avoid spam.

The maps on our phones intuitively offer the most efficient route, avoiding traffic and road closures in real time.

In our day-to-day lives, anyone can now ask a large language model (a type of AI model specifically designed to understand, generate and manipulate natural language), such as ChatGPT, to answer complex questions at the click of a button.

In health and social care, the use of AI is also becoming more commonplace. Increasing the use of AI is one of the strategic aims set out in the NHS Long Term Plan, to ‘help clinicians in applying best practice, eliminate unwarranted variation across the whole pathway of care, and support patients in managing their health and condition’.

To achieve these aims, it is important that research informs the use of AI in health and social care.

The Health Research Authority (HRA) has a role in reviewing research involving AI before it starts, to make sure it is legal and ethical. We provide advice to the research community about using AI when doing research, which is particularly valued in new areas of research where standards may be unclear.

We also have a role in addressing questions about how AI is used by anyone doing research. And, of course, we need to consider how we might use AI when we carry out our own role in reviewing research.

The opportunity to make a real difference

There are exciting examples of AI at work in the NHS that are already making a real difference in early diagnosis and treatment. 

AI is being used to review medical images such as X-ray, CT and MRI scans, which is transforming the way we screen for cancer.

The accurate, early diagnosis that AI can provide is supporting clinicians to identify signs of cancer earlier, potentially saving thousands of lives a year. 

Other examples include AI being used to identify early signs of diabetes from eye scans, and even being used to predict how certain drugs may interact with diseases. 

We are also seeing AI being used to help with recruitment, bed occupancy and scheduling.

But we know that further research will be key to unlocking the full potential of AI.

What do we need to consider when it comes to AI?

Whenever AI is talked about, so too are the concerns about how it is being used. Bias in particular is a recurring theme.

Questions that typically come up whenever we talk about AI include:

  • how accurate is the data that AI is trained on? 
  • can the information generated by AI be trusted?
  • how do we ensure the privacy and security of personal data (including health and care data)?
  • should AI be making decisions instead of people – particularly when it comes to ethics?
  • how can we ensure an AI can make fair decisions?
  • what is lost (if anything) when AI is used to write information about research?

These are legitimate questions that must be addressed.

The health and social care system needs to ensure that people can trust AI-generated information before it is used to make decisions.

Helping to ensure health and social care research is trusted is where we come in. 

AI and the HRA

The HRA is responsible for protecting the rights, safety, dignity and wellbeing of research participants. 

As the number of research studies involving AI increases every year, so does the number of applications we receive that were written using AI. We have taken steps to adapt the way we work to be better equipped to deal with these changes.

When we first saw a rise in AI in research, we set up a dedicated data and AI programme, with a team tasked with finding ways to streamline data-driven research.

Research that develops and uses data-driven technology (including AI) raises many of the same issues as other types of research, including questions about consent and data protection.

However, we needed to make it easier for researchers to find and follow the relevant guidance. Some of the people working in this area are new to health and social care research, so they also need help to understand how research in the NHS works. In response, we brought together specific guidance and other tools for researchers.

Similarly, our Research Ethics Committee (REC) and Confidentiality Advisory Group (CAG) members needed a wider working knowledge of AI so that they could properly understand and challenge what they review.

We also recognised that we cannot do this alone.

To further support researchers and innovators looking to develop AI for health and social care, we teamed up with the National Institute for Health and Care Excellence (NICE), the Care Quality Commission (CQC) and the Medicines and Healthcare products Regulatory Agency (MHRA) to establish the Artificial Intelligence and Digital Regulations Service.

The service provides a ‘one stop shop’ of in-depth guidance and practical support for developers and adopters of AI technologies in health and social care. Each organisation brings vital expertise to the service, including product regulation (MHRA), data and research governance (HRA), clinical and cost effectiveness (NICE) and care quality assurance (CQC).

Transforming the way we work, with trust remaining at the centre

The HRA is also looking at where we might be able to use AI technologies to work more effectively and improve our services, for example by identifying ways AI might support our approval processes.

AI offers the ability to quickly process information to determine whether it meets defined standards, predict outcomes based on similar previous data and synthesise enormous volumes of information. 

With that in mind, we have been looking into the feasibility of using AI in our regulatory process.

We have been carrying out a pre-pilot project using past applications (with applicant approval) to test and evaluate the use of AI in specific review processes. 

In the coming months we will share the results of this pre-pilot, with the hope of carrying out a larger-scale pilot in 2025.

As a regulator that exists, in part, to give people confidence that they can trust research and its findings, it is important that we keep pace with the way people want to use technology to communicate. 

At the same time, we need to ensure that people can maintain trust in the research approval process, and that patient and service user needs remain front and centre. 

Going forward

As AI technologies affect all our operations, we are now embedding our data and AI expertise in all of our teams to cement this knowledge across the organisation.

If you have any questions about the HRA’s role in the regulation of AI in health and social care research, please contact us by emailing queries@hra.nhs.uk and we will make sure your query reaches the person best suited to help you.

Matt Westmore

Chief Executive