Artificial intelligence (AI) and data-driven technologies offer huge potential for health and social care. They can analyse large quantities of complex information quickly to provide insights that can be used to improve people’s lives. But it’s crucial that AI is used ethically, and to make sure that’s always the case, here at the HRA we’re future-proofing the way that research studies using these technologies are approved.
Our work in this area is funded by NHSX as part of the NHS AI Lab.
We want to streamline the approvals process for people who have applied, or are likely to apply, to us for permission to start research involving AI or data-driven technologies. We want these exciting studies to be reviewed in a proportionate way, based on their level of risk, and want to help them be approved more quickly. Overall, we want to increase the number of AI studies being submitted for approval and to play our part in helping to realise the benefits of these new data-driven technologies.
We can’t do this alone. We’re collaborating with partners and engaging with organisations that focus on protecting the public’s interest in the use of health data, those developing the technology and experts in the rules that govern data use. As with all areas of our work, we’re also engaging with patients and research participants to make sure that their needs are front and centre.
Our first challenge was to find out what helps, and what creates barriers for, those developing cutting-edge technologies and using AI for health and social care.
What we did
We carried out qualitative research with people who have submitted AI applications, along with students, academics and companies in this area. We wanted to identify common issues with our existing approvals process when studies involving data-driven technology seek approval.
An AI applicant: "One challenge historically is matching up the approvals requirements to AI practices – like changing one letter of code, or changing the login page, needs re-approval and slows things down."
We worked with our Research Ethics Committees to understand more about the areas of concern when this kind of research is reviewed. This helped us to understand where the ethical challenges might lie and how to support our REC members with their work in this area.
A REC member: "As RECs don’t over-scrutinise the science behind studies, they have to be able to trust what’s being presented to them. The level of understanding and knowledge required amongst REC members about AI technology and how data works to review studies of this nature could potentially present a risk. Further training or support from specialists is therefore essential."
We also carried out a literature review, and a series of workshops with patients, industry, clinicians, academics, other regulators, HRA staff and volunteers.
What we heard:
It needs to be easier at an earlier stage to identify which AI and data-driven activities are research, and exactly what approvals will be required.
We found that people outside the health and care sectors, such as start-up founders, computer scientists, mathematicians, bioinformaticians and technologists, are doing exciting work in this area, but that they’re not familiar with the HRA or our role. We heard that those creating AI and data-driven technologies want faster access to health data, particularly at the proof-of-concept stage, and that awareness of the range of datasets that could be used for this purpose was limited.
An AI start-up owner: "I wanted to do something in the NHS, but everyone I spoke to said 'go somewhere else because the NHS will kill your business'."
We will be looking at how we can work with partners to support access and also ensure that appropriate safeguards are still maintained to protect health data on behalf of patients and the public.
We also identified that HRA volunteers and staff need a deeper understanding of AI if we’re to speed up the approvals process. Demand is increasing year on year, and regulators like the HRA will have to change to keep up.
This work will help us make sure new technologies can be developed, and that we can support their development, for example:
- Using AI as part of image recognition, for example in radiology or pathology
- Natural Language Processing, using computers to analyse speech, like this project using NHS data at the University of Edinburgh
- Privacy-enhancing technologies, e.g. federated learning models
- Different data holding structures e.g. Trusted Research Environments
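To give a flavour of the privacy-enhancing approach mentioned above, here is a minimal, purely illustrative sketch of federated averaging: each site trains a model on its own data and shares only model parameters, never raw records, with a central aggregator. The sites, datasets and numbers below are entirely hypothetical and simplified; real federated learning systems involve secure infrastructure well beyond this sketch.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training step: gradient descent on a linear model.
    Only the updated weights leave the site, never the data (X, y)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Central aggregator: average site models, weighted by dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Three hypothetical hospital sites, each holding a private dataset
# generated from the same underlying relationship (true_w).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

# Communication rounds: broadcast the global model, train locally, average.
global_w = np.zeros(2)
for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

print(global_w)  # converges towards true_w without pooling raw data
```

The point of the design is that the aggregator only ever sees model weights, which is why approaches like this are of interest where patient-level health data cannot be centralised.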
What we’re doing next:
We’re working with partners including the Medicines and Healthcare products Regulatory Agency (MHRA) and NHS Digital to look at how we can work together, using the outputs of our discovery work, to make change happen.
We’re now exploring a number of solutions to improve the approvals process for people applying to start health and social care research involving artificial intelligence (AI) or data-driven technologies. Read more about what we are exploring and how you can get involved in helping us improve.