Early in 2018 I went to Sadler’s Wells Theatre. I had been many times before to see the ballet, but this time I had been asked to attend a meeting arranged by an organisation that was new to me: the Experimental Cancer Medicine Centres network, usually known as the ECMC.
We were going to discuss how best to help researchers explain some of the complex and innovative research that is necessary in oncology.
Classical clinical trials, known as Clinical Trials of Investigational Medicinal Products or CTIMPs, don’t really work in this field. They require quite large numbers of participants, whereas some cancers are, thankfully, very rare, so there may not be enough patients to study. For example, researchers may have several cancer drugs and want to work out who would benefit from which drug. Such research involves looking at different combinations of drug, patient and cancer type.
Over recent years there has been a plethora of fancy names for such research approaches: umbrella, basket and adaptive trials, Complex Innovative Design trials (CIDs) and more. But the ECMC recognised that they all had the same basic idea: running as a series of small ‘arms’, each looking at a particular drug or group of patients. At the start there might be, say, five arms, and as the research continued some would be stopped when it became clear the drug wasn’t working for that particular group of patients, and other arms would be initiated.
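The mechanics are easier to see in miniature. Here is a toy sketch in Python of that basic idea – arms running in parallel, some stopped, new ones opened. All the drug names, response rates and the stopping rule are invented for illustration; real trials use formal statistical stopping rules set out in the protocol.

```python
import random

# Toy sketch of a multi-arm "platform" trial: several arms run in
# parallel, an arm is closed when an interim look suggests the drug
# isn't working for that group, and a waiting arm can open in its place.
# Names, rates and thresholds are hypothetical, for illustration only.

random.seed(1)

arms = {f"drug_{i}": [] for i in range(1, 6)}   # five starting arms
waiting = ["drug_6", "drug_7"]                  # candidate arms to add later

for interim_look in range(1, 4):
    for arm, outcomes in list(arms.items()):
        # enrol 20 more patients; each responds with (made-up) probability 0.3
        outcomes.extend(random.random() < 0.3 for _ in range(20))
        response_rate = sum(outcomes) / len(outcomes)
        if response_rate < 0.2:                 # hypothetical futility rule
            print(f"Look {interim_look}: stopping {arm} "
                  f"({response_rate:.0%} responses)")
            del arms[arm]
            if waiting:                         # open a new arm in its place
                arms[waiting.pop(0)] = []

print("Arms still open:", sorted(arms))
```

The point of the sketch is simply that the overall programme keeps running while individual arms come and go – which is exactly what makes such designs hard to describe in a classical, fixed protocol.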
So, the ECMC wanted to bring people together to see if they could develop a common approach – using a standard template. They wanted to enable this type of research to be more easily understood, and therefore more likely to be approved without needing further clarifications during the review process. Their focus was on developing the consensus paper – which has been published today (link to press release) – that explained what was needed in the research description. By following this method, researchers could be confident that the regulators, including Research Ethics Committees (RECs), would be more likely to be able to give a favourable opinion. The plan is now for this to become part of the standard way of describing such research.
During the meetings I was keen to express what I felt was an important idea: we (RECs) could approve such research so long as we (and the participants) were confident we knew what was going to happen. In particular, we would need to be sure we understood how the research would develop.
I think the researchers had at first feared we might ask for something much more complicated. They liked the approach I was suggesting and – as always happens to those who make too much noise in such meetings – I was asked to join the core group who would meet several times and, bit by bit, write the consensus paper, joining Will Navaie, the HRA’s Engagement Manager. We were a good, diverse group who worked well together, arguing happily, and gradually we managed to produce our consensus paper. It took longer than I expected (it always does) and was harder work than I had anticipated, but it was fun and will be useful.
Is it the last word on complex innovative designed trials?
Probably not: it is an ever-evolving scene. There are still some aspects that need to be developed. High on my list is making sure that the Master Protocol or the Programme Design (the terms have yet to settle down) really specifies how the Data Monitoring Committee makes its decisions. And I am equally unsure that we have sorted out how our colleagues in the hospitals and research centres can be confident the research won’t suddenly become more expensive and extensive than they expected… But I am optimistic we can get there.