PATCHS: Real-world testing and optimization – stage 1

  • Research type

    Research Study

  • Full title

    Patient Automated Triage and Clinical Hub Scheduling (PATCHS): Real-world testing and optimization – stage 1

  • IRAS ID

    264891

  • Contact name

    Benjamin Brown

  • Contact email

    benjamin.brown@manchester.ac.uk

  • Sponsor organisation

    University of Manchester

  • Duration of Study in the UK

1 year, 0 months, 0 days

  • Research summary

    Workstream 1: patient data collection: We will install PATCHS software in 5 GP practices in Salford and Manchester. Patients will be able to register to submit requests to their GP practice through the PATCHS ‘Chatbot’ (see screenshot 9). The PATCHS Chatbot will ask a series of questions to elicit further information before the patient can submit their request to their GP practice.
Workstream 2: training dataset collection: Each patient request will be reviewed by a GP in the patient's practice (a ‘Home’ GP), who will deal with it using their clinical judgement. As part of this process the GP will record a triage decision (a ‘label’, e.g. GP appointment), which will be used as a ‘training’ dataset to ‘teach’ PATCHS how to triage requests on its own. We aim to collect 50,000 triages in total. PATCHS will not make any clinical decisions that would influence patient care.
Workstream 3: test dataset collection: A random sample of 964 (1.9%) appointment requests will be shown to three further GPs (selected at random from a panel of 20-30) who do not work in the patients’ GP practice (‘Away’ GPs). The purpose will be to determine what an ‘average’ or ‘Benchmark’ GP would have done with each of these requests, forming a ‘test’ dataset. Patients will consent to sharing their information in this way when they register to use PATCHS in Workstream 1. PATCHS will retrospectively decide a triage outcome for each appointment request, which will be compared with the Benchmark GP decision. We will consider PATCHS ‘safe’ if its performance is at least as good as the Benchmark GP’s.
Workstream 4: qualitative evaluation: Alongside the above workstreams, we will conduct usability tests with patients and GP staff using PATCHS in a test environment, observe staff using it in the GP practice, and interview staff and patients about their experience. All users will also be able to submit feedback about their experience using PATCHS, and we will analyse how they use the system, such as what pages they visit and when. These findings will be used to improve PATCHS’ design and its implementation into clinical practice, as well as our approach to evaluating it in future studies.

  • REC name

    Yorkshire & The Humber - Bradford Leeds Research Ethics Committee

  • REC reference

    20/YH/0020

  • Date of REC Opinion

    24 Feb 2020

  • REC opinion

    Further Information Favourable Opinion