It’s always a privilege to be invited to speak at the King’s Fund, and especially at the Digital Health and Care Congress—a key date in the digital healthcare calendar each year.
Co-presenting with Judith Chapman, Berkshire IAPT Clinical Lead, I spoke about the potential for iaptus’ new online therapy platform, Prism, to support the evaluation of online psychological therapies in IAPT.
It feels as though as much is now being said about the challenge of evaluating—or ‘kitemarking’—digital health tools as about their potential to reduce the pressure on face-to-face services in the NHS. Public Health England are looking at various approaches to evaluation, ranging from self-certification by application developers through to full randomised controlled trials.
But in a domain of constantly emerging and developing technology, the idea that we will be able to say once and for all, at a particular point in time, “it works” is flawed. Online therapy platforms and tools are a broad church: single-purpose, self-guided free apps; full clinical platforms offering synchronous contact with a qualified therapist in a virtual environment; and everything in between. The applications themselves, like any software or service, are constantly evolving. Furthermore, HSCIC figures on IAPT activity reveal that use of computerised interventions remains at the margins of IAPT, so for many technologies evaluation is hampered by a lack of throughput and volume.
Rather, we need to start thinking about a more iterative approach: greater use of digital tools enables better evaluation (through sufficient throughput), which informs enhancements, which in turn should drive further uptake and better outcomes. Evaluation of online tools becomes a continuous process, and provides a test bed for trialling further improvements.
By creating Prism, we think we can make a very practical contribution to the evaluation of online therapy. Prism integrates online therapy tools and platforms with our IAPT digital care record, iaptus, used by 70% of IAPT services in England. It makes it easy for referrals from IAPT to be sent seamlessly and securely to a range of online providers, and just as easy for progress and outcome metrics to be sent back into the patient’s care record. Making it administratively simple and secure to refer to online providers should help drive a step change in the uptake of online care, and so support evaluation on a statistically robust scale. Receiving outcomes back into the iaptus care record means that data about the effectiveness of the online care is gathered “on the fly”. This data can be analysed with the care record’s built-in reporting tools almost immediately, to see whether the care is working for patients. This fast feedback loop can drive the cycle of use, evaluation and improvement.
It also means that any evaluation study has access to the full IAPT dataset collected for each patient, supporting a far more granular analysis of which online interventions achieve which outcomes for which types of patient.
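As a toy illustration of the kind of “on the fly” analysis this enables, here is a minimal Python sketch. The record layout and intervention names are hypothetical (not Prism’s actual data model), but PHQ-9 is one of the standard outcome measures collected in the IAPT dataset, so the shape of the question, which intervention produced what change in scores, is representative.

```python
from collections import defaultdict

# Hypothetical outcome records, sketching the sort of data an online
# provider might return into a care record: the intervention used and
# the patient's PHQ-9 score before and after treatment.
records = [
    {"intervention": "computerised-CBT", "phq9_pre": 18, "phq9_post": 9},
    {"intervention": "computerised-CBT", "phq9_pre": 15, "phq9_post": 11},
    {"intervention": "guided-self-help", "phq9_pre": 14, "phq9_post": 12},
    {"intervention": "guided-self-help", "phq9_pre": 17, "phq9_post": 8},
]

def mean_improvement(records):
    """Average PHQ-9 score reduction, grouped by online intervention."""
    changes = defaultdict(list)
    for r in records:
        changes[r["intervention"]].append(r["phq9_pre"] - r["phq9_post"])
    return {name: sum(c) / len(c) for name, c in changes.items()}

print(mean_improvement(records))
# → {'computerised-CBT': 6.5, 'guided-self-help': 5.5}
```

In practice this grouping could be extended across the full dataset, for example by problem descriptor or demographic, to ask which interventions work for which types of patient.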
It might not be as neat as the single stamp of approval that people had in mind when talking about kitemarking health apps, but a more iterative evaluation model is not only better suited to an emergent technology: it has the potential to drive the continuous improvement of these tools for patients.