A deployment safety case for AI-assisted prostate cancer diagnosis.
Jia Y., Verrill C., White K., Dolton M., Horton M., Jafferji M., Habli I.
Deep learning (DL) has the potential to deliver significant clinical benefits. In recent years, an increasing number of DL-based systems have been approved by the relevant regulators, e.g. the FDA. Although obtaining regulatory approval is a prerequisite for deploying such systems in real-world use, it may not be sufficient. Regulatory approval gives confidence in the development process for such systems, but new hazardous events can arise depending on how the systems are deployed in the intended clinical pathways or how they are used alongside other systems in complex healthcare settings. These kinds of events can be difficult to predict during the development process. Indeed, most health systems and hospitals require self-verification before deploying a diagnostic medical device, which can be viewed as an additional safety measure. This shows that it is important to continue assuring the safety of such systems once they are deployed. In this work, we address this urgent need based on the experience of a prospective study in UK hospitals as part of the ARTICULATE PRO project. In particular, the system considered here, developed by Paige for prostate cancer diagnosis, has obtained FDA approval in the US and UKCA marking in the UK. The methodology presented in this work starts by mapping out the clinical workflow within which the system has been deployed, then carries out hazard and risk analysis based on that workflow, and finally presents a deployment safety case, which provides a basis for deployment and for continual monitoring of the safety of the system in use. We thereby systematically address the emergence of new hazardous events arising from deployment and present a way to continually assure the safety of a regulator-approved system in use.