Deep learning (DL) has the potential to deliver significant clinical benefits. In recent years, an increasing number of DL-based systems have been approved by the relevant regulators, e.g. the FDA. Although obtaining regulatory approval is a prerequisite for deploying such systems in real-world use, it may not be sufficient. Regulatory approval gives confidence in the development process, but new hazardous events can arise from how a system is deployed within the intended clinical pathways or how it is used alongside other systems in complex healthcare settings. Such events can be difficult to predict during development. Indeed, most health systems and hospitals require self-verification before deploying a diagnostic medical device, which can be viewed as an additional safety measure. This underlines the importance of continuing to assure the safety of such systems in deployment. In this work, we address this urgent need based on the experience of a prospective study in UK hospitals as part of the ARTICULATE PRO project. The system considered here, developed by Paige for prostate cancer diagnosis, has obtained FDA approval in the US and UKCA marking in the UK. The methodology presented in this work starts by mapping out the clinical workflow within which the system has been deployed, then carries out hazard and risk analysis based on that workflow, and finally presents a deployment safety case, which provides a basis for deploying the system and continually monitoring its safety in use. In this way, we systematically address the emergence of new hazardous events from deployment and present a way to continually assure the safety of a regulator-approved system in use.

Original publication

DOI: 10.1016/j.compbiomed.2025.110237
Type: Journal article
Journal: Comput Biol Med
Publication Date: 08/05/2025
Volume: 192
Keywords: AI safety, Artificial intelligence, Deep learning, Histopathology, Prostate cancer diagnosis, Safety case