CITL: Clinician-in-the-Loop AI & Clinical Uncertainty
research areas
timeframe
2025 - 2026
contact
andrea.farnham@uzh.ch
This project is a collaboration across the DSI Health, Ethics, and AI&Law communities, and is co-sponsored by the Population Research Center.
(Illustration above created with the aid of the genAI tools Gemini and ChatGPT)
Advances in clinical AI have immense potential, yet real-world deployments rarely scale successfully. One persistent barrier is the lack of transparency around predictive uncertainty and clear escalation pathways for clinician oversight.
The CITL project aims to bridge this gap by convening a cross-disciplinary expert panel to define how uncertainty should be communicated, identify where humans must intervene in AI-assisted care, and create the CITL Playbook, integrating best practices from medicine, engineering, law, and ethics. Join us in May 2026 to help shape the future of clinician-in-the-loop AI.
The CITL Playbook will be inspired by the outcome of our expert panel discussion. Before the panel event, registered participants can request a pre-read introducing the panel and its key deliverables. During the expert panel, the chair will welcome participants, state our aims, and moderate a discussion based on questions arising from participant contributions and the related literature.
After the panel, we will close with a standing lunch and some networking!
Coming up: Expert Panel (May 8th, 2026)
Register to attend the expert panel. For more information, see the online form here.
Aim
This project addresses critical barriers to scaling clinical AI by integrating cross-disciplinary expertise and aligning with leading guidelines and studies. We will develop a clinician-in-the-loop AI playbook that defines best practices for uncertainty communication, human oversight, and continuous learning in clinical decision support.
Methods
Interdisciplinary expert panel including:
- pre-read distribution
- moderated discussion
- audience Q&A
Post-panel synthesis of insights into a structured playbook, with panelist involvement.
Project Goals
- Define what predictive uncertainty means in clinical contexts and how it should be communicated.
- Map where and how clinicians should intervene in AI-driven care pathways.
- Establish protocols for capturing feedback and learning from overrides and outcomes.
- Produce a widely accessible CITL playbook for use by researchers, clinicians and policymakers.
Background
Clinical digital tools consistently promise to improve healthcare delivery, yet rarely scale into routine practice [1]. When such tools fail, it is often unclear what failed, at what level of the care pathway it happened, and what could have been done better [2]. Too often, uncertainty is invisible to clinicians, escalation rules for low-confidence predictions are ad hoc, and neither clinicians nor systems learn systematically from overrides, near-misses, or model drift [3,4].
This event will include a two‑hour expert panel, followed by an apéro, to promote clinician‑in‑the‑loop approaches in clinical machine learning and artificial intelligence. It will set a shared foundation for communicating uncertainty, defining human roles at critical decision points, and building lightweight learning loops that improve outcomes and trust.
The World Health Organization calls for transparent uncertainty measures and human-in-the-loop governance [5]. This motivates our mission: to create a clinician-centred AI framework that ensures safety, fairness, and trust.
Research Questions
RQ1: How should uncertainty be defined and communicated for clinical use so that calibration, coverage, and confidence are not only measured but translated into action thresholds, escalation logic, and patient‑facing language?
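As a concrete illustration of RQ1, the sketch below shows one way a model's predicted probability and an accompanying uncertainty estimate might be translated into action thresholds and escalation logic. The function name and the threshold values are hypothetical placeholders for discussion, not validated clinical parameters.

```python
# Minimal sketch: mapping (probability, uncertainty) to an action.
# Thresholds (0.9, 0.2) are illustrative assumptions only.

def triage_prediction(prob: float, uncertainty: float,
                      act_threshold: float = 0.9,
                      uncertainty_limit: float = 0.2) -> str:
    """Return one of three actions for a single prediction."""
    if uncertainty > uncertainty_limit:
        # Wide predictive interval: the model should abstain and
        # escalate to a clinician rather than recommend an action.
        return "escalate-to-clinician"
    if prob >= act_threshold:
        # Confident and high-probability: surface a suggestion.
        return "suggest-action"
    # Confident but below the action threshold: routine handling.
    return "routine-review"
```

In practice the two thresholds would themselves come from calibration and coverage analyses, and the returned labels would map onto agreed escalation pathways rather than free-text strings.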
RQ2: Where and how should humans enter the loop along the care pathway, and what governance ensures accountability when model confidence is low, or the stakes are high?
RQ3: How can clinicians and systems learn with the algorithm after deployment by capturing overrides, corrections, and outcomes, monitoring drift and fairness, and feeding this evidence back into model updates and workflow design?
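To illustrate RQ3, the sketch below shows one minimal way clinician overrides could be captured and summarised after deployment; a rising override rate is a cheap early signal worth checking against model drift. The schema and field names are illustrative assumptions, not a proposed standard.

```python
# Minimal sketch of an override-capture record and a simple summary
# statistic. All field names are hypothetical examples.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """One clinician decision recorded against a model recommendation."""
    model_output: str       # what the model recommended
    clinician_action: str   # what the clinician actually did
    reason: str             # free-text rationale for any departure
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def override_rate(records: list[OverrideRecord]) -> float:
    """Fraction of cases where the clinician departed from the model."""
    if not records:
        return 0.0
    disagreements = sum(
        r.model_output != r.clinician_action for r in records)
    return disagreements / len(records)
```

Feeding such records back into model updates and workflow design is the "learning loop" the project aims to make routine rather than ad hoc.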
The gaps
- Uncertainty is often invisible to clinicians, hindering trust and accountability.
- Existing systems lack mechanisms to capture feedback from overrides and near-misses.
- Cross-disciplinary collaboration is essential to align technical innovations with clinical workflows and ethical standards.
The opportunity
By bringing clinicians, AI engineers, legal scholars, and ethicists together, the CITL panel will develop a shared vocabulary and playbook for deploying AI responsibly in healthcare. The resulting guidance will inform future digital health projects across UZH and serve as a reference for national and international partners.
References
1. European Commission, Directorate-General for Health and Food Safety, PwC, EEIG, Open Evidence. Study on the deployment of AI in healthcare: final report [Internet]. Luxembourg: Publications Office; 2025 [cited 2025 Oct 25]. Available from: https://data.europa.eu/doi/10.2875/2169577
2. Banerji CRS, Chakraborti T, Harbron C, MacArthur BD. Clinical AI tools must convey predictive uncertainty for each individual patient. Nat Med. 2023 Dec;29(12):2996–8. doi:10.1038/s41591-023-02562-7
3. Cajas Ordoñez SA, Lange M, Lunde TM, Meni MJ, Premo AE. Humility and curiosity in human–AI systems for health care. Lancet. 2025 Aug;406(10505):804–5. doi:10.1016/S0140-6736(25)01626-5
4. Angus DC, Khera R, Lieu T, Liu V, Ahmad FS, Anderson B, Bhavani SV, Bindman A, Brennan T, Celi LA, Chen F, Cohen IG, Denniston A, Desai S, Embí P, Faisal A, Ferryman K, Gerhart J, Gross M, Hernandez-Boussard T, Howell M, Johnson K, Lee K, Liu X, Lomis K, London AJ, Longhurst CA, Mandl K, McGlynn E, Mello MM, Munoz F, Ohno-Machado L, Ouyang D, Perlis R, Phillips A, Rhew D, Ross JS, Saria S, Schwamm L, Seymour CW, Shah NH, Shah R, Singh K, Solomon M, Spates K, Spector-Bagdady K, Wang T, Gichoya JW, Weinstein J, Wiens J, Bibbins-Domingo K, JAMA Summit on AI, Alterovitz G, Clancy HA, Dawson L, Diamond M, Holve EC, Kahn J, Pengetnze YM, Rao S, Shrank WH, Termulo C. AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence. JAMA. 2025 Oct 13. doi:10.1001/jama.2025.18490
5. World Health Organization. WHO releases AI ethics and governance guidance for large multi-modal models [Internet]. 2024 [cited 2026 Mar 27]. Available from: https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models