2nd Workshop on

Foundation AI Models in Biomedical Imaging

at

IEEE International Conference on Biomedical Imaging (ISBI) 2026

11 April 2026

London, United Kingdom

About

Foundation AI models are general-purpose AI models that have recently garnered huge attention in the AI research community. They bring scalability and broad applicability and thus hold transformative potential for medical imaging applications, including (but not limited to) synthesis of medical image data, automatic report generation from radiology images, cross-lingual report generation, and image analysis. This workshop aims to explore new applications of foundation AI models in biomedical imaging, with a focus on multimodal foundation models for multimodal medical data comprising medical images (radiology, pathology, fundus, etc.), electronic health records, medical reports, radiomics, and more. The workshop will also provide a platform to identify the practical challenges of implementing foundation AI models in the biomedical imaging domain, along with potential solutions related to the robustness, trustworthiness, and explainability of medical foundation AI models. In doing so, the workshop will offer an understanding of the impact of foundation AI models on the biomedical imaging domain. The workshop will comprise keynote presentations by experts, contributed paper presentations, poster sessions, and a panel discussion to encourage knowledge sharing, exchange of ideas, and collaboration among participants.

Invited Speakers

Prof. Shadi Albarqouni

University of Bonn, Germany

Dr. Chen Qin

Imperial College London, UK

Dr. Dwarikanath Mahapatra

Khalifa University, UAE

Dr. Muzammil Behzad

KFUPM, Saudi Arabia

Call for featured talks

We welcome submissions of abstracts for featured talks at the workshop. If you are attending IEEE ISBI 2026 and would also like to present your work at the FAIBI workshop, please fill in this short form so that the organizers can include your talk in the workshop program. Featured talks are limited to a 10-minute presentation plus a 5-minute question-and-answer session.

Click here to submit the title of your talk

Where and When

11 April 2026

ExCel London

Schedule

A tentative schedule is given below; it will be updated to match the conference program.

Time | Talk | Speaker | Title

14:00 | Invited talk | Dr. Chen Qin | Foundational Medical AI Beyond Supervised Learning on Incomplete Imaging and Clinical Data

Abstract: As the biomedical imaging community explores the potential of large-scale foundation models, a critical bottleneck remains: real-world clinical data is inherently messy, incomplete, and rarely contains perfect ground truth. To build truly foundational AI capabilities that scale to clinical workflows, the field must move beyond traditional supervised learning paradigms. This talk presents a suite of beyond-supervised methodologies designed to tackle data incompleteness at two critical stages: physical acquisition and semantic diagnosis. First, we address the challenge of incomplete imaging signals through advanced generative modeling and unsupervised reconstruction frameworks. By learning directly from partial or accelerated physical measurements, these methods enable high-fidelity image reconstruction without strict reliance on fully sampled reference scans. Second, we transition to downstream clinical applications by exploring multimodal architectures that fuse high-dimensional images with heterogeneous clinical records. By leveraging advanced representation learning, these systems natively handle missing clinical variables and effectively utilize unannotated data, dynamically adapting to imperfect patient profiles. By advancing medical AI beyond supervised learning at both the physics and clinical levels, we provide grounded steps toward learning the robust representations required for the next generation of clinical AI.

14:40 | Invited talk | Dr. Muzammil Behzad | Multimodal Medical Computer Vision: Opportunities and Challenges for Next-Generation Medical Imaging and Clinical Applications

Abstract: Multimodal medical computer vision is reshaping healthcare by integrating diverse data sources such as medical imaging, clinical reports, electronic health records, and emerging vision-language models to enable more comprehensive and intelligent clinical decision-making. By learning from complementary modalities, next-generation AI systems can improve diagnostic accuracy, enhance disease characterization, support personalized treatment planning, and streamline clinical workflows. However, significant challenges remain, including data heterogeneity, limited annotations, modality imbalance, interpretability, privacy concerns, and regulatory constraints. This talk will highlight key opportunities, recent advances, and open challenges in developing robust, trustworthy, and clinically deployable multimodal AI systems for the future of medical imaging and healthcare applications.

15:20 | Invited talk | Prof. Shadi Albarqouni | Rethinking Foundation Models for Medical Imaging: Toward Affordable and Multimodal AI

Abstract: Foundation models are rapidly transforming biomedical imaging by enabling transferable representations across tasks and datasets. However, their direct application to clinical settings raises important challenges related to bias, domain shift, annotation cost, and deployment in resource-constrained environments. In this talk, I present a series of recent works from our group exploring how foundation models can be critically evaluated and adapted to support affordable and practical AI systems for medical imaging. We first analyze bias in foundation model representations for mammography, highlighting how modality-specific characteristics can influence downstream performance. We then demonstrate applications in histopathology through CMIL grading, showing how specialized models can enable clinically relevant analysis with limited annotations. Building on these insights, we explore new paradigms enabled by multimodal foundation models, including cross-modal prompt learning and the use of vision-language models as gating mechanisms for open-set federated active learning. Together, these studies illustrate how specialized, multimodal, and annotation-efficient foundation models can enable robust and deployable AI systems for biomedical imaging, particularly in point-of-care and resource-limited settings.

16:00 - 16:20 | Coffee Break

16:20 | Invited talk | Dr. Dwarikanath Mahapatra | Title to be updated

Note: Talk withdrawn. Owing to the ongoing situation in the Middle East, the speaker was unable to confirm travel plans.

17:00 | Closing remarks | Workshop chairs | Closing Remarks and Note of Thanks

Organizers

Dr. Hazrat Ali

University of Stirling, UK

Dr. Rizwan Qureshi

Salim Habib University, Karachi, Pakistan


Dr. Islem Rekik

Imperial College London, UK

Prof. Jia Wu

MD Anderson Cancer Center, USA

Prof. Muhammad Bilal

Birmingham City University, UK

Contact us

Dr. Hazrat Ali, ali.hazrat@stir.ac.uk
