Keynote Lectures
Component-level Explanation and Validation of AI Models
Wojciech Samek, Chair for Machine Learning and Communications, TU Berlin / Head of AI Department, Fraunhofer HHI, Germany
Keynote Lecture
Bjoern Schuller, University of Augsburg / Imperial College London, Germany
Component-level Explanation and Validation of AI Models
Wojciech Samek
Chair for Machine Learning and Communications, TU Berlin / Head of AI Department, Fraunhofer HHI
Germany
Brief Bio
Wojciech Samek is a Professor in the EECS Department at the Technical University of Berlin and the Head of the AI Department at the Fraunhofer Heinrich Hertz Institute (HHI) in Berlin, Germany. He earned an M.Sc. from Humboldt University of Berlin in 2010 and a Ph.D. (with honors) from the Technical University of Berlin in 2014. Following his doctorate, he founded the "Machine Learning" Group at Fraunhofer HHI, which became an independent department in 2021. He is a Fellow of BIFOLD – the Berlin Institute for the Foundations of Learning and Data – and of the ELLIS Unit Berlin. He also serves as a member of Germany's Platform for AI and sits on the boards of AGH University's AI Center, the Helmholtz Einstein International Berlin Research School in Data Science (HEIBRiDS), and the DAAD Konrad Zuse School ELIZA. Dr. Samek's research in explainable AI (XAI) spans method development, theory, and applications, with pioneering contributions such as Layer-wise Relevance Propagation (LRP), advancements in concept-level explainability, the evaluation of explanations, and XAI-driven model and data improvement. He has served as a senior editor for IEEE TNNLS, held associate editor roles for various other journals, and acted as an area chair at NeurIPS, ICML, and NAACL. He has received several best paper awards, including from Pattern Recognition (2020), Digital Signal Processing (2022), and the IEEE Signal Processing Society (2025). Overall, he has co-authored more than 250 peer-reviewed journal and conference papers, with several recognized as ESI Hot Papers (top 0.1%) or Highly Cited Papers (top 1%).
Abstract
Human-designed systems are constructed step by step, with each component serving a clear and well-defined purpose. For instance, the functions of an airplane’s wings and wheels are explicitly understood and independently verifiable. In contrast, modern AI systems are developed holistically through optimization, leaving their internal processes opaque and making verification and trust more difficult. This talk explores how explanation methods can uncover the inner workings of AI, revealing what knowledge models encode, how they use it to make predictions, and where this knowledge originates in the training data. It presents SemanticLens, a novel approach that maps hidden neural network knowledge into the semantically rich space of foundation models like CLIP. This mapping enables effective model debugging, comparison, validation, and alignment with reasoning expectations. The talk concludes by demonstrating how SemanticLens can help in identifying flaws in medical AI models, enhancing robustness and safety, and ultimately bridging the “trust gap” between AI systems and traditional engineering.
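To make the core idea concrete, the sketch below is an illustration under assumptions, not the actual SemanticLens implementation: a hidden component of a vision model is characterized by embedding its top-activating images with CLIP's image encoder and ranking candidate concept descriptions by cosine similarity in the same shared space. The model ID, the describe_component helper, and the example concepts are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' SemanticLens code): label a hidden
# component by embedding its top-activating images with CLIP and comparing
# them against candidate concept descriptions in CLIP's semantic space.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def describe_component(top_activating_images, candidate_concepts):
    """Map a component's top-activating images into CLIP space and rank
    candidate concept descriptions by cosine similarity."""
    with torch.no_grad():
        # Embed and L2-normalize the images that most strongly activate the component.
        img_inputs = processor(images=top_activating_images, return_tensors="pt")
        img_emb = model.get_image_features(**img_inputs)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)

        # One semantic vector per component: the mean of its image embeddings.
        comp_emb = img_emb.mean(dim=0, keepdim=True)
        comp_emb = comp_emb / comp_emb.norm(dim=-1, keepdim=True)

        # Embed the candidate concept texts and score them by cosine similarity.
        txt_inputs = processor(text=candidate_concepts, return_tensors="pt", padding=True)
        txt_emb = model.get_text_features(**txt_inputs)
        txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
        scores = (comp_emb @ txt_emb.T).squeeze(0)

    return sorted(zip(candidate_concepts, scores.tolist()), key=lambda x: -x[1])

# Hypothetical usage: images that maximally activate one neuron of some layer,
# scored against a small vocabulary of candidate concepts.
# from PIL import Image
# imgs = [Image.open(p) for p in ["act_0.png", "act_1.png", "act_2.png"]]
# print(describe_component(imgs, ["a wheel", "a wing", "a stethoscope", "a text watermark"]))
```

Ranking concepts in a shared image-text embedding space is what makes the labeling of components scalable; in practice, one would inspect the top-ranked concepts per component to debug, compare, or validate a model, as described in the abstract.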
Keynote Lecture
Bjoern Schuller
University of Augsburg / Imperial College London
Germany
www.schuller.one
Brief Bio
Not Available