Paul Groth is a Full Professor of Algorithmic Data Science at the University of Amsterdam’s Informatics Institute, where he leads the Intelligent Data Engineering Lab (INDElab). Since 2021, he has served as the Founding Scientific Director of the UvA Data Science Centre, and since 2019 he has been Co-Scientific Director of both the AI for Retail Lab (AIRLab) and the Discovery Lab at the Innovation Center for Artificial Intelligence (ICAI).
In 2023, he co-founded longform.ai, an AI startup that transforms longform conversations into structured, actionable data. From 2015 to 2018, he was Disruptive Technology Director at Elsevier Labs, where he led research on knowledge graph construction, intelligent systems in science, and data provenance, while advising on company-wide precision medicine strategy. Prior to that, he was a Postdoctoral Researcher and then Assistant Professor at Vrije Universiteit Amsterdam (2009–2015) and a Postdoctoral Research Associate at the Information Sciences Institute, University of Southern California (2007–2009).
He earned his PhD in Computer Science from the University of Southampton in 2007 with a thesis titled “The Origin of Data: Enabling the Determination of Provenance in Multi-institutional Scientific Systems” and holds a BS in Computer Science (magna cum laude) from the University of West Florida (2002). His work centers on intelligent systems for integrating and using diverse, contextualized knowledge, with a particular emphasis on data provenance, integration, and knowledge sharing in web and scientific applications.
Explanation is not a one-off: it is a process in which people and systems work together to gain understanding. This idea of co-constructing explanations, or explanation by exploration, is a powerful way to frame the problem of explanation. In this talk, I present our first experiments with this approach for explaining complex AI systems using provenance. Importantly, I address the difficulty of evaluation and describe some of our first approaches to evaluating these systems at scale. Finally, I touch on the importance of explanation to the comprehensive evaluation of AI systems.