[Responsible AI] When Abstraction Fails: Navigating Uncertainty in High-Stakes Systems
Abstract: With high-stakes systems, safety and security engineering depends heavily on abstractions — models, simulations, bench tests — that stand in for the system itself. But these abstractions have limits. They inevitably rely on imperfect assumptions: simplifications that hold until they don't, especially under conditions of novelty, extremity, or scale. In systems that push the boundaries of performance, therefore, abstraction becomes brittle, and the pretense of total foresight becomes a liability. This talk examines what happens when engineers reach the limits of their abstractions. Focusing on testing and modeling under conditions of extremity, I explore how 'rational' accidents can arise from the limits of what can be known, tested, or anticipated in advance. From there I examine how one domain — civil aviation — manages these accidents by leveraging decades of accumulated operational experience. The talk is an invitation to reconsider what counts as credible knowledge in high-consequence engineering, and to value operational experience not as a fallback when models fail, but as a foundation for safety and security when they inevitably do.

<https://arxiv.org/abs/2407.02191>

Bio: John Downer is Associate Professor of Science and Technology Studies at the University of Bristol. He has written extensively about the idiosyncrasies of technoscientific knowledge, the limits of proof, and their implications for technology governance. His 2024 monograph 'Rational Accidents: Reckoning with Catastrophic Technologies' examines the logical dilemmas of establishing 'ultra-high' reliabilities in highly complex systems such as jetliners and reactors, and unpacks the practical ramifications of those dilemmas.
His broader work looks at epistemological and regulatory issues in a range of technological domains, from civil aviation and nuclear power to autonomous systems and artificial intelligence. He has a PhD in Science and Technology Studies (STS) from Cornell University, and prior to joining Bristol he held positions at King's College London, the London School of Economics (LSE), and Stanford University. He is currently a visiting researcher at the Centre for the Governance of Artificial Intelligence (GovAI) in Oxford.
Participants (1):
- Daniele Quercia