Prediction and Punishment: Cross-Disciplinary Workshop on Carceral AI
Center for Philosophy of Science
University of Pittsburgh
This cross-disciplinary workshop will provide an interactive meeting point for researchers addressing the expanding use of AI in criminal legal contexts. We use the term ‘carceral AI’ to refer to a broad class of algorithmic and data-driven practices implicated in the control and incarceration of people. Examples include predictive policing, facial recognition, recidivism risk assessment instruments, automatic license plate readers, border surveillance systems, biometric databases, electronic monitoring, and audio gunshot locators. Such technologies are often introduced as ‘smart’, ‘evidence-based’, or ‘data-driven’ reforms — ‘evidence-based’ sentencing, ‘smart’ borders — that promise to reduce bias and increase efficiency. In practice, however, AI systems can interact in complicated ways with existing social and legal structures, reinforce or mask existing structural injustices, and expand the reach of carceral systems under the guise of scientific rigor. Participants in this workshop are invited to explore how such technologies both inform and interact with topics including incarceration, policing, migration, privatization, surveillance, racial and gender justice, and resistance. We welcome contributions from civil society organizations and from academic researchers in disciplines including, but not limited to, philosophy, law, and the social sciences. Participants will be invited to contribute to a special report on carceral AI.
Shakeer Rahman, Stop LAPD Spying Coalition
Megan Stevenson, University of Virginia Law School
Pablo Nuñez, Centro de Estudos de Segurança e Cidadania (CESeC)
Gabbrielle Johnson, Claremont McKenna College, Department of Philosophy
Poster abstracts may be on any research topic in the philosophy of science in practice, understood as the detailed and systematic study of scientific practices that neither dismisses concerns about truth and rationality nor ignores contextual and pragmatic factors.
We welcome contributions from philosophers, historians and sociologists of science, practicing scientists, and any others with an interest in philosophical questions regarding scientific practice. We strive for quality, variety, innovation, and diversity in accepted abstracts.
Submission link: https://submissions2024.philosophy-science-practice.org/openconf/openconf.php
Please specify [Poster] in the title of your poster abstract
Submission deadline: 16 February 2024
Main Contact: Manuela Fernández Pinto
Poster submissions must include a title, an abstract of 500 words, full affiliation details, and contact information for the presenter(s). Please specify [Poster] in the title of your poster abstract, so that it can be clearly distinguished from a paper/symposium submission. We will announce decisions on abstract proposals on an ongoing basis, usually within four weeks of submission. All proposals should be submitted online through the OpenConf system: https://submissions2024.philosophy-science-practice.org/openconf/openconf.php.
Our policy regarding multiple submissions to SPSP 2024 does not apply to poster abstracts. A presenting author on a contributed paper or participating in a symposium may also submit one poster abstract on a substantially different topic.
If you are wondering how to design your poster or how to prepare your poster presentation, we found the PSA2016 and Daily Nous websites helpful.
Presentations and prize
There will be dedicated slots for poster presentations in the conference program, and presenters should be at their posters during those times. The best poster, as selected by conference participants, will be awarded a prize of €200.
Workshop on Epistemological Issues of Machine Learning in Science
Chaudoire Pavilion, TU Dortmund University, Germany
With impressive advances in Machine Learning (ML), and particularly in Deep Learning, Artificial Intelligence is currently taking science by storm. This workshop brings together leading scientists and philosophers working on fundamental issues connected to the use of Machine Learning in science. The workshop marks the launch of the DFG-funded Emmy Noether Group UDNN: Scientific Understanding and Deep Neural Networks. It is co-organized with the Lamarr Institute for Machine Learning and Artificial Intelligence and co-funded by the Department of Humanities and Theology at TU Dortmund University.
Topics include, but are not restricted to:
- The relation between prediction and discovery on the one hand, and explanation and understanding on the other, in fields of science that heavily rely on ML methods
- The key issues in identifying genuine discoveries and stable predictions by ML systems
- Core conceptions of “explanation” involved in the field of eXplainable AI (XAI), and their relation to philosophical theories of understanding and explanation
- Present limitations on ML’s predictive power and what may be needed to overcome them
- The connection between ML and traditional scientific means for prediction and discovery, such as theories, models, and experiments
- Our present understanding of ML itself and its limitations
- Life Sciences
Axel Mosig (Ruhr University Bochum)
- Machine Learning Theory
Marie-Jeanne Lesot (Sorbonne Université Paris)
David Watson (King’s College London)
Brigitte Falkenburg (TU Dortmund)
Konstantin Genin (University of Tübingen)
Lena Kästner (University of Bayreuth)
Henk de Regt (Radboud University Nijmegen)
Eva Schmidt (TU Dortmund)
Tom Sterkenburg (LMU Munich)
- Physics / Astronomy
Michael Krämer (RWTH Aachen)
Mario Krenn (Max Planck Institute for the Science of Light)
Wolfgang Rhode (TU Dortmund)
Christian Zeitnitz (BU Wuppertal)
Annika Schuster, Frauke Stoll, and Florian J. Boge
UDNN – Scientific Understanding and Deep Neural Networks