8:00 AM - 6:00 PM MEA
In Person Conference
University of Bonn, 18–20 May 2026
Hosted by the Center for Science and Thought (CST), University of Bonn, in collaboration with the Leverhulme Centre for the Future of Intelligence (LCFI), University of Cambridge. Funded by Stiftung Mercator.
Call for Papers: Irrationality and the Age of AI
The AI revolution has accelerated in recent years, propelled by the widespread use of large language models (LLMs). Today, AI systems are not only transforming technical environments but also shaping our thoughts, emotions, and everyday linguistic practices. Increasingly, AI research and industry are shifting their attention from rational problem-solving toward aspects of human life once considered the last bastions of humanity in the face of ‘rational’ AI: the expression of emotions and other dimensions of experience often seen as ‘irrational’. In short, AI is no longer limited to the simulation of ‘rationality’.
Our conference will explore the role of affective computing, emotionally laden human–machine interaction, conversational AI models, reinforcement learning algorithms, and recommender systems in the wake of the LLM revolution. In this light, we will discuss what we can learn about language, both in its explicit, logical, grammatical structure and in its emotional, expressive dimension, when AI accesses these depths of human expression. We also ask what it means for humanity when even the ‘irrational’ aspects of life are no longer beyond the reach of digitalization. This raises the important question of how emotions and their various forms of bodily and linguistic expression are related, and what it means for AI to detect and mass-reproduce patterns in human behavior that are closely correlated with the emotional depth dimension of human life.
We will address a paradox of technological progress: the more deeply AI mirrors the structural layers of the human mind through interdisciplinary breakthroughs, the more actually existing human irrationality becomes visible as social and political collateral damage. Simulating this irrationality, in turn, provides AI with new behavioral data, generating a non-rational feedback loop alongside the rational one and bringing both novel opportunities and risks.
These developments have profound normative consequences for social, political, and ethical thought and action. They raise urgent questions about the design of ethical AI that goes beyond regulatory compliance. Addressing these questions requires us to account for the transcultural differences that shape AI as a sociotechnological phenomenon. To this end, the conference will convene interdisciplinary expertise, industry perspectives, practical approaches, policy insights, and fundamental reflections in AI politics, ethics and philosophy. Our discussions will highlight technical dimensions of AI, its impact on human experience, culture, and society, and the philosophical, ethical, and normative frameworks for shaping desirable futures.
We welcome theoretical, empirical, and practice-oriented contributions from scholars, practitioners, and policymakers across disciplines, including computer science, linguistics, philosophy, ethics, psychology, sociology, political science, and cultural studies. Possible areas of interest include, but are not limited to:
Human Experience, Culture, and Society
- AI, Culture, and Society: Transcultural perspectives on emotional human–machine interaction; emotional feedback loops in algorithmic decision-making; behavioral steering and its impacts on identity, language, and social cohesion; feminist and intersectional critiques of technology and power dynamics in AI research and application.
- Affect and Aesthetics in the AI Age: Advances in emotion recognition and sentiment analysis; the role of embodied cognition in human–AI interaction; artistic, emotional, and aesthetic dimensions of AI systems; transformations of creativity, expression, and perception in human–AI interaction.
- Linguistics and Large Language Models: Insights into grammar, semantics, pragmatics, and discourse through large-scale models; implications for theories of language and meaning.
Philosophical, Ethical, and Normative Frameworks
- Philosophy of Mind and AI: Insights from AI research into consciousness, intentionality, and emotion.
- Ethical and Normative Frameworks for AI: Cultural, philosophical, and policy approaches to ethical AI design and deployment; (social) risk and governance challenges in emotionally intelligent AI systems.
- Sustainable AI (ecological and social dimensions): Environmental costs of AI development and deployment; social sustainability in data practices, labor conditions, and long-term technological responsibility.
Submission Guidelines
We invite individual proposals for 20-minute presentations (followed by Q&A) or collective proposals for two-hour panels that address one or more of the above themes.
We accept proposals for traditional academic presentations as well as project/product demonstrations and artistic interventions. We are looking for contributions from established academics, early-career researchers, policy specialists, and civil society organisations, as well as from communicators and artists.
Accepted speakers will be considered for travel and accommodation funding.
We particularly encourage interdisciplinary submissions as well as submissions from scholars and practitioners from the Majority World.
Submissions should include:
- A title
- A half-page abstract per talk (approx. 250–300 words) outlining the proposed topic, methodology, and its relevance to the theme of the conference
- A brief biographical note (max. 100 words)
Deadline for Abstract Submission: December 01, 2025
Please submit your abstracts here.
Contact and Updates
For questions or further information, please contact: desirableai@gmail.com
Stay updated via our website or follow us on LinkedIn.