Norms of Assertion in Linguistic Human-AI Interaction (NIHAI)
Further reading:

Kneer, M. (2021). Norms of assertion in the US, Germany & Japan. Proceedings of the National Academy of Sciences 118 (37), e2105365118. [Link]

Kneer, M. (2018). The norm of assertion: Empirical data. Cognition 177, 165–171. [Link]

Stuart, M. & Kneer, M. (2021). Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2), 1–27. [Link]

One of the central crises of our time concerns the widespread dissemination of mis- and disinformation, fake news, and conspiracy theories, and the consequent erosion of trust in the media, science, and governmental institutions. This communication crisis, as we will call it, has been exacerbated by digital means of discourse. And it will intensify further, we predict, because in the near future many of our interlocutors will no longer be humans, but AI-driven conversational agents. We propose an in-depth, cross-cultural inquiry into responsible principles of operation for AI-driven conversational agents, so as to help mitigate or prevent a new, AI-fueled wave of the communication crisis.

The research endeavour, which sits at the interface of ethics, linguistics, media and communication studies, and social computing, will empirically explore (i) normative expectations in language-based human-AI interaction as well as (ii) the appraisal of norm violations and their downstream interactive consequences across several European and non-European languages and cultures. We will closely (iii) track relevant moderators (e.g. context, role, model complexity of the AI system, stakes) and mediators (e.g. perceived agency, projection of mind), so as to (iv) investigate their influence on the evaluation of, and disposition to rely on, conversational AI systems across cultures. On the basis of the findings, we will (v) propose principles for the responsible design and use of AI-driven conversational agents, which will be tested directly on Reporter, the AI-driven lay-journalism application provided by our associated partner Polaris News.

The envisioned research is funded with over 1 million euros from the CHANSE/HERA programme.

The project will be conducted by Prof. Markus Kneer (project leader), University of Graz; PD Dr. Markus Christen (PI), University of Zurich; Prof. Michaela Constantinescu (PI), University of Bucharest; and Prof. Izabela Skoczen (PI), Jagiellonian University. Polaris News, led by award-winning journalist Hannes Grassegger, is a key collaborator.