Reliable Artificial Intelligence by and for Natural Language Processing

José María Alonso Moral

Date and time: 30/04/26, 5pm CET

Speaker: José María Alonso Moral, CiTIUS-Centro Singular de Investigación en Tecnoloxías Intelixentes, Universidade de Santiago de Compostela

Presenter: TBA

Abstract: Artificial Intelligence (AI) is becoming increasingly prevalent across all spheres of society. This raises numerous technical challenges, alongside ethical, legal, socio-economic, and cultural considerations, in ensuring that so-called "intelligent systems" are reliable and perceived as trustworthy by the general public.

This talk will commence by exploring the fundamental pillars of so-called Trustworthy AI: technical robustness (safety and security), environmental robustness (sustainability), ethics, and regulation. Subsequently, it will examine how the transparency and explainability of data, models, and automated predictions or decisions—preferably conveyed in natural language—are essential prerequisites for ensuring that increasingly complex AI systems can be audited, debugged, improved, and, ultimately, perceived as trustworthy (i.e., safe, accurate, equitable, etc.) by the public. Establishing well-founded trust and empowering citizens to independently discern reality from manipulation and disinformation also demands a pedagogical endeavour: it is necessary to educate both in and with AI, providing the public with the requisite knowledge and tools to determine when to trust or distrust these systems.

Within this context, Natural Language Processing (NLP) techniques are fundamental in enabling "intelligent systems" to interact naturally with humans—whether AI developers or end-users. Such techniques contribute to the automated generation of factual ('Why?'), counterfactual ('Why not?'), and transfactual ('What if things were different?') explanations. These personalised and interactive explanations help individuals use AI systems efficiently and effectively in their daily tasks. Furthermore, "explainability" techniques for AI at large can specifically contribute to the development of more reliable Large Language Models (LLMs). The presentation will demonstrate several use cases and computational tools designed specifically for the development and evaluation of so-called Trustworthy AI, both by and for the NLP field.

Bio:

José María Alonso Moral holds an MEng in Telecommunications Engineering from the Universidad Politécnica de Madrid (UPM), where he also earned his PhD in 2007. Following his tenures as a Postdoctoral Researcher and Associate Researcher in the "Foundations of Soft Computing" unit at the European Centre for Soft Computing, and as a Juan de la Cierva Research Fellow at the University of Alcalá (2012), he served as a Postdoctoral Researcher and Ramón y Cajal Research Fellow within the Intelligent Systems Group at the Centro Singular de Investigación en Tecnoloxías da Información (CiTIUS), University of Santiago de Compostela (USC).

He has served as a member of the Executive Committee for the ACL Special Interest Group on Natural Language Generation (SIGGEN) and as an Executive Board member of the European Society for Fuzzy Logic and Technology (EUSFLAT). Currently, he is an Associate Professor at the USC and an Affiliated Researcher at CiTIUS-USC. Furthermore, he is the President of EUSFLAT, Vice-Chair of the Task Force on "Explainable Fuzzy Systems", a member of the "Fuzzy Systems" Technical Committee, and Chair of the "SHIELD" Technical Committee within the IEEE Computational Intelligence Society (IEEE-CIS).

He is an Associate Editor for the IEEE Computational Intelligence Magazine and the International Journal of Approximate Reasoning, as well as a member of the IEEE-CIS Task Force on "Fuzzy Systems Software". In 2016, he was an Honorary Research Fellow at the University of Aberdeen (Scotland), and in 2017, a Research Fellow at the University of Bari Aldo Moro (Italy). He has authored over 190 peer-reviewed publications across international journals, book chapters, and conference proceedings. His primary research interests include explainable and trustworthy artificial intelligence, computational intelligence, interpretable fuzzy systems, natural language generation, and the development of free software tools, among others.


Link to the talk: https://zoom.us/webinar/register/WN_2UoRY1CIRKqxx7XVKtuz0g