
Keynote Lectures

Federated Learning (FED) of eXplainable Artificial Intelligence (XAI) Models
Pietro Ducange, University of Pisa, Italy

Available Soon
Christian S. Jensen, Aalborg University, Denmark

 

Federated Learning (FED) of eXplainable Artificial Intelligence (XAI) Models

Pietro Ducange
University of Pisa
Italy
 

Brief Bio

Pietro Ducange received the M.Sc. degree in Computer Engineering and the Ph.D. degree in Information Engineering from the University of Pisa in 2005 and 2009, respectively. Currently, he is an associate professor of Information Systems and Technologies at the University of Pisa, Italy. He teaches Large Scale and Multi Structured Databases, Intelligent Systems and Big Data Management.

He is a senior member of the AI-R&D research Group at the Department of Information Engineering. Moreover, he is a member of the Big Data, Cloud Computing and Cybersecurity Lab and of the Trustworthy and Embodied Intelligence Lab of the same department. His main research interests include explainable artificial intelligence, big data mining, social sensing, and sentiment analysis. He has been involved in several R&D projects in which data mining and computational intelligence algorithms have been successfully employed. He has co-authored over 100 papers in international journals and conference proceedings, and he has organized several workshops, special sessions, tutorials, and special issues on Trustworthy AI and its applications in engineering fields.


Abstract
The current era is characterized by the increasing pervasiveness of applications and services built on data processing and, in particular, on Artificial Intelligence (AI) and Machine Learning (ML) algorithms. Extracting insights from data has become so common in the daily life of individuals, companies, and public entities, and so relevant to market players, that it has become an important matter of interest for institutional organizations. The topic is so important and timely that ad hoc guidelines and regulations for designing trustworthy AI-based applications have been proposed by the European Union and by other national and supra-national bodies. One important aspect is the capability of applications to address the data privacy issue. Additionally, depending on the specific application field, paramount importance is given to the possibility for humans to understand why a certain AI/ML-based application provides a specific output.
Trustworthy AI models should be trained with the simultaneous goals of preserving data privacy and ensuring a certain level of explainability of the system. In this talk, we discuss the concept of Federated Learning (FL) of eXplainable AI (XAI) models, FED-XAI for short, purposely designed to address both requirements simultaneously.
We first introduce the motivations underlying FL and XAI, along with their basic concepts. Then, we provide a brief survey of approaches, models, results, issues, and applications of FED-XAI. Finally, we present our recently released framework, which provides user-friendly support for the FL of Fuzzy Rule-Based Systems (FRBSs) as explainable-by-design models.
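The FED-XAI framework itself is not reproduced here; as a hedged, self-contained sketch of the basic FL idea that the abstract builds on, the toy code below performs FedAvg-style aggregation: each client trains a simple model on its private data, and a server averages the resulting parameters weighted by local dataset size. All function names, data, and hyperparameters are hypothetical illustrations, not the speaker's method.

```python
# Illustrative sketch of the core Federated Learning loop (FedAvg-style
# weighted parameter averaging). This is NOT the FED-XAI framework from
# the talk; all names, data, and hyperparameters are hypothetical.

def local_update(weights, data, lr=0.05, epochs=5):
    """Toy local training: one-feature linear regression via per-sample SGD.
    The raw data never leaves the client; only the weights are shared."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def fed_avg(client_weights, client_sizes):
    """Server step: average client parameters weighted by local dataset size."""
    total = sum(client_sizes)
    w = sum(cw[0] * n for cw, n in zip(client_weights, client_sizes)) / total
    b = sum(cw[1] * n for cw, n in zip(client_weights, client_sizes)) / total
    return (w, b)

# Two clients holding private samples of the same underlying relation y = 2x + 1.
clients = [
    [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)],
    [(3.0, 7.0), (4.0, 9.0)],
]

global_weights = (0.0, 0.0)
for _ in range(20):  # communication rounds
    local_models = [local_update(global_weights, data) for data in clients]
    global_weights = fed_avg(local_models, [len(d) for d in clients])

w, b = global_weights
print(f"learned: y = {w:.2f}x + {b:.2f}")  # converges toward y = 2x + 1
```

In a FED-XAI setting, the parameters exchanged would belong to an explainable-by-design model (such as the rule bases of an FRBS) rather than a black-box one, so the aggregated global model remains interpretable.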



 

 

Keynote Lecture

Christian S. Jensen
Aalborg University
Denmark
 

Brief Bio
Available Soon


Abstract
Available Soon


