
Keynote Lectures

Fast Thinking vs Slow Thinking: AI Implications for Software and Information Systems Engineering
Oscar Pastor, Universidad Politécnica de Valencia, Spain

Escaping the Echo Chamber: The Quest for the Normative News Recommender System and a New Notion of Computer Science
Abraham Bernstein, University of Zurich, Switzerland

Implementing the FAIR Principles: Progress and Pitfalls
Mark Wilkinson, Universidad Politécnica de Madrid, Spain

Fast Thinking vs Slow Thinking: AI Implications for Software and Information Systems Engineering

Oscar Pastor
Universidad Politécnica de Valencia
Spain
http://www.pros.upv.es

Brief Bio
Oscar Pastor is Full Professor and Director of the "Centro de Investigación en Métodos de Producción de Software (PROS)" at the Universidad Politécnica de Valencia (Spain). He received his Ph.D. in 1992 and was previously a researcher at HP Labs, Bristol, UK. He has published more than two hundred research papers in conference proceedings, journals, and books, received numerous research grants from public institutions and private industry, and been a keynote speaker at several conferences and workshops. He chairs the ER Steering Committee and serves on the steering committees of conferences such as CAiSE, ICWE, CIbSE, and RCIS. His research activities focus on conceptual modeling, web engineering, requirements engineering, information systems, and model-based software production. He created the object-oriented formal specification language OASIS and the corresponding software production method OO-METHOD, and led the research and development underlying CARE Technologies, founded in 1996. CARE Technologies created OlivaNova, an advanced MDA-based conceptual model compiler that produces a final software product starting from a conceptual schema representing the system requirements. He is currently leading a multidisciplinary project linking Information Systems and Bioinformatics, oriented to designing and implementing tools for the conceptual-modeling-based interpretation of Human Genome information.


Abstract
A sound web information systems practice requires integrating predictive knowledge and explainable knowledge, the two ways of acquiring knowledge and reasoning about the real world. The first is data-driven (bottom-up): the approach used in machine learning, where data is 'fed' into a computer program that identifies patterns. With the availability of big data and computing power, this approach is common in machine learning settings where there is no symbolic description of the data, only a mathematical one. This fast-thinking, data-driven approach cannot reflect on its own instructions or on how they map to things in the real world. It must be complemented by explainable knowledge that is conceptual-modeling-based and theory-driven (top-down), using slow thinking concerned with defining what things mean and how they relate to each other in the world and within given contexts, leading to a shared understanding of the concepts in a domain.

In information systems and software engineering, we increasingly need to produce software that supports human goals. We therefore need both data-driven (predictive knowledge) and theory-driven (explainable knowledge) approaches. In other words, beyond the raw power of the data itself, we need to be able to interpret the data so that it is understandable in the real world. In this talk, we will discuss how to design web information systems platforms that properly connect data to symbolic descriptions with meaning, in terms of both semantics and a sense of purpose or significance. This requires slow thinking, focusing on domain conceptualization, conceptual analysis and clarification, semantic elaboration, and ethical considerations, all grounded in an ontological understanding. Web information systems implications and practical applications in the challenging domain of understanding the human genome in advanced clinical settings (precision medicine) will also be shown.
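To make the contrast above concrete, here is a small, self-contained Python sketch, not taken from the talk: every feature name and threshold in it is hypothetical. It places a purely data-driven predictor next to an explicit, conceptual-model-style rule for the same toy task, classifying a genomic variant:

    # Toy contrast between the two kinds of knowledge described above.
    # All names and thresholds are hypothetical, for illustration only.

    # --- Fast thinking: data-driven (predictive) knowledge -------------
    # A pattern is extracted from labelled examples; the result is a
    # purely mathematical description (a centroid per class), with no
    # symbolic account of *why* a variant falls on either side.
    examples = [
        # (allele_frequency, conservation_score) -> label
        ((0.40, 0.1), "benign"),
        ((0.35, 0.2), "benign"),
        ((0.001, 0.9), "pathogenic"),
        ((0.002, 0.8), "pathogenic"),
    ]

    def centroid(label):
        pts = [x for x, y in examples if y == label]
        return tuple(sum(c) / len(pts) for c in zip(*pts))

    def predict(variant):
        def dist(a, b):
            return sum((i - j) ** 2 for i, j in zip(a, b))
        return min(("benign", "pathogenic"),
                   key=lambda lbl: dist(variant, centroid(lbl)))

    # --- Slow thinking: theory-driven (explainable) knowledge ----------
    # The same domain captured as an explicit conceptual rule: every
    # term has a defined meaning, so the answer can be explained.
    def explain(variant):
        frequency, conservation = variant
        if frequency < 0.01 and conservation > 0.7:
            return ("pathogenic", "rare allele at a highly conserved position")
        return ("benign", "common allele or weakly conserved position")

    variant = (0.003, 0.85)
    print(predict(variant))   # pattern says: pathogenic (no rationale)
    print(explain(variant))   # rule says: pathogenic, with a reason

The point of the sketch is only that the learned centroids carry no symbolic account of why a variant is classified one way or the other, while the explicit rule can justify its answer in domain terms.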



Escaping the Echo Chamber: The Quest for the Normative News Recommender System and a New Notion of Computer Science

Abraham Bernstein
University of Zurich
Switzerland

Brief Bio
Abraham Bernstein, Ph.D., is a Full Professor of Informatics at the University of Zurich (UZH), Switzerland. He received a Diploma in Computer Science from ETH Zurich and a Ph.D. in Management with a concentration in Information Technologies from the Sloan School of Management at MIT.
Professor Bernstein is also a founding Director of the University of Zurich's Digital Society Initiative (DSI), a university-wide initiative with more than 280 faculty members and 1,200 researchers from all disciplines investigating all aspects of the interplay between society and digitalization, and President of the Steering Committee of the Swiss National Science Foundation's National Research Programme 77 on the Digital Transformation. He was also a member of the Council of Europe's Committee of Experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT).
Professor Bernstein's research focuses on various aspects of AI, data mining and machine learning, the Semantic Web, recommender systems, crowd computing, and collective intelligence. His work is based on both social science (organizational psychology, sociology, economics) and technical (computer science, artificial intelligence) foundations. His research has been published in leading Computer Science, Management Science, and AI outlets and has also been covered by the Swiss press. He has served on the editorial boards of a variety of top journals, including as Editor of the Journal of Web Semantics and Associate Editor of ACM Transactions on Internet Technology and ACM Transactions on Interactive Intelligent Systems.


Abstract
Recommender systems and social networks are often blamed for creating echo chambers: environments in which people mostly encounter news that matches their previous choices or that is popular among similar users, isolating them inside familiar but insulated information silos. Echo chambers, in turn, have been identified as one cause of societal polarization, which makes it increasingly difficult to promote tolerance, build consensus, and forge compromises. To escape these echo chambers, we propose to shift the focus of recommender systems from optimizing prediction accuracy alone to also considering measures of social cohesion.

This proposition raises questions in three spheres. In the technical sphere, we need to investigate how to build "socially considerate" recommender systems; to that end, we develop a novel recommendation framework that aims to improve information diversity using a modified random-walk exploration of the user-item graph. In the social sphere, we need to investigate whether the adapted recommender systems have the desired effect; to that end, we present an empirical pilot study that exposed users to various sets of news (some diverse), with surprising results. Finally, in the normative sphere, these studies raise the question of what kind of diversity is desirable for the functioning of democracy.
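As a rough illustration of the technical idea, and emphatically not the framework presented in the talk, the following self-contained Python sketch runs a random walk with restart over a small, hypothetical user-item graph. Biasing the user-to-item step against popular items is one plausible modification for trading a little accuracy for more diversity; the interaction data and parameter names are assumptions for illustration:

    import random
    from collections import Counter, defaultdict

    # Hypothetical interaction data: user -> set of news items read.
    interactions = {
        "u1": {"a", "b", "c"},
        "u2": {"a", "b"},
        "u3": {"b", "c", "d"},
        "u4": {"d", "e"},
        "u5": {"e", "f"},
    }

    # Build the bipartite user-item graph.
    item_users = defaultdict(set)
    for user, items in interactions.items():
        for item in items:
            item_users[item].add(user)

    popularity = {item: len(users) for item, users in item_users.items()}

    def recommend(user, walks=5000, restart=0.15, bias=1.0, seed=0):
        """Random walk with restart from `user` on the user-item graph.

        `bias` > 0 down-weights popular items when the walk chooses the
        next item, nudging it toward less-visited regions of the graph.
        """
        rng = random.Random(seed)
        visits = Counter()
        for _ in range(walks):
            current = user
            while True:
                if rng.random() < restart:
                    break  # restart the walk at the source user
                # user -> item step, biased against popular items
                items = list(interactions[current])
                weights = [1.0 / popularity[i] ** bias for i in items]
                item = rng.choices(items, weights=weights)[0]
                visits[item] += 1
                # item -> user step, uniform over the item's readers
                current = rng.choice(list(item_users[item]))
        seen = interactions[user]
        return [i for i, _ in visits.most_common() if i not in seen]

    print(recommend("u2", bias=0.0))  # plain walk: favours popular items
    print(recommend("u2", bias=1.5))  # biased walk: surfaces long-tail items

Down-weighting popular items is only one diversification lever; topic quotas or diversity-aware re-ranking, for example, trade off accuracy and diversity differently.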

Reflecting the consequences of these findings for our discipline, this talk highlights that computer science needs to increasingly engage with both the social and normative challenges of our work, possibly producing a new understanding of our discipline. It proposes similar consequences for other disciplines in that they increasingly need to embrace all three spheres.



Implementing the FAIR Principles: Progress and Pitfalls

Mark Wilkinson
Universidad Politécnica de Madrid
Spain

Brief Bio
Mark has a B.Sc. (Hons) in Genetics from the University of Alberta and a Ph.D. in Botany from the University of British Columbia. He spent four years at the Max Planck Institut für Züchtungsforschung in Köln, Germany, pursuing studies in a mix of plant molecular and developmental biology and bioinformatics. He then did a research associateship at the Plant Biotechnology Institute of the National Research Council Canada, focusing on the problem of biological data representation and integration for the purposes of automated data mining. In the subsequent 25 years, his laboratory has focused on designing infrastructures for biomedical data/tool representation, discovery, and automated reuse: what are now called "FAIR Data" infrastructures. He is the lead author of the primary FAIR Principles paper, and lead author of the first paper describing a complete implementation of those principles over legacy data, what is now called the 'FAIR Data Point'. He is a founding member of the FAIR Metrics working group, tasked with defining the precise, measurable behaviors that FAIR resources should exhibit; was co-Chair of the EOSC Task Force on FAIR Metrics and Data Quality (to 2023); is co-Chair of the EOSC Task Force on FAIR Metrics and Digital Objects; and continues to work with EOSC in Opportunity Area 3 (FAIR Assessment and Alignment). He is the author of the FAIR Evaluator, the first software application capable of a fully automated and objective evaluation of "FAIRness", and the founder of a spin-off company, FAIR Data Systems S.L., which provides consulting, training, and customized software solutions that help clients become FAIR.
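To give a flavour of what an automated FAIRness metric test involves (a toy illustration only, not the FAIR Evaluator's actual implementation), the Python sketch below checks whether an identifier is a syntactically valid DOI and whether it resolves over a standard protocol; the requests library is assumed to be installed:

    import re
    import requests  # third-party; pip install requests

    DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

    def check_identifier(identifier):
        """Toy check loosely inspired by the F1 (globally unique,
        persistent identifier) and A1 (retrievable by a standard
        protocol) FAIR principles."""
        report = {"valid_doi_syntax": bool(DOI_PATTERN.match(identifier))}
        if report["valid_doi_syntax"]:
            # doi.org is the standard DOI resolver; follow redirects to
            # the landing page and record what it serves.
            resp = requests.get("https://doi.org/" + identifier,
                                allow_redirects=True, timeout=10)
            report["resolvable"] = resp.ok
            report["content_type"] = resp.headers.get("Content-Type", "")
        return report

    # The DOI of the 2016 FAIR Principles paper as a test subject.
    print(check_identifier("10.1038/sdata.2016.18"))

Real metric tests go much further, for example probing for machine-readable, registered metadata schemas, but the pattern of objective, automatable checks is the same.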


Abstract
Not available


