
Keynote Lectures

Explanations on the Web: A Provenance-based Approach
Luc Moreau, King's College London, United Kingdom

Persistent Identification for Interoperable Data Management and Preservation
Stefan Decker, RWTH Aachen University, Germany

Hybrid Intelligence: AI Systems that Collaborate with People Instead of Replacing Them
Frank van Harmelen, The Hybrid Intelligence Center & Vrije Universiteit Amsterdam, Netherlands


Explanations on the Web: A Provenance-based Approach

Luc Moreau
King's College London
United Kingdom

Brief Bio
Luc Moreau is a Professor of Computer Science and Head of the Department of Informatics at King's College London. He has conducted research in various areas of Computer Science, including programming languages, distributed algorithms, distributed systems, and the Web. Luc is renowned for his work on provenance. He was co-chair of the W3C Provenance Working Group, which resulted in four W3C Recommendations and nine W3C Notes specifying PROV, a conceptual data model for provenance on the Web, and its serializations in various Web languages. Previously, he initiated the successful Provenance Challenge series, which saw over 20 institutions investigate provenance interoperability across three successive challenges, and which resulted in the specification of the community Open Provenance Model (OPM). Before that, he led the development of provenance technology in the FP6 Provenance project and the Provenance Aware Service Oriented Architecture (PASOA) project. He is currently the principal investigator of three projects: PA4C2: Provenance Analytics for Command and Control; PLEAD: Provenance-driven and Legally-grounded Explanations for Automated Decisions; and THUMP: Trust in Human-Machine Partnerships.


Abstract
AI-based automated decisions are increasingly used in new services deployed to the general public over the Web. This approach to building services offers significant potential benefits, such as increased speed of execution, increased accuracy, lower cost, and the ability to adapt to a wide variety of situations. However, equally significant concerns have been raised and are now well documented, such as concerns about privacy, fairness, bias and ethics. On the consumer side, more often than not, the users of those services are provided with no explanation, or an inadequate one, for decisions that may impact their lives.

Meanwhile, a decade of research on provenance, its standardisation at the World Wide Web Consortium (PROV), and the applications, toolkits and services adopting it have led to the recognition that provenance is a critical facet of good data governance for businesses, governments and organisations in general. Provenance, defined as a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering a piece of data or a thing, is now regarded as an essential function of data-intensive applications, providing a trusted account of what they performed.
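To make the PROV vocabulary of entities, activities and agents concrete, the following is a minimal sketch of such a record for a hypothetical automated loan decision, written with the open-source Python prov package (pip install prov); the scenario and all identifiers are purely illustrative and are not drawn from the projects discussed in this talk.

# Minimal PROV record for a hypothetical automated decision,
# built with the open-source Python "prov" package.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/')

# Entities: the data consumed and produced (hypothetical identifiers).
doc.entity('ex:application')   # the submitted loan application
doc.entity('ex:decision')      # the automated decision

# Activity: the process that turned one into the other.
doc.activity('ex:assess')

# Agent: the person or system responsible for the activity.
doc.agent('ex:scoring-service')

# PROV relations linking entities, activities and agents.
doc.used('ex:assess', 'ex:application')
doc.wasGeneratedBy('ex:decision', 'ex:assess')
doc.wasAssociatedWith('ex:assess', 'ex:scoring-service')
doc.wasAttributedTo('ex:decision', 'ex:scoring-service')

print(doc.get_provn())  # render the record in the PROV-N notation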

In this talk, I will show that such a provenance record can provide a solid foundation for generating explanations about decisions. The talk will give an overview of the notion of provenance, outline the key steps in constructing explanations, and report on our experience in three projects: PA4C2: Provenance Analytics for Command and Control; PLEAD: Provenance-driven and Legally-grounded Explanations for Automated Decisions (https://plead-project.org/); and THUMP: Trust in Human-Machine Partnerships (https://thump-project.ai/).
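As a rough, hypothetical illustration of the general idea (a sketch only, not the method developed in PLEAD or THUMP), an explanation generator can traverse the relations of a provenance record and fill natural-language templates:

# Illustrative only: render PROV-style relations as template sentences.
# The records and the wording are hypothetical.
provenance = [
    ("ex:decision", "wasGeneratedBy", "ex:assess"),
    ("ex:assess", "used", "ex:application"),
    ("ex:assess", "wasAssociatedWith", "ex:scoring-service"),
]

# One sentence template per PROV relation type.
TEMPLATES = {
    "wasGeneratedBy": "{subj} was produced by the activity {obj}.",
    "used": "The activity {subj} took {obj} as input.",
    "wasAssociatedWith": "The activity {subj} was carried out by {obj}.",
}

def explain(records):
    """Turn each provenance relation into one explanatory sentence."""
    return " ".join(
        TEMPLATES[rel].format(subj=subj, obj=obj) for subj, rel, obj in records
    )

print(explain(provenance))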




Persistent Identification for Interoperable Data Management and Preservation

Stefan Decker
RWTH Aachen University
Germany

Brief Bio
Prof. Dr. Stefan Decker is Professor of Databases and Information Systems at RWTH Aachen University and Managing Director of the Fraunhofer Institute for Applied Information Technology in Birlinghoven. Previously, he was Professor of Digital Enterprise at the National University of Ireland, Galway, Director of the Digital Enterprise Research Institute (DERI) and the Insight Centre in Galway, Ireland, Research Assistant Professor at the Information Sciences Institute of the University of Southern California, USA, and held research positions at Stanford University and the University of Karlsruhe (now KIT). He is an elected member of the Royal Irish Academy and a Fellow of Engineers Ireland. Since 1998 he has been working with linked data and semantic web technology. His current research interests include knowledge representation and data modeling, research data management, and applications of linked data technologies. More information on Prof. Decker's publications can be found at http://www.stefandecker.org/ and on Google Scholar.


Abstract
The digitally connected economy requires infrastructure that supports the open exchange, reuse and integration of constantly evolving data. One such piece of infrastructure is identification via persistent identifiers (PIDs). While the concept of PIDs is not new, the requirements for PIDs have rarely been investigated.
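As a concrete example of a PID in everyday use, a DOI is resolved through https://doi.org/ to the resource's current location, and many DOI registration agencies additionally honour HTTP content negotiation, returning machine-readable metadata for the same identifier. A minimal sketch in Python (the DOI below is hypothetical; substitute a real one to run it):

import requests

doi = "10.1234/example-dataset"  # hypothetical DOI, for illustration only

# Resolution: doi.org redirects the stable identifier to the resource's
# *current* location, which insulates citations from link rot.
resp = requests.get("https://doi.org/" + doi, allow_redirects=True, timeout=10)
print("resolved to:", resp.url)

# Metadata: with content negotiation, the same identifier can yield
# machine-readable citation metadata (here, CSL JSON).
meta = requests.get(
    "https://doi.org/" + doi,
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=10,
)
if meta.ok:
    print(meta.json().get("title"))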

After a brief survey of relevant concepts and related work, I will present some recent results from my laboratory, where we work on interoperable data management and preservation systems for evolving data on the Web. I will discuss the need for a general data interoperability and persistence layer for the Web, addressing issues such as link rot, reliable resource referencing and citation, authenticity, integrity and trust.
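As one illustration of the integrity issue (a sketch of a generic technique, not a specific system from the talk): a citation can record a cryptographic fingerprint alongside the PID, so that content retrieved later can be checked against what was originally cited. All names and values below are hypothetical.

import hashlib

def fingerprint(content: bytes) -> str:
    """SHA-256 digest of a resource's bytes."""
    return hashlib.sha256(content).hexdigest()

# At citation time: record the PID together with the content fingerprint.
cited = {
    "pid": "10.1234/example-dataset",
    "sha256": fingerprint(b"...resource bytes at citation time..."),
}

# At reuse time: re-resolve the PID, fetch the bytes, and compare.
def verify(retrieved: bytes, record: dict) -> bool:
    """True iff the retrieved content matches the cited fingerprint."""
    return fingerprint(retrieved) == record["sha256"]

print(verify(b"...resource bytes at citation time...", cited))  # True
print(verify(b"...silently changed bytes...", cited))           # False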




Hybrid Intelligence: AI Systems that Collaborate with People Instead of Replacing Them

Frank van Harmelen
The Hybrid Intelligence Center & Vrije Universiteit Amsterdam
Netherlands

Brief Bio
Frank van Harmelen has a PhD in Artificial Intelligence from Edinburgh University, and has been professor of AI at the Vrije Universiteit since 2001, where he leads the research group on Knowledge Representation. He was one of the designers of the knowledge representation language OWL, which is now in use by companies such as Google, the BBC, the New York Times, Amazon, Uber, Airbnb, Elsevier, Springer Nature, XMP, and Renault, among others. He co-edited the standard reference work in his field (The Handbook of Knowledge Representation), and received the Semantic Web 10-year impact award for his work on the open source software Sesame (over 200,000 downloads). He is a Fellow of the European Association for Artificial Intelligence, a member of the Dutch Royal Academy of Sciences (KNAW), of the Royal Holland Society of Sciences and Humanities (KHMW), and of the Academia Europaea, and is adjunct professor at Wuhan University and Wuhan University of Science and Technology in China.


Abstract
Much of current AI research is implicitly aimed at building systems that replace humans: self-driving cars to replace Uber drivers, translation software to replace interpreters, image analysis software to replace radiologists. But it's becoming increasingly clear that machine intelligence will be rather different from human intelligence. It is therefore more interesting to build AI systems that collaborate in hybrid teams of people and machines, in order to combine their complementary skills. This will require that we start asking a whole set of new research questions. How to equip AI systems with a "theory of mind" to make them collaborative? How to make AI systems adaptive to changes in the team and the environment? How to instill moral values into these systems? And, of course, how to make them explainable? We will outline a research agenda for hybrid intelligence and present some early results from research worldwide on these questions.


