US2TS 2019 Tutorial
On the Role of Data Semantics for Explainable AI
Monday, March 11, 2019
10:30 AM – 12:00 PM
The goal of the tutorial is to provide answers to the following questions:
What is XAI from a Machine Learning community perspective? For what reasons has XAI emerged as an important research topic in the machine learning community?
What might a taxonomy of explainable AI methods look like? What are the broad 'types' of mechanisms that enable XAI?
What current limitations exist in the field? In what ways is present research examining XAI too narrowly?
In what ways has the data semantics community contributed to XAI thus far? Where do the tutorial presenters see opportunities for the semantics community to contribute to the field of XAI? What ideas do the participants of the tutorial session have, and can the outline of a community research agenda be devised?
Artificial intelligence is shaping the way we live by making automated decisions for us. These automated decisions are used in nearly every sphere of life, from pattern recognition and autonomous driving to drug discovery.
Recent AI algorithms, and in particular deep learning methods, are improving the accuracy of such automated decisions, but are limited in their ability to expose explanations when they fail. Even worse, many methods are unable to identify if or when they will fail, making it difficult for practitioners to know when to "trust" the decisions an AI reaches. This notion of trust is especially important in contexts where a decision affects an individual's health or career, or has economic and societal ramifications.
This tutorial aims at presenting existing solutions towards XAI, their limitations, and the potential benefits of incorporating data semantics research and technologies to address them. The tutorial will offer an overview of present methods for XAI, their limitations, the few contributions the data semantics community has made thus far, why data semantics is likely to play a crucial role in XAI, and directions our community can follow in this emerging and exciting research area.
Pascal Hitzler is the endowed NCR Distinguished Professor, Brage Golding Distinguished Professor of Research, and Director of Data Science in the Department of Computer Science and Engineering at Wright State University in Dayton, Ohio, U.S.A. His research record lists over 400 publications in such diverse areas as the semantic web, artificial intelligence, neural-symbolic integration, knowledge representation and reasoning, machine learning, denotational semantics, and set-theoretic topology. His research is highly cited. He is founding Editor-in-Chief of the Semantic Web journal, the leading journal in the field, and of the IOS Press book series Studies on the Semantic Web. He is co-author of the W3C Recommendation OWL 2 Primer and of the book Foundations of Semantic Web Technologies (CRC Press, 2010), which was named one of seven Outstanding Academic Titles 2010 in Information and Computer Science by the American Library Association's Choice Magazine and has been translated into German and Chinese. He is on the editorial board of several journals and book series, is a founding steering committee member of the Neural-Symbolic Learning and Reasoning Association and the Association for Ontology Design and Patterns, and frequently acts as conference chair in various functions, including General Chair (ESWC 2019, US2TS 2018), Program Chair (FOIS 2018, AIMSA 2014), Track Chair (ISWC 2018, ESWC 2018, ISWC 2017, ISWC 2016, AAAI-15), Workshop Chair (K-Cap 2013), Sponsor Chair (ISWC 2013, RR 2009, ESWC 2009), and PhD Symposium Chair (ESWC 2017). He has given tutorials at ESWC 2017, ISWC 2016, IJCAI-16, AAAI-15, ISWC 2013, STIDS 2013, OWLED 2011, ESWC 2009, IJCAI 2009, Informatik 2009, KI 2009, GeoS 2009, ISWC 2006, ESWC 2006, ICANN 2006, and KI 2005. For more information about him, see http://www.pascal-hitzler.de
Md Kamruzzaman Sarker is a PhD student at Wright State University. His current research focuses on making the decisions of machine learning algorithms more transparent. He is also interested in making ontology engineering processes easier and more human-friendly; toward that end he created both the OWLAx and the ROWL plugins for Protégé. He earned a Graduate Certificate in Big and Smart Data. Before starting his PhD he worked in industry as a software engineer at Samsung Electronics. He is currently also working as a deep learning intern at Intel Corporation.
Derek Doran is an Associate Professor of Computer Science and Engineering at Wright State University. His research interests are in developing statistical, deep learning, and topological data analysis methods for the study of complex web and cyber-systems. He is especially interested in enabling deep learning and topological data models to be inherently explainable and comprehensible to users. His research is supported by NSF, AFRL, ORISE, and the Ohio Federal Research Network. Derek is the author of over 75 publications, most of which are in AI and ML venues (including Journal of Stat Software; Expert Systems with Apps; Social Network Analysis and Mining; WSDM; ASONAM; ICWSM; WebInt; NeSy; and workshops at NIPS, IJCAI, and ESWC), and holds three patents. He is on the editorial board of Social Network Analysis and Mining and the International Journal of Web Engineering and Technologies, and is a founding chair of the Neural-Symbolic Learning and Reasoning special interest group on Explainable AI. More information at: http://derk--.github.io.
Freddy Lecue (PhD 2008, Habilitation 2015) is a principal scientist and research manager in Artificial Intelligence systems (systems combining learning and reasoning capabilities) at Accenture Technology Labs, Dublin, Ireland. He is also a research associate at INRIA (WIMMICS team), Sophia Antipolis, France. Before joining Accenture Labs, he was a Research Scientist at the IBM Research Smarter Cities Technology Center (SCTC) in Dublin, Ireland, and lead investigator of the Knowledge Representation and Reasoning group. His main research interest is Explainable AI systems. The application domain of his current research is Smarter Cities, with a focus on Smart Transportation and Buildings. In particular, he is interested in exploiting and advancing Knowledge Representation and Reasoning methods for representing and inferring actionable insight from large, noisy, and heterogeneous data. He has over 40 publications in refereed journals and conferences related to Artificial Intelligence (AAAI, ECAI, IJCAI, IUI) and the Semantic Web (ESWC, ISWC), all describing new systems that handle expressive semantic representation and reasoning. He co-organized the first three workshops on semantic cities (AAAI 2012, 2014, 2015; IJCAI 2013) and the first two tutorials on smart cities, at AAAI 2015 and IJCAI 2016. Prior to joining IBM, Freddy Lecue was a Research Fellow (2008-2011) with the Centre for Service Research at The University of Manchester, UK. He was awarded the second prize for his Ph.D. thesis by the French Association for the Advancement of Artificial Intelligence in 2009, and received the Best Research Paper Award at the ACM/IEEE Web Intelligence conference in 2008.
Ning Xie is a PhD candidate at Wright State University. Her research interests are broadly in machine learning, with a focus on interpretable and reliable deep learning. Recently she has been studying the intrinsic activation space of deep neural networks (DNNs), toward building more transparent and robust DNN models. She also explored model transparency in applications at the intersection of computer vision and natural language during her summer internship at NEC Laboratories America, Inc. Before joining Wright State University, she was an Algorithm Engineer working on computer vision at one of the leading video surveillance companies in China. For more information, please see her homepage at http://www.wright.edu/~xie.25/