Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains
Speaker: Vaishak Belle (University of Edinburgh, the Alan Turing Institute, UK)
Abstract: The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence (AI). The deduction camp concerns itself with questions about the expressiveness of formal languages for capturing knowledge about the world, together with proof systems for reasoning from such knowledge bases. The learning camp attempts to generalize from examples about partial descriptions of the world. In AI, historically, these camps have loosely divided the development of the field, but advances in cross-over areas such as statistical relational learning, neuro-symbolic systems, and high-level control have illustrated that the dichotomy is not very constructive, and perhaps even ill-formed. In this tutorial, we survey work that provides further evidence for the connections between logic and learning. Our narrative is structured in terms of three strands: logic versus learning, machine learning for logic, and logic for machine learning, but naturally, there is considerable overlap. We place an emphasis on the following “sore” point: there is a common misconception that logic is for discrete properties, whereas probability theory and machine learning, more generally, are for continuous properties. We report on results that challenge this view on the limitations of logic, and expose the role that logic can play for learning in infinite domains.
Score-Based Explanations in Data Management and Machine Learning
Speaker: Leopoldo Bertossi (Universidad Adolfo Ibáñez, Data Observatory Foundation, IMFD, Chile)
Abstract: We describe some approaches to explanations for observed outcomes in data management and machine learning. They are based on the assignment of numerical scores to predefined and potentially relevant inputs. More specifically, we consider explanations for query answers in databases, and for results from classification models. The described approaches are mostly of a causal and counterfactual nature. We argue for the need to bring domain and semantic knowledge into score computations, and suggest some ways to do this.
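As a rough illustration of the kind of score-based, counterfactual reasoning the abstract refers to (a toy sketch, not the speaker's actual methods): a feature of a classified entity can be scored by whether flipping its value alone changes the prediction. The classifier and feature names below are hypothetical; published scores refine this idea with contingency sets and averaging.

```python
# Toy counterfactual score over binary features: a feature gets score 1 if
# flipping its value alone changes the classifier's output, and 0 otherwise.
# (Hypothetical example; real responsibility/Shapley-style scores are more refined.)

def counterfactual_scores(classify, instance):
    """classify: maps a feature dict to a 0/1 label; instance: dict of binary values."""
    original = classify(instance)
    scores = {}
    for feature, value in instance.items():
        flipped = dict(instance)
        flipped[feature] = 1 - value          # intervene on a single feature
        scores[feature] = 1 if classify(flipped) != original else 0
    return scores

# Hypothetical loan classifier: approve only with high income and no past default.
classify = lambda e: 1 if e["high_income"] == 1 and e["past_default"] == 0 else 0
print(counterfactual_scores(classify, {"high_income": 1, "past_default": 0, "married": 1}))
# -> {'high_income': 1, 'past_default': 1, 'married': 0}
```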
Rough Sets
Speaker: Davide Ciucci (University of Milano-Bicocca, Italy)
Abstract: In this tutorial, the basics of rough set theory will be recalled, and extensions of the standard model surveyed. These notions will be applied to different domains such as machine learning (feature selection, clustering, classification, …) and decision making. Finally, recent trends, for instance three-way decision, and future directions will be discussed. Throughout the tutorial, links with other paradigms, most notably fuzzy sets and belief functions, will be put forward in order to frame the theory in a wider landscape and to involve scholars with different backgrounds.
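For readers unfamiliar with the theory, a minimal sketch of the standard construction the tutorial starts from (textbook definitions; the decision table below is hypothetical): objects indiscernible with respect to the chosen attributes form equivalence classes, and a target set is approximated from below by the classes fully contained in it and from above by the classes intersecting it.

```python
# Lower and upper approximations of a target set X in a toy information table.
from collections import defaultdict

def approximations(objects, attributes, X):
    """objects: dict id -> dict of attribute values; X: set of object ids."""
    # Group objects that are indiscernible w.r.t. the chosen attributes.
    classes = defaultdict(set)
    for obj_id, values in objects.items():
        classes[tuple(values[a] for a in attributes)].add(obj_id)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= X:      # class entirely inside X: certainly in X
            lower |= c
        if c & X:       # class overlapping X: possibly in X
            upper |= c
    return lower, upper  # X is "rough" whenever lower != upper

# Hypothetical table: one symptom attribute, target set X = patients with flu.
objs = {1: {"fever": "yes"}, 2: {"fever": "yes"}, 3: {"fever": "no"}}
print(approximations(objs, ["fever"], X={1, 3}))
# -> ({3}, {1, 2, 3}); the boundary {1, 2} makes X rough w.r.t. "fever" alone.
```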
Information Fusion Using Belief Functions: Source Quality and Conflict
Speaker: Frédéric Pichon (Artois University, France)
Abstract: Information fusion is the problem of extracting truthful and precise knowledge about a quantity of interest, from uncertain information provided by sources of varying quality. This tutorial will provide an overview of some recent research results in belief function theory related to this problem. Specifically, the tutorial will cover recent works revisiting Shafer’s original presentation of belief function theory as an approach to the fusion of elementary information items provided by partially reliable sources. This will lead us to explain how to properly use knowledge about the quality of the sources in the fusion process, which will provide a prism through which to understand some important combination rules. The last part of the tutorial will review the measurement of the conflict between belief functions and its natural application to combination rule selection.
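As background for the combination and conflict themes above, here is a minimal sketch of Dempster's rule (standard textbook material, not the tutorial's own results; the sources and frame below are hypothetical): focal sets of two mass functions are intersected, the mass landing on the empty set measures the conflict between the sources, and the remainder is renormalised.

```python
# Dempster's rule of combination for two mass functions on a finite frame.
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset focal elements to masses summing to 1."""
    joint, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            joint[inter] = joint.get(inter, 0.0) + a * b
        else:
            conflict += a * b                  # mass assigned to the empty set
    combined = {A: v / (1.0 - conflict) for A, v in joint.items()}
    return combined, conflict

# Two partially reliable sources reporting on a culprit in {peter, paul, mary}.
m1 = {frozenset({"peter"}): 0.8, frozenset({"peter", "paul", "mary"}): 0.2}
m2 = {frozenset({"mary"}): 0.6, frozenset({"peter", "paul", "mary"}): 0.4}
combined, conflict = dempster_combine(m1, m2)
print(conflict)   # 0.48: the (unnormalised) degree of conflict between the sources
print(combined)   # normalised masses on {peter}, {mary}, and the whole frame
```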
Representing Knowledge using Vector Space Embeddings
Speaker: Steven Schockaert (Cardiff University, UK)
Abstract: TBA