Learning on graphs with logic and neural networks

  • Lernen auf Graphen mit Logik und Neuronalen Netzen

Ritzert, Martin; Grohe, Martin (Thesis advisor); Schweikardt, Nicole (Thesis advisor)

Aachen : RWTH Aachen University (2021)
Dissertation / PhD Thesis

Dissertation, RWTH Aachen University, 2021

Abstract

In the domain of graphs, we show strong connections between logic and machine learning, both in theory and in practice. In a purely theoretical framework, we develop sublinear machine learning algorithms for the supervised learning of logical formulas on various graph classes. Further, we show that learning first-order logic on arbitrary graphs is intractable unless P = NP. At the intersection of theory and practice, we prove an equivalence between graph neural networks and the 1-dimensional Weisfeiler-Leman algorithm. As a practical application, we approximate combinatorial problems with recurrent graph neural networks. The proposed architecture is unsupervised and can be applied to all maximum constraint satisfaction problems.

We provide algorithms for learning first-order and monadic second-order formulas on different graph classes that are sublinear, or sublinear after a linear indexing phase. This builds on a general framework for the machine learning of logical formulas on graphs introduced by Grohe and Turán (2004). From an information-theoretic perspective, learning is possible for various types of logics on mostly sparse graph classes; thus our learning algorithms provably generalize well under the probably approximately correct (PAC) framework. We complement these learnability results by proving matching lower bounds. Further, we show that learning first-order logic on arbitrary graphs is intractable under common complexity-theoretic assumptions.

In practical machine learning on graphs, neural architectures based on graph neural networks (GNNs) dominate the leaderboards. We analyze the expressiveness of GNNs by showing an equivalence to the 1-dimensional Weisfeiler-Leman algorithm (1-WL). 1-WL is a heuristic from graph isomorphism testing, also known as color refinement. Since it is unable to distinguish all graphs, the equivalence to GNNs shows that there are pairs of graphs which GNNs are unable to distinguish.
This result stands in stark contrast to feedforward neural networks, which can approximate any continuous function if they are large enough. To go beyond this limitation, we extend GNNs to k-GNNs based on the more powerful k-dimensional Weisfeiler-Leman algorithm. We empirically show that practical machine learning can effectively approximate combinatorial problems specified by logic. We propose a recurrent GNN to approximate maximum constraint satisfaction problems, which in logic correspond to primitive positive formulas, a fragment of first-order logic. This GNN can be applied to any constraint satisfaction problem and is trained unsupervised to satisfy as many constraints as possible. Unsupervised training means that we do not need to compute optimal solutions to the typically NP-hard problems. Through experiments on four NP-complete combinatorial problems, we demonstrate that our architecture outperforms other neural approaches, semi-definite programming, and, on the maximum-2-SAT problem, also a state-of-the-art heuristic.
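The 1-WL (color refinement) heuristic mentioned above is simple to state: every vertex repeatedly receives a new color determined by its old color and the multiset of its neighbors' colors, until the coloring stabilizes; two graphs with different stable color histograms are certainly non-isomorphic. The following sketch (not taken from the thesis; function names are our own) illustrates both the procedure and its limitation that GNNs inherit: two disjoint triangles and a 6-cycle are non-isomorphic, yet 1-WL cannot separate them, since both are 2-regular.

```python
from collections import Counter


def color_refinement(adj):
    """1-WL / color refinement: recolor every vertex by the pair
    (own color, sorted multiset of neighbor colors) until stable.
    adj: adjacency lists, adj[v] = list of neighbors of v."""
    n = len(adj)
    colors = [0] * n  # start from the uniform coloring
    for _ in range(n):  # the partition stabilizes after at most n rounds
        sigs = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in range(n)]
        palette = {s: i for i, s in enumerate(sorted(set(sigs)))}
        new_colors = [palette[s] for s in sigs]
        if new_colors == colors:
            break
        colors = new_colors
    return colors


def wl_distinguishes(adj1, adj2):
    """Refine the disjoint union of both graphs; they are distinguished
    iff their stable color histograms differ."""
    n1 = len(adj1)
    union = adj1 + [[u + n1 for u in nbrs] for nbrs in adj2]
    colors = color_refinement(union)
    return Counter(colors[:n1]) != Counter(colors[n1:])


# Two triangles vs. one 6-cycle: non-isomorphic, but both 2-regular,
# so 1-WL -- and, by the equivalence, a plain GNN -- cannot tell them apart.
two_triangles = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]
cycle6 = [[1, 5], [0, 2], [1, 3], [2, 4], [3, 5], [0, 4]]
path6 = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
```

Graphs that differ in degree structure, such as the 6-cycle and the 6-vertex path above, are separated already after a few refinement rounds.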
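The unsupervised training principle for maximum constraint satisfaction can be illustrated independently of the GNN architecture: relax each Boolean variable to a probability and minimize the expected number of unsatisfied clauses, so no precomputed optimal solutions are needed as labels. The thesis's actual model is a recurrent GNN; the minimal sketch below (our own illustration, assuming NumPy, with hypothetical names and a finite-difference gradient standing in for autodiff) applies the same objective to a tiny MAX-2-SAT instance.

```python
import numpy as np


def clause_unsat(x, clause):
    """Probability that a disjunctive clause is unsatisfied under
    independent soft assignments x[i] = P(variable i+1 is True).
    Literals are signed integers: (1, -2) encodes (x1 OR NOT x2)."""
    p = 1.0
    for lit in clause:
        v = x[abs(lit) - 1]
        p *= (1.0 - v) if lit > 0 else v
    return p


def loss(x, clauses):
    # expected number of unsatisfied clauses; minimizing this is the
    # unsupervised objective -- no optimal solutions serve as labels
    return sum(clause_unsat(x, c) for c in clauses)


def solve(clauses, n_vars, steps=300, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.2, 0.8, n_vars)  # random soft initialization
    for _ in range(steps):
        # finite-difference gradient; a real model would use autodiff
        g = np.zeros(n_vars)
        for i in range(n_vars):
            xp = x.copy()
            xp[i] += 1e-6
            g[i] = (loss(xp, clauses) - loss(x, clauses)) / 1e-6
        x = np.clip(x - lr * g, 0.0, 1.0)
    return x


clauses = [(1, 2), (-1, 2), (1, -2)]  # satisfied by x1 = x2 = True
soft = solve(clauses, n_vars=2)
assignment = soft > 0.5  # round the soft assignment to Booleans
```

Rounding the converged probabilities yields a concrete assignment; on larger instances this relaxation only approximates the optimum, which is exactly the regime the thesis's experiments evaluate.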
