Neural Logic Machines

This is an important paper in the development of neural reasoning capabilities, which should reduce the brittleness of purely symbolic approaches: Neural Logic Machines.

The potential reasoning capabilities, particularly multi-step inference for problem solving and theorem proving, are the most interesting part, but there are also important contemporary applications in machine learning and question answering. I'll provide a few highlights from the paper on the latter, and some further points and references on the former, below.

The paper relates to inductive logic programming (ILP), making it more practical to learn rules at meaningful scale:

Inductive logic programming (ILP) (Muggleton, 1991; 1996; Friedman et al., 1999) is a paradigm for learning logic rules derived from a limited set of rule templates from examples. Being a powerful way of reasoning over discrete symbols, it is successfully applied to various language-related problems, and has been integrated into modern learning frameworks (Kersting et al., 2000; Richardson & Domingos, 2006; Kimmig et al., 2012). Recently, Evans & Grefenstette (2018) introduces a differentiable implementation of ILP which works with connectionist models such as CNNs. Sharing a similar spirit, Rocktäschel & Riedel (2017) introduces an end-to-end differentiable logic proving system for knowledge base (KB) reasoning. A major challenge of these approaches is to scale up to a large number of complex rules.
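
To make the differentiable-ILP framing concrete, here is a minimal sketch (my own illustration, not the ∂ILP algorithm of Evans & Grefenstette): predicates are stored as tensors of soft truth values in [0, 1], logical connectives become smooth operations, and an existentially quantified variable in a rule body becomes a reduction over an object dimension.

```python
import numpy as np

# Soft truth values live in [0, 1]; logical connectives become smooth ops,
# so a rule template can be scored end-to-end and trained by gradient descent.
def soft_and(a, b):
    return a * b            # product t-norm

def soft_or(a, b):
    return a + b - a * b    # probabilistic sum

def soft_not(a):
    return 1.0 - a

# Toy binary predicate over 4 objects, encoded as a soft adjacency matrix.
n = 4
parent = np.zeros((n, n))
parent[0, 1] = parent[1, 2] = 1.0          # 0 -> 1 -> 2

# Rule template: grandparent(X, Z) <- parent(X, Y) AND parent(Y, Z)
# The existential over Y becomes a max over the shared object dimension.
grandparent = np.max(soft_and(parent[:, :, None], parent[None, :, :]), axis=1)

print(grandparent[0, 2])   # close to 1.0: object 0 is a grandparent of object 2
```

Because every step here is differentiable, rule weights can be fit by gradient descent rather than found by combinatorial search, which is exactly where the scaling challenge mentioned in the quote arises.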

This should have a significant practical effect on relational reasoning, which is prevalent in question answering (QA).

Our work is also related to symbolic relational reasoning, which has a wide application in processing discrete data structures such as knowledge graphs and social graphs (Zhu et al., 2014; Kipf & Welling, 2017; Zeng et al., 2017; Yang et al., 2017). Most symbolic relational reasoning approaches (e.g., Yang et al., 2017; Rocktäschel & Riedel, 2017) are developed for KB reasoning, in which the predicates on both sides of a rule are known in the KB. Otherwise, the complexity grows exponentially in the number of used rules for a conclusion, which is the case in the blocks world. Moreover, Yang et al. (2017) considers rules of the form query(Y, X) ← Rn(Y, Zn) ∧ ··· ∧ R1(Z1, X), which is not for general reasoning.
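
The chain-rule form quoted above has a simple tensor reading, which I believe is close to how Yang et al. (2017) make KB reasoning differentiable (treat this as a rough illustration rather than their exact formulation): store each binary relation as an adjacency matrix over entities, and composing the rule body amounts to a chain of matrix products, with the intermediate variables summed away.

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)

# Each binary relation R_i over n entities as a 0/1 adjacency matrix.
relations = [(rng.random((n, n)) < 0.3).astype(float) for _ in range(3)]

# query(Y, X) <- R2(Y, Z2) ^ R1(Z2, Z1) ^ R0(Z1, X)
# Composing the body is a chain of matrix products: the intermediate
# variables Z_i are summed (soft-existentially quantified) away.
def chain_score(mats):
    score = np.eye(n)
    for m in mats:
        score = score @ m
    return score

query = chain_score([relations[2], relations[1], relations[0]])
print(query[0, 3])   # number of R2 -> R1 -> R0 chains from entity 0 to entity 3
```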

The most interesting aspect, however, is the relationship to the Neural Turing Machines of Graves.

Neural Turing Machine (NTM) (Graves et al., 2014; 2016) enables general-purpose neural problem solving such as sorting by introducing an external memory that mimics the execution of Turing Machine. Neural program induction and synthesis (Neelakantan et al., 2016; Reed & De Freitas, 2016; Kaiser & Sutskever, 2016; Parisotto et al., 2017; Devlin et al., 2017; Bunel et al., 2018; Sun et al., 2018) are recently introduced to solve problems by synthesizing computer programs with neural augmentations. Some works tackle the issue of the systematical generalization by introducing extra supervision (Cai et al., 2017). In Chen et al. (2018), more complex programs such as language parsing are studied. However, the neural programming and program induction approaches are usually hard to optimize in an end-to-end manner, and often require strong supervisions (such as ground-truth programs).
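
For context, the core NTM mechanism is content-based addressing over an external memory: the controller emits a key, similarity to each memory slot gives attention weights, and the read is a weighted sum. A bare-bones sketch (function and variable names are mine, purely illustrative):

```python
import numpy as np

def content_read(memory, key, beta=10.0):
    """Soft content-based read: cosine similarity -> sharpened softmax -> weighted sum."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    weights /= weights.sum()
    return weights @ memory, weights

memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])   # 3 slots, width 2
read_vec, w = content_read(memory, key=np.array([1.0, 0.1]))
print(w)          # attention concentrates on the most similar slot
print(read_vec)   # the read vector is dominated by that slot
```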

Neural Theorem Provers Do Not Learn Rules Without Exploration bolsters the assessment in the above paper with this from its discussion section:

Neural theorem proving is a promising combination of logical learning and neural network approaches. In this work, we evaluate the performance of the NTP algorithm on synthetic logical datasets with injected relationships. We show that NTP has difficulty recovering relationships in all but the simplest settings.

The paper hits all the right topics at the intersection of deep learning and reasoning, including:

  • In this paper, we propose Neural Logic Machines (NLMs) to address the aforementioned challenges.  In a nutshell, NLMs offer a neural-symbolic architecture which realizes Horn clauses (Horn, 1951) in first-order logic (FOL). The key intuition behind NLMs is that logic operations such as logical ANDs and ORs can be efficiently approximated by neural networks, and the wiring among neural modules can realize the logic quantifiers.
  • The NLM is a neural realization of logic machines (under the Closed-World Assumption). Given a set of base predicates, grounded on a set of objects (the premises), NLMs sequentially apply first-order rules to draw conclusions.
  • Our goal is to build a neural architecture to learn rules that are both lifted and able to handle relational data with multiple arities. We present different modules of our neural operators by making analogies to a set of essential meta-rules in symbolic logic systems. Specifically, we discuss our neural implementation of (1) boolean logic rules, as lifted rules containing boolean operations (AND, OR, NOT) over a set of predicates; and (2) quantifications, which bridge predicates with different arities by logic quantifiers (∀ and ∃).

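To make the quoted design a bit more concrete, here is my rough tensor-level reading of one NLM step (shapes and function names are illustrative, not taken from the authors' implementation): a group of arity-r predicates is a tensor with r object axes plus a channel axis, quantifiers are max/min reductions over an object axis, expansion broadcasts along a new object axis, and a small MLP applied elementwise stands in for the learned boolean connectives.

```python
import numpy as np

# Predicate groups are tensors whose leading axes index objects and whose last
# axis indexes predicates (channels): unary -> (n, C), binary -> (n, n, C).

def reduce_arity(p):
    """Quantify out the last object axis (axis -2): EXISTS via max, FORALL via min."""
    return np.max(p, axis=-2), np.min(p, axis=-2)

def expand_arity(p, n):
    """Introduce a free variable: broadcast along a new object axis."""
    return np.broadcast_to(np.expand_dims(p, -2), p.shape[:-1] + (n, p.shape[-1]))

def mlp(x, w1, w2):
    """Tiny elementwise network standing in for the learned AND/OR/NOT wiring."""
    h = np.maximum(x @ w1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))

n, rng = 4, np.random.default_rng(0)
unary = rng.random((n, 2))          # 2 base properties
binary = rng.random((n, n, 3))      # 3 base relations

# One NLM-style step for the unary group: combine the unary predicates with
# what the binary predicates say once their second argument is quantified out.
exists_r, forall_r = reduce_arity(binary)                     # (n, 3) each
unary_in = np.concatenate([unary, exists_r, forall_r], -1)    # (n, 8)
w1, w2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
new_unary = mlp(unary_in, w1, w2)                             # (n, 4) derived predicates

# The binary group would symmetrically take expand_arity(unary, n) as extra input.
print(new_unary.shape, expand_arity(unary, n).shape)          # (4, 4) (4, 4, 2)
```

Stacking several such steps lets conclusions at one arity feed premises at another, which is the "wiring among neural modules" the paper refers to.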
There is an interesting comparison with Memory Networks (2015) in the paper, but it's not clear whether all of the 2015 improvements over the original 2014 work were employed in the comparison (I assume so, given the reference to the former). Memory Networks are another step in the right direction.

  • MemNNs (Weston et al., 2014) are a recently proposed class of models that have been shown to perform well at QA. They work by a “controller” neural network performing inference over the stored memories that consist of the previous statements in the story. The original proposed model performs 2 hops of inference

Such memory-augmented networks are broadly applicable, and many clever designs are likely to emerge. For example, see Compositional Attention Networks for Machine Reasoning.
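
To illustrate the shared mechanism behind these memory-augmented designs, here is a stripped-down sketch of multi-hop attention over stored memories, in the spirit of the two-hop inference quoted above (dimensions and names are illustrative, not any particular paper's architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_read(memories, query, hops=2):
    """Repeatedly attend over memory vectors and fold the read back into the query."""
    state = query
    for _ in range(hops):
        weights = softmax(memories @ state)   # relevance of each stored statement
        read = weights @ memories             # weighted sum of memories
        state = state + read                  # controller update between hops
    return state

d, num_statements = 8, 5
rng = np.random.default_rng(0)
memories = rng.normal(size=(num_statements, d))   # embedded story statements
query = rng.normal(size=d)                        # embedded question

answer_state = multi_hop_read(memories, query, hops=2)
print(answer_state.shape)   # (8,) -- would be fed to an output layer to pick the answer
```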