NeSy Structure Learning
Title: NeSy Structure Learning
DNr: NAISS 2024/22-1221
Project Type: NAISS Small Compute
Principal Investigator: Matthias Möller <matthias.moeller@oru.se>
Affiliation: Örebro universitet
Duration: 2024-09-20 – 2025-10-01
Classification: 10201
Keywords:

Abstract

The basic idea is to explore neurosymbolic (NeSy) AI and structure learning. In my specific case, NeSy refers to training neural networks (NNs) with structures that are derived from symbolic logic and are therefore human-understandable. Research has shown that when NNs are trained with structures derived from symbolic knowledge (Darwiche & Marquis, 2002), they learn to extract features from vectorized data (e.g., images) that support the correct evaluation of the logic. This can be seen as a form of distant supervision in which the NN autonomously detects the features that are relevant for evaluating the logic program (Manhaeve et al., 2021); a minimal illustrative sketch of this training setup is given after the references. An open question, however, is whether we can learn these structures and train the NNs simultaneously. The result would be a structure that is understandable by humans, while the NNs learn hidden meaning from the vectorized data. This work could contribute the following:

- The generated structures can be inspected by humans to gain an understanding of hidden relationships and features.
- The trained NNs learn something; when humans can identify what was learned, this can be used to formulate new hypotheses.
- The resulting NeSy programs are modular systems composed of different NNs and sub-structures (logic rules). Every component can be replaced, optimized, or removed, in contrast to most machine learning approaches.
- Humans can directly inject deterministic knowledge into the system. They can thus immediately affect the output of the model and more easily address issues related to bias, safety, and so on.
- One hope expressed in the literature is that NeSy compensates for the shortcomings of both machine learning (ML) and machine reasoning (MR), combining the best of both worlds (De Raedt et al., 2020): 1) a lower need for data and more interpretability than ML; 2) more robustness and broader applicability than MR.

References

Darwiche, A., & Marquis, P. (2002). A Knowledge Compilation Map. Journal of Artificial Intelligence Research, 17, 229–264. https://doi.org/10.1613/jair.989

De Raedt, L., Dumančić, S., Manhaeve, R., & Marra, G. (2020). From Statistical Relational to Neuro-Symbolic Artificial Intelligence (arXiv:2003.08316). arXiv. https://doi.org/10.48550/arXiv.2003.08316

Manhaeve, R., Marra, G., & De Raedt, L. (2021). Approximate Inference for Neural Probabilistic Logic Programming. 18th International Conference on Principles of Knowledge Representation and Reasoning (KR 2021), 475–486.
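To make the distant-supervision idea concrete, the following is a minimal, hypothetical sketch in the spirit of the MNIST-addition example used in the DeepProbLog line of work (cf. Manhaeve et al., 2021); it is not the project's code, and all data, names, and network sizes are illustrative. A small CNN (the "neural predicate") predicts a digit distribution for each of two images, a logic rule states that the observed label is the sum of the two digits, and the loss is the negative log-probability of that label obtained by marginalizing over all digit pairs, so gradients flow through the rule into the network.

```python
import torch
import torch.nn as nn

# Synthetic stand-ins for the data: two batches of "digit images" (random
# tensors instead of real MNIST) and the distant label "sum of the two digits".
batch, n_classes = 32, 10
img_a = torch.randn(batch, 1, 28, 28)
img_b = torch.randn(batch, 1, 28, 28)
sum_label = torch.randint(0, 2 * n_classes - 1, (batch,))  # observed sums 0..18

# A small CNN acting as the neural predicate digit(Image, D): it maps an
# image to a probability distribution over the 10 digit classes.
digit_net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 14 * 14, n_classes), nn.Softmax(dim=1),
)

def sum_probability(p_a, p_b):
    """Probabilistic evaluation of the rule
    addition(A, B, S) :- digit(A, DA), digit(B, DB), S is DA + DB,
    marginalizing over every pair of digit values."""
    # Joint distribution P(DA = i, DB = j), shape (batch, 10, 10).
    joint = p_a.unsqueeze(2) * p_b.unsqueeze(1)
    # P(S = s) is the sum over all digit pairs whose values add up to s.
    return torch.stack(
        [sum(joint[:, i, s - i]
             for i in range(max(0, s - (n_classes - 1)), min(n_classes - 1, s) + 1))
         for s in range(2 * n_classes - 1)],
        dim=1,
    )  # shape (batch, 19)

optimizer = torch.optim.Adam(digit_net.parameters(), lr=1e-3)
optimizer.zero_grad()
p_sum = sum_probability(digit_net(img_a), digit_net(img_b))
# Negative log-likelihood of the observed sums: the only supervision signal.
loss = -torch.log(p_sum.gather(1, sum_label.unsqueeze(1)) + 1e-8).mean()
loss.backward()   # gradients flow through the logic rule into the CNN
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

Note that the CNN never sees digit labels; it is supervised only through the logic rule, which is the distant-supervision effect described in the abstract.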