Neuro-Symbolic AI or can we create an AI that is good at (almost) everything?

Neuro-symbolic AI is a strand of AI research that has been around for some time but that has recently received increasing interest. It tackles interesting challenges in AI, like attempting to learn with less data, to transfer knowledge to new tasks and to create interpretable models. Never heard of it? Well, it’s quite a niche field, so don’t worry. The term “neuro-symbolic AI” might sound mysterious and decidedly abstract. Let’s start to understand the “neuro” and the “symbolic” through our own way of thinking.

At the heart of neuro-symbolic AI is the question of how to get from sensory experience (“neuro”) to abstract thinking (“symbolic”). “Dogs aren’t cats” is a statement that we all understand and (likely) agree with. This statement might sound trivial, but if you think more about it, a few things become striking:

  • How do we learn to make the mapping between the dogs we see in the real world and the word “dog”?

What’s more, when we make a general statement like “Dogs aren’t cats”, we don’t refer to any particular dog or cat and we completely abstract away from all the specificities of dogs like their color, size, fur texture etc. What matters are the defining features that dogs share and which make them different from cats, e.g. their distinct skull shapes. This is called a concept, and we use a symbol like the word “dog” to refer to the concept. Having symbols is a powerful tool since it allows for abstract thinking. Together with a grammar that defines how things relate to one another, e.g. “A is not B” or “A is part of B”, we can make infinite statements about our world and draw connections between things that aren’t connected in the physical world, for instance through analogies. It also allows us to make inferences. This means that we are able to draw conclusions: “All dogs are mammals” and “All mammals are cute” allow us to conclude “All dogs are cute” (as you can tell, I love furry animals).
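To make this idea of inference over symbols concrete, here is a tiny, purely illustrative Python sketch. The “is_a” predicate and the facts are made up for this example; the code simply chains statements together mechanically, just like the syllogism above.

```python
# Illustrative only: facts are (subject, relation, object) triples over symbols.
facts = {
    ("dog", "is_a", "mammal"),
    ("mammal", "is_a", "cute_thing"),
}

def forward_chain(facts):
    """Repeatedly apply the transitivity of "is_a" until nothing new can be derived."""
    derived = set(facts)
    while True:
        new = {
            (a, "is_a", c)
            for (a, r1, b) in derived if r1 == "is_a"
            for (b2, r2, c) in derived if r2 == "is_a" and b2 == b
        }
        if new <= derived:      # no new statements: stop
            return derived
        derived |= new

# "All dogs are mammals" + "All mammals are cute" lets us conclude "All dogs are cute".
print(("dog", "is_a", "cute_thing") in forward_chain(facts))   # True
```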

Cognitive Scientists still don’t really understand how the repeated reflection pattern on our retina produced by seeing dogs leads to the abstract concept “dog” that we can reason about in our language. Or, phrasing the problem the other way around: what is the neural correlate (e.g. a neuron, a group of neurons, a pattern of activity) of the concept “dog”? This is known as the neuron-cognition gap and is, in my opinion, one of the most exciting frontiers in Cognitive Science. But what is clear: we humans are all able to do this.

How about artificial intelligence? When it comes to “seeing” and recognizing objects in the world, AI research has made impressive leaps in the past years. In Computer Vision, researchers have created systems that distinguish areas with different meanings based on visual differences (semantic segmentation). But even if the AI “sees” the difference between a lawn and a dog, does it have a concept of these things? It is safe to say that the AI’s concept doesn’t have the richness of our concepts of real-world things. Through our senses, we combine different modalities and create multi-modal concepts. What does the dog look like? How does it feel to touch the fur? How does it sound? How does it move around and interact with us? Some dog fans probably also have encyclopedic knowledge about dog breeds and other useful facts like life expectancy, temperament of certain breeds and common diseases. The rich multi-modal sensory experiences and the factual knowledge make our concepts multifaceted to an extent that is not yet achieved within AI.

If we now assume that the AI has acquired a concept of “dog” by seeing a lot of images of dogs: how can it use its knowledge of dogs to form statements like “Dogs aren’t cats” based on its experience of cat and dog images? In other words: how is data translated into symbols? This is essentially the key question of neuro-symbolic AI.

Computer Vision models can distinguish between dogs and the background based on visual dissimilarity. Source: https://iq.opengenus.org/panoptic-segmentation/
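To give a feeling for what such a segmentation system looks like from the outside, here is a rough sketch using a pretrained model from torchvision. The model choice (DeepLabV3), the preprocessing and the input file name are my own assumptions for illustration, not the system behind the figure above.

```python
# Rough sketch: off-the-shelf semantic segmentation with torchvision (assumed setup).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("dog_on_lawn.jpg").convert("RGB")    # hypothetical input image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    scores = model(batch)["out"][0]     # shape: (num_classes, H, W)

# Every pixel is assigned the class with the highest score, e.g. "dog" vs. background.
label_map = scores.argmax(dim=0)        # shape: (H, W), integer class ids
print(label_map.unique())
```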

In computer programming we use logic to express things like “if DOG then MAMMAL”. The concepts that a Machine Learning model learns, however, come in a different form. They are real-valued vectors with multiple dimensions that correspond, for instance, to the pixel values of an image. They cannot easily be “plugged” into a logical formula. Researchers have experimented with different ways of grounding symbols in data, and geometry plays a key role. The vectors learned by the Machine Learning model can be represented in a coordinate system. When we map every vector representation of a dog into that coordinate system, a shape emerges that corresponds to the concept “dog”. The same is true for a higher-level concept like “mammal”, which would correspond to a bigger space that includes “dogs”. We clearly see a dog-is-part-of-mammals relationship, which can then be expressed as the logical statement “All dogs are mammals”. Having symbols coupled to these vector representations means that whenever the concept of “dog” slightly shifts through learning more about dogs, the meaning of the symbol changes with it.

The vector representation of a dog image can be mapped into a coordinate system. Repeating the same for multiple data points, we see shapes emerge that correspond to concepts. The geometric shapes of the concepts “dogs” and “mammals” are semantically interpretable as part-of relationships. These can then be expressed in logical terms.
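Here is an illustrative sketch of this geometric reading of concepts, with made-up 2-D “embeddings”: each concept is represented by the region its examples span (a simple axis-aligned box here), and region containment is read off as a logical statement.

```python
import numpy as np

def concept_region(embeddings):
    """Smallest axis-aligned box that contains all example embeddings of a concept."""
    e = np.asarray(embeddings, dtype=float)
    return e.min(axis=0), e.max(axis=0)

def is_part_of(region_a, region_b):
    """True if region A lies entirely inside region B."""
    (a_min, a_max), (b_min, b_max) = region_a, region_b
    return bool(np.all(a_min >= b_min) and np.all(a_max <= b_max))

# Toy 2-D "embeddings": dog images plus some other mammal images.
dog_vectors    = [[0.20, 0.30], [0.25, 0.35], [0.30, 0.28]]
mammal_vectors = dog_vectors + [[0.10, 0.10], [0.60, 0.70]]

dogs    = concept_region(dog_vectors)
mammals = concept_region(mammal_vectors)

if is_part_of(dogs, mammals):
    print("All dogs are mammals")   # the geometric part-of relation, expressed as logic
```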

To sum it up: neuro-symbolic AI tries to connect learning from sensory experience with reasoning over abstract symbols. There is, however, no single way to achieve this. Let’s take a look at some challenges that researchers in the field tackle.

With the rapid advances of Large Language Models (LLMs), one could easily get the impression that models which have great language abilities also have great reasoning abilities. This, however, is not necessarily the case. Joshua Tenenbaum and his colleagues call this the “good at language -> good at thought” fallacy. We confound language abilities with good thinking because much of our thinking is consciously experienced in linguistic form. It gets even trickier: even if a model performs well on a reasoning task, it doesn’t mean that it has learned to reason. Let me explain this rather confusing finding. Guy Van den Broeck and his colleagues did some interesting experiments training a BERT model on a simple logical reasoning task (forward chaining). They let several “BERTs” learn the same logical reasoning task, and all of them individually performed very well. Given the high performance, one could assume that BERT has learned the reasoning problem. But what they found was that a BERT model trained on one distribution fails to generalize to other distributions within the same problem space. This means that BERT was not able to generalize what it had learned, which is always a bad sign in Machine Learning (and in life). But if it didn’t learn logical rules, then what did it actually learn? It learned statistical features, because that is what BERT and most other ML models do. This finding can serve as a warning: if we see that a model produces the expected output, we often fall into the trap of believing that it “thinks” just like us. It is usually only when we change the context slightly that we realize the fundamental differences between how ML models “think” and how we do. And it seems that LLMs are not well equipped to solve all kinds of reasoning tasks.
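To make this kind of evaluation more tangible, here is a simplified sketch of how one might build such a test: train on forward-chaining problems from one distribution (shallow proof depths) and evaluate on another (deeper proofs). The generator below is my own toy construction, not the exact benchmark used in the experiments.

```python
import random

def make_example(depth, n_symbols=12):
    """One forward-chaining instance: a starting fact, a chain of "if A then B"
    rules, and a query whose answer requires `depth` rule applications."""
    symbols = [f"p{i}" for i in range(n_symbols)]
    chain = random.sample(symbols, depth + 1)
    rules = [(chain[i], chain[i + 1]) for i in range(depth)]
    facts = {chain[0]}
    if random.random() < 0.5:                 # half of the queries are not derivable
        query, answer = random.choice([s for s in symbols if s not in chain]), False
    else:
        query, answer = chain[-1], True
    return {"facts": facts, "rules": rules, "query": query, "answer": answer}

# Train and test come from different distributions of the same problem space.
train_set = [make_example(depth=random.randint(1, 3)) for _ in range(1000)]
test_set  = [make_example(depth=random.randint(4, 6)) for _ in range(200)]
# A model that truly learned forward chaining should do equally well on both sets;
# a model that picked up statistical shortcuts of shallow proofs will not.
```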

We shouldn’t throw the baby out with the bathwater though, because LLMs can, with a little bit of help, become better at commonsense reasoning. We use our common sense all the time to fill in information gaps, for instance when we hear these sentences: “It’s going to snow. I’ll have to get up 30 minutes earlier.” Through our experience and understanding of context, we can conclude that we need more time in the morning because we have to free the car from snow before leaving for work. Antoine Bosselut and his colleagues thought that enhancing LLMs by giving them additional structured knowledge in the form of knowledge graphs might improve their ability to “fill in the gaps” with common sense. While LLMs already encode a vast amount of knowledge from text corpora, they were able to teach the model the structure of knowledge in the way they wanted it to represent that knowledge. They then analyzed which parameters of the model changed during fine-tuning, meaning at which points in the model learning took place. They found that most of the parameter changes happened in the decoder, and specifically in the attention heads where different representations get mixed, while the encoder and feedforward layers changed little. This suggests that the transformer model learned how to express and access previously learned information rather than learning many new relationships from the knowledge graph itself. Combining LLMs with knowledge graphs may thus provide a factual grounding of knowledge and more stable and interpretable concepts, both of which current LLMs lack.
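As a hedged sketch of the general recipe (not the authors’ exact method or data): one can verbalize knowledge-graph triples and fine-tune a pretrained language model to complete them. The model choice, the toy triples and the training details below are assumptions for illustration only.

```python
# Assumed setup: fine-tune a small seq2seq model on verbalized knowledge-graph triples.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Toy commonsense triples: (head event, relation, tail). Purely illustrative.
triples = [
    ("it is going to snow", "as a result, a person needs to",
     "get up earlier to clear the car"),
    ("a person drinks coffee", "because they want to", "stay awake"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for head, relation, tail in triples:
    inputs = tokenizer(f"{head} {relation}", return_tensors="pt")
    labels = tokenizer(tail, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss   # learn to complete the triple
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```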

At this point you may wonder: what is all this neuro-symbolic AI research good for outside of the research cosmos? I was surprised to hear that some people already turn research into products. The company Elemental Cognition builds natural language understanding solutions using neuro-symbolic AI. By “understanding”, Director of AI Research Adi Kalyanpur means the ability to fluently engage with, simplify, inform and ensure understanding. They have developed different neuro-symbolic AI models that at their basis all process input through neural networks that produce probabilistic outputs. These outputs are converted into a symbolic model which performs logical reasoning. The output of the reasoning step is then passed back to an ML model to generate natural language output. One of their most interesting use cases is a virtual travel agent which helps customers book a world trip. The problem is quite complex when you think about it: there are hundreds of possible destinations that the agent should consider, the duration of the trip can vary from a week to up to a year, and there are millions of possible flight, layover and schedule combinations. What’s more, flight availability changes constantly, and customers’ preferences for a destination or schedule might change during the conversation as they better understand what the implications of each decision are. Having a neuro-symbolic system in place helps to interact dynamically by processing natural language input and producing language output while transforming natural language instructions into rules that can be processed by a reasoning engine. This way, Elemental Cognition has managed to build a flexible conversational agent that relies on fact-based knowledge.
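To illustrate the loop of neural input processing, symbolic reasoning and neural output generation described here, this is a purely hypothetical toy sketch applied to the travel-agent example; the function names and the flight data are stand-ins, not Elemental Cognition’s actual implementation.

```python
def neural_parse(utterance):
    """Stand-in for an ML model mapping text to candidate constraints with probabilities."""
    return [({"max_layovers": 1}, 0.9), ({"max_layovers": 2}, 0.1)]

def symbolic_reason(constraints, flights):
    """Stand-in for a reasoning engine applying hard rules to a flight database."""
    return [f for f in flights if f["layovers"] <= constraints["max_layovers"]]

def neural_generate(options):
    """Stand-in for an ML model turning the reasoner's output back into language."""
    return f"I found {len(options)} itineraries that match your preferences."

# Toy data and one pass through the neural -> symbolic -> neural loop.
flights = [{"id": "A", "layovers": 0}, {"id": "B", "layovers": 3}]
constraints, _ = max(neural_parse("I'd rather not have many layovers"),
                     key=lambda candidate: candidate[1])
print(neural_generate(symbolic_reason(constraints, flights)))
```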

Is neuro-symbolic AI the right way to solve the hardest problems in AI? I genuinely don’t know; as of now it is mostly applied to toy problems and is quite domain-specific. Wherever this research endeavor leads, for me the most important thing is that researchers in this field are asking the important questions and are thinking outside the box. It is quite refreshing to see critical minds amidst the hype around large language models and the “bigger is better” mentality. Let’s see what becomes the next big thing in AI, but I think structure and learning will play an important role.
