Hello everyone! 👋
Over the past few months, we've been working on the ARC benchmark as part of our Neuro-Symbolic AI project, and we've been able to achieve some promising results. It feels incredible to see our approach, combining symbolic reasoning with neural network capabilities, making meaningful progress.
Nucleoid (aka nuc) adopts a Neuro-Symbolic AI architecture but introduces a novel twist: an intermediate language as a universal bridge between Neural Networks and Symbolic Systems.
The intermediate language plays a critical role in uniting the two paradigms. Based on our findings, nuc lang helps Neural Networks abstract patterns, which are eventually used in the Symbolic System, and the Knowledge Graph is built from the logic and data representations written in the intermediate language. In addition, LLMs surprisingly behave in a near-deterministic way while running on ARC-AGI.
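To make that concrete, here is a minimal, hypothetical sketch in TypeScript (our own illustration, not the actual nuc lang syntax or the Nucleoid API) of the general idea: short declarative statements, the kind a neural model can abstract from a pattern, are parsed into a small knowledge graph of data and logic nodes, and the symbolic side evaluates them deterministically.

```typescript
// Hypothetical sketch: building a tiny knowledge graph from
// intermediate-language statements (not the actual nuc lang spec).
type Node = { name: string; expr: string; deps: string[] };

// A toy "intermediate language": assignments like "b = a + 2".
// Each statement becomes a node; referenced variables become edges.
function buildGraph(statements: string[]): Map<string, Node> {
  const graph = new Map<string, Node>();
  for (const stmt of statements) {
    const [lhs, rhs] = stmt.split("=").map((s) => s.trim());
    const deps = Array.from(rhs.matchAll(/[a-zA-Z_]\w*/g))
      .map((m) => m[0])
      .filter((id) => graph.has(id));
    graph.set(lhs, { name: lhs, expr: rhs, deps });
  }
  return graph;
}

// Symbolic side: evaluate a node by resolving its dependency edges first.
function evaluate(graph: Map<string, Node>, name: string): number {
  const node = graph.get(name);
  if (!node) throw new Error(`unknown symbol: ${name}`);
  let expr = node.expr;
  for (const dep of node.deps) {
    expr = expr.replaceAll(dep, String(evaluate(graph, dep)));
  }
  return Function(`"use strict"; return (${expr});`)() as number;
}

// Statements a neural model might emit after abstracting a pattern:
const graph = buildGraph(["a = 1", "b = a + 2", "c = a * b"]);
console.log(evaluate(graph, "c")); // 3
```

The point is the division of labor: the neural side only has to produce short declarative statements, while the symbolic side owns the graph and the deterministic evaluation.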
Before diving into our approach:
What is the ARC Benchmark?
The Abstraction and Reasoning Corpus (ARC) is a benchmark dataset and challenge designed to test AGI systems on their ability to perform human-like reasoning and abstraction. Developed by François Chollet, ARC is not a typical machine learning dataset—it intentionally avoids tasks solvable by brute-force statistical techniques or large-scale data training.
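For context, each ARC task is a small JSON file with a handful of train demonstration pairs and one or more test pairs, where every grid is a 2-D array of integers 0-9 that map to colors. A rough TypeScript sketch of the shape (the file path is just an example):

```typescript
import { readFileSync } from "fs";

// Shape of a single ARC task file: the public dataset uses this
// train/test structure with input/output grids of integers 0-9.
type Grid = number[][];

interface ArcTask {
  train: { input: Grid; output: Grid }[]; // few-shot demonstration pairs
  test: { input: Grid; output: Grid }[];  // pairs the solver must predict
}

const task: ArcTask = JSON.parse(
  readFileSync("data/training/007bbfb7.json", "utf8") // example path
);
console.log(task.train.length, "training pairs,", task.test.length, "test pair(s)");
```

A solver sees only a few demonstrations per task, which is exactly why brute-force statistical fitting is not enough and abstraction is required.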
Our Progress 🐋
We were able to get very promising and exciting numbers (still incomplete, though). For example, in this puzzle, our project responded with this result without any prompt engineering.
More details here 👇
https://github.com/NucleoidAI/Nucleoid/tree/main/arc
...and this is ChatGPT o1's answer
🌱 What is Neuro-Symbolic AI?
Neuro-Symbolic AI combines the pattern-recognition capabilities of neural networks (subsymbolic AI) with the logical reasoning and structured knowledge of symbolic AI to create robust and versatile systems. Neural networks excel at learning from unstructured data, like images or text, while symbolic AI handles explicit rules and reasoning, offering transparency and precision. By integrating these approaches, Neuro-Symbolic AI enables generalization from smaller datasets, improves explainability, and supports tasks requiring both adaptability and logical consistency. This hybrid approach is pivotal for advancing AGI, as it bridges the gap between learning from data and reasoning through logic.
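As a toy illustration of that split (our own sketch, not Nucleoid's pipeline), the hybrid loop can be as simple as: a learned component proposes candidate transformations, and a symbolic component only accepts a candidate that is consistent with every demonstration.

```typescript
// Toy neuro-symbolic loop (illustrative only): a "neural" proposer suggests
// candidate grid transformations, and a symbolic verifier accepts only the
// candidates that reproduce every training pair exactly.
type Grid = number[][];
type Transform = (g: Grid) => Grid;

// Stand-in for the neural side: in a real system these candidates would come
// from a model; here they are hard-coded hypotheses.
const candidates: { name: string; fn: Transform }[] = [
  { name: "identity", fn: (g) => g.map((row) => [...row]) },
  { name: "flip-horizontal", fn: (g) => g.map((row) => [...row].reverse()) },
  { name: "transpose", fn: (g) => g[0].map((_, c) => g.map((row) => row[c])) },
];

const equal = (a: Grid, b: Grid) => JSON.stringify(a) === JSON.stringify(b);

// Symbolic side: deliberate, exhaustive checking against all demonstrations.
function verify(train: { input: Grid; output: Grid }[]): Transform | undefined {
  return candidates.find((c) => train.every((p) => equal(c.fn(p.input), p.output)))?.fn;
}

const train = [{ input: [[1, 2], [3, 4]], output: [[2, 1], [4, 3]] }];
const solver = verify(train); // picks flip-horizontal
if (solver) console.log(solver([[5, 6], [7, 8]])); // [[6,5],[8,7]]
```

Generalization comes from the proposer, while logical consistency (and explainability) comes from the verifier.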
🌍 System 1 and System 2
Neuro-Symbolic AI aligns intriguingly with the concepts from Daniel Kahneman’s Thinking, Fast and Slow, which describes two systems of human thought: System 1 (fast, intuitive, and automatic) and System 2 (slow, deliberate, and logical).
- Neural Networks in Neuro-Symbolic AI parallel System 1, as they excel at processing unstructured data, recognizing patterns, and generating outputs rapidly without explicit reasoning. They mimic intuitive, subconscious processes that are data-driven and reactive.
- Symbolic AI, on the other hand, mirrors System 2, as it relies on explicit rules, logic, and structured reasoning to solve problems in a deliberate and explainable manner, akin to conscious, rational thought.
By combining these two paradigms, Neuro-Symbolic AI reflects the dual systems of human cognition, enabling it to tackle problems requiring both fast intuition (pattern recognition) and slow reasoning (logic and planning). This hybrid approach not only enhances AI's adaptability but also brings it closer to human-like intelligence by integrating the strengths of both modes of thought.
🦆 Duck Test
"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck"
Simply, it is System 1 at work. We have seen ducks probably thousands or millions of times throughout our lives, which forms well-defined patterns in our cognition. When we come across something resembling a duck, human cognition doesn't trigger System 2 because identifying the object as a duck is automatic and intuitive. Our brains rely on a mental "duck schema", where even the individual parts of a duck, such as its wings, bill, or webbed feet, are matched to their associated labels in the realm of System 1.
Duck in the Lake
Again, we won't be surprised if we see a duck in a lake, because we have enough labeled patterns to make the call.
Duck in the City
In this case, since our experiences typically associate ducks with natural settings, it is uncommon to see a duck in an urban environment at night. This unfamiliarity triggers System 2 to take over, engaging in more deliberate reasoning: cities can be dangerous after dark, the duck is in the city, therefore the duck may not be safe. System 2 overrides System 1's instinctive identification of "a duck" and shifts the focus to evaluating the broader circumstances of the duck's well-being.
CPU vs GPU
The Neuro-Symbolic AI architecture orchestrates a seamless harmony between the precision of CPUs in reasoning and the raw power of GPUs in pattern recognition.
CPUs are specialized in logical branch operations used in decision-making, and GPUs execute parallel arithmetic operations for matrix multiplications in pattern recognition. So, it is important to understand how they are different.
CPUs are designed for versatility, with fewer cores optimized for high-performance single-threaded tasks and low latency, making them well-suited for complex decision-making, sequential processing, and multitasking. In contrast, GPUs have thousands of smaller, energy-efficient cores optimized for massive parallelism, enabling them to handle tasks like matrix computations, image rendering, and deep learning efficiently. GPUs excel in throughput-oriented tasks where large data sets can be processed simultaneously, while CPUs focus on general-purpose computing and running the operating system. Additionally, CPUs often feature larger caches and more sophisticated control logic to handle diverse workloads, whereas GPUs prioritize raw computational power and bandwidth to accelerate specific workloads. This fundamental difference makes GPUs indispensable for tasks requiring high parallelism, while CPUs remain the backbone of general computing and coordination.
While building a modular AI system, it is crucial to design with the underlying hardware in mind. In advanced systems like AIs, hardware constraints can sometimes conflict with algorithmic requirements. From a Neuro-Symbolic standpoint, the Neural Network (System 1) needs GPUs for pattern recognition, while the Symbolic side (System 2) needs CPUs for knowledge representation and reasoning. It is worth noting that System 1 and System 2 are just the foundation for building AI; we expect more and more systems like them...
In short, it is not practical to do efficient reasoning on a GPU or efficient pattern recognition on a CPU.
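As a rough sketch (ours, not Nucleoid's code), the two kinds of work look structurally different even in plain TypeScript: symbolic rule evaluation is a chain of data-dependent branches, while the neural path reduces to uniform multiply-adds over large arrays.

```typescript
// Illustrative contrast: branch-heavy rule evaluation (CPU-shaped work)
// versus uniform, branch-free arithmetic (GPU-shaped work).
type Fact = Record<string, number | string>;
type Rule = { when: (f: Fact) => boolean; then: (f: Fact) => void };

// CPU-shaped: each step depends on a comparison, so execution keeps
// branching and cannot be batched into one big parallel operation.
function applyRules(fact: Fact, rules: Rule[]): Fact {
  for (const rule of rules) {
    if (rule.when(fact)) rule.then(fact); // branch per rule, per fact
  }
  return fact;
}

// GPU-shaped: a dense matrix-vector product, the same multiply-add applied
// to every element with no branching, ideal for massive parallelism.
function matVec(m: number[][], v: number[]): number[] {
  return m.map((row) => row.reduce((acc, w, j) => acc + w * v[j], 0));
}

const duck = applyRules(
  { looks: "duck", location: "city", hour: 23 },
  [{ when: (f) => f.location === "city" && (f.hour as number) > 20, then: (f) => { f.risk = 1; } }]
);
console.log(duck.risk);                        // 1  (deliberate, rule by rule)
console.log(matVec([[1, 2], [3, 4]], [5, 6])); // [17, 39]  (uniform arithmetic)
```

Neither piece is hard to write in either style, but at scale the branchy piece wants low latency per decision (a CPU), and the arithmetic piece wants raw throughput (a GPU).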
🌿 Stay Tuned for Recap 02 in Road_to_AGI series...
NucleoidAI / Nucleoid
Neuro-Symbolic AI with Knowledge Graph | "True Reasoning" through data and logic 🌿🌱🐋🌍
Reasoning Engine
Declarative (Logic) Runtime Environment: Extensible Data and Logic Representation
Nucleoid is a declarative, logic-based, contextual runtime for Neuro-Symbolic AI. The Nucleoid runtime tracks each statement written in its IPL-inspired declarative syntax and dynamically creates relationships between both logic and data statements in the knowledge graph, to be used in the decision-making and problem-solving process.
- Adaptive Reasoning: Combines symbolic logic with contextual information to analyze relationships and draw conclusions, incorporating new information and adjusting its conclusions accordingly.
- Logic Graph: Specialized knowledge graph that captures relationships between both logic and data statements based on formal logic, facilitating complex deductions and adapting to new information.
- Explainability: The Logic Graph provides a transparent representation of the reasoning process, making it easier to understand how decisions are reached and to identify potential biases.
Echoing the idea of "thinking, fast and slow", an AI system should provide fast, “intuitive” ideas, and the…
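To make the "Adaptive Reasoning" and "Logic Graph" bullets above a bit more tangible, here is a deliberately tiny sketch of the idea in TypeScript. This is our own illustration of declarative, dependency-tracked statements, not the actual Nucleoid API or syntax.

```typescript
// Hypothetical mini "adaptive reasoning" sketch (not the real Nucleoid API):
// declarations are stored, not just executed, so when a fact changes,
// every dependent conclusion is re-derived from the logic graph.
type Value = number;
type Derivation = { deps: string[]; compute: (env: Record<string, Value>) => Value };

const facts: Record<string, Value> = {};
const derivations: Record<string, Derivation> = {};

function assert(name: string, value: Value) {
  facts[name] = value; // new information is incorporated...
}

function declare(name: string, deps: string[], compute: Derivation["compute"]) {
  derivations[name] = { deps, compute }; // ...and the rule itself is remembered
}

function query(name: string): Value {
  if (name in facts) return facts[name];
  const d = derivations[name];
  const env: Record<string, Value> = {};
  for (const dep of d.deps) env[dep] = query(dep); // walk the dependency edges
  return d.compute(env);
}

assert("a", 1);
declare("b", ["a"], (env) => env.a + 2);
console.log(query("b")); // 3

assert("a", 5);          // new information arrives
console.log(query("b")); // 7, the conclusion adjusts automatically
```

The important property is that declarations are retained as part of the graph rather than evaluated once, so new information automatically changes downstream conclusions, which is also what keeps the reasoning explainable.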