Zarqa is the next generation of Large Language Models, infused with neural-symbolic techniques for smarter and more reliable AI
Greetings Singularitarians,
Large Language Models (LLMs) have exploded in popularity across many domains, including communication, customer support, product development, and creative work, with 25 million users a day logging into ChatGPT alone.
Significant progress has also been made in the areas of human speech and image generation.
Breakthroughs in music generation, generative trading algorithms, and other modalities and applications that may have an unprecedented impact on the economy, society, and many aspects of people's lives are undoubtedly forthcoming in the near term.
For some, this rise in generative AI may well be alarming, due to fears of unethical use of artificial intelligence and the specter of mass job losses. For others, there is a compelling sense of optimism, enthusiasm, and a well-justified hope that advanced technologies will change people's lives for the better and bring positive change at an enormous scale; and the enthusiasts greatly outnumber the rest.
Nonetheless, the most interesting and intriguing thing to me is that we are currently seeing only the tip of the iceberg: literally the very first significant results of deploying large-scale AI models. Their architectural design is simplified, their cognitive abilities are limited, their knowledge is fixed and cannot be dynamically updated, and their generative behavior is imitative, always reusing and recombining only the past experience created by humankind. At the same time, nothing today prevents us from radically improving this approach: from creating and implementing far more advanced AI architectures that not only bypass the limitations of existing approaches, but literally teleport us to a new space of possibilities, where all of the mentioned restrictions are invalid, skeptics' fears are irrelevant, and enthusiasts' expectations are even surpassed.
Returning to the subject of large language models, I should note that the deep tech teams at SingularityNET have been working on this R&D thread from the very beginning: since 2017, and since 2014 as part of a small research group of enthusiasts.
I remember Yoshua Bengio's excellent first paper on probabilistic language models, which I always kept on my desk. I remember the first Shakespearean sonnets generated by Andrej Karpathy using simple recurrent networks; it was breathtaking.
I remember the first discussions among researchers about what strong AI means in practice, and whether language-modeling techniques might be the shortest path to achieving strong human-level AI. Opinions differed in those days, but we had conviction in our approach and stubbornly pursued this direction, putting all our strength and inspiration into it.
Our enthusiastic research team used various kinds of DNN models back when self-attention networks did not exist, i.e. before they were invented by the group led by Ashish Vaswani. We used everything: recurrent networks, Temporal Convolutional Networks, François Chollet's separable convolutional networks, and architectures of our own design, applying every method possible because the direction was so promising.
SingularityNET had been working on extracting grammatical structures, integrating them with symbolic representations, utilizing masked language models, and establishing a generative approach, consistently scaling up and investing all available resources. Then, one day in 2018, the first GPT model was released by a then-tiny R&D group in California. Following this, Sergey Edunov's team discovered the possibility of employing large-scale data parallelism in machine translation systems. At SingularityNET we kept improving our DNN-based models and built our first GPU cluster, beginning to train elegant models at massive scale, mastering European languages, Korean, Arabic, Amharic, and many more.
It became obvious that we needed more and more data. We collected it ourselves, deduplicating and filtering huge data sets scraped from the web, following best practices and implementing our own advanced solutions. Already in those years of the first language models, it was obvious that training with 3D parallelism and improved optimization was required, that we needed actor-critic frameworks, and that, crucially, we needed human feedback and additional model training with reinforcement. We clearly saw that the curse of multilinguality could be overcome, that multitasking and multimodality would work at scale, and that directly programming models with natural-language commands would work (aka prompting; there was no common terminology in those days, but it was clear).
We knew which architectural techniques and tricks to apply, but due to the enormous compute power required for training large models, we lagged only a step behind the tech giants and affiliated R&D labs, even though it was always obvious to us what the next step, or several steps, would be. So we never stopped.
Now is the time to harness the advances of symbolic approaches: systems developed over decades through our persistent efforts to build a technological stack for Artificial General Intelligence (AGI) and bring it to life.
Combining these methods with all the many possibilities of large-scale language models (LLMs) in a complex and elegant neural-symbolic architecture will open up exponential new possibilities. It is time to be the first and lead the world in the AI revolution.
We are proud to introduce Zarqa, a new venture from SingularityNET mobilizing our engineering expertise in scaled neural-symbolic AI to create a pioneering, cutting-edge next generation of LLMs, characterized by technical initiative and unwavering leadership.
The LLM space is moving ahead at tremendous speed, and with Zarqa we will be focused on near-term delivery of LLM technology that equals and exceeds the tools on the market today, leveraging the strengths of the SingularityNET ecosystem's decentralized infrastructure. We are building our solution on a sophisticated computing architecture: a modular system specifically designed for large-scale training of big discriminative and generative LLMs, for processing a large-scale knowledge metagraph and producing accelerated symbolic computations under high load, and for processing massive data and monitoring vital information sources in real time. All of these capabilities are initially combined into a single computing system designed to solve the real task of training and running neural-symbolic AI, while remaining suitable for gradual scaling as the power of neural-symbolic intelligence grows on the way to the practical achievement of AGI.
Deep integration of LLMs with knowledge graphs will be an early step, and will help ground textual productions in reality, something current LLMs so egregiously lack. Following that, integration with progressively more sophisticated knowledge graphs and associated reasoning, learning, and concept-creation methods will be pursued via integration of the OpenCog Hyperon toolkit and the TrueAGI data-integration pipeline. LLMs are poised to play a key part in the transition from today's amazing yet limited AIs toward the more powerful AGI systems of the future, and Zarqa is poised to lead in this area, first rolling out state-of-the-art-defining LLMs and then progressively expanding them via integration of ideas and systems from additional AI paradigms and solutions under development in the SingularityNET ecosystem.
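To make the grounding idea concrete, here is a minimal illustrative sketch, not Zarqa's actual pipeline: a toy knowledge graph of (subject, relation, object) triples, a naive retrieval step, and a prompt builder that prepends retrieved facts as verified context for an LLM. All names (`KG`, `retrieve_facts`, `grounded_prompt`) are hypothetical.

```python
# A toy knowledge graph as (subject, relation, object) triples.
KG = [
    ("SingularityNET", "founded_in", "2017"),
    ("OpenCog Hyperon", "is_a", "AGI toolkit"),
    ("Zarqa", "developed_by", "SingularityNET"),
]

def retrieve_facts(query: str, kg=KG):
    """Return triples whose subject appears in the query (naive entity match)."""
    q = query.lower()
    return [t for t in kg if t[0].lower() in q]

def grounded_prompt(query: str) -> str:
    """Prepend retrieved facts as context the model must stay consistent with."""
    facts = retrieve_facts(query)
    context = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return (f"Verified facts:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the facts above.")

print(grounded_prompt("Who is Zarqa developed by?"))
```

A real system would replace the string match with entity linking and graph queries, but the principle is the same: generation is conditioned on facts the system can verify, rather than on the model's parameters alone.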
Zarqa supports a dynamically updated model of the world, giving rise to critical machine thinking, as well as a model of the AI's own personality that can evolve, support the process of self-reflection, and follow a fundamental moral narrative and ethical codex, bringing a degree of predictability, interpretability, and safety that was unattainable for previous generations of AI.
We apply techniques and methods that allow the AI to use long-term memory of events, interlocutors, communications, and its own individualized features, plus an episodic memory containing a multimodal contextualized representation of events, significantly expanding the cognitive capabilities of these novel AI systems and allowing them to shape their own unique lifecycle. We also empower the AI with advanced perceptual mechanisms such as multimodal person identification and emotion recognition.
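The episodic-memory idea can be sketched as follows. This is a hypothetical design for illustration only, not Zarqa's implementation: each episode is a timestamped, multimodal event record, and recall retrieves the most recent episodes involving a given interlocutor.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    timestamp: float   # when the event occurred
    speaker: str       # which interlocutor was involved
    modality: str      # e.g. "text", "audio", "vision"
    content: str       # contextualized description of the event

class EpisodicMemory:
    def __init__(self):
        self._episodes: list[Episode] = []

    def record(self, ep: Episode) -> None:
        self._episodes.append(ep)

    def recall(self, speaker: str, limit: int = 3) -> list[Episode]:
        """Most recent episodes involving a given interlocutor."""
        hits = [e for e in self._episodes if e.speaker == speaker]
        return sorted(hits, key=lambda e: e.timestamp, reverse=True)[:limit]

mem = EpisodicMemory()
mem.record(Episode(1.0, "alice", "text", "asked about AGI roadmap"))
mem.record(Episode(2.0, "bob", "audio", "greeted the agent"))
mem.record(Episode(3.0, "alice", "text", "followed up on Hyperon"))

for e in mem.recall("alice"):
    print(e.timestamp, e.content)
```

Recalled episodes can then be injected into the model's context, which is what lets a conversational system stay consistent with its own history across sessions.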
AI is not the whole story. Zarqa also harnesses SingularityNET's expertise in blockchain systems and is built on SingularityNET's unique AI-driven smart-contract ecosystem for managing resources and testing AI models. This will facilitate mass-scale Human In The Loop (HITL) training of models for ever-increasing performance and suitability. Any organization, enthusiastic professional, or user wanting to engage in model training and testing can participate. This will furnish Zarqa with a wide range of ethical norms and inputs, allowing for broad and inclusive collaboration across a global user base while emphasizing safety and accuracy.
The potential of Zarqa is unparalleled: providing zetta-scale intelligence for disruptive impact, tearing down barriers to entry, and offering easy and open access to technology-generated abundance for all of humanity, not just an elite few, thereby reducing exclusivity and oligopoly in the AI sector and democratizing this transformative technology.
Zarqa was born to transform the AI landscape and pave the way for Artificial General Intelligence. It is an honor for me to step up for SingularityNET as Co-CEO of this incredible initiative, alongside my Co-CEO Janet Adams, with of course the mighty Dr. Ben Goertzel overseeing science and Dr. Alexey Potapov leading our neural-symbolic engineering. Thank you very much for reading and following our progress updates as we mobilize Zarqa at speed.