Matthew Ikle is the Chief Science Officer at SingularityNET, an organization founded with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence: an ‘AGI’ that does not depend on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.
The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. The core platform and AI teams are further complemented by specialized teams dedicated to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.
Given your extensive experience and role at SingularityNET, how confident are you that we’ll achieve AGI by 2029 or sooner, as predicted by Dr. Ben Goertzel?
I’m going to answer this question in a somewhat roundabout way. 2029 is roughly five years from now. A few years ago (early-to-mid 2010s), I was extremely optimistic about AGI progress. My optimism at the time was founded on the level of detailed thought and convergence of ideas I witnessed in AGI research. While most of the big ideas from that era, I believe, still hold promise, the difficulty, as is often the case, comes from fleshing out the details of such broad-stroke visions.
With that caveat in mind, there is now a plethora of new information from numerous disciplines – neuroscience, mathematics, computer science, psychology, sociology, you name it – that not only provides mechanisms for fleshing out those details, but also conceptually supports the foundations of that earlier work. I’m seeing patterns, across quite divergent fields, that all appear to me to be converging at an accelerating rate toward analogous types of behaviors. In some ways, this convergence reminds me of the period just prior to the release of the first iPhone. To paraphrase Greg Meredith, who is working on our RhoLang infrastructure for safe concurrent processing, the patterns I see today are related to origin stories – how did the first life/cell begin on Earth? How and when did mind form? – and to related questions regarding phase transitions, for example.
For instance, there is quite a bit of recent experimental research that tends to support the ideas underlying a complex dynamical systems viewpoint. EEG patterns of human subjects, for example, display remarkable behavior in alignment with such system dynamics. These results harken back to some much earlier work in consciousness theories. Now there appear to be the beginnings of experimental support for those theoretical ideas.
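For readers unfamiliar with such dynamics, the kind of abrupt qualitative change being referenced can be illustrated with a standard textbook toy model. The sketch below (my own illustration, not anything from SingularityNET’s research) uses the logistic map, whose long-run behavior shifts dramatically as a single control parameter is varied:

```python
# Illustrative toy example only (not SingularityNET code): the logistic map
# x -> r*x*(1-x) is a classic dynamical system whose long-run behavior
# changes abruptly -- a "phase transition" of sorts -- as r varies.

def logistic_trajectory(r, x0=0.5, warmup=500, keep=50):
    """Iterate the logistic map, discard transients, return later states."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    states = []
    for _ in range(keep):
        x = r * x * (1 - x)
        states.append(round(x, 6))
    return states

# r = 2.9: the system settles onto a single fixed point.
print(len(set(logistic_trajectory(2.9))))   # 1
# r = 3.2: the very same rule now oscillates between two values.
print(len(set(logistic_trajectory(3.2))))   # 2
# r = 3.9: the very same rule is chaotic -- many distinct states.
print(len(set(logistic_trajectory(3.9))))
```

Nothing about the update rule changes between runs; only the parameter does, yet the system’s qualitative behavior reorganizes entirely – the flavor of transition the experimental work above probes in far richer systems.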
At SingularityNET, I’m thinking a lot about the self-similar structures that generate such dynamics. This is quite different, I would argue, from what is happening in much of the DNN/GPT community, though there is certainly recognition of these ideas among certain more fundamental researchers. I would point, for instance, to the paper “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness,” released by 19 researchers in August of 2023. The researchers spanned a variety of disciplines, including consciousness studies, AI safety research, brain science, mathematics, computer science, psychology, neuroscience and neuroimaging, and mind and cognition research. What those researchers have in common is larger than a simple quest for the next incremental architectural improvement in DNNs; instead, they are focused on scientifically understanding the big philosophical ideas underpinning human cognition and how to bring them to bear to implement real AGI systems.
What do you see as the biggest technological or philosophical hurdles to achieving AGI within this decade?
Understanding and answering big philosophical and scientific questions, including:
- What is life? We might think the answer is clear, but biological definitions have proven problematic. Are viruses “alive,” for example?
- What is mind?
- What is intelligence?
- How did life emerge from a few base chemicals in specific environmental conditions? How could we replicate this?
- How did the first “mind” emerge? What ingredients and conditions enabled this?
- How do we implement what we learn when investigating the above five questions?
- Is our current technology up to the task of implementing our solutions? If not, what do we need to invent and develop?
- How much time and personnel do we need to implement our solutions?
SingularityNET views neuro-symbolic AI as a promising way to overcome the current limitations of generative AI. Could you explain what neuro-symbolic AI is and how SingularityNET plans to leverage this approach to accelerate the development of AGI?
Historically, there have been two main camps of AGI researchers, along with a third camp mixing the ideas of the other two. There are researchers who believe solely in a sub-symbolic approach. These days, this primarily means using deep neural networks (DNNs) such as Transformer models, including the current crop of large language models (LLMs). Due to the use of artificial neural networks, sub-symbolic approaches are also called neural methods. In sub-symbolic systems, processing is run across identical and unlabeled nodes (neurons) and links (synapses). Symbolic proponents use higher-order logic and symbolic reasoning, in which nodes and links are labeled with conceptual and semantic meaning. SingularityNET follows a third approach, most accurately described as a neuro-symbolic hybrid, leveraging the strengths of both symbolic and sub-symbolic methods.
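A minimal sketch of the distinction (purely illustrative – the structures and names below are hypothetical, not SingularityNET’s actual hybrid) might contrast the two representations like this:

```python
# Illustrative sketch only -- hypothetical structures, not SingularityNET code.

# Sub-symbolic: identical, unlabeled units; all "knowledge" lives in
# numeric weights that carry no human-readable meaning on their own.
sub_symbolic_weights = [[0.12, -0.87], [0.55, 0.03]]

# Symbolic: nodes and links carry explicit conceptual labels, so the
# knowledge is human-readable and supports logical inference directly.
symbolic_facts = [
    ("Socrates", "is_a", "human"),
    ("human", "is_a", "mortal"),
]

def infer_is_a(facts, subject):
    """Transitively follow labeled 'is_a' links -- simple symbolic reasoning."""
    found = set()
    frontier = {subject}
    while frontier:
        reached = {o for s, rel, o in facts if rel == "is_a" and s in frontier}
        frontier = reached - found
        found |= reached
    return found

print(infer_is_a(symbolic_facts, "Socrates"))  # reaches both "human" and "mortal"
```

A neuro-symbolic hybrid, roughly speaking, combines the two, for example by attaching learned numeric weights or truth values to labeled nodes and links, so that symbolic structures can be trained and queried with sub-symbolic machinery.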
Yet it is a particular type of hybrid, largely based on Ben Goertzel’s patternist philosophy of mind and detailed in, among many other documents, his treatise “The General Theory of General Intelligence: A Pragmatic Patternist Perspective”.
While much of current DNN and LLM research relies upon simplistic neural models and algorithms, the use of mammoth datasets (e.g., the entire web), and proper settings of billions of parameters in the hopes of achieving AGI, SingularityNET’s PRIMUS strategy relies upon foundational understandings of dynamic processes at multiple spatio-temporal scales and of how best to align such processes to prompt desired properties to emerge at different scales. Such understandings enable us to continue to guide AGI research and development in a human-comprehensible manner.
What frameworks do you believe are critical to ensure that AGI development benefits all of humanity? How can decentralized AI platforms like SingularityNET promote a more equitable and transparent process compared with centralized AI models?
All kinds of ideas here:
Transparency – While nothing is perfect, ensuring complete transparency of the decision-making process can help everyone involved (researchers, developers, users, and non-users alike) align, guide, understand, and better handle AGI development for the benefit of humanity. This is related to the issue of bias, which I’ll touch on below.
Decentralization – While decentralization can be messy, it can help ensure that power is shared more broadly. It is not, in itself, a panacea, but a tool that, if used correctly, can help create more equitable processes and results.
Consensus-based decision-making – Decentralization and consensus-based decision-making can work together in the pursuit of more equitable processes and results. Again, they don’t always guarantee equity, and there are complexities that must be addressed here in terms of reputation and areas of expertise. For instance, how can we best balance conflicting desired characteristics? I view transparency, decentralization, and consensus-based decision-making as just three critically important tools that can be used to guide AGI development for the benefit of humanity.
Spatiotemporal alignment – Aligning emergent phenomena across multiple scales, from the extremely small to the inordinately large. In developing AGI, I believe it is important not to rely on a single “black-box” approach in which one hopes to get everything correct at the outset. Instead, I believe designing AGI with fundamental understandings at various development stages and at multiple scales can not only make it more likely that we achieve AGI but, more importantly, guide such development in alignment with human values.
SingularityNET is a decentralized AI platform. How do you envision the intersection of blockchain technology and AGI evolving, particularly regarding security, governance, and decentralized control?
Blockchain certainly has a role to play in AI control, security, and governance. One of blockchain’s biggest strengths is its ability to foster transparency. The question of bias is a great example of this. I would argue that everybody and every dataset is biased. I have my own personal biases, for example, when it comes to what I believe is required to achieve truly safe, beneficial, and benevolent AGI. These biases were forged by my studies and background, and they guide my own work.
At the same time, I try to be completely open to ideas that conflict with my biases and am willing to adjust those biases based upon new evidence. Regardless, I try my best to be open and transparent with respect to my biases, and then to condition my ideas and decisions on a self-reflective understanding of them. It is difficult, but, I believe, better than not acknowledging one’s own biases. By its nature, blockchain allows for better and more transparent tracking, tracing, and verification of processes and events. In a similar manner to what I described previously, transparency is a crucial, but not always sufficient, component of security, governance, and decentralized control.
How blockchain and AGI co-evolve is an interesting question. For the two technologies to interact toward a positive singularity, it seems clear that the fundamental characteristics I keep pointing to (transparency, decentralization, consensus, and values alignment) are central and must be kept in mind at all stages of their co-evolution.
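As a concrete (and deliberately simplified, hypothetical) illustration of the tracking, tracing, and verification property mentioned above, a bare-bones hash chain shows why retroactive tampering with a recorded history is detectable:

```python
# Toy hash chain -- a drastic simplification of a real blockchain, and not
# SingularityNET platform code. Each entry commits to the previous entry's
# hash, so editing any past event invalidates every later link.
import hashlib
import json

def record(chain, event):
    """Append an event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every link; any retroactive edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
record(chain, "dataset v1 registered")
record(chain, "model trained on dataset v1")
print(verify(chain))                          # True: history is intact
chain[0]["event"] = "dataset v2 registered"   # tamper with history
print(verify(chain))                          # False: tampering is detectable
```

Real blockchains add consensus, decentralized replication, and cryptographic signatures on top of this linking idea, but the transparency property discussed above rests on the same foundation: anyone can recompute and check the record.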
As a leader who has been closely involved in both AI and blockchain, what do you believe are the most important factors for fostering collaboration between these two fields, and how can that drive innovation in AGI?
I come from the AI/AGI side of that pair. As is often the case when integrating cross-disciplinary ideas, much comes down to matters of language and communication. All groups have to listen to one another in order to better understand how the technologies can help each other. In my job at SingularityNET, this has been a constant struggle. High-end researchers, which it would be an understatement to say SingularityNET has in abundance, often have clear mental conceptions of big ideas. When working across disciplinary boundaries, the difficult part is realizing that not everyone is “in your head”. What one takes for granted will not be so clearly seen by those in other fields. Even words used in common can be used differently across different fields of study. There was a recent case in our BioAI work in which biologists were using a mathematical term, but not entirely accurately in terms of its mathematical definition. Once those kinds of situations are clearly understood, the team can move forward with common purpose, so that the integration truly proves the whole greater than the sum of its parts.
How do you see the AI and blockchain industries working towards greater diversity and inclusion, and what role does SingularityNET play in promoting these values?
AI and blockchain can each play major roles in improving diversity and inclusion efforts. Although I believe it is impossible to remove all bias – many biases form simply through life experiences – one can be open and transparent about one’s biases. That is something I actively strive to do in my own work, which is biased by my academic background such that I see problems through a lens of complex system dynamics. Yet I still strive to be open to, and to understand, ideas and analogies from other perspectives. AI can be harnessed to assist in this self-reflection process, and blockchain can certainly aid with transparency. SingularityNET can play an enormous role by hosting tools for detecting, measuring, and removing, as much as is possible, biases in datasets.
How does SingularityNET’s work in decentralized AI ecosystems contribute to solving global challenges such as sustainability, education, and job creation, especially in regions like Africa, where you have a special interest?
Sustainability:
- Applying AI and system models to solve complex ecosystem problems at massive scale.
- Monitoring such solutions at scale.
- Using blockchain to track, trace, and verify such solutions.
- Using a combination of AI, ecosystem models, hyper-local data, and blockchain, we have ideated complete solutions for artisanal mining in Africa and for agricultural carbon sequestration at scale.
Education:
As a former tenured full professor of mathematics and computer science, I find education extremely important, especially as it provides opportunities to underserved student populations. It is crucial to:
- Enhance accessibility by developing hybrid courses to reach students who may face geographical, financial, or time constraints.
- Promote diversity and inclusion by increasing the participation of underserved populations in AI, blockchain, and other advanced technologies.
- Foster interdisciplinary knowledge by creating courses that bridge academic and professional fields.
- Support career advancement by providing skills and certifications that are directly applicable to the job market.
I view both AGI and blockchain, and their synergies, as playing critical roles in addressing the above objectives within “apprenticeship to mastery” style programs centered on hands-on, project-based learning.
Job Creation:
By fostering the four educational objectives above, it seems to me that AGI, blockchain, and other advanced technologies, coupled with positive collaborations among teachers and learners, could encourage and spawn entirely new technologies and businesses.
As someone committed to achieving a positive singularity, what specific milestones or breakthroughs in AI technology do you believe will be crucial to ensure that AGI develops in a beneficial way for society?
- The ability to align emergent phenomena in human-interpretable ways across multiple spatiotemporal scales.
- The ability to understand at a deeper level the concepts underlying “spontaneous” phase transitions.
- The ability to overcome multiple hard problems at a fine level of detail to enable true multi-processing through state superpositions.
- Transparency at all stages.
- Decentralized decision-making based upon consensus building.