A New Type of Engineering


Image generated using OpenAI DALL·E

As of this writing (April 2023), frameworks like langchain [1] are pioneering increasingly complex use cases for LLMs. Recently, software agents augmented with LLM-based reasoning capabilities have begun the race towards human-level machine intelligence.

Agents are a pattern in software systems: algorithms that can make decisions and interact relatively autonomously with their environment. In the case of langchain agents, the environment is generally the text-in/text-out interfaces to the web, the user, or other agents and tools.
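This decide/act pattern can be sketched in a few lines. All names below are hypothetical illustrations, not langchain's actual API; a real agent would replace the scripted decision step with an LLM call and add prompt templates and output parsing:

```python
# Minimal sketch of the agent pattern: a loop that repeatedly asks a
# decision step for the next action and executes it against an
# environment of text-in/text-out tools. All names are hypothetical.

def agent_loop(goal, decide_next_action, tools, max_steps=10):
    """Run a decide/act loop until the decision step signals completion."""
    observations = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # In a real agent, this would be an LLM call that sees the goal
        # plus every observation gathered so far.
        action, argument = decide_next_action(observations)
        if action == "finish":
            return argument
        # Tools are plain text-in/text-out callables (search, calculator, ...).
        result = tools[action](argument)
        observations.append(f"{action}({argument}) -> {result}")
    return None  # gave up after max_steps

# Tiny deterministic stand-in for the LLM so the sketch is runnable:
def scripted_decider(observations):
    if len(observations) == 1:
        return ("calculator", "2+2")
    return ("finish", observations[-1])

tools = {"calculator": lambda expr: str(eval(expr))}
print(agent_loop("add two and two", scripted_decider, tools))
# prints: calculator(2+2) -> 4
```

The essential point is the separation of concerns: the loop and the tools are ordinary software, while the decision step is where the LLM-based reasoning plugs in.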

Running with this idea, other projects [2,3] have begun working on more general problem solvers (a kind of ‘micro’ artificial general intelligence, or AGI — an AI system that approaches human-level reasoning capabilities). Although the present incarnation of these systems is still quite monolithic, in that they arrive as one piece of software that takes goals/tasks/ideas as input, it is easy to see from their execution that they rely on multiple distinct sub-systems under the hood.

AutoGPT in action, finding a recipe.
Image by Significant Gravitas (https://github.com/Significant-Gravitas/Auto-GPT, 30/03/2023)

The new paradigm we see with these systems is that they model thought processes: “think critically and examine your results”, “consult several sources”, “reflect on the quality of your solution”, “debug it using external tooling”, … these are close to how a human would think as well.
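Concretely, such a thought process can be modelled as an explicit sequence of prompt steps. This is a sketch, not any project's actual implementation; `call_llm` is a hypothetical stand-in for whatever text-completion API is in use:

```python
# Sketch: a thought process modelled as an ordered list of prompt steps,
# each fed to an LLM together with the output of the previous step.
# `call_llm` is a hypothetical stand-in for any text-completion API.

THOUGHT_PROCESS = [
    "Draft a solution to the task below.",
    "Think critically and examine your result for mistakes.",
    "Reflect on the quality of your solution and produce a final version.",
]

def run_thought_process(task, call_llm):
    context = task
    for step in THOUGHT_PROCESS:
        context = call_llm(f"{step}\n\n{context}")
    return context

# Echo stand-in so the sketch runs without an API key; it records which
# step each prompt started with.
trace = []
def fake_llm(prompt):
    trace.append(prompt.splitlines()[0])
    return prompt

result = run_thought_process("Summarize agent patterns.", fake_llm)
```

Swapping out `THOUGHT_PROCESS` changes the cognitive strategy without touching any other code, which is exactly the kind of modelling the paragraph above describes.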

Now, in daily (human) life, we hire experts to do jobs that require particular expertise. And my prediction is that in the near future, we will hire a kind of cognitive engineer to model AGI thought processes, probably by building specific multi-agent systems, to solve specific tasks with higher quality.

Judging from how we work with LLMs today, we are already doing this — modelling cognitive processes. We do it in specific ways, using prompt engineering and a number of results from adjacent fields of research, to achieve a required output quality. So although what I described above may sound futuristic, it is already the status quo.

Where do we go from here? We will probably see ever smarter AI systems that may even surpass human level at some point. And as they get smarter, it will get ever harder to align them with our goals, that is, with what we want them to do. AGI alignment and the safety concerns around over-powerful, unaligned AIs are already a very active field of research, and the stakes are high, as explained in detail, e.g., by Eliezer Yudkowsky [4].

My hunch is that smaller, i.e. ‘dumber’, systems are easier to align, and will therefore deliver a certain result at a certain quality with a higher probability. And these systems are exactly what we can build using the cognitive engineering approach:

  • We should gain a good experimental understanding of how to build specialized AGI systems
  • From this experience, we should create and iterate on the right abstractions to better enable the modelling of these systems
  • With the abstractions in place, we can start creating re-usable building blocks of thought, just as we use re-usable building blocks to create user interfaces
  • In the near future, we will discover patterns and best practices for modelling these intelligent systems, and with that experience will come an understanding of which architectures lead to which outcomes
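The UI analogy in the list above suggests one possible shape for such abstractions: each building block of thought is a text-to-text function, and blocks compose into pipelines. This is a hypothetical sketch of the idea, not an existing framework's design:

```python
# Sketch: re-usable "building blocks of thought", composed like UI
# components. Each block is a text-to-text function; compose chains them.

def compose(*blocks):
    """Chain blocks left to right into a single text-to-text pipeline."""
    def pipeline(text):
        for block in blocks:
            text = block(text)
        return text
    return pipeline

# Hypothetical blocks; real ones would wrap LLM calls with suitable prompts.
brainstorm = lambda t: t + " | ideas"
critique   = lambda t: t + " | critiqued"
summarize  = lambda t: t + " | summary"

solve = compose(brainstorm, critique, summarize)
print(solve("task"))  # prints: task | ideas | critiqued | summary
```

Once blocks share this interface, swapping, reordering, or reusing them across systems becomes trivial, which is what would let patterns and best practices emerge.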

As a positive side effect, through this work and the experience gained, it may be possible to learn how to better align smarter AGIs as well.

I expect to see a merging of knowledge from different disciplines into this emerging field soon.
Research on multi-agent systems and how to use them for problem-solving, as well as insights from psychology, business management, and process modelling, can all be beneficially integrated into this new paradigm and into the emerging abstractions.

We will also have to consider how these systems can best be interacted with. For example, human feedback loops, or at least regular evaluation points along the way, can help achieve better results; you may know this personally from working with ChatGPT.
This is a previously unseen UX pattern, where the computer becomes more like a co-worker or co-pilot that does the heavy lifting of low-level research, formulation, brainstorming, automation, or reasoning tasks.

Johanna Appel is co-founder of the machine-intelligence consulting company Altura.ai GmbH, based in Zurich, Switzerland.

She helps companies profit from these ‘micro’ AGI systems by integrating them into their existing business processes.

[1] Langchain GitHub Repository, https://github.com/hwchase17/langchain

[2] AutoGPT GitHub Repository, https://github.com/Significant-Gravitas/Auto-GPT

[3] BabyAGI GitHub Repository, https://github.com/yoheinakajima/babyagi

[4] “Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization”, Lex Fridman Podcast #368, https://www.youtube.com/watch?v=AaTRHFaaPG8

