
Evolution through large models

This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate the likely changes a human programmer would make. To highlight the breadth of implications of such evolution through large models (ELM), in the main experiment ELM combined with MAP-Elites generates hundreds of thousands of functional examples of Python programs that output working ambulating robots in the Sodarace domain, which the original LLM had never seen in pre-training. These examples then help to bootstrap training a new conditional language model that can output the right walker for a particular terrain. The ability to bootstrap new models that can output appropriate artifacts for a given context, in a domain where no training data was previously available, carries implications for open-endedness, deep learning, and reinforcement learning. These implications are explored here in depth in the hope of inspiring new directions of research opened up by ELM.
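To make the core idea concrete, here is a minimal sketch of a MAP-Elites loop driven by a program-mutation operator. This is not the paper's implementation: the `llm_mutate` function is a hypothetical stand-in that perturbs a numeric parameter (a real ELM setup would prompt a code-trained LLM for a plausible edit), and the toy `evaluate` function replaces an actual Sodarace simulation.

```python
import random


def llm_mutate(program: str) -> str:
    """Stand-in for an LLM-based mutation operator (hypothetical).

    In ELM, a code-trained LLM proposes a plausible edit to the
    program text; here we just perturb a numeric parameter so the
    loop is runnable without model access."""
    value = float(program.split("=")[1])
    return f"speed = {value + random.uniform(-0.5, 0.5):.3f}"


def evaluate(program: str) -> tuple[float, int]:
    """Return (fitness, behavior niche) for a candidate program.

    A real evaluation would simulate the walker; this toy fitness
    rewards speeds near 2.0 and bins behavior by integer speed."""
    speed = float(program.split("=")[1])
    fitness = -abs(speed - 2.0)
    niche = int(speed) % 5  # coarse behavior descriptor
    return fitness, niche


def map_elites(seed: str, iterations: int = 500) -> dict[int, tuple[float, str]]:
    """Minimal MAP-Elites: keep the best program found in each niche."""
    archive: dict[int, tuple[float, str]] = {}
    f, n = evaluate(seed)
    archive[n] = (f, seed)
    for _ in range(iterations):
        # Pick a random elite, mutate it, and keep the child if it
        # improves on (or first fills) its behavior niche.
        _, parent = random.choice(list(archive.values()))
        child = llm_mutate(parent)
        f, n = evaluate(child)
        if n not in archive or f > archive[n][0]:
            archive[n] = (f, child)
    return archive


archive = map_elites("speed = 1.0")
```

The essential ELM move is swapping a hand-designed mutation operator for the LLM call inside this otherwise standard quality-diversity loop, so the archive fills with diverse, functional programs rather than random perturbations.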
