Navigating the World of ChatGPT and Its Open-Source Adversaries

The Beginnings: GPT-3 and ChatGPT enter the stage
February 2023: LLaMA (16.7k stars)
March 2023: llama.cpp (20.4k stars) & Alpaca (19.1k stars)
Visualizing Trends
What's missing? What's next?
Summary


Photo by Paulius Dragunas on Unsplash

The speed with which new tools and libraries in the domain of large language models (LLMs) are being developed is unprecedented. In this article, we summarize recent milestones and put the latest and greatest into perspective: we start with the arrival of OpenAI's GPT-3 and the subsequent development of ChatGPT, and proceed with the appearance of open alternatives such as LLaMA, Alpaca, GPT4All, and Vicuna. This should get you up to speed with the latest developments and hopefully provide some perspective on what challenges lie ahead. Lastly, I give some perspective on features I'm waiting for and ongoing developments, such as self-refinement.

By today's standards, one could consider GPT-3 to be old. GPT-3 is the underlying language model of ChatGPT, and the original paper, "Language Models are Few-Shot Learners", was already made available in May 2020. At its core, it is a model that takes input text and tries to predict what comes next. The model was initially made available with a waitlist, and it took until November 2021 for an API to be made available to everyone. GPT-3 sparked a new generation of tremendously successful AI writing tools, such as Jasper (with a surprising $1.5B valuation in 2022 after being founded in 2021).

In early 2022, OpenAI presented InstructGPT, which used RLHF: reinforcement learning from human feedback, where human preferences are used as a reward signal. And in November 2022, ChatGPT was released (based on GPT-3.5). In just 5 days, it managed to attract one million users, surpassing previous giants like Instagram (2.5 months), Facebook (10 months), and Twitter (2 years). Since then, the timeline has been ever-accelerating.

Photo by Liudmila Shuvalova on Unsplash

It was February 24, 2023, when Meta (Facebook) introduced LLaMA — Large Language Model Meta AI, "a foundational, 65-billion-parameter large language model". It was trained on publicly available data, and their 13B model was shown to outperform GPT-3 on most benchmarks. The great thing about it: the model and weights were made available to the public. Well — almost. There is a noncommercial license, and you have to sign up and wait to get access, which in my case took a few days. When I first tested it, things didn't work out of the box. My first impression of the GitHub repository was that the code had been made available quickly rather than being thoroughly tested — not what I'd have expected from a repository coming from a big corporation. What I didn't anticipate at the time was how quickly the community would overcome this and build something on top of it. Sometimes speed trumps usability after all.

Keep in mind that LLaMA is positioned against GPT-3 and came out when ChatGPT was already on the stage. Clearly, one of the things of interest was ChatGPT-like functionality. It took three days, until February 27 — while others were still waiting to get access to the LLaMA weights — for a first model that used RLHF and the LLaMA weights to be introduced: ChatLLaMA. It required Meta's original weights and a custom dataset for fine-tuning.

Parts of the community were getting impatient with having to wait for access. It was not long until the weights were uploaded as a torrent — and shared on the GitHub repository as a pull request, effectively circumventing the waitlist.

One of the main problems with LLMs, and also with LLaMA, is that they are very resource-hungry. More specifically, they call for industrial-grade and expensive GPUs such as an A100, which comes with 40 or 80 GB of memory (the price tag for the 80 GB variant is ~$15k). OpenAI has 10k of those to run ChatGPT. One reason for this is simply model size. Take LLaMA, for instance, which comes in versions with 7, 13, 33, and 65 billion parameters. Even for the smallest one, 7B, you would need 14 GB of memory just for the model weights. To actually run it, you also need memory for the decoding cache. A typical consumer-grade GPU has 8 GB, and only top consumer models such as the NVIDIA RTX 4090 (MSRP $1,599) have 24 GB. There is one way to reduce the memory load: running the model with reduced precision, a process called quantization.
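The arithmetic behind these numbers is straightforward: weight memory is roughly parameter count times bytes per parameter. A quick sketch (the parameter counts are LLaMA's published sizes; the byte-per-parameter figures are the standard values for each precision):

```python
# Rough memory needed just to hold the model weights,
# ignoring the decoding cache and runtime overhead.
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(n_params_billion: float, precision: str) -> float:
    """Memory in GB to store the weights at a given precision."""
    bytes_total = n_params_billion * 1e9 * BYTES_PER_PARAM[precision]
    return bytes_total / 1e9  # 1 GB = 1e9 bytes, good enough for an estimate

for size in (7, 13, 33, 65):
    fp16 = weight_memory_gb(size, "fp16")
    int4 = weight_memory_gb(size, "int4")
    print(f"LLaMA-{size}B: {fp16:.1f} GB at fp16, {int4:.1f} GB at 4-bit")
```

This reproduces the 14 GB figure for the 7B model at 16-bit precision, and shows why 4-bit quantization (next section) matters: the same model shrinks to 3.5 GB.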

One package that addresses this is Georgi Gerganov's llama.cpp, which was initially released on March 10 — two weeks after the release of LLaMA. "The main goal is to run the model using 4-bit quantization on a MacBook", and it was "hacked in an evening". The breakthrough is that you can run it on a CPU, where you typically have much more memory, which effectively lets you run the LLM on commodity hardware. Later, this was improved even further by enabling memory mapping. If you are interested in a more in-depth discussion of memory usage, there is a nice thread by Jeremy Howard about this on Twitter.
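To get a feel for what 4-bit quantization does, here is a deliberately simplified sketch (llama.cpp's actual scheme is more refined, quantizing weights in blocks with per-block scales): each weight is mapped to one of 16 integer levels plus a shared scale and offset, cutting storage to a quarter of fp16 at the cost of some precision.

```python
def quantize_4bit(weights):
    """Map floats to 4-bit integers (0..15) with a shared scale and offset."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0          # 16 levels -> 15 steps
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize_4bit(q, scale, lo):
    """Recover approximate floats from the 4-bit codes."""
    return [lo + qi * scale for qi in q]

weights = [0.12, -0.53, 0.91, 0.0, -0.27, 0.44]
q, scale, lo = quantize_4bit(weights)
restored = dequantize_4bit(q, scale, lo)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)        # every entry fits in 4 bits
print(max_err)  # rounding error is at most half a quantization step
```

The trade-off is visible directly: the reconstruction error is bounded by half a step, and in practice LLM weights tolerate this surprisingly well.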

Around the same time as the release of llama.cpp — mind you, this is two weeks after the LLaMA release — researchers from Stanford released Alpaca to build a ChatGPT-like instruction-following model. The interesting part is how they managed to get training data: they started with 175 human-written instructions and gave them to GPT-3 to generate more. They thereby obtained 52K instruction-following examples to use as their training data. This cost less than $500 in OpenAI credits. Add some $600 to train, and they had their model.
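This data-generation recipe (known as self-instruct) can be sketched as a simple loop. Note that `teacher_model` below is a placeholder; in Alpaca's case it would be a call to OpenAI's API, and the real pipeline also filters near-duplicate instructions:

```python
import random

# Alpaca started from 175 human-written seed tasks; three stand in here.
SEED_TASKS = [
    "Rewrite this sentence in passive voice.",
    "List three uses of a paperclip.",
    "Explain photosynthesis to a child.",
]

def teacher_model(prompt: str) -> str:
    """Placeholder for the teacher LLM (Alpaca used OpenAI's API here)."""
    return f"New instruction inspired by: {prompt[:40]}"

def generate_dataset(seeds, target_size, n_examples_per_prompt=3):
    pool = list(seeds)
    while len(pool) < target_size:
        # Sample existing instructions as in-context examples ...
        examples = random.sample(pool, min(n_examples_per_prompt, len(pool)))
        prompt = "Write a new task similar to:\n" + "\n".join(examples)
        # ... and ask the teacher model to produce a new instruction.
        pool.append(teacher_model(prompt))
    return pool

dataset = generate_dataset(SEED_TASKS, target_size=10)
print(len(dataset))
```

Scaled up to 52K examples, this is what kept Alpaca's data bill under $500: the expensive human effort is limited to the seed set.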

Photo by Levart_Photographer on Unsplash

In the meantime, what happened on OpenAI's side? A core limitation of ChatGPT is that it isn't up-to-date: it is limited to training data from some time ago. Connecting it to an existing knowledge base is one of the first obvious use cases, and soon after ChatGPT became public, several tutorials appeared on how to adapt ChatGPT to your own knowledge base.

An example of this is pdfGPT, which lets you chat with a PDF document. Technically, this can be done by embedding your existing knowledge base and saving it into a vector database. For each prompt, you can then quickly retrieve chunks of relevant information and add them to the ChatGPT prompt. However, the amount of knowledge you can add this way is fundamentally limited: gpt-3.5-turbo, the model behind ChatGPT, has a token limit of 4,096, which is about 3,000 words (6–12 pages).
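The retrieval pattern itself fits in a few lines. The `embed` function below is a toy bag-of-words stand-in (real systems use a learned embedding model, e.g. OpenAI's embedding endpoint); the rest is the actual mechanism: store embedded chunks, rank them by cosine similarity to the question, and prepend the winners to the prompt.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': word counts. Real systems use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": pre-embedded chunks of the knowledge base.
chunks = [
    "The warranty covers manufacturing defects for two years.",
    "Shipping within Europe takes three to five business days.",
    "Returns are accepted within 30 days of delivery.",
]
index = [(c, embed(c)) for c in chunks]

def build_prompt(question: str, k: int = 1) -> str:
    """Retrieve the k most similar chunks and prepend them to the prompt."""
    qv = embed(question)
    ranked = sorted(index, key=lambda ce: cosine(qv, ce[1]), reverse=True)
    context = "\n".join(c for c, _ in ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How long does shipping take within Europe?")
print(prompt)
```

The token limit enters exactly here: `k` and the chunk size are capped by how much context still fits alongside the question and the expected answer.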

Shortly after Alpaca, on March 14, OpenAI held their GPT-4 developer demo, showcasing what the new model can do. In addition to the increased performance and multimodal capabilities, the new model pushes the token limit to 8,192 tokens. There is also another model, gpt-4-32k, with up to 32,768 tokens, which would be 48–72 pages. This gives a lot of room to feed data into your model. For the model specifications, you can find more in OpenAI's documentation.
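Exact token counts come from the model's tokenizer (OpenAI ships one as the tiktoken library), but a common rule of thumb, roughly 4 characters per English token, is enough to check whether a document will fit in a context window. A sketch, with the limits taken from the numbers above:

```python
CONTEXT_LIMITS = {"gpt-3.5-turbo": 4096, "gpt-4": 8192, "gpt-4-32k": 32768}

def estimate_tokens(text: str) -> int:
    """Rule-of-thumb estimate: ~4 characters per English token."""
    return max(1, len(text) // 4)

def fits(text: str, model: str, reserved_for_answer: int = 500) -> bool:
    """Does the text fit in the context window, leaving room for the reply?"""
    return estimate_tokens(text) <= CONTEXT_LIMITS[model] - reserved_for_answer

doc = "word " * 3000  # roughly a 3,000-word document
print(estimate_tokens(doc))        # about 3,750 tokens
print(fits(doc, "gpt-3.5-turbo"))  # too tight once 500 tokens are reserved
print(fits(doc, "gpt-4-32k"))      # plenty of room
```

This also shows why a 3,000-word knowledge-base dump already strains gpt-3.5-turbo but is comfortable in the 32k variant.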

On March 23, OpenAI introduced plugins as a way to feed in up-to-date information or run computations. An example of this is the retrieval plugin (13.4k stars), which has connectors to many vector databases, so the connection to knowledge bases is fairly standard by now. Besides OpenAI plugins, there are open libraries that aim to build solutions around LLMs, such as LangChain, which likewise has connectors to existing vector databases. Going even further, Microsoft offers Azure-based solutions to directly interface with enterprise data, so I believe there are now good solutions to overcome the limitation of outdated knowledge.

Another limitation of ChatGPT is that it makes things up, i.e., it hallucinates. A simple way to address this is with prompt engineering: say, you could tell the model that it shouldn't make things up and should only take information from an existing knowledge base. By now, there are plenty of good tutorials on how to write better prompts.
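A minimal version of such a grounding prompt might look like this; the wording is illustrative, not a canonical template:

```python
GROUNDED_TEMPLATE = """You are a careful assistant.
Answer ONLY using the context below.
If the answer is not in the context, reply exactly: "I don't know."

Context:
{context}

Question: {question}
Answer:"""

def grounded_prompt(context: str, question: str) -> str:
    """Wrap retrieved context and a user question in a grounding template."""
    return GROUNDED_TEMPLATE.format(context=context, question=question)

p = grounded_prompt(
    context="The museum is open Tuesday to Sunday, 9am-5pm.",
    question="Is the museum open on Mondays?",
)
print(p)
```

The explicit fallback answer is the important part: without it, models tend to improvise rather than admit the context is silent.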

From the open-source side, the next big step forward was GPT4All (25k stars) from Nomic AI. Like Alpaca, it used ChatGPT (3.5) outputs for training. The breakthrough here is that they showed how you can use quantized weights and run them on your laptop. For me, this worked like a charm, and the setup was so straightforward that I spent more time downloading the models than anything else. I wrote a separate article about this here.

Now, after models trained on ChatGPT output, the next step one could anticipate would be to use the newer model, GPT-4, to generate training data. That is the direction Vicuna took; see the GitHub repository here. They used 70K user-shared ChatGPT conversations to train their model.

Again, training costs were a mere $300. The model is Vicuna-13B, so while it can run on a GPU, it isn't as portable as, e.g., the quantized 7B models. Martin Thissen has written a nice writeup on how to use it and how to get a quantized Vicuna model here.

By now, one trend should be obvious: open-source models are catching up quickly and can be trained very effectively on output data from ChatGPT.

I like GitHub stars as a metric for popularity and have mentioned them throughout this article, with star counts as of the day of writing. Originally, a GitHub star was meant as a way to keep track of repositories you find interesting. Note that stars don't necessarily say anything about actual usage of a tool, and important conversion metrics — such as how many downloads equal one star — are not defined. Nevertheless, it is an attempt at a quantitative metric:

You can find an up-to-date graph here:

There is one thing I want to highlight here: the pace. All of this happened in little more than a month. The community is hyped about working on LLM projects, and I'd argue that if you want to keep up with the community, this probably will not work with waitlists.

If you have seen the OpenAI keynote, one of the main attractions was the multimodal capabilities of GPT-4. The developer livestream included an impressive demonstration where an image of a scribbled note was turned into HTML code. This is amazing, but there is no public access yet. Rest assured, though: open-source multimodal models like open-flamingo (1.5k stars) are already in the pipeline.

When it comes to further improving LLMs, one idea is to use LLMs themselves to refine existing models. And just at the end of March, a study highlighted how LLMs can benefit from self-refinement:

"The main idea is to generate an output using an LLM, then allow the same model to provide multi-aspect feedback for its own output; finally, the same model refines its previously generated output given its own feedback."
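That generate, feedback, refine loop can be sketched directly. The `llm` function is a stub standing in for a single model call, and the fixed number of rounds is just one possible stopping criterion:

```python
def llm(prompt: str) -> str:
    """Placeholder for one LLM call; all three roles use the same model."""
    return f"[model output for: {prompt[:30]}...]"

def self_refine(task: str, n_rounds: int = 2) -> str:
    output = llm(f"Solve the task: {task}")                 # 1. initial draft
    for _ in range(n_rounds):
        feedback = llm(f"Critique this output:\n{output}")  # 2. self-feedback
        output = llm(                                       # 3. refinement
            f"Task: {task}\nDraft: {output}\n"
            f"Feedback: {feedback}\nImprove the draft."
        )
    return output

result = self_refine("Write a polite reminder email.")
print(result)
```

Notably, no extra training is involved: the same frozen model plays author, critic, and editor purely through prompting.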

So we will definitely see self-improving models. Ultimately, one question is how far this can go: if more and more content is generated with AI, will we still get good training data?

To me, the current development boils down to the following:

  • The open-source community is amazingly fast at developing alternatives
  • Generating training datasets from existing models is a viable way to create clones
  • We will probably see self-improving models in the future

More profoundly, I'm curious to see how OpenAI will solve the cloning problem from a business perspective, especially with such strong competition from an open-source community that can come up with innovative features quickly. Is there a way to get the community on board?

Of course, from a classical perspective, a lot is being done: community focus, a well-documented API, and a way to build plugins. But here I'm waiting for plugin beta access, as many others are. From a company perspective this makes sense: don't go out into the wild with an untested product. But will this be too slow for the growing number of expert enthusiasts who are willing to spend day and night working on this problem?

This article is an attempt to write a consistent story and highlight key milestones that I find relevant. It is by no means comprehensive, and there are many more exciting tools out there in this domain. To name a few:

  • UL2 (from Google Research) — 28.3k stars
  • OpenAssistant — 21k stars
  • RWKV-LM — 5.1k stars, Apache license
  • Baize — 1.7k stars, GPL license
  • Bloom — 3.08k likes on Hugging Face

For further reading on other packages, I found this list by Sung Kim quite comprehensive.



