
Creating OpenAI GPTs for (numerous) Fun and (a bit of) Profit


OpenAI announced its intent to let customers build their own “GPTs” at its DevDay conference on November 6, 2023. Here’s what the company said on its blog that day.

We’re rolling out custom versions of ChatGPT that you can create for a specific purpose — called GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home — and then share that creation with others. For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers. — OpenAI

Creating custom versions of ChatGPT sounds great. But there’s a caveat: you need a ChatGPT Plus or Enterprise account to use the new GPTs. The cost starts at US$20 per month. However, if other Plus users interact with your custom GPT, OpenAI pays you a small royalty based on the number of user interactions.

I spent the last month experimenting with custom GPTs to understand the system’s benefits and limitations. I built a creative writing chatbot called the RobGon Dialog Assistant, which suggests new dialog inspired by literature in the public domain. I also created two versions of a chatbot that generates musical chord progressions based on songs in the public domain. The first version, RobGon Chord Composer, loads relevant song data from a simple text file. The second version, RobGon Chord Composer Presto, gets the song data via a custom service I wrote.

Retrieval Augmented Generation

Large Language Models (LLMs), like ChatGPT, often perform better if they can access external data before answering users’ questions. This technique is called Retrieval Augmented Generation (RAG) [1]. Instead of relying only on the LLM’s internal memory, RAG systems find and inject relevant text data that may help the language model handle the user’s query.
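The RAG pattern described above can be sketched in a few lines of plain Python. The toy document list and the keyword-overlap retriever below are stand-ins for a real document store and retriever, kept dependency-free purely for illustration:

```python
# Minimal sketch of the RAG pattern: retrieve relevant text, then
# inject it into the prompt before calling the language model.

def retrieve(query, documents, top_k=1):
    """Rank documents by how many query words they share (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Chess is played on an 8x8 board with sixteen pieces per side.",
    "A chord progression is an ordered sequence of chords.",
]
prompt = build_prompt("How many pieces does each chess player have?", docs)
print(prompt)
```

The augmented prompt, rather than the bare question, is what gets sent to the LLM, so the model can answer from the injected context instead of its internal memory alone.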

These retrieval systems can work in different ways. One method is to use a semantic text search based on the…
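A semantic search like the one mentioned above typically embeds each text as a vector and ranks documents by cosine similarity to the query. Real systems use a learned embedding model (for example, an embeddings API); the bag-of-words `embed` below is only a self-contained stand-in to show the ranking step:

```python
# Sketch of semantic-style retrieval: embed texts as vectors,
# then rank documents by cosine similarity to the query vector.
import math
from collections import Counter

def embed(text):
    """Toy embedding: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = ["the rules of chess", "a chord progression in C major"]
query_vec = embed("chess rules")
best = max(docs, key=lambda d: cosine(query_vec, embed(d)))
print(best)  # → the rules of chess
```

Swapping the toy `embed` for a real embedding model turns this into the dense-retrieval setup most RAG systems use.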
