OpenAI launches an API for ChatGPT, plus dedicated capacity for enterprise customers


To call ChatGPT, the free text-generating AI developed by San Francisco-based startup OpenAI, a success is a massive understatement.

As of December, ChatGPT had an estimated more than 100 million monthly active users. It’s attracted major media attention and spawned countless memes on social media. It’s been used to write hundreds of e-books in Amazon’s Kindle store. And it’s credited with co-authoring at least one scientific paper.

But OpenAI, being a business — albeit a capped-profit one — needed to monetize ChatGPT somehow, lest investors get antsy. It took a step toward this with the launch of a premium service, ChatGPT Plus, in February. And it made a much bigger move today, introducing an API that’ll allow any business to build ChatGPT tech into their apps, websites, products and services.

An API was always the plan. That’s according to Greg Brockman, the president and chairman of OpenAI (and also one of the co-founders). He spoke with me yesterday afternoon via a video call ahead of the launch of the ChatGPT API.

“It takes us some time to get these APIs to a certain quality level,” Brockman said. “I think it’s kind of this, like, just being able to meet the demand and the scale.”

Brockman says the ChatGPT API is powered by the same AI model behind OpenAI’s wildly popular ChatGPT, dubbed “gpt-3.5-turbo.” GPT-3.5 is the most powerful text-generating model OpenAI offers today through its API suite; the “turbo” moniker refers to an optimized, more responsive version of GPT-3.5 that OpenAI’s been quietly testing for ChatGPT.

Priced at $0.002 per 1,000 tokens, or about 750 words, the API can drive a range of experiences, Brockman claims, including “non-chat” applications. Snap, Quizlet, Instacart and Shopify are among the early adopters.
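For a sense of scale, the quoted price works out as follows. This is a rough, unofficial back-of-the-envelope sketch based only on the figures above (the $0.002-per-1,000-token rate and the ~750-words-per-1,000-tokens rule of thumb); actual billing counts both prompt and completion tokens.

```python
# Rough cost estimate for the ChatGPT API at the launch price quoted above:
# $0.002 per 1,000 tokens, where ~1,000 tokens is roughly 750 words.

PRICE_PER_1K_TOKENS = 0.002  # USD, launch pricing for gpt-3.5-turbo
WORDS_PER_1K_TOKENS = 750    # rule of thumb, not an exact conversion

def estimated_cost_usd(word_count: int) -> float:
    """Approximate API cost for processing `word_count` words."""
    tokens = word_count * 1000 / WORDS_PER_1K_TOKENS
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# A 750-word exchange costs about a fifth of a cent:
print(round(estimated_cost_usd(750), 6))               # 0.002
# A million such exchanges works out to roughly $2,000:
print(round(estimated_cost_usd(750) * 1_000_000))      # 2000
```

At that rate, even ChatGPT-scale traffic stays in the thousands of dollars per day rather than the millions — which helps explain the “non-chat” ambitions.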

The initial motivation behind developing gpt-3.5-turbo might’ve been to cut down on ChatGPT’s gargantuan compute costs. OpenAI CEO Sam Altman once called ChatGPT’s expenses “eye-watering,” estimating them at a few cents per chat in compute costs. (With over a million users, that presumably adds up quickly.)

But Brockman says that gpt-3.5-turbo is improved in other ways.

“If you’re building an AI-powered tutor, you never want the tutor to just give an answer to the student. You want it to always explain it and help them learn — that’s an example of the kind of system you should be able to build [with the API],” Brockman said. “We think this is going to be something that will just, like, make the API much more usable and accessible.”

The ChatGPT API underpins My AI, Snap’s recently announced chatbot for Snapchat+ subscribers, and Quizlet’s new Q-Chat virtual tutor feature. Shopify used the ChatGPT API to build a personalized assistant for shopping recommendations, while Instacart leveraged it to create Ask Instacart, an upcoming tool that’ll allow Instacart customers to ask about food and get “shoppable” answers informed by product data from the company’s retail partners.

“Grocery shopping can require a big mental load, with a lot of factors at play, like budget, health and nutrition, preferences, seasonality, culinary skills, prep time, and recipe inspiration,” Instacart chief architect JJ Zhuang told me via email. “What if AI could take on that mental load, and we could help the household leaders who are commonly responsible for grocery shopping, meal planning, and putting food on the table — and actually make grocery shopping truly fun? Instacart’s AI system, when integrated with OpenAI’s ChatGPT, will enable us to do exactly that, and we’re thrilled to start experimenting with what’s possible in the Instacart app.”


Those who’ve been closely following the ChatGPT saga, though, might be wondering if it’s ripe for release — and rightly so.

Early on, users were able to prompt ChatGPT to answer questions in racist and sexist ways, a reflection of the biased data on which ChatGPT was initially trained. (ChatGPT’s training data includes a broad swath of internet content, namely e-books, Reddit posts and Wikipedia articles.) ChatGPT also invents facts without disclosing that it’s doing so, a phenomenon in AI known as hallucination.

ChatGPT — and systems like it — are susceptible to prompt-based attacks as well, or malicious adversarial prompts that get them to perform tasks that weren’t a part of their original objectives. Entire communities on Reddit have formed around finding ways to “jailbreak” ChatGPT and bypass any safeguards that OpenAI put in place. In one of the less offensive examples, a staffer at startup Scale AI was able to get ChatGPT to divulge details about its inner technical workings.

Brands, no doubt, wouldn’t want to be caught in the crosshairs. Brockman is adamant they won’t be. Why so? One reason, he says, is continued improvements on the back end — in some cases at the expense of Kenyan contract workers. But Brockman emphasized a new (and decidedly less controversial) approach that OpenAI calls Chat Markup Language, or ChatML. ChatML feeds text to the ChatGPT API as a sequence of messages together with metadata. That’s as opposed to the standard ChatGPT, which consumes raw text represented as a series of tokens. (The word “fantastic” would be split into the tokens “fan,” “tas” and “tic,” for instance.)

For example, given the prompt “What are some interesting party ideas for my 30th birthday?” a developer can choose to append that prompt with an additional prompt like “You are a fun conversational chatbot designed to help users with the questions they ask. You should answer truthfully and in a fun way!” or “You are a bot” before having the ChatGPT API process it. These instructions help to better tailor — and filter — the ChatGPT model’s responses, according to Brockman.
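In code, that pattern looks roughly like this. The sketch below assembles the ChatML-style message list the ChatGPT API expects — a system message carrying the developer’s instructions, followed by the user’s prompt. The message format itself is OpenAI’s; the helper function and its name are illustrative, not part of any SDK:

```python
# Sketch of the ChatML-style payload the ChatGPT API consumes: a list of
# role-tagged messages rather than one raw string of tokens. The helper
# below is illustrative; only the role/content message shape is OpenAI's.

def build_messages(system_instructions: str, user_prompt: str) -> list[dict]:
    """Pair developer-supplied system instructions with the user's prompt."""
    return [
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a fun conversational chatbot designed to help users with the "
    "questions they ask. You should answer truthfully and in a fun way!",
    "What are some interesting party ideas for my 30th birthday?",
)

# This list would be passed as the `messages` parameter of a chat-completion
# request (e.g. with model="gpt-3.5-turbo").
print(messages[0]["role"])  # system
```

Because the developer’s instructions arrive in a structurally distinct slot rather than being concatenated into the user’s text, the model can treat them differently — which is the robustness argument Brockman makes next.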

“We’re moving to a higher-level API. If you have a more structured way of representing input to the system, where you say, ‘this is from the developer’ or ‘this is from the user’ … I should expect that, as a developer, you actually can be more robust [using ChatML] against these kinds of prompt attacks,” Brockman said.

Another change that’ll (hopefully) prevent unintended ChatGPT behavior is more frequent model updates. With the release of gpt-3.5-turbo, developers will by default be automatically upgraded to OpenAI’s latest stable model, Brockman says, starting with gpt-3.5-turbo-0301 (released today). Developers will have the option to stick with an older model if they so choose, though, which could somewhat negate the benefit.

Whether they opt to update to the latest model or not, Brockman notes that some customers — mainly large enterprises with correspondingly large budgets — will have deeper control over system performance with the introduction of dedicated capacity plans. First detailed in documentation leaked earlier this month, OpenAI’s dedicated capacity plans, launched today, let customers pay for an allocation of compute infrastructure to run an OpenAI model — for example, gpt-3.5-turbo. (It’s Azure on the back end, by the way.)

In addition to “full control” over the instance’s load — normally, calls to the OpenAI API happen on shared compute resources — dedicated capacity gives customers the ability to enable features such as longer context limits. Context limits refer to the text that the model considers before generating additional text; longer context limits essentially allow the model to “remember” more text. While higher context limits won’t solve all of the bias and toxicity issues, they could lead models like gpt-3.5-turbo to hallucinate less.

Brockman says that dedicated capacity customers can expect gpt-3.5-turbo models with up to a 16k context window, meaning they can take in four times as many tokens as the standard ChatGPT model. That might let someone paste in pages and pages of tax code and get reasonable answers from the model, say — a feat that’s not possible today.
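To make the difference concrete, here’s a small, hypothetical sketch of what the larger window buys, reusing the ~750-words-per-1,000-tokens approximation from earlier. The exact window sizes (4,096 vs. 16,384 tokens) are the commonly cited figures for the standard and “16k” models, not numbers Brockman stated:

```python
# Hypothetical check of whether a document fits in a model's context window,
# using the rough 750-words-per-1,000-tokens approximation. Window sizes are
# the commonly cited 4k/16k figures, not exact quotes from OpenAI.

STANDARD_WINDOW = 4_096    # tokens, standard gpt-3.5-turbo
DEDICATED_WINDOW = 16_384  # tokens, the "16k" dedicated-capacity option

def approx_tokens(word_count: int) -> int:
    """Rough token count for a document of `word_count` words."""
    return round(word_count * 1000 / 750)

def fits(word_count: int, window: int) -> bool:
    """True if the whole document can sit inside the context window."""
    return approx_tokens(word_count) <= window

# A ~9,000-word document (pages and pages of tax code, say) is roughly
# 12,000 tokens: it overflows the standard window but fits in the 16k one.
print(fits(9_000, STANDARD_WINDOW))   # False
print(fits(9_000, DEDICATED_WINDOW))  # True
```

In practice the prompt and the model’s reply share the same window, so the usable input is somewhat smaller than the raw limit — but the four-fold jump is what makes the paste-in-a-whole-document use case plausible.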

Brockman alluded to a general release sometime in the future, but not anytime soon.

“The context windows are starting to creep up, and part of the reason that we’re dedicated-capacity-customers-only right now is because there’s a lot of performance tradeoffs on our side,” Brockman said. “We might eventually be able to offer an on-demand version of the same thing.”

Given OpenAI’s increasing pressure to turn a profit after a multibillion-dollar investment from Microsoft, that wouldn’t be terribly surprising.

