
Anthropic launches Claude, a chatbot to rival OpenAI’s ChatGPT

Anthropic, a startup co-founded by ex-OpenAI employees, today launched something of a rival to the viral sensation ChatGPT.

Called Claude, Anthropic’s AI — a chatbot — can be instructed to perform a range of tasks, including searching across documents, summarizing, writing and coding, and answering questions about particular topics. In these ways, it’s similar to OpenAI’s ChatGPT. But Anthropic makes the case that Claude is “much less likely to produce harmful outputs,” “easier to converse with” and “more steerable.”

Organizations can request access. Pricing has yet to be detailed.

“We think that Claude is the right tool for a wide range of customers and use cases,” an Anthropic spokesperson told TechCrunch via email. “We’ve been investing in our infrastructure for serving models for several months and are confident we can meet customer demand.”

Following a closed beta late last year, Anthropic has been quietly testing Claude with launch partners, including Robin AI, AssemblyAI, Notion, Quora and DuckDuckGo. Two versions are available as of this morning via an API, Claude and a faster, less costly derivative called Claude Instant.

Alongside ChatGPT, Claude powers DuckDuckGo’s recently launched DuckAssist tool, which directly answers straightforward search queries for users. Quora offers access to Claude through its experimental AI chat app, Poe. And on Notion, Claude is part of the technical backend for Notion AI, an AI writing assistant integrated with the Notion workspace.

“We use Claude to evaluate particular parts of a contract, and to suggest new, alternative language that’s more friendly to our customers,” Robin CEO Richard Robinson said in an emailed statement. “We’ve found Claude is really good at understanding language — including in technical domains like legal language. It’s also very confident at drafting, summarising, translations and explaining complex concepts in simple terms.”

But does Claude avoid the pitfalls of ChatGPT and other AI chatbot systems like it? Modern chatbots are notoriously prone to toxic, biased and otherwise offensive language. (See: Bing Chat.) They tend to hallucinate, too, meaning they invent facts when asked about topics beyond their core knowledge areas.

Anthropic says that Claude — which, like ChatGPT, doesn’t have access to the internet and was trained on public webpages up to spring 2021 — was “trained to avoid sexist, racist and toxic outputs” as well as “to avoid helping a human engage in illegal or unethical activities.” That’s par for the course in the AI chatbot realm. But what sets Claude apart is a technique called “constitutional AI,” Anthropic asserts.

“Constitutional AI” aims to provide a “principle-based” approach to aligning AI systems with human intentions, letting AI systems similar to ChatGPT respond to questions using a simple set of principles as a guide. To build Claude, Anthropic started with a list of around 10 principles that, taken together, formed a sort of “constitution” (hence the name “constitutional AI”). The principles haven’t been made public, but Anthropic says they’re grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice).

Anthropic then had an AI system — not Claude — use the principles for self-improvement, writing responses to a variety of prompts (e.g. “compose a poem in the style of John Keats”) and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude.
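To make that loop concrete, here is a minimal Python sketch of the critique-and-revise process the article describes. Everything in it (the sample principles, function names and stubbed model calls) is a hypothetical illustration drawn from the description above, not Anthropic’s actual implementation or API; the real list of principles hasn’t been published.

# Hypothetical sketch of the critique-and-revise loop described above.
CONSTITUTION = [
    "Choose the response that is most helpful to the user.",
    "Choose the response that avoids harmful, unethical or illegal advice.",
    # Anthropic's real list of roughly 10 principles has not been made public.
]

def generate(prompt):
    # Placeholder: in practice this samples a draft from a base language model.
    return f"<draft response to: {prompt!r}>"

def critique_and_revise(prompt, response, principle):
    # Placeholder: in practice the model critiques its own draft against one
    # principle and rewrites the draft accordingly.
    return f"<{response} revised per: {principle!r}>"

def build_training_pairs(prompts):
    # Produce (prompt, constitution-consistent response) pairs. Per the
    # article, data like this is distilled into a single model, which in
    # turn is used to train Claude.
    pairs = []
    for prompt in prompts:
        response = generate(prompt)
        for principle in CONSTITUTION:
            response = critique_and_revise(prompt, response, principle)
        pairs.append((prompt, response))
    return pairs

print(build_training_pairs(["compose a poem in the style of John Keats"]))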

Anthropic admits that Claude has its limitations, though — several of which came to light during the closed beta. Claude is reportedly worse at math and a poorer programmer than ChatGPT. And it hallucinates, inventing a name for a chemical that doesn’t exist, for example, and providing dubious instructions for producing weapons-grade uranium.

It’s also possible to get around Claude’s built-in safety features via clever prompting, as is the case with ChatGPT. One user in the beta was able to get Claude to describe how to make meth at home.

“The challenge is making models that both never hallucinate but are still useful — you can get into a tough situation where the model figures a good way to never lie is to never say anything at all, so there’s a tradeoff there that we’re working on,” the Anthropic spokesperson said. “We’ve also made progress on reducing hallucinations, but there is more to do.”

Anthropic’s other plans include letting developers customize Claude’s constitutional principles to their own needs. Customer acquisition is another focus, unsurprisingly — Anthropic sees its core users as “startups making bold technological bets” in addition to “larger, more established enterprises.”

“We’re not pursuing a broader direct-to-consumer approach at this time,” the Anthropic spokesperson continued. “We think this narrower focus will help us deliver a superior, targeted product.”

No doubt, Anthropic is feeling some sort of pressure from investors to recoup the hundreds of millions of dollars that have been put toward its AI tech. The company has substantial outside backing, including a $580 million tranche from a group of investors including disgraced FTX founder Sam Bankman-Fried, Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research.

Most recently, Google pledged $300 million in Anthropic for a 10% stake in the startup. Under the terms of the deal, which was first reported by the Financial Times, Anthropic agreed to make Google Cloud its “preferred cloud provider” with the companies “co-develop[ing] AI computing systems.”
