Hyperbolic, Nebius AI Studio, and Novita 🔥



We’re thrilled to announce the addition of three more outstanding serverless Inference Providers to the Hugging Face Hub: Hyperbolic, Nebius AI Studio, and Novita. These providers join our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub’s model pages. They’re also seamlessly integrated into our client SDKs (for both JS and Python), making it super easy to use a wide range of models with your preferred providers.

These partners join the ranks of our existing providers, including Together AI, SambaNova, Replicate, fal, and Fireworks.ai.

The new partners enable a wide range of new models: DeepSeek-R1, FLUX.1, and many others. Find all the models they support below:

We’re quite excited to see what you will build with these new providers!



How it works



In the website UI

  1. In your user account settings, you can:
  • Set your own API keys for the providers you’ve signed up with. If no custom key is set, your requests will be routed through HF.
  • Order providers by preference. This applies to the widget and code snippets on the model pages.


  2. As mentioned, there are two modes when calling the Inference APIs:
  • Custom key (calls go directly to the inference provider, using your own API key for that provider)
  • Routed by HF (in that case, you don’t need a token from the provider, and the charges are applied directly to your HF account rather than the provider’s account)


  3. Model pages showcase third-party inference providers (those that are compatible with the current model, sorted by user preference)




From the client SDKs



From Python, using huggingface_hub

The following example shows how to use DeepSeek-R1 with Hyperbolic as the inference provider. You can use a Hugging Face token for automatic routing through Hugging Face, or your own Hyperbolic API key if you have one.

Install huggingface_hub from source (see instructions). Official support will be released soon in version v0.29.0.
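For example (the exact command depends on the instructions linked above; this assumes the standard GitHub repository):

```shell
# Install huggingface_hub from source until v0.29.0 is released:
pip install git+https://github.com/huggingface/huggingface_hub

# Once v0.29.0 is out, a regular install will be enough:
pip install "huggingface_hub>=0.29.0"
```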

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="hyperbolic",
    api_key="xxxxxxxxxxxxxxxxxxxxxxxx"
)

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?"
    }
]

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1", 
    messages=messages, 
    max_tokens=500
)

print(completion.choices[0].message)

And here’s how to generate an image from a text prompt using FLUX.1-schnell running on Nebius AI Studio:

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="nebius",
    api_key="xxxxxxxxxxxxxxxxxxxxxxxx"
)


image = client.text_to_image(
    "Bob Marley in the style of a painting by Johannes Vermeer",
    model="black-forest-labs/FLUX.1-schnell"
)

To switch to a different provider, you can simply change the provider name; everything else stays the same:

from huggingface_hub import InferenceClient

client = InferenceClient(
-   provider="nebius",
+   provider="hyperbolic",
    api_key="xxxxxxxxxxxxxxxxxxxxxxxx"
)



From JS, using @huggingface/inference

import { HfInference } from "@huggingface/inference";

const client = new HfInference("xxxxxxxxxxxxxxxxxxxxxxxx");

const chatCompletion = await client.chatCompletion({
    model: "deepseek-ai/DeepSeek-R1",
    messages: [
        {
            role: "user",
            content: "What is the capital of France?"
        }
    ],
    provider: "novita",
    max_tokens: 500
});

console.log(chatCompletion.choices[0].message);



Billing

For direct requests, i.e. when you use a key from an inference provider, you are billed by the corresponding provider. For instance, if you use a Nebius AI Studio key, you are billed on your Nebius AI Studio account.

For routed requests, i.e. when you authenticate via the Hub, you only pay the standard provider API rates. There’s no additional markup from us; we just pass through the provider costs directly. (In the future, we may establish revenue-sharing agreements with our provider partners.)

Important Note ‼️ PRO users get $2 worth of Inference credits every month. You can use them across providers. 🔥

Subscribe to the Hugging Face PRO plan to get access to Inference credits, ZeroGPU, Spaces Dev Mode, 20x higher limits, and more.

We also provide free inference with a small quota for our signed-in free users, but please upgrade to PRO if you can!



Feedback and next steps

We would love to get your feedback! Here’s a Hub discussion you can use: https://huggingface.co/spaces/huggingface/HuggingDiscussions/discussions/49


