We’re excited to share that OVHcloud is now an Inference Provider on the Hugging Face Hub! This partnership brings serverless inference directly to model pages, broadening the set of providers you can use to run AI models.
With this launch, accessing popular open-weight models like gpt-oss, Qwen3, DeepSeek R1, and Llama has never been easier. Users can explore OVHcloud’s offerings on the Hub at OVHcloud on Hugging Face and check out trending supported models at Trending Models on Hugging Face.
OVHcloud AI Endpoints provide a fully managed, serverless solution that enables users to access leading AI models through simple API calls. Starting at a competitive pay-per-token rate of €0.04 per million tokens, this service is designed for scalability and flexibility.
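To get a feel for pay-per-token pricing, here is a small sketch that estimates the cost of a single request from its token counts. It uses the €0.04 per million tokens entry rate quoted above; actual per-model rates vary, so the rate is kept as a parameter:

```python
# Sketch: estimate request cost under pay-per-token pricing.
# The €0.04 per million tokens figure is the entry rate quoted in the text;
# real per-model rates differ, so pass the rate explicitly.

def estimate_cost_eur(prompt_tokens: int, completion_tokens: int,
                      rate_per_million: float = 0.04) -> float:
    """Cost in euros for one request at a flat per-token rate."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * rate_per_million

# A 2,000-token prompt with a 500-token answer costs a fraction of a cent:
print(f"{estimate_cost_eur(2_000, 500):.6f}")  # → 0.000100
```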
Security and efficiency are paramount, with infrastructure located in European data centers to ensure data sovereignty and reduced latency for users in Europe. Additionally, OVHcloud supports advanced features such as structured outputs, function calling, and multimodal capabilities, accommodating both text and image processing needs.
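Structured outputs are typically requested by attaching a JSON Schema to the chat request, as in OpenAI-compatible APIs. The sketch below shows what such a request body could look like; the exact `response_format` field names are an assumption borrowed from OpenAI-compatible endpoints, so check OVHcloud’s documentation before relying on them:

```python
import json

# Sketch of an OpenAI-style chat request asking for structured output.
# The "json_schema" response_format shape is assumed from OpenAI-compatible
# APIs; verify the exact field names against OVHcloud's docs.
request_body = {
    "model": "openai/gpt-oss-120b",
    "messages": [
        {"role": "user", "content": "Name the capital of France."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "capital_answer",
            "schema": {
                "type": "object",
                "properties": {
                    "country": {"type": "string"},
                    "capital": {"type": "string"},
                },
                "required": ["country", "capital"],
            },
        },
    },
}

print(json.dumps(request_body, indent=2))
```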
The infrastructure is built for production, with time to first token under 200 milliseconds. This speed is ideal for interactive applications and agentic workflows, and both text generation and embedding models are supported. For more details on OVHcloud’s platform and infrastructure, check out their public catalog here.
For additional guidance, refer to OVHcloud’s dedicated documentation page on using it as an Inference Provider and see the list of supported models.
How it Works
In the Website UI
Within your user account settings on Hugging Face, you can enhance your experience by:
- Setting your API keys for the providers you’ve signed up with. If you don’t set a custom key, your requests will automatically route through Hugging Face.
- Ordering providers by preference, which applies to the widget and code snippets on the model pages.
There are two operational modes when calling Inference Providers:
- Custom key: This mode allows you to make requests directly to the inference provider using your API key.
- Routed by Hugging Face: This mode eliminates the need for a key from the provider, with the billing applied directly to your Hugging Face account.
Model pages showcase compatible third-party inference providers, sorted by user preference to facilitate informed choices.
From the Client SDKs
Using Python with huggingface_hub
Here’s a quick example using OpenAI’s gpt-oss-120b model with OVHcloud as the inference provider. Install the client with `pip install huggingface_hub`, then set either a Hugging Face token (for routed requests) or your own OVHcloud API key (for direct requests):
```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-120b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```
From JavaScript using @huggingface/inference
```javascript
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const chatCompletion = await client.chatCompletion({
  model: "openai/gpt-oss-120b:ovhcloud",
  messages: [
    {
      role: "user",
      content: "What is the capital of France?",
    },
  ],
});

console.log(chatCompletion.choices[0].message);
```
Billing
Billing works as follows:
- For direct requests (using your own inference provider key), charges are billed to your account with that provider, e.g. your OVHcloud account.
- For routed requests (authenticated via the Hugging Face Hub), charges appear on your Hugging Face account at the provider’s standard API rates, with no additional markup from Hugging Face. Revenue-sharing agreements with providers may be established in the future.
Important Note ‼️ PRO users receive $2 worth of inference credits monthly, usable across providers. Upgrading to the Hugging Face PRO plan unlocks further benefits including ZeroGPU, Spaces Dev Mode, and 20x higher limits.
Free inference is available within a small quota for signed-in free users, but we encourage considering an upgrade to PRO for an enhanced experience!
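As a rough sense of scale for the monthly PRO credits, the following sketch treats the $2 as roughly €2 and applies the €0.04 per million tokens entry rate from above; both are simplifications for illustration only:

```python
# Rough scale of the monthly PRO credit, ignoring currency conversion
# and assuming the flat €0.04 per million tokens entry rate (real
# per-model rates vary).
credit_eur = 2.00
rate_per_million_eur = 0.04

tokens_covered = credit_eur / rate_per_million_eur * 1_000_000
print(f"{tokens_covered:,.0f} tokens")  # → 50,000,000 tokens
```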
Feedback and Next Steps
Your feedback is invaluable! Share your thoughts and comments here.