This week at Dell Tech World, we announced the new edition of Dell Enterprise Hub, with a complete suite of models and applications that make it easy to build AI solutions running on premises with Dell AI servers and AI PCs.
Models Ready for Action
When you go to the Dell Enterprise Hub today, you will find many of the most popular models, like Meta Llama 4 Maverick, DeepSeek R1, or Google Gemma 3, available for deployment and training in just a few clicks.
But what you get is far more than a model: it's a fully tested container optimized for specific Dell AI Server platforms, with easy instructions to deploy on-premises using Docker and Kubernetes.

Meta Llama 4 Maverick can be deployed on NVIDIA H200 or AMD MI300X Dell PowerEdge servers
We work continuously with Dell CTIO and Engineering teams to make the latest and greatest models available, tested, and optimized for Dell AI Server platforms as quickly as possible: Llama 4 models were available on the Dell Enterprise Hub within one hour of their public release by Meta!
Introducing AI Applications
The Dell Enterprise Hub now features ready-to-deploy AI Applications!
If models are engines, then applications are the cars that make them useful, so you can actually go places. With the new Application Catalog, you can build powerful applications that run entirely on-premises for your employees and use your internal data and services.
The new Application Catalog makes it easy to deploy leading open-source applications within your private network, including OpenWebUI and AnythingLLM.

OpenWebUI makes it easy to deploy on-premises chatbot assistants that connect to your internal data and services via MCP, so you can build agentic experiences that search the web and retrieve internal data from vector databases and storage for RAG use cases.
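The retrieval step behind those RAG use cases can be sketched with plain cosine similarity over embeddings. The toy vectors and document names below are made up for illustration; a real deployment would compute embeddings with a model and store them in a vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy document "embeddings" -- a real RAG stack would produce these
# with an embedding model and keep them in a vector database.
docs = {
    "vacation policy": [0.9, 0.1, 0.0],
    "expense reports": [0.1, 0.8, 0.2],
    "server runbook":  [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the names of the k documents most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # → ['vacation policy']
```

The retrieved documents would then be injected into the assistant's context before the model answers, which is the core of RAG.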

AnythingLLM makes it easy to build powerful agentic assistants connecting to multiple MCP servers, so you can hook up your internal systems and even external services. It includes features to enable multiple models, work with images and documents, and set role-based access controls for your internal users.

These applications are easy to deploy using the provided, customizable Helm charts, so your MCP servers are registered from the get-go.
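As an illustration of that customization, registering MCP servers at install time could look like a small values override. The key names and URLs below are hypothetical placeholders, not the actual schema of the provided charts:

```yaml
# Hypothetical values override -- the real key names come from the
# Helm chart shipped on Dell Enterprise Hub.
mcpServers:
  - name: internal-search
    url: http://mcp-search.internal:8000
  - name: powerscale-files
    url: http://mcp-powerscale.internal:8000
```

A file like this would then be passed to the chart at deploy time, e.g. `helm install assistant <chart> -f values.yaml`.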

Powered by NVIDIA, AMD and Intel
Dell Enterprise Hub is the only platform in the world that provides ready-to-use model deployment solutions for the latest AI accelerator hardware:
- NVIDIA H100 and H200 GPU powered Dell platforms
- AMD MI300X powered Dell platforms
- Intel Gaudi 3 powered Dell platforms

We work directly with Dell, NVIDIA, AMD, and Intel so that when you deploy a container on your system, it's all configured and ready to go, fully tested and benchmarked to run with the best performance out of the box on your Dell AI Server platform.
On-Device Models for Dell AI PC
The new Dell Enterprise Hub now provides support for models that run on-device on Dell AI PCs, in addition to AI Servers!

These models enable on-device speech transcription (OpenAI Whisper), chat assistants (Microsoft Phi and Qwen 2.5), image upscaling, and embedding generation.
To deploy a model, you can follow specific instructions for the Dell AI PC of your choice, powered by Intel or Qualcomm NPUs, using the new Dell Pro AI Studio. Coupled with PC fleet management systems like Microsoft Intune, it's a complete solution for IT organizations to equip employees with on-device AI capabilities.
Now with CLI and Python SDK
Dell Enterprise Hub offers an online portal to AI capabilities for Dell AI Server platforms and AI PCs. But what if you want to work directly from your development environment?
Introducing the new dell-ai open-source library, with a Python SDK and CLI, so you can use Dell Enterprise Hub within your environment, directly from your terminal or code: just `pip install dell-ai`
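A minimal sketch of what working from code could look like; the client and method names below are assumptions for illustration, so check the dell-ai documentation for the actual API:

```python
# Hypothetical usage sketch -- identifiers here are illustrative,
# not the confirmed dell-ai API.
from dell_ai import DellAIClient  # assumed entry point

client = DellAIClient()  # assumed to authenticate with your Hugging Face token

# Assumed method: browse the models available on Dell Enterprise Hub.
for model in client.list_models():
    print(model)
```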

Wrapping up
With models and applications for AI Servers and AI PCs, easily installable using Docker, Kubernetes, and Dell Pro AI Studio, Dell Enterprise Hub is a complete toolkit to deploy Gen AI applications in the enterprise, fully secure and on-premises.
As a Dell customer, this means that very quickly, within an hour instead of weeks, you can:
- roll out an in-network chat assistant powered by the latest open LLMs, and connect it to your internal storage systems (e.g., Dell PowerScale) using MCP, all in an air-gapped environment
- give access to complex agentic systems, with granular access controls and SSO, that can work with internal text, code, images, audio, and documents, and access the web for current context
- set up employees with on-device, private transcription powered by a fleet of Dell AI PCs, in a fully managed way
If you are using Dell Enterprise Hub today, we would love to hear from you in the comments!
