Google Cloud C4 Brings a 70% TCO improvement on GPT OSS with Intel and Hugging Face




Intel and Hugging Face collaborated to show the real-world value of upgrading to Google’s latest C4 Virtual Machine (VM) running on Intel® Xeon® 6 processors (code-named Granite Rapids, GNR). We specifically wanted to benchmark improvements in the text generation performance of OpenAI’s GPT OSS Large Language Model (LLM).

The results are in, and they are impressive: a 1.7x improvement in Total Cost of Ownership (TCO) over the previous-generation Google Cloud C3 VM instances. The Google Cloud C4 VM instance also delivered:

  • 1.4x to 1.7x improvement in generation throughput per vCPU per dollar
  • A lower price per hour than the C3 VM



Introduction

GPT OSS is the popular name for an open-source Mixture of Experts (MoE) model released by OpenAI. An MoE model is a deep neural network architecture that uses specialized “expert” sub-networks and a “gating network” that decides which experts to use for a given input. MoE models let you scale model capacity efficiently without linearly scaling compute costs. They also allow for specialization, where different “experts” learn different skills, letting the model adapt to diverse data distributions.

Even with a very large parameter count, only a small subset of experts is activated per token, making CPU inference viable.
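To make this concrete, here is a minimal PyTorch sketch of top-k gating with toy sizes (the hidden size, expert count, and k below are illustrative choices, not the actual GPT OSS configuration): the gating network scores every expert, but only the top-k are activated per token.

import torch

hidden_size, num_experts, top_k = 512, 32, 4
tokens = torch.randn(8, hidden_size)                 # 8 token embeddings
gate = torch.nn.Linear(hidden_size, num_experts)     # the "gating network"

scores = gate(tokens)                                # one score per expert, per token
topk_scores, topk_idx = scores.topk(top_k, dim=-1)   # keep only the top-k experts
weights = torch.softmax(topk_scores, dim=-1)         # mixing weights for the chosen experts
print(topk_idx[0].tolist())                          # the 4 experts activated for the first token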

Intel and Hugging Face collaborated to merge an expert execution optimization (PR #40304) into transformers, eliminating the redundant computation where every expert processed all tokens. This optimization makes each expert run only on the tokens it is routed to, removing wasted FLOPs and improving utilization.

[Figure: GPT OSS expert execution optimization]
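The toy sketch below illustrates the direction of that change, assuming simple top-1 routing and made-up layer sizes (it is not the actual transformers implementation): each expert gathers and processes only the tokens routed to it, instead of the whole batch.

import torch

num_experts, hidden = 4, 64
tokens = torch.randn(16, hidden)                          # 16 tokens in the batch
experts = [torch.nn.Linear(hidden, hidden) for _ in range(num_experts)]
routing = torch.randint(0, num_experts, (16,))            # expert assigned to each token

with torch.no_grad():
    output = torch.zeros_like(tokens)
    for e, expert in enumerate(experts):
        idx = (routing == e).nonzero(as_tuple=True)[0]    # tokens routed to expert e
        if idx.numel():                                   # experts with no tokens do no work
            output[idx] = expert(tokens[idx])             # compute only on routed tokens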



Benchmark Scope & Hardware

We benchmarked GPT OSS under a controlled, repeatable generation workload to isolate architectural differences (GCP C4 VMs on Intel Xeon 6 processors (GNR) vs GCP C3 VMs on 4th Gen Intel Xeon processors (SPR)) and MoE execution efficiency. The focus is steady‑state decoding (per‑token latency) and end‑to‑end normalized throughput with increasing batch size while keeping sequence lengths fixed. All runs use a static KV cache and SDPA attention for determinism.



Configuration Summary

  • Model: unsloth/gpt-oss-120b-BF16
  • Precision: bfloat16
  • Task: Text generation
  • Input length: 1024 tokens (left‑padded)
  • Output length: 1024 tokens
  • Batch sizes: 1, 2, 4, 8, 16, 32, 64
  • Enabled features:
    • Static KV cache
    • SDPA attention backend
  • Reported metrics:
    • Throughput (Total generated tokens per second aggregated over the batch)



Hardware Under Test

Instance   Architecture                          vCPUs
C3         4th Gen Intel Xeon processor (SPR)    176
C4         Intel Xeon 6 processor (GNR)          144



Create instance



C3

Visit the Google Cloud Console and click Create a VM under your project. Follow the steps below to create a 176 vCPU instance.

  1. Pick C3 in the Machine configuration tab and specify the Machine type as c3-standard-176. You also need to set the CPU platform and turn on all-core turbo to make performance more stable.
  2. Configure the OS and storage tab.
  3. Keep the other configurations at their defaults.
  4. Click the Create button.



C4

Visit the Google Cloud Console and click Create a VM under your project. Follow the steps below to create a 144 vCPU instance.

  1. Pick C4 in the Machine configuration tab and specify the Machine type as c4-standard-144. You can also set the CPU platform and turn on all-core turbo to make performance more stable.
  2. Configure the OS and storage tab the same way as for C3.
  3. Keep the other configurations at their defaults.
  4. Click the Create button.



Set up the environment

Log in to the instance with SSH and then install Docker. Follow the steps below to set up the environment. For reproducibility, we list the versions and commits we are using in the commands.

  1. $ git clone https://github.com/huggingface/transformers.git
  2. $ cd transformers/
  3. $ git checkout 26b65fb5168f324277b85c558ef8209bfceae1fe
  4. $ cd docker/transformers-intel-cpu/
  5. $ sudo docker build . -t <your_image_tag>
  6. $ sudo docker run -it --rm --privileged -v /home/:/workspace <your_image_tag> /bin/bash

Now that we are inside the container, run the following steps.

  1. $ pip install git+https://github.com/huggingface/transformers.git@26b65fb5168f324277b85c558ef8209bfceae1fe
  2. $ pip install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
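As an optional sanity check (our own addition, not part of the original setup), the following short Python snippet confirms the pinned versions are in place before benchmarking:

import torch
import transformers

print("transformers:", transformers.__version__)  # built from the pinned commit above
print("torch:", torch.__version__)                # expect the 2.8.0 CPU build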



Benchmark Procedure

For each batch size, we:

  1. Construct a fixed-length 1024‑token left‑padded batch.
  2. Run a single warm‑up round.
  3. Set max_new_tokens=1024, measure the total latency, and compute throughput = (OUTPUT_TOKENS * batch_size) / total_latency.

Save the following script as benchmark.py and run it with numactl -l python benchmark.py.

import os
import time
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

INPUT_TOKENS = 1024
OUTPUT_TOKENS = 1024

def get_inputs(tokenizer, batch_size):
    # Select batch_size texts, starting from the first sample that is at least
    # INPUT_TOKENS long, then left-pad/truncate them into a fixed-length batch.
    dataset = load_dataset("ola13/small-the_pile", split="train")
    tokenizer.padding_side = "left"
    selected_texts = []
    for sample in dataset:
        input_ids = tokenizer(sample["text"], return_tensors="pt").input_ids
        if len(selected_texts) == 0 and input_ids.shape[-1] >= INPUT_TOKENS:
            selected_texts.append(sample["text"])
        elif len(selected_texts) > 0:
            selected_texts.append(sample["text"])
        if len(selected_texts) == batch_size:
            break

    return tokenizer(selected_texts, max_length=INPUT_TOKENS, padding="max_length", truncation=True, return_tensors="pt")

def run_generate(model, inputs, generation_config):
    inputs["generation_config"] = generation_config
    model.generate(**inputs)  # warm-up round, not timed
    pre = time.time()
    model.generate(**inputs)  # timed round
    latency = time.time() - pre
    return latency

def benchmark(model, tokenizer, batch_size, generation_config):
    inputs = get_inputs(tokenizer, batch_size)
    # Measure prefill latency by generating a single token.
    generation_config.max_new_tokens = 1
    generation_config.min_new_tokens = 1
    prefill_latency = run_generate(model, inputs, generation_config)
    # Measure total latency for the full 1024-token generation.
    generation_config.max_new_tokens = OUTPUT_TOKENS
    generation_config.min_new_tokens = OUTPUT_TOKENS
    total_latency = run_generate(model, inputs, generation_config)
    # Per-token decoding latency excludes the prefill step.
    decoding_latency = (total_latency - prefill_latency) / (OUTPUT_TOKENS - 1)
    throughput = OUTPUT_TOKENS * batch_size / total_latency

    return prefill_latency, decoding_latency, throughput


if __name__ == "__main__":
    model_id = "unsloth/gpt-oss-120b-BF16"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model_kwargs = {"dtype": torch.bfloat16}
    model = AutoModelForCausalLM.from_pretrained(model_id, **model_kwargs)
    model.config._attn_implementation = "sdpa"          # SDPA attention backend
    generation_config = model.generation_config
    generation_config.do_sample = False                 # greedy decoding
    generation_config.cache_implementation = "static"   # static KV cache

    for batch_size in [1, 2, 4, 8, 16, 32, 64]:
        print(f"---------- Run generation with batch size = {batch_size} ----------", flush=True)
        prefill_latency, decoding_latency, throughput = benchmark(model, tokenizer, batch_size, generation_config)
        print(f"throughput = {throughput}", flush=True)



Results



Normalized Throughput per vCPU

Across batch sizes up to 64, the Intel Xeon 6 processor‑powered C4 consistently outperforms C3, delivering 1.4x to 1.7x the throughput per vCPU. The formula is:

$$\text{normalized\_throughput\_per\_vCPU} = \frac{\text{throughput\_C4} / \text{vCPUs\_C4}}{\text{throughput\_C3} / \text{vCPUs\_C3}}$$

[Figure: GPT OSS normalized throughput per vCPU, C4 vs C3]
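For reference, here is a small helper (our own naming, not part of benchmark.py) that computes the same ratio from the throughput values the script prints on each instance:

def normalized_throughput_per_vcpu(throughput_c4, vcpus_c4, throughput_c3, vcpus_c3):
    # Per-vCPU throughput of C4 relative to C3, as defined in the formula above.
    return (throughput_c4 / vcpus_c4) / (throughput_c3 / vcpus_c3)

# Example: ratio = normalized_throughput_per_vcpu(t_c4, 144, t_c3, 176)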



Cost & TCO

At batch size 64, C4 provides 1.7× the per‑vCPU throughput of C3; with near parity in price per vCPU (hourly cost scales roughly linearly with vCPU count), this yields a 1.7× TCO advantage: C3 would require about 1.7× the spend for the same generated token volume.

Per‑vCPU throughput ratio:
$$\frac{\text{throughput\_C4} / \text{vCPUs\_C4}}{\text{throughput\_C3} / \text{vCPUs\_C3}} = 1.7 \;\Rightarrow\; \frac{\text{TCO\_C3}}{\text{TCO\_C4}} \approx 1.7$$

[Figure: GPT OSS throughput per dollar, C4 vs C3]
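The TCO comparison boils down to cost per generated token. A minimal sketch, assuming you plug in your region's hourly prices and the measured throughputs (none of these values are hard-coded from this post):

def cost_per_million_tokens(price_per_hour_usd, tokens_per_second):
    # Hourly instance price divided by hourly token output, scaled to 1M tokens.
    tokens_per_hour = tokens_per_second * 3600
    return price_per_hour_usd / tokens_per_hour * 1_000_000

# tco_advantage = cost_per_million_tokens(c3_price, c3_throughput) / \
#                 cost_per_million_tokens(c4_price, c4_throughput)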



Conclusion

Google Cloud C4 VMs powered by Intel Xeon 6 processors (GNR) provide both impressive performance gains and better cost efficiency for large MoE inference compared with the previous-generation Google Cloud C3 VMs (powered by 4th Gen Intel Xeon processors). For GPT OSS MoE inference, we observed higher throughput, lower latency, and reduced cost. These results underline that, thanks to targeted framework optimizations from Intel and Hugging Face, large MoE models can be served efficiently on next-generation general-purpose CPUs.


