LLMs have become essential tools for building software.
But for Apple developers, integrating them remains unnecessarily painful.
Developers building AI-powered apps typically take a hybrid approach,
adopting some combination of:
- Local models using Core ML or MLX for privacy and offline capability
- Cloud providers like OpenAI or Anthropic for frontier capabilities
- Apple’s Foundation Models as a system-level fallback
Each comes with different APIs, different requirements, different integration patterns.
It’s a lot, and it adds up quickly.
When I interviewed developers about building AI-powered apps,
friction with model integration came up immediately.
One developer put it bluntly:
I assumed I’d quickly use the demo for a test and maybe a quick and dirty build,
but instead wasted so much time.
Drove me nuts.
The cost to experiment is high,
which discourages developers from discovering that
local, open-source models might actually work great for their use case.
Today we’re announcing AnyLanguageModel,
a Swift package that provides a drop-in replacement for Apple’s Foundation Models framework
with support for multiple model providers.
Our goal is to reduce the friction of working with LLMs on Apple platforms
and make it easier to adopt open-source models that run locally.
The Solution
The core idea is simple:
Swap your import statement, keep the same API.
- import FoundationModels
+ import AnyLanguageModel
Here’s what that looks like in practice.
Start with Apple’s built-in model:
let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Explain quantum computing in a single sentence")
print(response.content)
Now try an open-source model running locally via MLX:
let model = MLXLanguageModel(modelId: "mlx-community/Qwen3-4B-4bit")
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Explain quantum computing in a single sentence")
print(response.content)
AnyLanguageModel supports a variety of providers:
- Apple Foundation Models: Native integration with Apple’s system model (macOS 26+ / iOS 26+)
- Core ML: Run converted models with Neural Engine acceleration
- MLX: Run quantized models efficiently on Apple Silicon
- llama.cpp: Load GGUF models via the llama.cpp backend
- Ollama: Connect to locally-served models via Ollama’s HTTP API
- OpenAI, Anthropic, Google Gemini: Cloud providers for comparison and fallback
- Hugging Face Inference Providers: Hundreds of cloud models powered by world-class inference providers
The main focus is on local models you can download from the Hugging Face Hub.
Cloud providers are included to lower the barrier to getting started and to provide a migration path.
Make it work, then make it right.
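Swapping in a cloud provider is just another model value behind the same session API. Here’s a minimal sketch using the Anthropic provider shown later in this post (the model identifier is one example; use whichever model you have access to):

let model = AnthropicLanguageModel(
    apiKey: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"]!, // assumes the key is set in your environment
    model: "claude-sonnet-4-5-20250929"
)
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Explain quantum computing in a single sentence")
print(response.content)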
Why Foundation Models as the Base API
When designing AnyLanguageModel, we faced a choice:
create a new abstraction that tries to capture everything,
or build on an existing API.
We chose the latter,
using Apple’s Foundation Models framework
as the template.
This might sound counterintuitive.
Why tie ourselves to Apple’s decisions?
A few reasons:
- Foundation Models is genuinely well-designed.
  It leverages Swift features like macros for an ergonomic developer experience,
  and its abstractions around sessions, tools, and generation map well to how LLMs actually work
  (see the guided-generation sketch below).
- It’s intentionally limited.
  Foundation Models represents something like a lowest common denominator for language model capabilities.
  Rather than seeing this as a weakness,
  we treat it as a stable foundation (hyuk hyuk).
  Every Swift developer targeting Apple platforms will encounter this API,
  so building on it directly means less conceptual overhead.
- It keeps us grounded.
  Each additional layer of abstraction takes you further from the problem you’re actually solving.
  Abstractions are powerful,
  but stack too many and they become a problem in themselves.
The result is that switching between providers requires minimal code changes,
and the core abstractions remain clean and predictable.
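As a taste of those ergonomics, here is roughly what guided generation looks like against the system model, using Apple’s @Generable and @Guide macros. This is a sketch of the Foundation Models surface that AnyLanguageModel takes as its template (bringing guided generation to every adapter is still on the roadmap; see “What’s Next” below), with a made-up Summary type for illustration:

import FoundationModels

// A type the model can generate directly; @Generable and @Guide are Apple's macros.
@Generable
struct Summary {
    @Guide(description: "A one-sentence summary")
    var sentence: String
}

let session = LanguageModelSession(model: SystemLanguageModel.default)
let response = try await session.respond(
    to: "Summarize the plot of Hamlet",
    generating: Summary.self
)
print(response.content.sentence)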
Package Traits: Include Only What You Need
One challenge with multi-backend libraries is dependency bloat.
If you only want to run MLX models,
you shouldn’t have to pull in llama.cpp and all its dependencies.
AnyLanguageModel uses Swift 6.1 package traits to solve this.
You opt in to only the backends you need:
dependencies: [
    .package(
        url: "https://github.com/mattt/AnyLanguageModel.git",
        from: "0.4.0",
        traits: ["MLX"]
    )
]
Available traits include CoreML, MLX, and Llama (for llama.cpp / llama.swift).
By default, no heavy dependencies are included.
You get the base API plus cloud providers,
which only require standard URLSession networking.
For Xcode projects (which don’t yet support trait declarations directly),
you can create a small internal Swift package that depends on AnyLanguageModel with the traits you need,
then add that package as a local dependency.
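For example, the wrapper’s manifest might look something like this (a sketch: the package name “ModelKit”, platform versions, and chosen traits are placeholders to adapt to your project):

// swift-tools-version: 6.1
// A hypothetical wrapper package ("ModelKit") that exists only to declare
// AnyLanguageModel with the traits your app needs.
import PackageDescription

let package = Package(
    name: "ModelKit",
    platforms: [.macOS(.v15), .iOS(.v18)], // placeholder; match AnyLanguageModel's requirements
    products: [
        .library(name: "ModelKit", targets: ["ModelKit"])
    ],
    dependencies: [
        .package(
            url: "https://github.com/mattt/AnyLanguageModel.git",
            from: "0.4.0",
            traits: ["MLX"]
        )
    ],
    targets: [
        // Sources/ModelKit needs at least one source file, e.g. a file that re-exports the library.
        .target(
            name: "ModelKit",
            dependencies: [
                .product(name: "AnyLanguageModel", package: "AnyLanguageModel")
            ]
        )
    ]
)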
The README has detailed instructions.
Image Support (and API Design Trade-offs)
Vision-language models are incredibly capable and widely used.
They can describe images,
extract text from screenshots,
analyze charts,
and answer questions on visual content.
Unfortunately,
Apple’s Foundation Models framework doesn’t currently support sending images with prompts.
Building on an existing API means accepting its constraints.
Apple will likely add image support in a future release (iOS 27, perhaps?),
but vision-language models are too useful to wait for.
So we’ve extended beyond what Foundation Models offers today.
Here’s an example sending an image to Claude:
let model = AnthropicLanguageModel(
    apiKey: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"]!,
    model: "claude-sonnet-4-5-20250929"
)
let session = LanguageModelSession(model: model)
let response = try await session.respond(
    to: "What's in this image?",
    image: .init(url: URL(fileURLWithPath: "/path/to/image.png"))
)
We’re taking a calculated risk here;
we might design something that conflicts with Apple’s eventual implementation.
But that’s what deprecation warnings are for.
Sometimes you have to write the API for the framework that doesn’t exist yet.
Try It Out: chat-ui-swift
To see AnyLanguageModel in action,
check out chat-ui-swift,
a SwiftUI chat application that demonstrates the library’s capabilities.
The app includes:
- Apple Intelligence integration via Foundation Models (macOS 26+)
- Hugging Face OAuth authentication for accessing gated models
- Streaming responses
- Chat persistence
It’s meant as a starting point:
Fork it, extend it, swap in different models.
See how the pieces fit together and adapt it to your needs.
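If you want to wire up streaming yourself, the shape should be familiar from Foundation Models. A minimal sketch, assuming AnyLanguageModel mirrors the framework’s streamResponse(to:) method:

let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model)

// Each iteration yields a progressively more complete snapshot of the response.
for try await partial in session.streamResponse(to: "Write a haiku about Swift concurrency") {
    print(partial)
}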
What’s Next
AnyLanguageModel is currently pre-1.0.
The core API is stable,
but we’re actively working on bringing the full feature set of Foundation Models to all adapters, namely:
- Tool calling across all providers
- MCP integration for tools and elicitations
- Guided generation for structured outputs
- Performance optimizations for local inference
This library is the first step toward something bigger.
A unified inference API provides the scaffolding needed to build seamless agentic workflows on Apple platforms:
applications where models can use tools, access system resources, and accomplish complex tasks.
More on that soon. 🤫
Get Involved
We’d love your help making this better:
- Try it out: Build something, kick the tires
- Share your experiences: What works? What’s frustrating? We want to hear about the challenges you face integrating AI into your apps
- Open issues: Feature requests, bug reports, questions
- Contribute: PRs are welcome
Links
We’re excited to see what you build 🦾

