
Hello, dear readers. Happy belated Thanksgiving and Black Friday!
This year has felt like living inside a permanent DevDay. Every week, some lab drops a new model, a new agent framework, or a new "this changes everything" demo. It's overwhelming. But it's also the first year I've felt like AI is finally diversifying — not just one or two frontier models in the cloud, but a whole ecosystem: open and closed, giant and tiny, Western and Chinese, cloud and local.
So for this Thanksgiving edition, here's what I'm genuinely thankful for in AI in 2025 — the releases that feel like they'll matter in 12–24 months, not just during this week's hype cycle.
1. OpenAI kept shipping strong: GPT-5, GPT-5.1, Atlas, Sora 2 and open weights
As the company that undeniably birthed the "generative AI" era with its viral hit product ChatGPT in late 2022, OpenAI arguably had among the hardest tasks of any AI company in 2025: continue its growth trajectory even as well-funded competitors like Google, with its Gemini models, and startups like Anthropic fielded their own highly competitive offerings.
Thankfully, OpenAI rose to the challenge and then some. Its headline act was GPT-5, unveiled in August as the next frontier reasoning model, followed in November by GPT-5.1 with new Instant and Thinking variants that dynamically adjust how much "thinking time" they spend per task.
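For developers, that adjustable deliberation surfaces as a reasoning-effort control in the API. Below is a minimal sketch using OpenAI's Python SDK and its Responses API; the model ID and the particular effort values are assumptions to check against OpenAI's current documentation.

```python
# Minimal sketch: asking GPT-5.1 for more or less "thinking time" via the
# OpenAI Python SDK's Responses API. Model ID and effort values are
# assumptions; verify against OpenAI's current docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5.1",
    # Lower effort behaves more like the Instant variant; higher effort
    # buys Thinking-style deliberation on hard problems.
    reasoning={"effort": "high"},
    input="Prove that the sum of two odd integers is even.",
)
print(response.output_text)
```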
In practice, GPT-5's launch was bumpy — VentureBeat documented early math and coding failures and a cooler-than-expected community response in "OpenAI's GPT-5 rollout is not going smoothly" — but the company quickly course-corrected based on user feedback, and as a daily user of the model, I'm personally pleased and impressed with it.
At the same time, enterprises actually using the models are reporting solid gains. Zendesk, for example, says GPT-5-powered agents now resolve more than half of customer tickets, with some customers seeing 80–90% resolution rates. That's the quiet story: these models may not always impress the chattering classes on X, but they're starting to move real KPIs.
On the tooling side, OpenAI finally gave developers a serious AI engineer in GPT-5.1-Codex-Max, a new coding model that can run long, agentic workflows and is already the default in OpenAI's Codex environment. VentureBeat covered it in detail in "OpenAI debuts GPT-5.1-Codex-Max coding model and it already completed a 24-hour task internally."
Then there's ChatGPT Atlas, a full browser with ChatGPT baked into the chrome itself — sidebar summaries, on-page analysis, and search tightly integrated into regular browsing. It's the clearest sign yet that "assistant" and "browser" are on a collision course.
On the media side, Sora 2 turned the original Sora video demo into a full video-and-audio model with better physics, synchronized sound and dialogue, and more control over style and shot structure, plus a dedicated Sora app with a full-fledged social networking component that lets any user carry their own TV network in their pocket.
Finally — and perhaps most symbolically — OpenAI released gpt-oss-120B and gpt-oss-20B, open-weight MoE reasoning models under an Apache 2.0-style license. Whatever you think of their quality (and early open-source users have been loud about their complaints), this is the first time since GPT-2 that OpenAI has put serious weights into the public commons.
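Permissively licensed weights mean you can actually run these yourself. Here's a minimal local-inference sketch using Hugging Face transformers; the Hub repo ID and the hardware expectations are assumptions worth verifying.

```python
# Minimal sketch: running the smaller open-weight model locally with Hugging
# Face transformers. The Hub repo ID "openai/gpt-oss-20b" is an assumption;
# verify it, and expect to need a recent GPU with roughly 16 GB of memory.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",   # place layers on whatever GPU/CPU is available
    torch_dtype="auto",  # keep the checkpoint's native precision
)

messages = [{"role": "user", "content": "In one paragraph, what is MoE routing?"}]
out = generate(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])  # last message is the reply
```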
2. China’s open-source wave goes mainstream
If 2023–24 was about Llama and Mistral, 2025 belongs to China’s open-weight ecosystem.
A study from MIT and Hugging Face found that China now narrowly leads the U.S. in global open-model downloads, largely thanks to DeepSeek and Alibaba's Qwen family.
Highlights:
- DeepSeek-R1 dropped in January as an open-source reasoning model rivaling OpenAI's o1, with MIT-licensed weights and a family of distilled smaller models. VentureBeat has followed the story from its release to its cybersecurity impact to performance-tuned R1 variants.
- Kimi K2 Thinking from Moonshot, a "thinking" open-source model that reasons step by step with tools, very much in the o1/R1 mold, and is positioned as the best open reasoning model in the world to date.
- Z.ai shipped GLM-4.5 and GLM-4.5-Air as "agentic" models, open-sourcing base and hybrid reasoning variants on GitHub.
- Baidu's ERNIE 4.5 family arrived as a fully open-sourced, multimodal MoE suite under Apache 2.0, including a 0.3B dense model and visual "Thinking" variants focused on charts, STEM, and tool use.
- Alibaba's Qwen3 line — including Qwen3-Coder, large reasoning models, and the Qwen3-VL series released over the summer and fall of 2025 — continues to set a high bar for open weights in coding, translation, and multimodal reasoning, leading me to declare this past summer the "summer of Qwen."
VentureBeat has been tracking these shifts, including Chinese math and reasoning models like Light-R1-32B and Weibo’s tiny VibeThinker-1.5B, which beat DeepSeek baselines on shoestring training budgets.
If you care about open ecosystems or on-premises deployment, this is the year China's open-weight scene stopped being a curiosity and became a serious alternative.
3. Small and native models grow up
Another thing I'm thankful for: we're finally getting good small models, not just toys.
Liquid AI spent 2025 pushing its Liquid Foundation Models (LFM2) and LFM2-VL vision-language variants, designed from day one for low-latency, device-aware deployments — edge boxes, robots, and constrained servers, not just giant clusters. The newer LFM2-VL-3B targets embedded robotics and industrial autonomy, with demos planned at ROSCon.
On the big-tech side, Google's Gemma 3 line made a strong case that "tiny" can still be capable. Gemma 3 spans from 270M parameters up through 27B, all with open weights and multimodal support in the larger variants.
The standout is Gemma 3 270M, a compact model purpose-built for fine-tuning and structured text tasks — think custom formatters, routers, and watchdogs — covered both in Google's developer blog and in community discussions in local-LLM circles.
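To make the fine-tuning pitch concrete, here's a minimal sketch of turning the 270M model into a single-purpose ticket router with TRL's SFTTrainer; the repo ID, the dataset file, and the hyperparameters are illustrative assumptions, not a tested recipe.

```python
# Minimal sketch: fine-tuning Gemma 3 270M into a ticket router with TRL's
# SFTTrainer. Repo ID, dataset file, and hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical JSONL file with one {"text": "..."} training example per line.
dataset = load_dataset("json", data_files="tickets.jsonl", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",  # assumed Hub repo ID; confirm before use
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-270m-ticket-router", num_train_epochs=1),
)
trainer.train()
```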
These models may never trend on X, but they're exactly what you want for privacy-sensitive workloads, offline workflows, thin-client devices, and "agent swarms" where you don't want every tool call hitting a giant frontier LLM.
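Here's a minimal sketch of that last pattern, assuming a small Gemma model for triage and a hosted frontier model for escalation; both model choices and the triage prompt are illustrative.

```python
# Minimal sketch of the "small model first" pattern: a tiny local model
# triages each request, and only the hard ones get escalated to a hosted
# frontier model. Both model choices here are illustrative assumptions.
from openai import OpenAI
from transformers import pipeline

local = pipeline("text-generation", model="google/gemma-3-270m-it")  # assumed ID
frontier = OpenAI()

def answer(prompt: str) -> str:
    triage = local(
        "Reply YES or NO only. Does this request need deep reasoning?\n" + prompt,
        max_new_tokens=3,
        return_full_text=False,  # only the model's reply, not the echoed prompt
    )[0]["generated_text"]
    if "YES" in triage.upper():
        # Escalate: the hosted frontier model handles the hard cases.
        return frontier.responses.create(model="gpt-5.1", input=prompt).output_text
    # Cheap path: the 270M model answers locally.
    return local(prompt, max_new_tokens=200, return_full_text=False)[0]["generated_text"]
```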
4. Meta + Midjourney: aesthetics as a service
One of the stranger twists this year: Meta partnered with Midjourney instead of simply trying to beat it.
In August, Meta announced a deal to license Midjourney’s “aesthetic technology” — its image and video generation stack — and integrate it into Meta’s future models and products, from Facebook and Instagram feeds to Meta AI features.
VentureBeat covered the partnership in "Meta is partnering with Midjourney and will license its technology for future models and products," raising the obvious question: does this slow or reshape Midjourney's own API roadmap? We're still awaiting a definitive answer, but Midjourney's stated plans for an API release have yet to materialize, suggesting that it has.
For creators and brands, though, the immediate implication is simple: Midjourney-grade visuals start to show up in mainstream social tools instead of being locked away in a Discord bot. That could normalize higher-quality AI art for a much wider audience — and force rivals like OpenAI, Google, and Black Forest Labs to keep raising the bar.
5. Google’s Gemini 3 and Nano Banana Pro
Google tried to answer GPT-5 with Gemini 3, billed as its most capable model yet, with better reasoning, coding, and multimodal understanding, plus a new Deep Think mode for slow, hard problems.
VentureBeat’s coverage, “Google unveils Gemini 3 claiming the lead in math, science, multimodal and agentic AI,” framed it as a direct shot at frontier benchmarks and agentic workflows.
But the surprise hit is Nano Banana Pro (Gemini 3 Pro Image), Google's newest flagship image generator. It focuses on infographics, diagrams, multi-subject scenes, and multilingual text that actually renders legibly at 2K and 4K resolutions.
In the world of enterprise AI — where charts, product schematics, and "explain this system visually" images matter more than fantasy dragons — that's a big deal.
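If you want to try it from code, here's a minimal sketch using Google's google-genai Python SDK; the model ID is an assumption based on the "Gemini 3 Pro Image" branding and should be verified against Google's docs.

```python
# Minimal sketch: generating an infographic-style image with Google's
# google-genai Python SDK. The model ID "gemini-3-pro-image-preview" is an
# assumption; confirm the real ID in Google's documentation.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents="A labeled 2K infographic of a retrieval-augmented generation pipeline.",
)

for part in response.candidates[0].content.parts:
    if part.inline_data:  # image bytes come back as inline data parts
        with open("infographic.png", "wb") as f:
            f.write(part.inline_data.data)
```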
6. Wild cards I'm keeping an eye on
A few more releases I'm thankful for, even if they don't fit neatly into one bucket:
- Black Forest Labs' Flux.2 image models, which launched just this week with ambitions to challenge both Nano Banana Pro and Midjourney on quality and control. VentureBeat dug into the details in "Black Forest Labs launches Flux.2 AI image models to challenge Nano Banana Pro and Midjourney."
- Anthropic's Claude Opus 4.5, a new flagship that aims for cheaper, more capable coding and long-horizon task execution, covered in "Anthropic's Claude Opus 4.5 is here: Cheaper AI, infinite chats, and coding skills that beat humans."
- A steady drumbeat of open math/reasoning models — from Light-R1 to VibeThinker and others — that show you don't need $100M training runs to move the needle.
Last thought (for now)
If 2024 was the year of "one big model in the cloud," 2025 is the year the map exploded: multiple frontiers at the top, China taking the lead in open models, small and efficient systems maturing fast, and creative ecosystems like Midjourney getting pulled into big-tech stacks.
I'm thankful not for any single model, but for the fact that we now have options — closed and open, local and hosted, reasoning-first and media-first. For journalists, builders, and enterprises, that diversity is the real story of 2025.
Happy holidays and best wishes to you and your loved ones!
