Good morning. It’s Wednesday, September twenty-fourth.
On this day in tech history: In 1980, Harold Cohen presented AARON, his long-running generative art system. Instead of imitating scenes, AARON used rule sets, figure-ground separation, and its own visual building blocks. It became an early example of AI probing how structured rules could generate drawings that felt intentional and human-like.
-
OpenAI rewrites AI infrastructure playbook
-
Alibaba’s Qwen3 stack: trillion-scale models meet sub-second multimodal AI
-
Gemini Live API brings real-time reliability; Play and Photos go conversational
-
Microsoft is killing tech debt, scaling Windows ML for devs
-
5 Latest AI Tools
-
Latest AI Research Papers
You read. We listen. Tell us what you think by replying to this email.
Looking for unbiased, fact-based news? Join 1440 today.
Join over 4 million Americans who start their day with 1440 – your daily digest for unbiased, fact-centric news. From politics to sports, we cover it all by analyzing over 100 sources. Our concise, 5-minute read lands in your inbox each morning at no cost. Experience news without the noise; let 1440 help you make up your own mind. Sign up now and invite your friends and family to be part of the informed.

Today’s trending AI news stories
Oracle to build, Nvidia to lease: OpenAI rewrites AI infrastructure playbook
OpenAI, Oracle, SoftBank, and Nvidia are fusing money, hardware, and energy into what could be the most ambitious AI infrastructure project to date. Five new U.S. data centers are in the pipeline: Texas, New Mexico, Ohio, plus another Midwest site, bringing total planned capacity to nearly 7 gigawatts, about the same demand as seven nuclear reactors. Abilene, Texas, already hosts tens of thousands of Nvidia GPUs, but the next wave will dwarf even the largest hyperscaler campuses.

Construction underway on a Project Stargate AI infrastructure site in Abilene, Texas, in April 2025. | Image: Daniel Cole / Reuters
Oracle is paying for and managing three of the new sites, selling compute back to OpenAI via Oracle Cloud. SoftBank is backing “fast-build” gigawatt-scale campuses. Nvidia’s $100 billion deal introduces a new model – chip leasing. Instead of OpenAI buying millions of GPUs outright, Nvidia provides hardware under a usage-based structure, turning capital expense into cloud-style economics.

(L to R): OpenAI President Greg Brockman, Nvidia founder and CEO Jensen Huang, and OpenAI CEO Sam Altman | Image: Nvidia
The closed loop has Nvidia taking non-voting equity while OpenAI commits spend back into 4–5 million GPUs, targeting 10 gigawatts of compute. Nvidia CEO Jensen Huang insists this won’t squeeze supply for other customers, though the scale could redefine how hyperscalers finance AI infrastructure.
Alibaba’s Qwen3 stack: trillion-scale models meet sub-second multimodal AI
Alibaba is on an aggressive run of AI rollouts this week. Qwen3-Next is a faster MoE architecture that expands to 512 experts while activating only 10 plus a shared expert per step. With stability fixes like normalized router initialization and attention gating, it delivers over 10x the throughput of Qwen3-32B on long sequences, handling up to 256K tokens natively, with experimental paths to 1M. Two 80B variants lead the lineup: Instruct for assistants and Thinking for reasoning, with FP8 releases cutting latency and energy overhead.
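For readers who want the intuition behind that sparse activation, here is a minimal PyTorch sketch of top-k expert routing with an always-on shared expert. The expert count, layer sizes, and the scaled-down router initialization are illustrative assumptions, not Alibaba’s actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy top-k MoE layer with an always-on shared expert (illustrative only)."""

    def __init__(self, d_model=512, n_experts=512, k=10):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Scaled-down router init, loosely in the spirit of "normalized router initialization"
        nn.init.normal_(self.router.weight, std=0.02 / n_experts ** 0.5)

        def ffn():
            return nn.Sequential(
                nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
            )

        self.experts = nn.ModuleList([ffn() for _ in range(n_experts)])
        self.shared_expert = ffn()

    def forward(self, x):                                   # x: (tokens, d_model)
        scores = self.router(x)                             # (tokens, n_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)            # mix only the chosen experts
        out = self.shared_expert(x)                         # shared expert always contributes
        for slot in range(self.k):
            for e in topk_idx[:, slot].unique().tolist():   # tokens routed to expert e in this slot
                mask = topk_idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

layer = SparseMoELayer(d_model=64, n_experts=32, k=4)       # shrunk for a quick smoke test
print(layer(torch.randn(8, 64)).shape)                      # torch.Size([8, 64])
```

Only the 10 routed experts plus the shared one run per token, which is why a 512-expert model can be far cheaper per step than a dense network of the same total size.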
Qwen3-Omni pushes into full multimodality with 30B parameters and just 3B active per inference. Its split Thinker-Talker system enables streaming speech generation with sub-second response, benchmark wins across 32 of 36 audio/video tasks, and support for 119 written and 19 spoken languages. Open-source Instruct, Thinking, and Captioner variants extend its reach.
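The Thinker-Talker split is essentially a streaming pipeline: a reasoning model emits tokens while a speech head starts voicing them immediately instead of waiting for the full reply. The toy sketch below shows that overlap with stand-in functions; it is not anything from the Qwen release.

```python
import asyncio

async def thinker(prompt: str):
    """Stand-in for the reasoning model: streams text tokens as they are generated."""
    for token in f"Answering: {prompt}".split():
        await asyncio.sleep(0.05)          # pretend per-token decode latency
        yield token

async def talker(tokens):
    """Stand-in for the speech head: emits an audio chunk per token as it arrives."""
    async for token in tokens:
        audio_chunk = f"<audio:{token}>"   # a real talker would emit codec frames here
        print(audio_chunk, flush=True)     # first chunk lands ~50 ms in, not after the full reply

asyncio.run(talker(thinker("What's the weather in Hangzhou?")))
```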
On raw scale, Qwen3-Max pushes past 1T parameters and 36T tokens, with ChunkFlow boosting 1M-token context training. Benchmarks put Instruct in the global top three, beating Claude Opus 4 and DeepSeek V3.1 on coding and agent tasks. The Thinking variant, still in training, has already hit perfect scores on AIME 25 and HMMT with test-time scaling and code execution.
Qwen3-LiveTranslate-Flash makes all this tangible by bringing 18-language real-time interpretation at 3s latency, integrating lip-reading, gesture recognition, and semantic unit prediction for near-offline quality. It edges out GPT-4o-Audio-Preview and Gemini-2.5-Flash on speech tasks, while producing expressive dialect-specific voices. Read more.
Gemini Live API brings real-time reliability; Play and Photos go conversational
Google’s upgraded Gemini Live API now runs on a native audio model built for real-time reliability. Function calls, the pipes that let agents pull live data or execute services, are now up to 2x more accurate, even in messy multi-function scenarios. Add tighter audio handling and conversations flow like they should: pauses, side chatter, and interruptions no longer break the thread. Next up is “thinking mode,” where developers set a reasoning budget, trading speed for depth with transparent traces of the model’s process.
Introducing our latest Gemini Live model 🔊, built on everything you love about Gemini, with significantly improved function calling and more natural feeling / sounding conversations (thanks to native audio)!
Try out the new model at ai.studio/live
— Logan Kilpatrick (@OfficialLoganK)
5:50 PM • Sep 23, 2025
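For developers, function calling over the Live API looks roughly like the hedged sketch below, using the google-genai Python SDK. The model id, tool schema, and session details here are assumptions for illustration; check the official Live API docs for current names.

```python
import asyncio
from google import genai
from google.genai import types

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

# Hypothetical tool the model may call mid-conversation
get_open_orders = {
    "name": "get_open_orders",
    "description": "Look up a customer's open orders by account id.",
    "parameters": {
        "type": "object",
        "properties": {"account_id": {"type": "string"}},
        "required": ["account_id"],
    },
}

config = {
    "response_modalities": ["TEXT"],  # use ["AUDIO"] for spoken replies
    "tools": [{"function_declarations": [get_open_orders]}],
}

MODEL = "gemini-live-2.5-flash-preview"  # assumed model id; check the Live API docs


async def main():
    async with client.aio.live.connect(model=MODEL, config=config) as session:
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Any open orders on account 42?"}]}
        )
        async for msg in session.receive():
            if msg.tool_call:  # the model decided to call our function
                responses = [
                    types.FunctionResponse(id=fc.id, name=fc.name, response={"orders": []})
                    for fc in msg.tool_call.function_calls
                ]
                await session.send_tool_response(function_responses=responses)
            elif msg.text:
                print(msg.text, end="")


asyncio.run(main())
```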
On the consumer side, Google is flexing the same tech. A new Gemini overlay in Google Play interprets on-screen context so gamers can ask for hints without breaking flow. A redesigned “You” tab turns the store into a personalized hub for progress, rewards, and cross-app recommendations.

‘You’ can interact with Gemini Live using your voice. | GIF: Google
Google Photos’ conversational editor is expanding beyond the Pixel 10: say “remove glare” or “add clouds” and the edits happen in seconds, watermarked for provenance.
🚨 NEW LABS EXPERIMENT 🚨
Introducing Mixboard 💡🧑🎨 an experimental, AI-powered concepting board. Designed to help you explore, visualize, and refine your ideas and powered by our latest image generation model (🍌)
Now available in US-only public beta! Learn more and try it out
— Google Labs (@GoogleLabs)
8:04 PM • Sep 23, 2025
In Google Labs, Mixboard reimagines mood boards with generative AI, letting users mix images, regenerate styles, and riff on ideas via natural prompts. Read more.
Microsoft is killing tech debt, scaling Windows ML for devs, and cooling chips from the inside out
Microsoft is going after the $85B technical debt problem with autonomous GitHub Copilot agents and new Azure migration tooling. These agents don’t just flag .NET and Java breaking changes; they generate fixes, refactor dependencies, patch security gaps, spin up tests, and repackage workloads into containers. In pilots, Xbox cut migration effort by 88%, while Ford reported a 70% reduction modernizing middleware.
On the client side, Windows ML is now generally available in Windows 11, embedding a production-ready ONNX runtime that automatically routes workloads across CPUs, GPUs, and NPUs via execution providers from AMD, Intel, NVIDIA, and Qualcomm. Adobe, McAfee, and Wondershare are already building on it, running semantic video search, real-time deepfake detection, and other edge workloads.
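Windows ML exposes its own Windows APIs, but the underlying execution-provider idea can be sketched with the standard onnxruntime Python package: hand the runtime an ordered preference list and it falls back down the chain when a device-specific provider isn’t available. The provider names, model file, and input handling below are assumptions for illustration, not Windows ML’s actual API.

```python
import numpy as np
import onnxruntime as ort

# Preference order: try a GPU/NPU-backed provider first, then fall back to CPU.
# Which providers exist depends on the onnxruntime build and installed drivers.
preferred = ["DmlExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)  # assumed model file

# Inspect what the model expects, then run a dummy inference.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]        # fill dynamic dims with 1
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print(providers, [o.shape for o in outputs])
```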
The company is also cooling GPUs from the inside out. Its new in-chip microfluidics carves hairline channels directly into silicon, pushing liquid coolant across hotspots. Early tests show a 65% drop in GPU temperature rise and up to 3x efficiency over cold plates. Co-developed with Swiss startup Corintis, the design uses bio-inspired channels modeled after leaf veins, with AI rerouting coolant in real time. Read more.


5 latest AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.



Your feedback is valuable. Reply to this email and let us know how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!