Good morning. It’s Monday, September twenty-second.
On this day in tech history: In 2017, Quanta Magazine published “New Theory Cracks Open the Black Box of Deep Learning,” highlighting the information-bottleneck view that sees training as compressing representations until only predictive bits remain. It marked a rare moment when physics-style theory intersected with deep learning, sparking debate on generalization, implicit regularization, and the dynamics of SGD.
-
Altman’s Manhattan Project–scale AI build-out
-
xAI slashes token costs by 98%
-
Meta + Oracle $20B Cloud Deal
-
Google Home powered by Gemini
-
5 New AI Tools
-
Latest AI Research Papers
You read. We listen. We heard your requests to send out the email a bit earlier, so from now on it will go out at 6:00am Eastern Time. Tell us what you think by replying to this email.
In partnership with WorkOS
The MCP Registry makes it easy for LLMs to find tools, but discovery alone isn’t enough.
Tools still must act on behalf of users, and that requires secure, delegated access. API keys don’t cut it. They’re hard to scope, break user flows, and undermine the promise of seamless integration.
WorkOS Connect delivers a fully compliant OAuth 2.1 flow. It handles PKCE, scopes, user consent, and secure token issuance out of the box.
The WorkOS advantage:
– Compliant with MCP OAuth 2.1
– Handles redirects, consent, and scopes
– Easy to drop in and fast to ship
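For context on what a managed flow like this saves you from hand-rolling: the PKCE half of OAuth 2.1 can be sketched in a few lines. This is a minimal illustration of RFC 7636’s S256 method, not WorkOS’s API (which wraps all of this for you):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-char base64url verifier (spec allows 43-128 chars).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA256(verifier)), padding stripped.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (with code_challenge_method=S256) in the
# authorization request, then proves possession by sending `verifier`
# when exchanging the authorization code for tokens.
```

The point of the dance: even if the authorization code leaks, an attacker without the original `verifier` can’t redeem it.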
Thanks for supporting our sponsors!

Today’s trending AI news stories
Altman readies Manhattan Project–scale AI build-out
OpenAI is scaling on two axes: compute and hardware. On the infrastructure side, it’s committing an additional $100B to server rentals, pushing total spend toward $350B by 2030. CFO Sarah Friar concedes compute shortages have already throttled feature rollouts; Altman insists the AI race is a contest of raw scale as much as algorithms. Forecasts peg annual server bills at $85B, nearly half of hyperscaler cloud revenue in 2024, putting enormous pressure on chipmakers, integrators, and utilities. These “standby” clusters won’t just idle; they’re designed for immediate spin-up of training runs or demand spikes, giving OpenAI an always-ready compute buffer.

At the same time, Jony Ive’s io project is pulling Apple’s supply chain and design bench into AI-native hardware. Prototypes under discussion include a screenless smart speaker, glasses, a recorder, even a wearable pin. Luxshare is already contracted; Goertek is in talks. Launch window: 2026–27. More than two dozen Apple veterans, including Tang Tan and Evans Hankey, have defected to OpenAI, citing fewer bureaucratic bottlenecks and the chance to ship category-defining products.
Over the next few weeks, we’re launching some new compute-intensive offerings. Because of the associated costs, some features will initially only be available to Pro subscribers, and some new products will have additional fees.
Our intention remains to drive the cost of
– Sam Altman (@sama)
6:45 PM • Sep 21, 2025
OpenAI is making a Manhattan Project-scale bet. At $20B a year on training alone, it’s betting that future AI leadership will hinge on brute-force compute combined with devices built from the ground up for conversational, multimodal intelligence. Read more.
xAI bends the price-performance curve, slashing token costs by 98%; Neuralink eyes October trial
xAI’s Grok 4 Fast collapses the trade-off between scale and price. The model delivers GPT-5-class reasoning on AIME (92%) and HMMT (93.3%) while running 40% leaner, slashing task costs by up to 98%. It fuses fast-answer and deep-reasoning modes into a single prompt-driven system, cutting “thinking tokens” and latency, which is critical for real-time, compute-sensitive use cases.

With native tool use for web, code, and search, Grok 4 Fast even displaced OpenAI’s o3-search at the top of LMArena. A 2M-token context and pricing from $0.05/M tokens position it squarely for mass deployment. Wharton’s Ethan Mollick called it another reset of the AI cost curve, noting that benchmarks like GPQA Diamond may now be functionally maxed out.
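At that listed rate, the token math is simple. A back-of-envelope sketch, assuming the $0.05/M figure is the input-token rate (output rates typically differ):

```python
# Back-of-envelope cost at Grok 4 Fast's listed $0.05 per 1M input tokens.
PRICE_PER_M_TOKENS = 0.05  # USD; input-token rate cited above

def cost_usd(tokens: int, price_per_m: float = PRICE_PER_M_TOKENS) -> float:
    """Cost in USD for a given number of tokens at a per-million rate."""
    return tokens / 1_000_000 * price_per_m

# Filling the full 2M-token context window once:
print(f"${cost_usd(2_000_000):.2f}")  # → $0.10
```

At that price, even context-window-saturating workloads cost pennies per call, which is what makes the “mass deployment” framing plausible.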
With the new Grok 4 Fast, the price/performance curve for AI shifted again. I updated my chart to reflect
I also think GPQA Diamond is likely maxed out (the tests themselves have errors, making it impossible to get to 100%). I’m going to need to do this with a harder benchmark.
— Ethan Mollick (@emollick)
7:24 PM • Sep 21, 2025
Meanwhile, Neuralink heads into human trials this October with its thought-to-text implant, FDA-cleared for investigational use. The device decodes neural activity directly into text, with near-term applications for speech-impaired patients and longer-term ambitions to let healthy users query AI systems by thought alone. Read more.
Oracle, Meta near $20B AI cloud deal as bubble fears rise
Oracle’s push into AI cloud infrastructure is accelerating as the company reportedly nears a $20 billion multi-year deal with Meta to host its Llama models. The agreement, if finalized, would expand Meta’s compute capacity across Facebook, Instagram, and WhatsApp while reducing its dependence on Microsoft Azure. It also builds on Oracle’s recent momentum: a record-breaking $300 billion contract with OpenAI and a partnership with xAI. Oracle is betting its cheaper, faster Cloud Infrastructure can undercut AWS and Google Cloud, a strategy that has helped drive its stock up 85% this year.
MARK ZUCKERBERG:
“We’re going to spend aggressively. Even if we lose a couple hundred billion, it would suck, but it’s better than being behind in the race for superintelligence.”
To even hear Zuck say this… it shows how important they believe the opportunity is.
Cap-ex is
— amit (@amitisinvesting)
5:42 PM • Sep 19, 2025
But the AI boom is showing signs of strain. MIT research finds 95% of AI pilots fail to deliver ROI, echoing warnings from OpenAI’s Sam Altman and Meta’s Mark Zuckerberg. Speaking on the Access podcast, Zuckerberg cautioned that “collapse is definitely a possibility.” Despite committing $600 billion through 2028 to AI data centers and infrastructure, he argues overbuilding is safer than missing the window for superintelligence.
Meta also faces reputational headwinds. Strike 3 Holdings has filed a $350 million lawsuit alleging Meta pirated 2,396 pornographic videos via BitTorrent to supplement training data, including explicit titles suggesting underage actors. Legal scholars warn such practices could contaminate AI outputs and trigger public backlash. Meta denies the charges, noting its V-JEPA 2 model trained on “1 million hours of internet video,” though critics say the dataset remains suspiciously vague. Read more.
First look at the Google Home app powered by Gemini
Google is turning the Home app into a true AI command center. Version 3.41.50.3 embeds Gemini, letting you query your smart home via the new “Ask Home” bar, by voice or text, and get contextual insights, not just device toggles. Gemini taps into device states, sensor histories, and activity logs, synthesizing data to anticipate needs, track trends, or summarize past events.
UI changes streamline control. The old Favorites tab is now Home, and Devices/Settings live behind a grid icon. You can pin live environmental metrics like outdoor air quality and temperature, while new video and thermometer icons hint at next-gen Nest hardware. Read more.


5 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.



Your feedback is valuable. Reply to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!