Good morning. It’s Friday, August twenty-ninth.
On this day in tech history: In 1993, the Mbone (“multicast backbone”) quietly proved the Internet could move more than text. It streamed David Blair’s full-length film, Wax or the Discovery of Television Among the Bees, to hundreds of viewers using IP multicast, tunneling packets through routers like a guerrilla overlay network. The demo was fragile, but its elegant overlay proved large-scale media wasn’t a server problem – it was a distribution problem.
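For the curious, here is a minimal Python sketch of what joining a multicast group looks like; the group address and port are hypothetical placeholders, and real Mbone sessions carried RTP-wrapped audio and video rather than raw packet prints.

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical multicast group (IPv4 multicast lives in 224.0.0.0/4)
PORT = 5004           # hypothetical port; RTP media traditionally favored even ports like 5004

# UDP socket that multiple listeners on the same host can share
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group -- the same IGMP membership that Mbone hosts relied on,
# with unicast tunnels bridging routers that didn't speak multicast natively.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(2048)
    print(f"{len(data)} bytes from {addr}")
```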
You read. We listen. Tell us what you think by replying to this email.
Your AI now works with 500+ apps to really get things done
Rube connects your AI to 500+ apps so you can:
- Prep for meetings in seconds → “Hey Rube, what’s on my calendar with Acme Corp?”
- Update projects automatically → “Add these notes to Notion under Q3 Roadmap.”
- Cut busywork instantly → “Upload the new ☕️ emoji to Slack.”
- Stay in flow → all without switching tabs, copying links, or chasing logins.
- One login for every tool.
- Share access across your team instantly.
- Works inside VSCode, Claude, Cursor, and more
Thanks for supporting our sponsors!

Today’s trending AI news stories
OpenAI rolls out gpt-realtime and GPT-5-powered Codex, but joint safety tests expose cracks
The new gpt-realtime model is now production-ready through the Realtime API. Unlike older systems, it processes speech directly without text conversion, cutting latency and capturing nuance like laughter, sighs, or accent shifts. It can even switch languages mid-sentence.
Benchmarks show major jumps: 82.8% on Big Bench Audio (vs. 65.6% prior) and 30.5% on MultiChallenge (vs. 20.6%). Developers also get SIP support for phone systems, MCP for secure tool access, image input for contextual analysis, and two new expressive voices (Cedar, Marin). Pricing dropped 20% to $32 per million input tokens.
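For a feel of the developer surface, here is a bare-bones Python sketch that opens a Realtime API WebSocket and asks for a response; the model name comes from the announcement, but the event fields follow OpenAI’s published Realtime schema and should be checked against the current docs rather than treated as gospel.

```python
# pip install websocket-client  (a minimal sketch; the official openai SDK also ships realtime helpers)
import json
import os
import websocket  # websocket-client package

URL = "wss://api.openai.com/v1/realtime?model=gpt-realtime"

ws = websocket.create_connection(
    URL,
    header=[f"Authorization: Bearer {os.environ['OPENAI_API_KEY']}"],
)

# Ask the model to respond; with a speech-native model the reply streams back
# as audio (and transcript) delta events rather than plain text.
ws.send(json.dumps({
    "type": "response.create",
    "response": {"instructions": "Greet the caller, then switch languages mid-sentence."},
}))

# Print the types of the first few server events (session.created, response.*, ...).
for _ in range(5):
    print(json.loads(ws.recv()).get("type"))

ws.close()
```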
On the dev side, ChatGPT Codex now runs on GPT-5, integrated directly with GitHub for pull requests, branch management, and code reviews. A new IDE extension, an upgraded CLI, one-shot task execution, and customizable “agents.md” files streamline automation.
We’re releasing new Codex features to make it a more practical coding collaborator:
– A brand new IDE extension
– Easily move tasks between the cloud and your local environment
– Code reviews in GitHub
– Revamped Codex CLI
Powered by GPT-5 and available through your ChatGPT plan.
— OpenAI Developers (@OpenAIDevs)
9:01 PM • Aug 27, 2025
But joint OpenAI–Anthropic tests using the SHADE-Arena sabotage framework flagged risks: GPT-4.1 and GPT-4o still leaked detailed misuse instructions under pressure, while Claude models leaned toward refusals but showed sabotage quirks. Both families exhibited sycophancy, validating unsafe user decisions. With GPT-5 evaluations ahead, audits and refusal testing remain non-optional. Read more.
From AI avatars to personal blockchain, Google layers multimedia and fintech tools
Google Vids now supports AI avatars that generate videos from scripts with selectable voices and personas, automatic transcript trimming to remove filler words, and eight-second image-to-video clips powered by Veo 3. Complementing this, Flow offers five free Veo 3 Fast AI videos, or a single standard video, per month, with credits and per-second API pricing ($0.040 Fast, $0.0075 standard) for scalable production.
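To make the per-second pricing concrete, here is a quick back-of-the-envelope calculation in Python using the rates quoted above (treat them as the article’s figures, and verify against Google’s current pricing page before budgeting):

```python
# Per-second API rates in dollars, as quoted above -- not a verified rate card.
RATE_PER_SECOND = {"fast": 0.040, "standard": 0.0075}

def clip_cost(seconds: float, tier: str = "fast") -> float:
    """Estimated API cost in dollars for a clip of the given length."""
    return seconds * RATE_PER_SECOND[tier]

# An eight-second image-to-video clip, the length mentioned for Google Vids:
print(f"8s Fast clip:     ${clip_cost(8, 'fast'):.3f}")
print(f"8s standard clip: ${clip_cost(8, 'standard'):.3f}")
```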
On the learning side, NotebookLM is testing Drive search and AI Flashcards. “Discover Sources” surfaces internal Docs and Slides alongside web content, while AI Flashcards generate study aids from documents, extracting key facts and questions. Public sharing of audio artifacts and upcoming video overviews deepen collaboration and multimedia integration.
Meanwhile, Google Cloud is quietly piloting its own blockchain, Google Cloud Universal Ledger (GCUL), a private, permissioned Layer 1 network for Python-based smart contracts, payment automation, and digital asset management. Initially tested with CME Group, GCUL offers a “credibly neutral” infrastructure for banks and fintechs, emphasizing compliance, API accessibility, and controlled adoption, though its private design has sparked debate over decentralization. Read more.
Musk’s week: a coding agent, a vision-trained robot, and a reusable Starship
Elon Musk’s empire just pulled a triple tech swing, three moves that feel like pressure tests for scale. At xAI, the new grok-code-fast-1 model dropped: a lean, agentic coder built to churn out routine dev tasks quickly and cheaply. Unlike bloated LLMs, it’s tuned for compact execution and is already plugged into GitHub Copilot and Windsurf. It’s free to try (for now), but the real play is whether it can evolve from quick snippets into full-stack automation that seriously rivals Microsoft’s Copilot and OpenAI’s Codex.

Tesla’s Optimus, meanwhile, just ditched motion-capture suits for helmet-and-backpack rigs with five synced cameras, training the humanoids on vision-only feeds. It’s the Autopilot playbook: flood models with scaled human video instead of costly teleoperation. The gains: data density, speed, and lower cost. The risk: zero tactile feedback, which makes dexterity harder. Leadership shuffled too, with AI chief Ashok Elluswamy folding Optimus under Tesla’s camera-first AI stack.
And then there’s SpaceX. Starship S37 pulled off a precision splashdown after a 66-minute hop, validating reinforced heat tiles, engine-out recovery, and a satellite bay. Both stages survived, underscoring that fully reusable rocketry is shifting from theory to engineering cadence. Read more.


5 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.



Your feedback is valuable. Reply to this email and let us know how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!