Good morning. It’s Wednesday, June eleventh.
On this day in tech history: In 1978, Intel launched the 8086, a 16-bit processor that became the foundation of the x86 architecture. Adopted in the first IBM PCs, it set a standard that still drives most personal computers today. The chip enabled faster processing and more complex software at the time, but its deeper impact was locking in the x86 model. Nearly five decades later, its legacy continues to define how PCs are built and how software is written.
- OpenAI taps Google, slows o3-pro for precision
- Google speeds up Veo 3, adds Mariner to Chrome
- Redwood equips NEO for mobile, full-body tasks
- 5 New AI Tools
- Latest AI Research Papers
You read. We listen. Tell us what you think by replying to this email.
Clone Any Voice. Speak Any Language. Sound Human.
Fish.Audio is redefining AI voice generation with the most realistic speech synthesis on the market. Whether you are creating voiceovers, audiobooks, ads, or full-blown AI characters, this platform delivers stunning quality and control.
- 200,000+ voices in the public library, or upload your own
- Voice cloning from just 15 seconds of audio
- Text-to-speech, speech-to-text, and full voice agent support
- Supports 13 languages with native-level expressiveness
- Used by over 150 creators and studios across YouTube, TikTok, and beyond
Side-by-side tests show Fish.Audio outperforming ElevenLabs in emotional nuance and clarity. From Taylor Swift and Elon to original avatars, it’s become the go-to platform for creators who want voices that feel real.

Today’s trending AI news stories
OpenAI Trades Speed for Precision with o3-pro; Taps Google for Compute
OpenAI is leaning hard into precision with o3-pro, a tool-native upgrade to its o3 model built for enterprise-grade reasoning. Tuned for precision over speed, it integrates Python, file analysis, web browsing, and vision, making it ideal for complex, multi-step tasks where accuracy trumps latency. Available via API and now the default for Pro and Team ChatGPT users, o3-pro trades speed and price, at $20 input and $80 output per million tokens, for deeper tool use and more deliberate thinking. It can take minutes per response, but early testers say it handles uncertainty well, reasons in context, and chooses tools intelligently. It’s slow, but by design.

At the same time, OpenAI dropped o3’s pricing by 80%, pushing high-reasoning LLMs into mass-market territory. Input is now $2 per million tokens, output $8, with cacheable inputs as low as $0.50. “Flex mode” offers speed–cost tradeoffs, and benchmarks show o3 still holds its own against Claude Opus and Gemini Pro at a fraction of the price. This isn’t just a price cut; it’s a compression of the AI value curve, aimed at devs, startups, and researchers who need power without the premium.
We’re cutting the price of o3 by 80% and introducing o3-pro in the API, which uses much more compute.
o3:
Input: $2 / 1M tokens
Output: $8 / 1M tokens
Now in effect.
We optimized our inference stack that serves o3. Same exact model—just cheaper.
platform.openai.com/docs/models/o3
— OpenAI Developers (@OpenAIDevs)
8:18 pm • Jun 10, 2025
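The new rates are simple per-token arithmetic. A quick sketch of what a request costs at the prices quoted above (the rate constants and the `o3_cost` helper are illustrative, not OpenAI's API):

```python
# Estimated o3 API cost at the announced rates (USD per 1M tokens).
O3_INPUT = 2.00         # $2 per million input tokens
O3_OUTPUT = 8.00        # $8 per million output tokens
O3_CACHED_INPUT = 0.50  # $0.50 per million cacheable input tokens

def o3_cost(input_tokens: int, output_tokens: int, cached_input_tokens: int = 0) -> float:
    """Return the estimated dollar cost of a single request."""
    return (input_tokens * O3_INPUT
            + output_tokens * O3_OUTPUT
            + cached_input_tokens * O3_CACHED_INPUT) / 1_000_000

# Example: a 10k-token prompt with a 2k-token response
print(f"${o3_cost(10_000, 2_000):.4f}")  # → $0.0360
```

At the old rates ($10 input / $40 output), the same request would have cost five times as much.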
The company’s open-weight model, once slated for June, has been delayed following a breakthrough it now plans to fold in. The model aims to bring o-series reasoning to the open ecosystem, potentially augmented by OpenAI’s cloud backend. Though the timing slips, the goal is strategic: redefine what “open” can do, even as rivals like Mistral and Qwen gain ground.
Behind it all, OpenAI has quietly signed a compute deal with Google Cloud, shifting from a Microsoft-first infrastructure stance to a more pragmatic, multi-cloud play. In the LLM arms race, compute is the real constraint, and even competitors now share silicon.
Google Upgrades Veo 3 Speed, Embeds Project Mariner in Chrome
Google is advancing its AI-native tooling with two closely linked developments. Veo 3 Fast significantly boosts the rendering speed of 720p video, more than doubling that of its predecessor, while maintaining integration across both the Gemini app and Flow. Gemini Pro users can now generate three videos daily, and Flow Pro users are charged 20 credits per output, while Ultra-tier users retain access to higher quality and generation limits. Google is also experimenting with multimodal prompts like voice-to-video and plans to scale access to Workspace accounts and international users.
🔥Veo 3 keeps growing like crazy. To keep up, we’re introducing Veo 3 Fast in @GeminiApp and Flow. It’s >2x faster, has the same 720p resolution, and a bunch of serving optimizations. The big headline: we can serve more of it, even for the Yetis!
How to start:
1) Get a
— Josh Woodward (@joshwoodward)
6:19 pm • Jun 9, 2025
Running parallel to this is the gradual release of Project Mariner, an experimental browser-based agent embedded directly in Chrome. Available to Gemini Ultra subscribers, Mariner operates across open tabs and can handle navigation, form-filling, and transactional tasks through a prompt-driven chat interface. Designed with strict permission gating, the agent asks for user approval before taking actions, even for basic lookups, reflecting Google’s emphasis on privacy-aware automation. Read more.
Redwood AI equips humanoid robot ‘NEO’ for mobile tasks, whole-body manipulation
1X Technologies has launched Redwood, a lightweight AI model that turns its NEO humanoid into a home-capable autonomous agent. Trained on real-world robot data, Redwood lets NEO move, perceive, and act in domestic spaces, handling tasks like laundry, door answering, and indoor navigation. The model generalizes well, adapting to novel objects and retrying failed grasps. It enables whole-body, multi-contact manipulation, synchronizing locomotion and arm movement for mobile, dynamic control, including bracing and leaning.
Crucially, Redwood runs entirely on NEO’s onboard GPU, with voice-driven intent prediction handled by a connected language model. Unlike simulation-tuned AI, Redwood is grounded in practical deployment, as demoed at NVIDIA GTC 2025. For developers, it offers a path to scalable, compute-efficient humanoid autonomy. For robotics, it’s a signpost toward real-world utility. Read more.

- Multimodal Language Models Develop Intuitive Object Representations
- Meta Forms AGI Superintelligence Team
- AlphaOne Lets Developers Tune LLM Reasoning Speed at Inference
- Qualcomm Unveils First GenAI Smart Glasses That Work Without Cloud or Phone
- Apple Opens the Gates to Smarter, Privacy-First App Development
- China’s AI chip tool QiMeng beats engineers, designs processors in just days
- Qualcomm shares its vision for the future of smart glasses with on-glass Gen AI
- Anthropic’s AI-generated blog dies an early death
- IBM to Build Fault-Tolerant Quantum Computer 20,000x More Powerful by 2029 in New York
- China’s AlphaBot2 humanoid robot with first full-embodied AI works at auto factory
- Top 15 Vibe Coding Tools Transforming AI-Driven Software Development in 2025
- Manus Boosts AI Video Creation with Google Veo 3 Integration, Bringing Cinematic Flair to Basic, Plus, and Pro Users
- Barclays Scales Microsoft 365 Copilot to 100,000 Employees in One of Banking’s Largest Gen AI Deployments
- Google DeepMind and UK Government Unveil “Extract” Initiative
- Krea AI Launches Free Beta of Krea 1, Its First In-House Image Model with Advanced Aesthetic Control
- Genspark Launches Agentic AI Browser with Smart Shopping, YouTube Summaries, and 700+ App Integrations
- Yutori AI debuts Scouts, web-tracking agents for real-time alerts on anything from deals to rentals
- Create Limitless Cinematic Videos for Free with SkyReels-V2. Now Open-Sourced on GitHub
- Scammers are using AI to enroll fake students in online classes, then steal college financial aid

5 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.



Your feedback is valuable. Reply to this email and let us know how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!