Good morning. It’s Wednesday, April ninth.
On this day in tech history: 1979: The first fully functional Motorola DynaTAC prototype, a precursor to the modern cell phone, was demonstrated.
You read. We listen. Tell us what you think by replying to this email.
Ready to level up your work with AI?
HubSpot’s free guide to using ChatGPT at work is your new cheat code to go from working hard to hardly working
HubSpot’s guide will teach you:
All to help you unleash the power of AI for a more efficient, impactful professional life.

Today’s trending AI news stories
Google Rolls Out Deep Research and New AI Tools with Gemini 2.5 Pro
Google has launched Deep Research for Gemini Advanced users on the experimental Gemini 2.5 Pro model, now accessible via web, Android, and iOS. Designed as a personal research assistant, it excels in reasoning and structured synthesis, surpassing competitors with a 2-to-1 user preference margin. Features like Audio Overviews provide podcast-style narration, making insights more portable. Positioned not as a chatbot but as a full-stack tool, it integrates deeply into real-world knowledge workflows.
📣 Deep Research is now powered by Gemini 2.5 Pro, our most intelligent AI model. ✨
This upgraded Deep Research is now even better at:
🔍 Finding & synthesizing information
📊 Providing more insightful reports
🧠 Analytical reasoning
Gemini Advanced users can access the new…
— Google Gemini App (@GeminiApp)
9:34 PM • Apr 8, 2025
We have been given a first glance at The Wizard of Oz movie, which has been reconstructed for @SphereVegas scale and resolution using AI by extending frames and scenes while keeping the originality of the movie. @Google and Sphere collaborated to get this project done.
Watching it
— Sarbjeet Johal (@sarbjeetjohal)
12:59 AM • Apr 9, 2025
📣 It’s here: ask Gemini about anything you see. Share your screen or camera in Gemini Live to brainstorm, troubleshoot, and more.
Rolling out to Pixel 9 and Samsung Galaxy S25 devices today and available for all Advanced users on @Android in the Gemini app:
— Google Gemini App (@GeminiApp)
12:03 PM • Apr 7, 2025
Meta’s Llama 4 Models Impress in Some Areas, But Face Criticism Over Long-Context Tasks

Image: Artificial Analysis
Meta’s submission of the custom Llama 4 models to LM Arena has stirred up a storm of questions, particularly around transparency. The “Llama-4-Maverick-03-26-Experimental” version was fine-tuned for human preference, but this wasn’t made clear initially. Meta’s VP of generative AI, Ahmad Al-Dahle, has denied rumors that the company artificially boosted its Llama 4 models’ benchmark scores.
In response, LM Arena released over 2,000 battle results, highlighting how style and tone swayed evaluations. They’ve also revamped their leaderboard policies, reinforcing their commitment to fair and reproducible tests. Artificial Analysis has also updated its Llama 4 Intelligence Index scores for Scout and Maverick, following adjustments to account for discrepancies in Meta’s claimed MMLU Pro and GPQA Diamond results.
On the performance front, Meta’s Llama 4 models—Maverick and Scout—impressed with strong scores in reasoning, coding, and math, outpacing rivals like Claude 3.7 and GPT-4o-mini. Maverick clocked 49 points, while Scout followed with 36. However, when it came to long-context tasks, both models hit a wall. Maverick managed just 28.1%, and Scout trailed even further at 15.6%. Meta points to ongoing tweaks and optimizations as the models are gradually rolled out.
Meanwhile, NVIDIA has turbocharged Llama 4’s inference on its Blackwell B200 GPUs, pushing the models to over 40,000 tokens per second. With a multimodal, multilingual architecture and TensorRT-LLM optimization, these models now handle tasks like document summarization and image-text comprehension with impressive speed. Read more.
Microsoft’s Copilot Turns Vision into Context
Microsoft has extended Copilot Vision from web to mobile. Now accessible via the Copilot app for iPhone users subscribed to Copilot Pro, the feature turns a phone’s camera into a real-time visual search tool. Point your camera, and Copilot deciphers what it sees—plant health, product specs, interior tweaks—all processed in real time.
The tech runs inside Voice mode of the Copilot app and only activates with permission. Powered by OpenAI’s latest models, it’s a cognitive overlay for the physical world. Microsoft’s broader push is clear: collapse the space between seeing and knowing. $20/month buys you early access and premium speeds. Read more.

- Stanford’s 2025 AI Index Maps a Costly, Crowded Race
- ElevenLabs Launches MCP Server for Voice Agent Access via Text Prompts
- Runway Launches Gen-4 Turbo: 10-Second Videos in Just 30 Seconds
- Amazon expands Bedrock with speech, video, and multilingual coding AI
- Together AI Launches Open-Source DeepCoder-14B, Challenging Code-Crunching Giants
- New open source AI company Deep Cogito releases first models and they’re already topping the charts
- Nvidia’s new Llama-3.1 Nemotron Ultra outperforms DeepSeek R1 at half the size
- OpenAI Introduces the Evals API: Streamlined Model Evaluation for Developers
- Google is allegedly paying some AI staff to do nothing for a year rather than join rivals
- AI heads for $4.8T by 2033, cementing its place as the next great industrial terrain
- Anthropic announces 100 roles in Europe, new EMEA head
- China’s quantum computer cracks billion-parameter AI, nudging frontier physics into machine learning
- Nvidia and Supermicro lead chip and AI stock rebound after tariff hit
- Amazon says its AI video model can now generate minutes-long clips
- IBM releases a new mainframe built for the age of AI
- $115 million just poured into this startup that makes engineering 1,000x faster — and Bezos, Altman, and Nvidia are all betting on its success
- Sakana AI Introduces The AI Scientist-v2
- ArXiv Launches Deep Research for Quick Lit Reviews
- China: First-ever mega humanoid robot training hub opens with plans to train 100+ models
- KPMG’s AI bot slashes interview scheduling time by 60%, saving over 1,000 hours
- Andreessen Horowitz seeks to raise $20 billion megafund amid global interest in US AI startups
- EU moves to trim GDPR, eyeing leaner data rules in coming weeks
- Tesla and Warner Bros. Win Part of Lawsuit Over AI Images from ‘Blade Runner 2049’
- White House orders agencies to develop AI strategies and name leaders
- Hyundai expands robot fleet with thousands of Atlas units from Boston Dynamics
- Microsoft AI chief Suleyman sees advantage in building models ‘3 or 6 months behind’
- China’s DeepRoute.ai to team up with Qualcomm to develop advanced driver assistance solutions
- Sam Altman defends AI art after Studio Ghibli backlash, calling it a ‘net win’ for society

4 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.


You’ve heard the hype. It’s time for results.

After two years of siloed experiments, proofs of concept that fail to scale, and disappointing ROI, most enterprises are stuck. AI is not transforming their organizations — it’s adding complexity, friction, and frustration.
But Writer customers are seeing positive impact across their companies. Our end-to-end approach is delivering adoption and ROI at scale. Now, we’re applying that same platform and technology to build agentic AI that truly works for every enterprise.
This isn’t just another hype train that overpromises and underdelivers. It’s the AI you’ve been waiting for, and it’s going to change the way enterprises operate. Be among the first to see end-to-end agentic AI in action. Join us for a live product release on April 10 at 2pm ET (11am PT).
Cannot make it live? No worries — register anyway and we’ll send you the recording!
Your feedback is valuable. Reply to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!