Welcome, AI enthusiasts.
We have an exclusive for you today.
In case you missed it, last week Google released two new upgraded Gemini 1.5 models—achieving new state-of-the-art performance across math benchmarks.
We partnered with Google to help explain what makes these new models so special for developers, real-world use cases, AI agents, and more. Let’s get into it…
In today’s AI rundown:
- Google’s two new Gemini 1.5 models
- Gemini 1.5 compared to other AI models
- The age of the AI-first developer
- Real-world use cases of Gemini 1.5
- Proactive AI agent systems
– Rowan Cheung, founder
EXCLUSIVE Q&A WITH LOGAN KILPATRICK
GEMINI
✨ Google rolls out two new Gemini 1.5 models

Image credits: Kiki Wu / The Rundown
The Rundown: Google just released two new upgraded versions of Gemini 1.5 across the Gemini API, including 1.5-pro-002, which achieved state-of-the-art performance across math benchmarks, and 1.5-flash-002, which makes big gains in instruction following.
Cheung: “Can you give us the rundown on everything being released and why it actually matters?”
Kilpatrick: “Today, we’re rolling out two new production-ready Gemini models and also improving rate limits, pricing for 1.5 Pro, and some of the filter settings enabled by default. Really, all of these are focused on enabling developers to go in and build more of the stuff that they’re excited about.”
Cheung: “What exactly makes the new models so unique?”
Kilpatrick: “Math, the ability for the models to code, which is obviously super important for people who care about developer stuff. It’s been a lot of listening and kind of iterating on the feedback that we’ve been getting from the ecosystem.”
Kilpatrick added: “The linear amount of progress that we’ve seen, and in some cases exponential, in different benchmarks with this iteration of Gemini models… has been incredibly exciting.”
Why it matters: Google’s new Gemini 1.5-pro-002 model achieves state-of-the-art performance on difficult math benchmarks like AMC + AIME 24 and MATH. This means the model is able to solve advanced mathematical problems and tasks that require deep domain expertise, a major hurdle for most previous AI models.
You can try AI Studio and the new Gemini 1.5 models for free here.
HEAD-TO-HEAD
💎 Gemini 1.5 compared to other AI models

Image credits: Kiki Wu / The Rundown
The Rundown: Google also announced significant improvements to accessibility for developers building with Gemini models, including a 50% reduced price on 1.5 Pro, 2x higher rate limits on Flash and 3x higher on 1.5 Pro, 2x faster output, and 3x lower latency.
Cheung: “Along with the new updates, higher rate limits, expanded feature access, and large context windows, what other capabilities does Gemini 1.5 offer that developers should be really excited about?”
Kilpatrick: “Part of my perspective is the financial burden to build with AI is one of the rate limiters of this technology being accessible… our way to combat that is we have the most generous free tier of any language model that exists in the world.”
Kilpatrick added: “One of the big differentiators is you can come to AI Studio, fine-tune Gemini 1.5 Flash for free, and then ultimately put that model into production and pay the same extremely competitive per-million-token cost. There’s no incremental cost to use a fine-tuned model, which is super differentiated in the ecosystem.”
Why it matters: Google’s latest Gemini updates significantly lower the financial barrier for AI development while boosting performance, especially in math. With these updates, Gemini now tops the LLM leaderboard in terms of performance-to-price ratio, context windows, video understanding, and other LLM benchmarks.
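The announced numbers compound for developers: a 50% price cut plus a 3x rate-limit increase on 1.5 Pro works out to up to 6x more request headroom per dollar at the old ceiling. A minimal sketch of that arithmetic, where the baseline price and rate limit are illustrative placeholders, not Google’s actual figures:

```python
# Compounding the announced Gemini 1.5 Pro changes: 50% price cut, 3x rate limit.
# BASELINE_* values are hypothetical placeholders for illustration only.
BASELINE_PRICE_PER_MTOK = 1.0   # hypothetical old price per million tokens
BASELINE_RPM_LIMIT = 100        # hypothetical old requests-per-minute ceiling

new_price = BASELINE_PRICE_PER_MTOK * 0.5   # "50% reduced price on 1.5 Pro"
new_rpm = BASELINE_RPM_LIMIT * 3            # "3x higher rate limits on 1.5 Pro"

# Requests affordable per unit of budget scale with 1/price;
# maximum throughput scales with the rate limit.
affordable_gain = BASELINE_PRICE_PER_MTOK / new_price   # 2x per dollar
throughput_gain = new_rpm / BASELINE_RPM_LIMIT          # 3x ceiling

print(affordable_gain * throughput_gain)  # prints 6.0
```

The product only matters if you were constrained by both budget and rate limits; if only one binds, the relevant gain is 2x or 3x on its own.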
The pace of innovation: Google’s Gemini project is only around a year old. Google was the first to ship 1M (and 2M) context windows and context caching, and they’ve been making rapid progress ever since.
THE AI ERA
🚀 The age of the AI-first developer

Image credits: Kiki Wu / The Rundown
The Rundown: AI is helping developers tackle significantly harder problems faster while simultaneously lowering the entry barrier for non-developers to contribute to new innovation and even build their own AI apps.
Cheung: “I think what’s really cool with the age of AI is seeing anyone, even people who aren’t technical, being able to build their own AI apps. If someone were to start from zero, is there a tool stack, documentation, courses, videos, or maybe tutorials from Google that you would recommend?”
Kilpatrick: “To your point… As someone who was formerly a software engineer, I can actually go and tackle 10x harder problems now.”
Kilpatrick added: “For the person who’s never coded before, they’re now able to tackle like any problem with code because they have this co-pilot in their hands.”
Kilpatrick added: “[For beginners] ai.google.dev is our default landing page that also links out to the Gemini API documentation. On GitHub, we have a Quickstart repo where you can literally run 4 commands and have a local version of AI Studio and Gemini running on your computer to play around with the models.”
Why it matters: With AI as an assistant, some developers are tackling 10x harder software problems—which also means 10x the speed of improvements and 10x the innovation, for those who use the tech properly. Google also has great resources to help even complete beginners get started in less than 5 minutes.
USE CASES
🌎 Real-world use cases of Gemini 1.5

The Rundown: Gemini 1.5’s multimodal capabilities enable a number of real-world applications that other models can’t match, such as processing and analyzing hour-long videos or entire books—thanks to its impressive 2M token context window.
Cheung: “Can you share an example or some use cases of how customers are using these experimental models of Gemini in the real world?”
Kilpatrick: “Taking in video, I think, is one of the great things… Being able to go into AI Studio and just drop an hour-long video in there and ask a bunch of questions is such a mind-blowing experience. And to be able to try it for free.”
Kilpatrick added: “The intent was to build a multimodal model from the ground up… the order of magnitude of important use cases for the world, for developers, and for people who want to build with this technology, so many of them are multimodal.”
Why it matters: Gemini 1.5’s 2M context window allows it to process and analyze long-form content like long videos, entire books, and lengthy podcasts, opening new possibilities for content analysis and interaction. For a full look at its potential, check out Google’s list of 185 real-world gen AI use cases from leading organizations.
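To see why an hour-long video fits comfortably inside a 2M-token window, here’s a rough back-of-the-envelope check in Python. The per-second token cost of video is an assumed placeholder (video is sampled at a low frame rate, and each sampled second costs a few hundred tokens), not an official rate:

```python
# Back-of-the-envelope: does an hour-long video fit in a 2M-token context?
CONTEXT_WINDOW = 2_000_000       # Gemini 1.5 Pro's 2M-token context window
TOKENS_PER_VIDEO_SECOND = 300    # assumed illustrative rate, not an official figure

def fits_in_context(video_seconds: int, prompt_tokens: int = 500) -> bool:
    """Return True if the video plus a short text prompt fits in the window."""
    total = video_seconds * TOKENS_PER_VIDEO_SECOND + prompt_tokens
    return total <= CONTEXT_WINDOW

one_hour = 60 * 60
print(fits_in_context(one_hour))      # an hour of video stays well under 2M tokens
print(fits_in_context(3 * one_hour))  # three hours would blow the budget
```

Under this assumed rate, an hour of video costs roughly 1.1M tokens, leaving plenty of room for follow-up questions in the same conversation.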
AI AGENTS
📈 Proactive AI agent systems

Image credits: Kiki Wu / The Rundown
The Rundown: The future of AI is likely to shift from reactive to proactive systems, with AI agents capable of initiating actions and asking for clarification or permission, much like human assistants do today.
Cheung: “What do you think is the most surprising way AI will change our daily lives in the future?”
Kilpatrick: “With most AI systems today, it’s one-way. Kind of, I prompt the system and then it gives me a response back, or I tell it to do something and it kind of does what I instructed it to do.”
Kilpatrick added: “I think the future is, in the medium term, the system actually asking me for permission or clarification on things that I might want it to go do and actually solving those problems.”
Kilpatrick added: “It’s actually very interesting to me that very few AI systems, if any today, ask me how they can help in a real, not surface-level way that ends up being meaningful.”
Why it matters: By shifting from purely reactive to proactive systems, AI could become more like a true “Her”-like assistant, anticipating needs and offering solutions before being prompted. In its current state, no AI system does this effectively, but as AI continues to advance with projects like Astra, this is likely the next stage for AI.
GO DEEPER
INTERVIEW
🎥 Watch the full interview

In the full interview with Logan Kilpatrick & Rowan Cheung:
- Dive deep into the state-of-the-art math achievements of the new models
- Discuss real-world use cases of Gemini 1.5 and exciting possibilities
- Go in-depth on how to succeed and thrive in the new age of AI
- Nerd out on the final form factors of AI and proactive AI agents
Listen on Twitter/X, Spotify, Apple Music, or YouTube.