its latest LLM: Gemini 3. The model is long-awaited and was widely discussed before its release. In this article, I'll cover my first experience with the model and how it differs from other frontier LLMs.
The goal of the article is to share my first impressions of Gemini 3, highlighting what works well and what doesn't. I'll cover my experience using Gemini 3 both in the console and while coding with it.
Why you should use Gemini 3
In my view, Gemini 2.5 Pro was already the best conversational LLM available before the release of Gemini 3. The one area where I believe another LLM was better was coding, where Claude Sonnet 4.5 thinking had the edge.
The reason I believe Gemini 2.5 Pro is the best non-coding LLM is because of its:
- Ability to efficiently find the right information
- Low rate of hallucinations
- Willingness to disagree with me
I believe the last point is the most important. Some people want warm LLMs that feel good to talk to; however, I'd argue that you (as a problem-solver) always want the opposite:
You want an LLM that goes straight to the point and is willing to say that you are wrong.
My experience was that Gemini 2.5 was much better at this compared to other LLMs such as GPT-5, Grok 4, and Claude Sonnet 4.5.
Considering that Google, in my view, already had the best LLM on the market, the release of a newer Gemini model is very interesting, and something I started testing right after release.
It's worth noting that Google released Gemini 3 Pro but has not yet released a Flash alternative, though it's natural to assume such a model will be released soon.
I am not endorsed by Google in writing this article.
Gemini 3 in the console
I first started testing Gemini 3 Pro in the console. The first thing that struck me was that it was relatively slow compared to Gemini 2.5 Pro. However, this is generally not an issue, as I mostly value intelligence over speed, up to a certain threshold of course. Though Gemini 3 Pro is slower, I definitely wouldn't say it's too slow.
Another point I noticed is that Gemini 3 creates or utilises more images in its explanations. For example, when discussing EPC certificates with Gemini, the model found the image below:

I also noticed it would sometimes generate images, even when I didn't explicitly prompt for it. Image generation in the Gemini console is surprisingly fast.
The moment I was most impressed by Gemini 3's capabilities was when I was analyzing the first research paper on diffusion models, discussing it with Gemini to understand the paper. The model was, of course, good at reading the paper, including text, images, and equations; however, this is also a capability the other frontier models possess. I was most impressed when chatting with Gemini 3 about diffusion models, trying to understand them.
I had a misconception about the paper, thinking we were discussing conditional diffusion models, when we were in fact discussing unconditional diffusion. Note that I was discussing this before even knowing about the terms conditional and unconditional diffusion.
Gemini 3 then called out that I was misunderstanding the concepts, efficiently understanding the actual intent behind my query, and significantly helped me deepen my understanding of diffusion models.
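For context, the distinction I was confusing can be summarized as follows (a standard textbook formulation, not taken from the paper itself):

```latex
% Unconditional diffusion: the learned reverse step depends
% only on the noisy sample x_t at step t
p_\theta(x_{t-1} \mid x_t)

% Conditional diffusion: the reverse step is additionally
% guided by side information y (a class label, text prompt, ...)
p_\theta(x_{t-1} \mid x_t, y)
```

In other words, an unconditional model just learns to generate samples from the data distribution, while a conditional model steers generation with extra input — the distinction Gemini 3 correctly inferred I was missing.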

I also took some of the older queries I had run in the Gemini console with Gemini 2.5 Pro, and ran the exact same queries again, this time using Gemini 3 Pro. They were often broader questions, though not particularly difficult ones.
The responses I got were overall quite similar, though I did notice Gemini 3 was better at telling me things I didn't know, or uncovering topics/areas I (or Gemini 2.5 Pro) hadn't thought of before. I was, for example, discussing how I write articles and what I can do to improve, where I believe Gemini 3 was better at providing feedback and coming up with more creative approaches to improving my writing.
Thus, to sum it up, Gemini 3 in the console is:
- A bit slow
- Smart, and provides good explanations
- Good at uncovering things I haven't thought of, which is super helpful when problem-solving
- Willing to disagree with you and to call out ambiguities, traits I believe are really important in an LLM assistant
Coding with Gemini 3
After working with Gemini 3 in the console, I started coding with it through Cursor. My overall experience is that it's definitely a good model, though I still prefer Claude Sonnet 4.5 thinking as my main coding model. The main reason for this is that Gemini 3 too often comes up with overly complex solutions and is a slower model. However, Gemini 3 is most definitely a very capable coding model that might be better for other coding use cases than mine. I'm mostly coding infrastructure around AI agents and CDK stacks.
I tried Gemini 3 for coding in two main ways:
- Making the game shown in this X post, from only a screenshot of the game
- Coding some agentic infrastructure
First, I tried to make the game from the X post. On the first prompt, the model made a Pygame version with all the squares, but it forgot to add all the sprites (art), the bar on the left side, and so on. Basically, it made a very minimalist version of the game.

I then wrote a follow-up prompt with the following:
Make it look properly like this game with the design and everything. Use
Note: when coding, you should be far more specific in your instructions than my prompt above. I used this prompt because I was essentially vibe coding the game, and wanted to see Gemini 3 Pro's ability to create a game from scratch.
After running the prompt above, it made a working game, where the guests walk around, I can buy pavements and different machines, and the game essentially works as expected. Very impressive!
I continued coding with Gemini 3, but this time on a more production-grade code base. My overall conclusion is that Gemini 3 Pro usually gets the job done, though I more often see bloated or worse code than I do when using Claude Sonnet 4.5. Moreover, Claude Sonnet 4.5 is quite a bit faster, making it the definite model of choice for me when coding. However, I'd probably regard Gemini 3 Pro as the second-best coding model I've used.
I also think that which coding model is best depends highly on what you're coding. In some situations, speed is more important; for certain types of coding, another model might be better, and so on, so you should really try out the models yourself and see what works best for you. The price of using these models is dropping rapidly, and you can easily revert any changes made, making it super cheap to try out different models.
It's also worth mentioning that Google released a new IDE called Antigravity, though I haven't tried it yet.
Overall impressions
My overall impression of Gemini 3 is good, and my updated LLM usage stack will look like this:
- Claude Sonnet 4.5 thinking for coding
- GPT-5 when I need quick answers to simple questions (the GPT app works well to open with a shortcut)
- GPT-5 when generating images
- Gemini 3 when I need more thorough answers and longer discussions with an LLM about a topic. Typically, to learn new topics, discuss software architecture, or similar.
The pricing for Gemini 3 per million tokens looks like the following (as of November 19, 2025, from the Gemini Developer API docs):
- If you have less than 200k input tokens:
- Input tokens: 2 USD
- Output tokens: 12 USD
- If you have more than 200k input tokens:
- Input tokens: 4 USD
- Output tokens: 18 USD
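To get a feel for what this costs in practice, the tiered rates above can be sketched as a small helper. This is my own illustration, not official code, and the exact behavior at the 200k boundary is my assumption — check the Gemini Developer API docs before relying on it:

```python
def gemini3_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate a Gemini 3 Pro request cost from the per-million-token rates above."""
    # Rates in USD per million tokens; the higher tier is assumed to apply
    # once the prompt exceeds 200k input tokens.
    if input_tokens <= 200_000:
        in_rate, out_rate = 2.0, 12.0
    else:
        in_rate, out_rate = 4.0, 18.0
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 10k-token prompt with a 2k-token answer:
print(gemini3_cost_usd(10_000, 2_000))  # 0.044
```

So a typical medium-sized chat turn costs a few cents, while a 300k-token prompt with a 10k-token answer lands around 1.38 USD.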
In conclusion, I have good first impressions of Gemini 3, and highly recommend checking it out.
👉 Find me on socials:
💻 My webinar on Vision Language Models
🧑‍💻 Get in touch
✍️ Medium
You can also read my other articles:
