Google I/O was an AI evolution, not a revolution


At Google’s I/O developer conference, the company made its case to developers (and, to some extent, consumers) for why its bets on AI are ahead of rivals. At the event, the company unveiled a revamped AI-powered search engine, an AI model with an expanded context window of 2 million tokens, AI helpers across its suite of Workspace apps like Gmail, Drive and Docs, tools to integrate its AI into developers’ apps, and even a future vision for AI, codenamed Project Astra, which can respond to sight, sound, voice and text combined.

While each advance on its own was promising, the onslaught of AI news was overwhelming. Though obviously aimed at developers, these big events are also an opportunity to wow end users with the technology. But after the flood of announcements, even somewhat tech-savvy consumers may be asking themselves: Wait, what’s Astra again? Is it the thing powering Gemini Live? Is Gemini Live kind of like Google Lens? How is it different from Gemini Flash? Is Google actually making AI glasses, or is that vaporware? What’s Gemma, what’s LearnLM… what are Gems? When is Gemini coming to your inbox, your docs? How do I use these things?

If you know the answers to those questions, congratulations, you’re a TechCrunch reader. (If you don’t, click the links to get caught up.)

Image Credits: Google

What was missing from the overall presentation, despite the enthusiasm of the individual presenters and the whooping cheers from the Google employees in the crowd, was any sense of a coming AI revolution. If AI is ultimately going to yield a product that profoundly changes the direction of technology the way the iPhone changed personal computing, this was not the event where it debuted.

Instead, the takeaway was that we’re still very much in the early days of AI development.

On the sidelines of the event, there was a sense that even Googlers knew the work was unfinished. When demoing how AI could compile a student’s study guide and quiz within moments of uploading a multihundred-page document (an impressive feat), we noticed that the quiz answers weren’t annotated with the sources cited. When asked about accuracy, an employee admitted that the AI gets things mostly right, and that a future version would point to sources so people could fact-check its answers. But if you have to fact-check, how reliable is an AI study guide at preparing you for the test in the first place?

In the Astra demo, a camera mounted over a table and linked to a large touchscreen let you do things like play Pictionary with the AI, show it objects, ask questions about those objects, have it tell a story and more. But the use cases for how these abilities will apply to everyday life weren’t readily apparent, despite technical advances that, on their own, are impressive.

For instance, you could ask the AI to describe objects using alliteration. In the livestreamed keynote, Astra saw a set of crayons and responded “creative crayons colored cheerfully.” Neat party trick.

When we challenged Astra in a private demo to guess the object in a scribbled drawing, it correctly identified the flower and house I drew on the touchscreen right away. But when I drew a bug (one bigger circle for the body, one smaller circle for the head, little legs off the sides of the big circle), the AI stumbled. Is it a flower? No. Is it the sun? No. The employee guided the AI to guess something that was alive. I added two more legs for a total of eight. Is it a spider? Yes. A human would have seen the bug immediately, despite my lack of artistic ability.

To give you a sense of where the technology is today, Google staff didn’t allow recording or photographs in the Astra demo room. They also had Astra running on an Android smartphone, but you couldn’t see the app or hold the phone. The demos were fun, and certainly the tech that made them possible is worth exploring, but Google missed an opportunity to show how its AI technology will impact your everyday life.

When are you going to want to ask an AI to come up with a band name based on a picture of your dog and a stuffed tiger, for instance? Do you really need an AI to help you find your glasses? (These were other Astra demos from the keynote.)

Image Credits: Google demo video

This is hardly the first time we’ve watched a technology event filled with demos of a sophisticated future lacking real-world applications, or one that pitches conveniences as major upgrades. Google, for example, has teased its AR glasses in previous years, too. (It even parachuted skydivers into I/O wearing Google Glass, a project built over a decade ago that has since been killed off.)

After watching I/O, it seems like Google sees AI as just another way to generate additional revenue: Pay for Google One AI Premium if you want its product upgrades. Perhaps, then, Google won’t make the first huge consumer AI breakthrough. As OpenAI’s CEO Sam Altman recently mused, the original idea for OpenAI was to develop the technology and “create all sorts of benefits for the world.”

“Instead,” he said, “it now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from.”

Google appears to be in the same boat.

Still, there were times when Google’s Astra AI seemed more promising. If it can correctly identify code or make suggestions on how to improve a system based on a diagram, it’s easier to see how it could become a useful work companion. (Clippy, evolved!)

Gemini in Gmail.
Image Credits: Google

There were other moments when the real-world practicality of AI shone through, too. A better search tool for Google Photos, for instance. Plus, having Gemini’s AI in your inbox to summarize emails, draft responses or list action items could help you finally get to inbox zero, or some approximation of it, more quickly. But can it filter your unwanted-but-not-spam emails, smartly organize emails into labels, make sure you never miss an important message and offer an overview of everything in your inbox that needs action as soon as you log in? Can it summarize the most important news from your email newsletters? Not quite. Not yet.

In addition, some of the more complex features, like AI-powered workflows or the receipt organization that was demoed, won’t roll out to Labs until September.

When it comes to how AI will impact the Android ecosystem (Google’s pitch to the developers in attendance), there was a sense that even Google can’t yet make the case that AI will help Android woo users away from Apple’s ecosystem. “When is the best time to switch from iPhone to Android?” we asked Googlers of various ranks. “This fall” was the general response. In other words, Google’s fall hardware event, which should coincide with Apple’s embrace of RCS, an upgrade to SMS that will make Android messaging more competitive with iMessage.

Simply put, consumer adoption of AI in personal computing devices may require new hardware developments (AR glasses, perhaps? A smarter smartwatch? Gemini-powered Pixel Buds?), but Google isn’t yet ready to reveal its hardware updates or even tease them. And, as we’ve already seen with the underwhelming launches of the Ai Pin and the Rabbit, hardware is still hard.

Image Credits: Google

Though much can be done today with Google’s AI technology on Android devices, Google’s accessories, like the Pixel Watch and the system that powers it, Wear OS, were largely neglected at I/O beyond some minor performance improvements. Its Pixel Buds earbuds didn’t even get a shout-out. In Apple’s world, these accessories help lock users into its ecosystem, and could someday connect them to an AI-powered Siri. They’re critical pieces of its overall strategy, not optional add-ons.

Meanwhile, there’s a sense of waiting for the other shoe to drop: that is, Apple’s WWDC. The tech giant’s Worldwide Developers Conference promises to unveil Apple’s own AI agenda, perhaps through a partnership with OpenAI or even Google. Will it be competitive? How can it be if the AI can’t deeply integrate into the OS the way Gemini can on Android? The world is waiting for Apple’s response.

With a fall hardware event, Google has time to study Apple’s launches and then try to craft its own AI moment that’s as powerful, and as immediately comprehensible, as Steve Jobs’ introduction of the iPhone: “An iPod, a phone, and an internet communicator. An iPod, a phone… are you getting it?”

People got it. But when will they get Google’s AI in the same way? Not from this I/O, at least.
