I’m an AI expert, and these 8 announcements at Google I/O impressed me the most

The past two Google I/O developer conferences have mainly been AI events, and this year is no different. The tech giant used the stage to unveil features across all its most popular products and even brought previously announced AI experiments to fruition.
This means that dozens of AI features and tools were unveiled. They’re meant to transform how you use Google offerings, including how you shop, video call, sort your inbox, search the web, create images, edit video, code, and more. Since such a firehose of information is packed into a two-hour keynote address, you may be wondering which features are actually worth paying attention to.
Also: Everything announced at Google I/O 2025: Gemini, Search, Android XR, and more
I have spent some time looking through the different offerings and picked the top 8 features that I, as a general AI consumer, think you should be excited about, because they will either immediately make your workflow and life easier or signal a transformational change coming your way.
1. Inbox cleanup
Out of all the AI features announced this week, inbox cleanup is perhaps one of the least flashy, and it is easy to lose in the shuffle of the news. However, it solves a genuinely practical problem. How many times have you wished you could mass delete emails that fit specific criteria?
Also: Run out of Gmail storage? How I got another 15GB for free and without losing any files
The new Inbox Cleanup feature lets you simply ask Gemini to clean your inbox using a conversational prompt. Google gave the example, “Delete all of my unread emails from The Groomed Paw from last year,” and Gemini would take care of it in seconds.
2. Agentic shopping
If you like to score a deal, this “buy for me” feature will be your best friend. You will soon be able to use the new agentic checkout feature in AI Mode to track the price of an item on any product listing and set the amount you want to spend on it. Once you select the “buy for me” option and the product falls to the price you indicated, it can place the order for you using Google Pay.
This feature will come in especially handy during big sales events like Black Friday, or ahead of holidays, birthdays, and even splurges, when you know you want to make a purchase but need it to hit a specific price point to justify the decision.
3. Free Gemini Live multimodal access
One of the best features of AI voice assistants is their multimodality. This allows the voice assistant, which can already speak to you conversationally, to also have the context of your surroundings for an added layer of assistance. Now, Google is raising the ante by making Gemini Live with camera and screen sharing free for everyone on both Android and iOS.
Also: How to use ChatGPT’s Voice Mode (and why you’ll want to)
I have openly shared on ZDNET how often I use ChatGPT’s Advanced Voice Mode, OpenAI’s equivalent to Gemini Live, for the added assistance it offers when performing everyday tasks. However, that feature isn’t available to free users; access requires a ChatGPT Plus subscription at $20 per month.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
4. AI Mode for all
Much of Google I/O focused on AI Mode, Google’s search experiment that combines traditional search with the conversational nature of chatbots. Some of the updates were really promising, including the incorporation of Project Mariner, a research prototype built on Gemini 2.0 that can automate tasks in your browser and will be able to purchase tickets and book reservations on your behalf.
Also: Why Google seems to be losing its iron grip on search – and what I use now instead
The most interesting part, however, wasn’t the flashy features but the fact that Google is moving the experience out of Labs and making it available to all users, a sign the company wants people to interact with it more. Google even said it will graduate AI Mode features and capabilities into the core Search experience in AI Overviews after collecting feedback. I read this as a sign that Google Search is about to look even more different than it does now.
5. Veo 3
In the AI video generation space, OpenAI has led the charge with its Sora model. However, Veo 3 has a competitive edge: it can add audio to the videos, too. This means that with Veo 3, you can generate background noise and even dialogue for a video clip, taking the usefulness of these clips for content creation to the next level.
Google also launched Flow, an AI filmmaking tool that combines Veo 3 and the newly released Imagen 4 to create high-quality videos. Although the average person may not reach for this feature every day, it is exciting to see Google launch something cutting-edge that the other major AI players haven’t done before.
6. Jules
Google moved its Jules asynchronous coding agent out of Labs and into public beta, making it available to all users without a waitlist. If you code in any capacity, you’ll want to check out Jules, as it can work with your existing repositories, including those on GitHub.
Also: The best AI for coding in 2025 (including two new top picks – and what not to use)
Then, without your guidance, it reasons through decisions on its own to debug your code, build new features, write tests, and more. Jules can even show you its plan before committing any changes. If privacy is a concern, Google assures users that Jules doesn’t train on your private code and that it clones your codebase into a secure Google Cloud virtual machine. OpenAI announced a similar tool, Codex, last week.
7. Android XR
We have reached the part of the roundup where the value of the announcements is measured by how transformational they will be in the future. Android XR offers a glimpse of how all of the AI advancements discussed above will manifest in the physical world, acting as a vehicle that bridges your surroundings with AI for added assistance.
Also: These XR glasses gave me a 200-inch screen to work with – and you can’t beat the price
For example, the Android XR glasses have a camera and microphones to see and hear what you do, along with speakers. When paired with Gemini, this means the AI will have a lot of context about your everyday activities, allowing it to provide better, more robust assistance. The glasses will also have an optional in-lens display that can privately show you information when you need it.
8. Google Beam
Google Beam was the biggest surprise of the keynote. The AI-first 3D video platform combines an AI video model with a light field display to create a virtual conversation that feels more natural and intuitive. Even without a headset or eyewear, you can experience the depth and dimensionality of the other person, as if you were meeting in person.
The first Google Beam devices are coming later this year with HP.
This technology really has the potential to transform how workplace communications take place, adding another layer of depth to virtual conversations.