Google I/O 2024: AI Overviews Go Live and Other Product Announcements

Written by
Anna Postol
Reviewed by
Olena Karpova
May 17, 2024
6 min read

The future of search is getting a decidedly AI-powered reboot. Google made that clear at its annual Google I/O 2024 event on May 14, unveiling a wave of new AI-driven search capabilities.

Will AI Overviews finally go live?

The event’s highlight was the long-awaited launch of the Search Generative Experience (SGE) under its new “AI Overviews” moniker. AI Overviews will debut first in the U.S. market, with Google promising an eventual expansion: “more countries coming soon.”

To learn more about SGE’s early stages, check out our article: Inside Google SGE: Uncovering the beta version of AI-driven search.

Danny Sullivan, Google’s Search Liaison, assured that AI Overviews is launch-ready. He also noted that users engage more with search results and report higher satisfaction when AI Overviews are available.

This is an exciting announcement, although there is a tiny “but” worth considering. 

SGE previously generated results of “questionable quality,” as highlighted by Barry Schwartz.

That said, we hope Google continues to refine its AI capabilities, improving output quality and keeping results aligned with ethical principles.

By the way, OpenAI’s recent GPT-4o release has shown promising improvements, setting the new standard for AI language models. Check it out!

What capabilities will AI Overviews have?

The announcement that AI Overviews is going live was just the appetizer. Google went on to serve a full multi-course menu of AI search features coming to the experience. Initially, these features will be available only through Google’s Search Labs, in English, for the U.S. market.

Here are some of the features they revealed:

  • Users will be able to ask complex questions that require multi-step reasoning, such as planning a trip.
  • Users will be able to interrupt a generated answer to refine or modify the query.
  • AI Overviews will help with planning ahead, such as building a three-day meal plan.
  • For broad queries, AI Overviews will group results into subcategories.
  • Google Lens will work with both photos and live video: you can film something and ask questions about what’s in the frame.

More Google I/O 2024 highlights 

Gemini 1.5 Pro enhancements

Google has upgraded its Gemini AI model to handle much larger inputs, including lengthy documents, code, videos, and audio files. The new Gemini 1.5 Pro will gain a 2-million-token context window later this year, greatly expanding its ability to understand long-form content. That is a larger context window than any other commercial AI model on the market currently offers.
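To get a feel for what a 2-million-token window means in practice, here is a rough back-of-envelope sketch. The conversion factors are assumptions (roughly 4 characters per token for English text, about 6 characters per word, 500 words per dense page); actual tokenizer behavior varies by model and content.

```python
# Back-of-envelope: how much text fits in a given context window?
# All conversion factors below are rough heuristics, not Gemini specifics.

CHARS_PER_TOKEN = 4    # common rule of thumb for English text
CHARS_PER_WORD = 6     # average word plus trailing space
WORDS_PER_PAGE = 500   # a dense, single-spaced page

def tokens_to_pages(tokens: int) -> int:
    """Convert a token budget to an approximate page count."""
    chars = tokens * CHARS_PER_TOKEN
    words = chars / CHARS_PER_WORD
    return round(words / WORDS_PER_PAGE)

for window in (1_000_000, 2_000_000):
    print(f"{window:>9,} tokens ≈ {tokens_to_pages(window):,} pages of text")
```

By this estimate, the upcoming 2-million-token window corresponds to well over two thousand pages of text in a single prompt.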

Project Astra: Google’s multimodal AI assistant

Google also announced Project Astra, a multimodal AI assistant capable of engaging in semi-conversational interactions while using the device’s camera to identify objects, people, and their actions in real time. The AI assistant is part of Google’s Gemini project and is currently in its early stages of development.

You can watch the video of Project Astra in action here.

Imagen 3 update

Next up was the launch of Imagen 3, Google’s most advanced image generation model to date. Imagen 3 creates detail-rich, realistic images with fewer flaws than earlier models. It understands natural language better and can pick up small details from longer prompts, enabling it to work across different styles and render text more accurately.

Google has made the model available to some creators through a private preview on ImageFX and plans to offer it soon on Vertex AI. The public can join the waitlist for early access. 

Trillium TPU

Google announced Trillium, the sixth and latest generation of its custom AI accelerator, the Tensor Processing Unit (TPU). Trillium TPUs deliver a 4.7x boost in peak compute performance per chip compared to the previous TPU v5e. Trillium is also Google’s most energy-efficient TPU generation to date, with over 67% better energy efficiency than TPU v5e.
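It’s worth noting what those two figures imply when combined. Assuming “energy efficiency” here means peak compute per watt (an assumption on our part; Google’s announcement doesn’t spell out the exact metric), the quoted numbers suggest each Trillium chip draws noticeably more power than a v5e chip:

```python
# Back-of-envelope combining the two quoted Trillium figures.
# Assumes "energy efficiency" means peak compute per watt.

compute_ratio = 4.7       # Trillium peak compute per chip vs. TPU v5e
efficiency_ratio = 1.67   # ">67% better" energy efficiency vs. TPU v5e

# perf/watt = perf / power  =>  power_ratio = perf_ratio / (perf-per-watt ratio)
power_ratio = compute_ratio / efficiency_ratio
print(f"Implied per-chip power draw vs. v5e: ~{power_ratio:.1f}x")
```

In other words, the efficiency gain comes on top of a substantially higher per-chip power budget, not instead of one.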

Google plans to make Trillium TPUs available to cloud customers later this year.

Ask with Video in Google search

Google also gave a live demonstration of a new search feature that showcases its advances in video understanding. Users will be able to ask questions in video format: Search analyzes the visuals, understands the query, and offers explanations, next steps, and relevant resources through an AI-generated overview.

Gemini mobile app

Google is enhancing the Gemini app with a slew of new capabilities, including: 

  • The Gemini 1.5 Pro model, with an upgraded 1-million-token context window, accessible to Gemini Advanced subscribers.
  • File uploads in Gemini Advanced, via Google Drive or directly from your device.
  • Upcoming tools for analyzing data and building charts from uploaded data files, such as spreadsheets.
  • A new planning feature in Gemini Advanced that creates custom itineraries rather than just suggesting activities.
  • Gemini Live, for natural spoken conversations, with new and more advanced speech technology, 10 natural-sounding voices to choose from, and the ability to interrupt mid-response.
  • Integration with Google Messages, so you can chat with Gemini there.
  • Custom “Gems” that Gemini Advanced users can tailor to specific use cases.
  • Upcoming connections to more Google tools, such as Calendar, Tasks, Keep, and Clock.


Gemma 2

Google is scaling up its Gemma family of open language models. The next generation, Gemma 2 (launching in June), will add a 27-billion-parameter model. Nvidia has optimized this larger model to run efficiently on next-generation GPUs, and it can also run on a single TPU host in Vertex AI.
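For a sense of why a 27B model fitting on a single TPU host is notable, here is a rough memory estimate. It assumes bfloat16 weights (2 bytes per parameter) and ignores activations, KV cache, and runtime overhead, so treat it as a lower bound rather than a deployment spec:

```python
# Rough serving-memory estimate for a 27B-parameter model.
# Assumes bfloat16 weights (2 bytes/parameter); activations, KV cache,
# and runtime overhead are ignored, so real usage is higher.

PARAMS = 27e9
BYTES_PER_PARAM = 2  # bfloat16

weight_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{weight_gb:.0f} GB")
```

Tens of gigabytes of weights alone puts a model of this size out of reach of most consumer GPUs but comfortably within a single multi-accelerator host.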


LearnLM

Google has introduced LearnLM, a family of AI language models fine-tuned for education. These Gemini-based models are already being used to improve learning features in several Google products, including Search, YouTube, and Google Classroom. Most of these features, however, are not broadly available yet.

A few final remarks

The advancements introduced during Google I/O 2024 are a clear indication that the AI revolution is in full swing. The way we search, learn, and interact with technology will never be the same. From AI Overviews and video searches to custom AI assistants, Google is pushing the boundaries of what’s possible. 

The future is now. Are you ready to embrace it?
