
Google I/O 2025 recap: a new era for AI, XR and smart devices

Every spring, Google I/O brings together developers, tech leaders, and futurists to explore the technologies that will shape the digital world. The 2025 edition, held in Mountain View, marked a major turning point, particularly in the fields of artificial intelligence, extended reality (XR), and ambient computing.

If 2024 was the year of experimentation, 2025 is the year of implementation. From real-world smart glasses to advanced AI models capable of reasoning, seeing, and understanding the world, this year’s announcements show that the future is already in motion.

Gemini 2.5: beyond the chatbot

Google’s Gemini model has evolved with the launch of Gemini 2.5 Pro and Gemini 2.5 Flash. The Pro version brings enhanced capabilities in long-term memory, multi-step reasoning, and complex coding tasks. What sets this model apart is its new “Deep Think” mode: a feature designed to tackle intricate problems by pausing to reflect and analyze, mimicking how humans concentrate on difficult questions.

Meanwhile, Gemini 2.5 Flash focuses on performance. It’s a lightweight version built for speed and efficiency, ideal for on-device or mobile applications. This model is now the default for many Google services, balancing accuracy with lightning-fast response times.

Most importantly, Gemini is no longer just a web tool: it’s becoming an infrastructure layer across Android, ChromeOS, Android Auto, and even Google TV. This shift will redefine how users interact with devices in everyday life.
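The Pro-versus-Flash trade-off described above maps naturally onto a model-selection step in application code. The sketch below is illustrative only: the model IDs follow Google's public naming for the Gemini API, but the helper function and its parameters are hypothetical, and a real application would pass the chosen ID to the Gemini client along with an API key.

```python
def pick_model(needs_deep_reasoning: bool, latency_sensitive: bool) -> str:
    """Choose a Gemini 2.5 model ID for a workload (illustrative helper).

    Pro targets long-context, multi-step reasoning and complex coding;
    Flash is the lightweight default for fast, on-device-style responses.
    """
    if needs_deep_reasoning and not latency_sensitive:
        return "gemini-2.5-pro"
    return "gemini-2.5-flash"


if __name__ == "__main__":
    # A research-assistant feature tolerates latency in exchange for depth:
    print(pick_model(needs_deep_reasoning=True, latency_sensitive=False))
    # A mobile autocomplete feature prioritizes speed:
    print(pick_model(needs_deep_reasoning=False, latency_sensitive=True))
```

In practice the returned ID would be passed to whichever Gemini SDK or REST endpoint the application uses; the point is simply that the two models are positioned as complementary tiers rather than competitors.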

Google Search goes conversational

Another landmark moment was the rollout of a new AI-powered Search mode. Unlike traditional search, which relies on carefully chosen keywords, this experience lets users interact with Google in natural language. You can ask multi-layered questions (like planning a trip, comparing services, or getting product suggestions) and receive cohesive summaries alongside links, insights, and sources.

The new mode opens in a dedicated tab, visually and structurally separate from classic results. It’s a quiet but powerful signal that Google is reinventing the way we search, with AI at the core.

Project Astra: a real-time AI that sees and understands

If Gemini is the brain, Project Astra is the sensory system. Revealed as one of the most exciting demos of the event, Astra is an AI agent that can see, hear, and understand the world in real time. Built for multimodal interaction, Astra can process both visual and audio input, offering context-aware responses based on what it perceives.

In the demo, Astra was able to recognize its surroundings, remember past interactions, and answer questions with precise, on-the-fly reasoning, such as identifying landmarks through a phone camera or keeping track of objects that had been moved. This lays the foundation for future applications in smart glasses, mobile assistants, and robotics.

Android XR: expanding the mixed reality ecosystem

Although Android XR was first teased earlier this year, Google I/O 2025 marked its true debut. The platform, designed to power next-gen smart glasses and XR headsets, is now available in Developer Preview 2, featuring tools to create immersive, context-aware, and AI-enhanced experiences.

What’s different now is the concrete ecosystem Google is building. Android XR is not just about hardware compatibility: it’s about creating a unified layer for immersive experiences, where Gemini AI becomes the intuitive interface across visual, spatial, and voice inputs.

Smart glasses running Android XR can translate languages in real time, provide turn-by-turn navigation via AR overlays, and offer live assistance, hands-free. And for more immersive use cases, Android XR extends to standalone headsets that support fully immersive environments, gaming, training, and enterprise applications.

Fashion meets function: the new smart glasses

One of the boldest moves was Google’s collaboration with Warby Parker and Gentle Monster: two lifestyle brands known for their design aesthetics. Together, they’re creating smart glasses that aim to be both technically powerful and visually appealing.

Source: Inside Retail

These wearables integrate real-time features such as navigation, translation, and AI-powered assistance, all while maintaining a slim, elegant form factor. The goal is to move away from the “tech gadget” look and toward everyday, wearable design, crucial for mainstream adoption.

Project Moohan: Samsung’s entry into the XR arena

Co-developed with Samsung, Project Moohan is a standalone XR headset designed for rich, immersive experiences. Unlike earlier iterations of AR/VR hardware, this device runs natively on Android XR, making it fully integrated with Google’s AI and spatial computing ecosystem.

Source: Android Authority

Moohan is expected to support applications ranging from enterprise training and industrial design to immersive entertainment and collaboration. With Samsung’s hardware expertise and Google’s software intelligence, this headset could become a serious competitor in the XR space.

Project Aura: XREAL’s optical see-through innovation

Project Aura was also introduced: developed by XREAL, it is a lightweight, optical see-through device that emphasizes augmented reality without bulk. While not as immersive as a full headset, its design makes it ideal for everyday scenarios like smart navigation, contextual prompts, and lightweight computing.

This reflects a growing trend: AR wearables that blend into daily life, powered by AI but designed to disappear into the background.

AI tools for creators: Veo, Imagen & Flow

Beyond productivity, Google is pushing into the creative space with new tools for visual storytelling.

  • Veo 3 allows high-quality video generation from text prompts, ideal for marketing, prototyping, and entertainment.
  • Imagen 4 brings more detail, photorealism, and texture to AI-generated images.
  • Flow is designed as a virtual co-director for studios, assisting in everything from scene creation to editing suggestions.

Together, these tools point to a future where AI augments, not replaces, human creativity.

Android 16 & the new desktop mode

Android 16 arrives with a design refresh called Material 3 Expressive, incorporating more color gradients, blur effects, and motion for a more tactile user experience. But what caught attention was the new desktop mode, developed in collaboration with Samsung.

Source: Tom’s Guide

This feature turns your smartphone into a desktop computer when connected to a larger screen, offering multi-window support, keyboard input, and productivity tools. It’s a step toward device convergence, blurring the lines between mobile and desktop computing.

Final thoughts: the dawn of contextual computing

Google I/O 2025 made one thing clear: we’re entering an era where devices not only respond to us but understand us. Through Gemini, Android XR, and projects like Astra and Moohan, Google is building a world where AI is always-on, always-present, and seamlessly integrated into daily life.

Do you want to discover how XR can enhance your business outcomes?

Let’s talk about it!


VRtuale is a technology product from Coderblock, a company specializing in web, Virtual Reality, and XR solutions for businesses, with a strong focus on immersive, AI-based training experiences.
