Gemini, AI agents, and fulfilling the vision of Pixel 4’s new Google Assistant

Recently, there has been growing buzz around AI agents that can carry out commands by operating your phone on your behalf, tapping and swiping through the interface as needed. The concept strongly recalls the "new Google Assistant" unveiled alongside the Pixel 4 in 2019.

Google first showcased the next-generation Assistant at I/O 2019, emphasizing that on-device speech processing made it fast enough to leave traditional tap-and-swipe phone interactions feeling sluggish by comparison.

Google demonstrated simple commands for navigating and controlling apps, along with a more complex scenario in which the Assistant orchestrated a task across several apps.

In one demo, the user received a text, replied by voice, and then searched for and sent a related photo without touching the screen. Google billed these capabilities as "Operating" and "Multi-tasking," complemented by a natural-language "Compose" feature in Gmail.

The new Assistant debuted on the Pixel 4 later that year and has shipped on Google's subsequent devices. Examples of commands include:
  • "Take a selfie." Followed by "Share this with Ryan."
  • In a chat thread, saying "Reply, I’m on my way."
  • "Search for yoga classes on YouTube." Then instructing to "Share this with mom."
  • "Show me emails from Michelle on Gmail."
  • With Google Photos open, saying "Show me New York pictures." Then specifying "The ones at Central Park."
  • With a recipe site open in Chrome, saying "Search for chocolate brownies with nuts."
  • With a travel app open, instructing to "Find hotels in Paris."
These commands illustrate the core concept behind AI agents.

During an Alphabet earnings call last month, Sundar Pichai addressed the impact of generative AI on the Assistant, saying it has the potential to evolve into a more proactive "agent" that doesn't just provide answers but follows through on tasks for users.
