No longer are you simply consuming or reacting to generated media; you're interacting with it in real time, and your actions and intentions are reflected in the world you've built around yourself.
We designed a series of Human-Robot Interactions to exemplify how Gemini 2.0 can serve as a "brain" for any number of robotic functions. This embodied intelligence was showcased through a series of launch videos and blog posts, for which we handled the creative direction and production.
Action Intelligence, our vision for agents with the ability to take action and complete multi-step tasks on a user's behalf, was the main point of contact through which we explored these new interaction paradigms. Through Interface Control, Call Assistance, and Intelligent Personalization, we devised a narrative throughline to showcase these feats of strength.
Working directly alongside the Astra research team at DeepMind's London headquarters, the creative process involved devising the sequences through experiential documentation of stress tests against the experimental model, ensuring that each demonstrated capability felt natural and contextually relevant to real-world use cases, while remaining 100% authentic to the model's capabilities at the time.
This experimental research project was multi-pronged in both its creation and the technology that went into it. We were most excited by exploring this universal agent beyond the familiar form factor of smartphones, in this case, hands-free AR glasses (Slide 2).
This project served as the platform to launch the first public showcase of Google’s new “Any to Any” generative AI model. There was a mixture of speculative design and high-level creative direction that went into this collective effort aimed at ushering in the aptly named “Gemini Era.”
The centerpiece of the launch was a five-minute "table-top" style video showcasing various feats of strength found within the model. This video was concepted, shot, and directed by myself and a handful of amazing thinkers at the Creative Lab. It spoke to our vision of what we wanted these ever-nearing interactions with the tech of the future to not only look and sound like, but to feel like as well.