Action Intelligence, our vision for agents that can take action and complete multi-step tasks on a user’s behalf, was the primary lens through which we explored these new interaction paradigms. Through Interface Control, Call Assistance, and Intelligent Personalization, we devised a narrative throughline that would showcase these feats of strength.
We designed an arrangement of Human-Robot Interactions to exemplify how Gemini 2.0 can serve as a “brain” for any number of robotic functions. This embodied intelligence was showcased through a series of launch videos and blog posts, for which we handled the creative direction and production.
Working directly alongside the Astra research team at DeepMind's London headquarters, the creative process involved both starring in and directing the experiential documentation, ensuring that each demonstrated capability felt natural and contextually relevant to real-world use cases.
This experimental research project was multi-pronged, both in its creation and in the technology that went into it. We were most excited to explore this universal agent beyond the familiar form factor of the smartphone, in this case through hands-free AR glasses (Slide 2).
This project served as the platform for the first public showcase of Google’s new “Any to Any” generative AI model. A mixture of speculative design and high-level creative direction went into this collective effort, aimed at ushering in the aptly named “Gemini Era.”
The centerpiece of the launch was a five-minute “table-top” style video showcasing various feats of strength found within the model. This video was concepted, shot, and directed by myself and a handful of amazing thinkers at the Creative Lab. It spoke to our vision of what we wanted these ever-nearing interactions with future technology not only to look and sound like, but to feel like as well.
Through a series of experiments with my own father, the film offers a unique perspective on these complex issues. His candid audio reactions to the altered images can be heard throughout the film, giving a glimpse into the power of memory and the influence of external stimuli on our perception.
The XR Limb is the result of a year-long research project in which I explored synesthesia and how it relates to amputees and the phantom pain they often feel in their missing limb. It is first and foremost an XR rehabilitation program built on the foundation of mirror therapy, reimagined as a 3D, immersive environment.
XR Limb was designed to be as lightweight as possible. I identified and removed the most common barriers found at the intersection of XR and healthcare. The whole system is completely wireless, with no added hardware or sensors.
Throughout the experiment, I became less interested in whether people could differentiate between human-sourced and AI-generated media, and more interested in when it mattered that they couldn’t.
Go to lookclosely.ai for more information.
I wanted to point to the absurdity of, and call attention to, this new method of media ingestion. As an experiment, I designed and programmed an AR filter to be deployed to platforms like TikTok and Instagram.
The design is inherently inclusive, adapting to the user’s needs. If one chose to sit, sleep, or rest, their experience would be entirely self-curated.