Sam Lawton is an artist, designer, filmmaker, and creative technologist at Google Creative Lab in New York City.
Focused on speculative and empathetic design, his work explores technology and how it interacts with society through a critical and artistic lens.
This project served as the platform for the first public showcase of Google’s new “Any to Any” generative AI model. A mixture of speculative design and high-level creative direction went into this collective effort aimed at ushering in the aptly named “Gemini Era.”
The centerpiece of the launch was a five-minute “table-top” style video showcasing various feats of strength found within the model. The video was concepted, shot, and directed by me and a handful of amazing thinkers at the Creative Lab. It spoke to our vision of what we wanted these ever-nearing interactions with the tech of the future not only to look and sound like, but to feel like as well.
For the better part of two years, a small team of creatives and I have been operating on the cutting edge of generative AI, with work taking shape as smart glasses, universal assistants, realtime chatbots, media generation, and more. For the first time, we’ve expanded into the physical realm through a medley of robotic form factors.
We designed a plethora of Human-Robot Interactions to exemplify how Gemini 2.0 can serve as a “brain” for any number of robotic functions. This embodied intelligence was showcased through a series of launch videos and blog posts, for which we handled the creative direction and production.
The most recent showcase of Google’s Project Astra. I worked closely with the Astra team at DeepMind HQ in London, starred in the video, and concepted the Human-AI interactions.
Through R&D-style explorations into the model to uncover interesting feats of strength, this project resulted in the development of several new interaction patterns between humans and AI knowledge receptacles.
This experimental research project was multi-pronged in both its creation and the tech that went into it. We were most excited to explore this universal agent beyond the familiar form factor of the smartphone, in this case through hands-free AR glasses (Slide 2).
The XR Limb is the result of a year-long research project in which I explored synesthesia and how it relates to amputees and the phantom pain they often feel in their missing limbs. It is first and foremost an XR rehabilitation program built on the foundation of mirror therapy, reimagined as a 3D, immersive environment.
XR Limb was designed to be as lightweight as possible; I identified and removed the most common barriers found within XR and healthcare. The whole system is completely wireless, with no added hardware or sensors.
Through a series of experiments with my own father, the film offers a unique perspective on these complex issues. His candid audio reactions to the altered images can be heard throughout the film, giving a glimpse into the power of memory and the influence of external stimuli on our perception.
Throughout the experiment, I became less interested in whether or not people could differentiate between human-sourced and AI-generated media, and more interested in when it mattered that they couldn’t.
Go to lookclosely.ai for more information.
I wanted to call attention to the absurdity of this new method of media ingestion. As an experiment, I designed and programmed an AR filter to be deployed on platforms like TikTok and Instagram.
The design is inherently inclusive, adapting to the user’s needs. If one chose to sit, sleep, or rest, their experience would be entirely self-curated.