Sam Lawton is an artist, designer, filmmaker, and creative technologist at Google Creative Lab in New York City.

Focused on speculative and empathetic design, his work explores technology and how it interacts with society through a critical and artistic lens.

Google DeepMind: Project Gemini
12–06–2023
This project served as the first public showcase of Google’s new “Any to Any” generative AI model. A mixture of speculative design and high-level creative direction went into this collective effort aimed at ushering in the aptly named “Gemini Era.”

The centerpiece of the launch was a five-minute “table-top” style video showcasing various feats of strength found within the model. It was concepted, shot, and directed by me and a handful of amazing thinkers at the Creative Lab, and it spoke to our vision of what we wanted these ever-nearing interactions with the tech of the future to not only look and sound like, but to feel like as well.

“Google admits AI viral video was edited to look better” - BBC
“Google's best Gemini demo was faked” - TechCrunch
“Google has a new AI . . . at least one demo wasn’t real” - The Verge

Google DeepMind: Gemini Robotics
03–12–2025
For the better part of two years, a small team of creatives and I have been operating on the cutting edge of generative AI, with our work taking shape as smart glasses, universal assistants, realtime chatbots, media generation, and more. For the first time, we’ve expanded into the physical realm through a medley of robotic form factors.
We designed a plethora of Human-Robot Interactions to exemplify how Gemini 2.0 can serve as a “brain” for any number of robotic functions. This embodied intelligence was showcased through a series of launch videos and blog posts, for which we handled the creative direction and production.

See Blog Post
Watch Full Launch Video



Google DeepMind: Exploring London with Project Astra
12–07–2024
The most recent showcase of Google’s Project Astra. I worked closely with the Astra team at DeepMind HQ in London, starred in the video, and concepted the Human-AI interactions.

Through R&D-style explorations into the model to uncover interesting feats of strength, this project resulted in the development of several new interaction patterns between humans and AI knowledge receptacles.



Google DeepMind: Project Astra
05–13–2024
This project was my second time working with DeepMind in a creative and collaborative capacity. In a multi-team, multi-continent sprint, we were tasked with demonstrating Google’s vision for realtime, multi-modal assistants that can see and reason about the world around you.

This experimental research project was multi-pronged both in its creation and in the tech that went into it. We were most excited by exploring this universal agent beyond the familiar form factor of the smartphone, in this case through hands-free AR glasses (Slide 2).

Utility Based Knowledge Assistant
Premiered at Google I/O by Demis Hassabis 
Designed Demos for Smartphone, Glasses, and Booths
 

XR Limb
11–11–2022
The XR Limb is the result of a year-long research project in which I explored synesthesia and how it relates to amputees and the phantom pain they often feel in their missing limb. It is first and foremost an XR rehabilitation program built on the foundation of mirror therapy, reimagined as a 3D, immersive environment.
XR Limb was designed to be as lightweight as possible: I identified and removed the most common barriers found within XR and healthcare. The whole system is completely wireless, with no added hardware or sensors.

In collaboration with Limb Lab and UNMC
More effective Myoelectric Prosthesis calibration 
Entirely contained within the $299 Quest 2


Expanded Childhood
01–22–2023
This film acts as a speculative exploration into the blurred lines between memory, reality, and technology. Using generative AI, my childhood photos have been expanded, creating a new, if not false, context for each scene. In doing so, the film raises important questions about the nature of memory and how our brains process information. What effects would long-term exposure to these images have on our brains?

Through a series of experiments with my own father, the film offers a unique perspective on these complex issues. His candid audio reactions to the altered images can be heard throughout the film, giving a glimpse into the power of memory and the influence of external stimuli on our perception.

Runway AI Film Festival -- New York/San Francisco
Immaterial: Digital realities. -- Tabakalera, Spain
Licensing Con 23 / Hanyang University -- Seoul, SK

Look Closely
03–21–2024
Look Closely is first and foremost an experiment in human perception. Aimed at discerning the public’s level of familiarity with generative media, it shows participants a series of images and tasks them with deciding whether each one is AI-generated or a human-sourced photograph.

Throughout the experiment, I became less interested in whether or not people could differentiate between human-sourced and AI-generated media, and more interested in when it mattered if they couldn’t.

Go to lookclosely.ai for more information.


Multi-Stream Content Consumption
04–14–2023
This project was an investigation into a concerning evolution in how younger generations consume content: a media format that encourages viewer retention while simultaneously degrading the user’s attention span. With the rise of social media platforms like TikTok, we are seeing more and more trends emerge that encourage simultaneous, multi-stream content consumption.

I wanted to call attention to the absurdity of this new method of media ingestion. As an experiment, I designed and programmed an AR filter and deployed it to platforms like TikTok and Instagram.

100K Opens in 24 Hours
500K Lifetime Impressions  
Link to Filter

Abacus Bench
01–11–2022
This speculative solution to public seating takes inspiration from the abacus, the original calculator. The rail-guided seating modules allow users to have agency over their seating arrangements. The design takes into account the recent behavioral trend of social distancing, allowing for individual space based on personal comfort levels.
The design is inherently inclusive, adapting to the user’s needs. Whether one chooses to sit, sleep, or rest, their experience is entirely self-curated.


©2024 Contact for inquiries -> sam@lwtnlabs.com