Look Closely06–21–2024Look Closely is first and foremost an experiment in human perception. Angled toward discerning the public’s level of familiarity with generative media, participants are shown images and tasked with deciding whether each one was AI generated or a human-sourced photograph. 

Throughout the experiment, I became less interested in whether or not people could differentiate between human-sourced and AI-generated media, and more interested in when it mattered that they couldn’t.

--- more to come---


Expanded Childhood01–22–2023This film acts as a speculative exploration into the blurred lines between memory, reality, and technology. With the use of generative AI, my childhood photos have been expanded, creating a new, if not false, context for each scene. In doing so, the film raises important questions about the nature of memory and how our brains process information. What effects would long-term exposure to these images have on our brains? 

Through a series of experiments with my own father, the film offers a unique perspective on these complex issues. His candid audio reactions to the altered images can be heard throughout the film, giving a glimpse into the power of memory and the influence of external stimuli on our perception.

Runway AI Film Festival -- New York/San Francisco
Immaterial: Digital realities -- Tabakalera, Spain
Licensing Con 23 / Hanyang University -- Seoul, SK

Google DeepMind: Project Gemini01–22–2023This project served as the platform for the first public showcase of Google’s new “Any to Any” generative AI model. A mixture of speculative design and high-level creative direction went into this collective effort aimed at ushering in the aptly named “Gemini Era.” 

The centerpiece of the launch was a five-minute “table-top” style video showcasing various feats of strength found within the model. The video was concepted, shot, and directed by myself and a handful of amazing thinkers at the Creative Lab. It spoke to our vision of what we wanted these ever-nearing interactions with the tech of the future to not only look and sound like, but to feel like as well. 

“Google admits AI viral video was edited to look better” - BBC
“Google's best Gemini demo was faked” - TechCrunch
“Google has a new AI . . . at least one demo wasn’t real” - The Verge

XR Limb11–11–2022
The XR Limb is the result of a year-long research project in which I explored synesthesia and how it relates to amputees and the phantom pain they often feel in their missing limb. It is first and foremost an XR rehabilitation program built on the foundation of mirror therapy, reimagined as a 3D, immersive environment. 
XR Limb was designed to be as lightweight as possible; I identified and removed the most common barriers found within XR and healthcare. The whole system is completely wireless, with no added hardware or sensors.

In collaboration with Limb Lab and UNMC
More effective Myoelectric Prosthesis calibration 
Entirely contained within the $299 Quest 2


Google DeepMind: Project Astra05–13–2024 This project was my second time working with DeepMind in a creative and collaborative capacity. In a multi-team, multi-continent sprint, we were tasked with demonstrating Google’s view on real-time, multi-modal assistants that can see and reason about the world around you.

This experimental research project was multi-pronged, both in its creation and in the tech that went into it. We were most excited by exploring this universal agent beyond the familiar form factor of the smartphone, in this case through hands-free AR glasses (Slide 2). 

Utility Based Knowledge Assistant
Premiered at Google I/O by Demis Hassabis 
Designed Demos for Smartphone, Glasses, and Booths
 

Multi-Stream Content Consumption4–14–2023
This project was an investigation into a concerning evolution in how younger generations consume content: a new media format that encourages viewer retention while simultaneously degrading the user’s attention span. With the rise of social media platforms like TikTok, we are seeing more and more trends emerge that encourage simultaneous, multi-stream content consumption.  

I wanted to call attention to the absurdity of this new method of media ingestion. As an experiment, I designed and programmed an AR filter to be deployed to platforms like TikTok and Instagram. 

100K Opens in 24 Hours
500K Lifetime Impressions  
Link to Filter

Abacus Bench1–11–2022
This speculative solution to public seating takes inspiration from the original calculator. The rail-guided seating modules allow the user to have agency over their seating arrangements. The design takes into account the recent behavioral trend of social distancing, allowing for individual space based on personal comfort levels. 
The design is inherently inclusive, adapting to the user’s needs. Whether one chooses to sit, sleep, or rest, their experience is entirely self-curated.


A4M-Lamp10–24–2022
The product of an industrial design sprint, the A4M Lamp embodies rapid prototyping and digital fabrication. The 3D-printed base was created from a 3D scan of my arm, leveraging LiDAR to capture and convert the geometry of my body into a functional lighting tool. The lamp is fully modular, with articulating flex points allowing it to be shaped to the user’s needs. 

ECO PLA Filament
Rubber Coated Steel Wire
Wireless Rechargeable LED Lantern

Slime Mold 6-Inch Boot5–1–2022
Slime Mold (Physarum polycephalum) is a eukaryotic organism that can aggregate to form multicellular reproductive structures. The conceptual design process takes inspiration from the innate architectural abilities of slime mold by experimenting with how it will choose to populate the classic Timberland.

 Featured by TIMBERLAND as a winning entry in their CONSTRUCT: 10061 design challenge. The challenge focused on utilizing natural materials as a means to explore advanced methods of manufacturing in order to reimagine the iconic boot.

Timberland TECTRA
CONSTRUCT 10061
Concept Kicks



PLÆTO12–12–2021
A deep dive into virtual worlds and embodied avatars, PLÆTO blurs the line between physical and digital by combining real-time rendering technology with live motion-capture performance. The interpretive dance acts as a loose retelling of the famous cave allegory: a being subjected to a narrowed view of reality, and the subsequent cognitive dissonance that follows once a new, realer one is discovered. 
There were multiple real-time elements incorporated throughout the piece. In-engine lighting was manipulated by positional data driven by the dancer's movements, and the volume of individual instrumental stems was tied to specific body actions. A link to the full performance can be found below.
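
As a rough, hypothetical sketch of how mappings like these can work (this is not the actual UE5/OptiTrack code; every name, unit, and threshold below is invented for illustration), driving a light's intensity from hand height and fading an instrumental stem when the arms rise might look something like this:

def clamp01(x: float) -> float:
    # Keep parameter values inside the 0..1 range expected by light/audio controls.
    return max(0.0, min(1.0, x))

def light_intensity(hand_y: float, floor_y: float = 0.0, ceil_y: float = 2.5) -> float:
    # Scale an in-engine light from the dancer's hand height (meters above the floor).
    return clamp01((hand_y - floor_y) / (ceil_y - floor_y))

def stem_volume(left_hand_y: float, right_hand_y: float, head_y: float) -> float:
    # Fade an instrumental stem in as both hands rise above the head;
    # full volume once the lower hand is ~0.5 m above head height.
    raised = min(left_hand_y, right_hand_y) - head_y
    return clamp01(raised / 0.5)

# One frame of hypothetical mocap data, in meters.
frame = {"left_hand_y": 1.9, "right_hand_y": 2.0, "head_y": 1.7}
print(light_intensity(frame["right_hand_y"]))                                      # would drive a light parameter
print(stem_volume(frame["left_hand_y"], frame["right_hand_y"], frame["head_y"]))   # would drive a stem's volume

In the performance itself, the positional data came from the OptiTrack motion-capture system feeding Unreal Engine 5, but the idea is the same: continuous body data is normalized and remapped onto lighting and audio parameters every frame.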

Full Performance
Unreal Engine 5  + OptiTrack Motion Capture
Multidisciplinary Collaboration ( Sound, Choreo, UE5 Tech)


©2024Contact for inquiries  ->  sam@lwtnlabs.com