Celebrating 2026 Graduates

Big congratulations to my amazing students on their Spring 2026 graduation! 🎉

Michail Nikolaenko—who has been an XR-Lab fellow for years during his undergraduate study and contributed to numerous projects—and three outstanding graduate students I’ve had the pleasure of advising on their theses: (Johnny) Bozhi Peng, Mario Bermejo, and Aleksm Yermakov! A huge shout-out to their impressive work on immersive storytelling, embodied learning, and digital twins. And special congratulations to Johnny for receiving the 2026 Graduate Student Research Award—well deserved!

Immersive Storytelling in VR: Autobiographical Design Exploration

(Johnny) Bozhi Peng, Master of Design, DAAP, UC

Thesis Chair: Heekyoung Jung. Thesis Committee: Ming Tang, Yoshiko Burke

This study centers on immersive storytelling practices in Virtual Reality (VR). As a medium characterized by embodied interaction, VR reorganizes narrative structures beyond traditional visual control through the interplay of space, body, and interaction. The research is informed by the systematic review of emotional design in Extended Reality (XR) by Lin et al. (2025), which identifies Immersion, Presence, and Interactivity as core structural dimensions of immersive XR experiences. In this study, these three elements are collectively referred to as the Immersion–Presence–Interactivity (IPI) Framework. Rather than positioning the IPI Framework as a theory to be validated or extended, the study treats it as a structural lens for organizing observation and reflection within design practice. While many VR storytelling studies focus on narrative structure or interaction design, fewer examine how spatial design decisions influence immersive experience from a designer’s iterative perspective. Against this backdrop, the study explores how the IPI Framework can function as a structural reference within the design process, supporting reflection on creative logic, the organization of design decisions, and the guidance of iterative development. By translating theoretical concepts into design practice, the research proposes a practice-oriented reflective pathway for creators of immersive VR storytelling. It emphasizes the creative process itself, the mechanisms of design adjustment, and the generative logic of emotional trajectories emerging through the interplay of space, perception, and embodied participation. The core contribution of this study does not lie in extending the IPI Framework, but in demonstrating how an immersive storytelling environment evolves under the guidance of a structural framework and develops an embodied design logic that can be articulated and communicated.

Perceiving Sustainability: How Educational Tools Shape Understanding of Earthship Architecture

Mario Bermejo, Master of Design, DAAP, UC

Thesis Chair: Braden Trauth. Thesis Committee: Ming Tang

Architecture education currently lacks engagement with experimental systems like Earthships, leaving students mostly unaware of sustainable models beyond the grid. This thesis explores how VR, scale models, videos, and site visits influence how students perceive and internalize sustainable principles, using Earthships as a case study. By reframing sustainability as something felt and experienced, this study aims to shape how future architects and designers might envision sustainable living. Earthship architecture represents a hyper-efficient approach to sustainable building. Despite its relatively long history and growing relevance in the face of climate change, Earthship design remains marginal and underrepresented within mainstream architectural education and industry sustainability frameworks such as LEED, Passive House, and the Living Building Challenge. This thesis examines how different educational tools, along with varying degrees of embodied engagement, influence architecture and design students’ comprehension of Earthship principles, knowledge retention, and motivation to explore the topic further. The study employed a mixed-methods experimental design in which participants engaged with Earthship concepts through multiple educational tools at varying levels of embodiment. The educational tools used were physical scale-model interaction, virtual reality tours, guided in-person tours, and video sessions. Data were collected through pre- and post-activity surveys, observational notes, and participant feedback to assess comprehension, knowledge retention, and interest in continued learning. Comparative analysis across modalities helped identify how levels of embodiment influenced learning outcomes and engagement. This research demonstrates that experiential and embodied learning environments can transform sustainability from an abstract concept into a meaningful and imaginable reality.
Participants who engaged physically or perceptually with Earthship systems showed stronger emotional engagement, curiosity, and interpretive depth compared to less immersive approaches. The findings also suggest that sustainability education benefits from combining complementary modalities, enabling learners to engage with environmental systems through multiple cognitive pathways. Rather than a single optimal method, different tools contribute distinct forms of understanding, supporting movement between conceptual awareness, technical comprehension, and experiential perception. Additionally, experimental systems such as Earthships function as powerful educational catalysts, sparking interest, critical reflection, and motivation among architecture and design students. These systems not only communicate sustainable principles but actively invite engagement with them. Together, these insights suggest that embodied and multimodal learning can be an effective strategy for advancing sustainability in design education. By bridging the gap between abstract knowledge and lived experience, this approach supports deeper understanding and opens pathways for more effective integration of environmental systems into design practice.

Digital Twin-Based Validation Framework for Monocular Gesture and Position Analysis

Aleksm Yermakov, MS in Computer Science, CEAS, UC

Thesis Chair: Jillian Aurisano. Thesis Committee: Ming Tang, William Hawkins

Vision-based pose estimation models have enabled robust real-time detection of human anatomical keypoints from monocular camera feeds. However, translating these detections into reliable spatial interaction signals remains challenging due to the lack of metric grounding between detected keypoints and the physical environment in which interaction occurs. In applications involving large shared displays or collaborative spaces, accurate interpretation of user presence, position, and directional intent requires a spatially calibrated framework that relates computer vision outputs to the geometry of the surrounding environment. This thesis proposes a spatial interaction validation framework that combines computer vision-based pose detection with a calibrated digital twin of the interaction environment. A simulated representation of the physical room is constructed and scaled using architectural reference measurements to provide metrically accurate ground truth. A fixed camera observes the interaction space and processes video input using a YOLOv8 pose estimation model to detect anatomical keypoints in real time. User presence within an active interaction zone is determined through the detection of ankle keypoints and their projection into a calibrated floor-plane coordinate system. The midpoint between the detected ankle positions provides an estimate of the user’s ground position, which in turn determines the user’s relative distance and field-of-view relationship to the display surface. Directional gesture inference is performed using the elbow-to-wrist vector to estimate pointing intent. Given the known spatial relationship between the user’s estimated position and the display plane, the pointing vector is extended to compute a predicted intersection with the display surface, enabling estimation of the intended target location. 
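The two geometric steps described above—estimating the user's ground position from the ankle midpoint, and extending the elbow-to-wrist vector to find its intersection with the display plane—can be sketched in a few lines. This is a minimal illustrative sketch, not the thesis implementation: all function names, coordinates, and the example calibration (a display plane at x = 0 in a metre-scaled room frame) are assumptions for demonstration.

```python
# Hypothetical sketch of the position and pointing-target estimation
# described above. Names and calibration values are illustrative
# assumptions, not the thesis implementation.

def vsub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def vadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(a, s):
    return tuple(x * s for x in a)

def ground_position(left_ankle, right_ankle):
    """User ground position as the midpoint of the two ankle keypoints,
    already projected into floor-plane coordinates (metres)."""
    return scale(vadd(left_ankle, right_ankle), 0.5)

def pointing_target(elbow, wrist, plane_point, plane_normal):
    """Extend the elbow-to-wrist vector from the wrist and intersect it
    with the display plane; returns the 3D hit point, or None when the
    ray is parallel to or points away from the plane."""
    direction = vsub(wrist, elbow)
    denom = dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None          # ray parallel to the display plane
    t = dot(plane_normal, vsub(plane_point, wrist)) / denom
    if t < 0:
        return None          # pointing away from the display
    return vadd(wrist, scale(direction, t))

# Example: display plane at x = 0 facing +x; user about 2 m away.
pos = ground_position((2.1, 0.0, 0.2), (2.1, 0.0, 0.5))
hit = pointing_target(elbow=(2.0, 1.2, 0.3), wrist=(1.7, 1.3, 0.3),
                      plane_point=(0.0, 0.0, 0.0),
                      plane_normal=(1.0, 0.0, 0.0))
```

In practice the keypoints would come from the YOLOv8 pose output after projection through the calibrated camera-to-room mapping, and the same ray-plane intersection applied to the digital twin's reference ray yields the ground-truth target for the error comparison described below.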
The simulated environment provides a mechanism for generating a reference ray representing the ground-truth pointing direction, allowing direct comparison between predicted interaction coordinates and the spatially accurate simulated result. This framework enables systematic cross-comparison between simulated ground truth and model-derived spatial inference, allowing evaluation of positional accuracy, interaction zone classification, and pointing prediction error. By establishing a calibrated mapping between pose estimation outputs and environmental geometry, this work demonstrates how computer vision models can be integrated into spatially aware interaction systems capable of interpreting user intent within a defined physical environment.