Redefining Space in the Age of Spatial Computing

ARCH 4001. Fall 2024


Ming Tang
Professor; Registered Architect, NCARB, LEED AP; Director of the Extended Reality Lab, UC Digital Futures; Leadership Committee, Institute for Research in Sensing (IRiS)
School of Architecture and Interior Design, College of Design, Architecture, Art, and Planning, University of Cincinnati

In the age of spatial computing, architecture transcends the physical boundaries of bricks and mortar, evolving into an expansive realm that merges physical space with computational technologies. This studio embraces the transformative potential of AI in the architectural design process, expanding physical spaces into the metaverse and positioning architecture at the forefront of our society's digital revolution. By serving as a platform for spatial experience, architecture redefines our interaction with space and with each other.

This studio invites students to collectively harness the power of AI to create virtual architectural spaces that inspire, connect, and elevate the human spatial experience. Students will design a convention center to host various activities, from large conference events like SIGGRAPH and trade shows like CES to small lectures and social events. They will research the spatial layout of a convention center and explore the possibilities of migrating these activities to a fully 3D virtual space in the metaverse. The studio will also delve into the intersections of AI, digital twin, and XR technologies with the metaverse, enabling students to push the boundaries of traditional architectural design. Through this exploration, students will gain insights into how digital advancements can enhance and transform spatial experiences, fostering innovation and creativity in the architectural field.

Course Objectives:

  1. Project Focus:
    • Create immersive and interactive experiences within a virtual convention center.
    • Utilize the concept of the metaverse to enable dynamic environments that respond to and anticipate human needs, fostering a deeper connection between individuals during social interactions.
  2. AI-Assisted Design Exploration:
    • AI Collaborators: Collaborate with AI for form inspiration and design-concept iteration. AI will act as a co-creator, offering insights that drive concept design efficiently.
  3. Exploration of the Architect's New Roles:
    • Examine the evolving roles of architects, which now demand new skills and a renewed commitment to digital innovation.
    • Experience Designers: Focus on creating meaningful experiences, considering how both physical and virtual spaces affect emotions, behavior, and social interactions. Design with empathy, prioritizing human-centric approaches.
    • Digital Curators: Curate digital experiences by designing virtual spaces that complement and enhance physical environments, crafting immersive experiences in the metaverse while balancing aesthetics, functionality, and user engagement.

Course Structure:

  • Initial Phase: Introduction to the spatial computing age in architecture; the integration of AI, digital twins, XR, and the metaverse; and the expanded roles of architects.
  • AI Collaboration: Engage with AI tools and methodologies to enhance design processes and outcomes.
  • Experience Design: Focus on the human experience, both in physical and virtual spaces, ensuring human-centered designs.
  • Project Development: Design and develop a virtual convention center with specified programs and interactive features.
  • Ethical and Inclusive Design: Address and incorporate ethical considerations and inclusivity into the final project.

Expected Outcomes:

By the end of this studio course, students will have developed a comprehensive virtual convention center that showcases the innovative use of metaverse technologies. Students will gain a deeper understanding of their evolving roles as architects and the importance of creating inclusive, empathetic, and ethically sound digital spaces.

Program

  • Design a large-scale virtual convention center that includes a large auditorium, exhibition space for trade shows, and variously sized meeting rooms for events, daily use, and collaboration.

Requirement

In addition to the basic software available in the DAAP computer lab, students should have their own computers on which to install additional tools.

AI-assisted design workflows will be encouraged in the studio.

 

Papers at XR conference

Two papers were accepted at the 2024 International Conference on eXtended Reality (XR SALENTO 2024).

Tang, Ming, Mikhail Nikolaenko, Evv Boerwinkle, Samuel Obafisoye, Aayush Kumar, Mohsen Rezayat, Sven Lehmann, and Tamara Lorenz. “Evaluation of the Effectiveness of Traditional Training vs. Immersive Training: A Case Study of Building Safety & Emergency Training.” Paper accepted at the International Conference on eXtended Reality (XR SALENTO 2024), Lecce, Italy, September 4-9, 2024.

Virtual Reality (VR) has revolutionized training across the healthcare, manufacturing, and service sectors by offering realistic simulations that enhance engagement and knowledge retention. However, assessments of the effectiveness of VR training remain sparse. We therefore examine VR's effectiveness in emergency preparedness and building safety, comparing it to traditional training methods. The goal is to evaluate the impact of VR's unique affordances on skill and knowledge development, using digital replicas of building layouts for immersive training experiences. To that end, the research evaluates VR training's advantages and develops performance metrics by comparing virtual performance with actions in physical reality, using wearable technology for performance data collection and surveys for insights. Participants, split into VR and online groups, underwent a virtual fire drill to test their emergency response skills. Findings indicate that VR training heightens perceived urgency and realism, even though knowledge and skill acquisition were similar to those achieved with more traditional lecture-style training. VR participants reported higher stress and greater perceived effectiveness, highlighting VR's immersive benefits. The study supports previous findings on VR's potential in training while emphasizing the need for careful consideration of its cognitive load and technological demands.

 

Tang, Ming, Jai’La Nored, Matt Anthony, Judy Eschmann, Jennifer Williams, and Linda Dunseath. “VR-based Empathy Experience for Nonprofessional Caregiver Training.” Paper accepted at the International Conference on eXtended Reality (XR SALENTO 2024), Lecce, Italy, September 4-9, 2024.

This paper presents the development of a virtual reality (VR) system designed to simulate various caregiver training scenarios, with the aim of fostering empathy by providing visual and emotional representations of the caregiver’s experience. The COVID-19 pandemic has increased the need for family members to assume caregiving roles, particularly for older adults who are at high risk for severe complications and death. This has led to a significant reduction in the availability of qualified home health workers. More than six million people aged 65 and older require long-term care, and two-thirds of these individuals receive all their care exclusively from family caregivers. Many caregivers are unprepared for the physical and emotional demands of caregiving, often exhibiting clinical signs of depression and higher stress levels.

The VR system, EVRTalk, developed by a multi-institutional team, addresses this gap by providing immersive training experiences. It incorporates theories of empathy and enables caregivers to switch roles with care recipients, navigating common scenarios such as medication management, hallucinations, incontinence, end-of-life conversations, and caregiver burnout. Research demonstrates that VR can enhance empathy, understanding, and communication skills among caregivers. The development process included creating believable virtual characters and interactive scenarios to foster empathy and improve caregiving practices. Initial evaluations using surveys showed positive feedback, indicating that VR training can reduce stress and anxiety for caregivers and improve care quality.

Future steps involve using biofeedback to measure physiological responses and further investigating the ethical implications of VR in caregiving training. The ultimate goal is to deploy VR training in homes, providing family caregivers with the tools and knowledge to manage caregiving responsibilities more effectively, thereby enhancing the quality of life for both caregivers and care recipients.

 

GenAI+AR Siemens

Automatic Scene Creation for Augmented Reality Work Instructions Using Generative AI. Siemens. PI: Ming Tang. Co-PI: Tianyu Jiang. $25,000. University of Cincinnati. 4/1/2024-12/31/2024.

Sponsor: Siemens through UC MME Industry 4.0/5.0 Institute

 

Poster at AHFE conference

Nancy Daraiseh, Aaron Vaughn, Ming Tang, Mikhail Nikolaenko, Madeline Aeschbury, Alycia Bachtel, Camryn Backman, Chunyan Liu, and Maurizio Macaluso. “Using Virtual Reality to Enhance Behavioral Staff Training for Interacting with Aggressive Psychiatric Patients.” Poster presented at the 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024), Nice, France, July 24-27, 2024.

Objective: To conduct a pilot study to enhance staff training and confidence when interacting with aggressive psychiatric patients using a virtual reality (VR) training module depicting an escalating patient scenario.

Significance: Dysregulated emotional outbursts, reactive aggression, and self-injurious behaviors are common in psychiatrically hospitalized patients. These behaviors result in aggressive patient interactions (APIs) which are associated with increased risk of harm to the patient and staff. Minimal research has examined interventions for successful training to effectively reduce or prevent API events and subsequent harm. Despite intensive, standardized trainings in crisis de-escalation protocols, staff continue to experience high rates of API injuries. More realistic training and competency in a safe environment to practice implementation and utilization of de-escalation strategies to avoid APIs and patient harm are needed.

Methods: Using a pre-post, quasi-experimental design, 40 Behavioral Health Specialists and Registered Nurses at a pediatric psychiatric facility will participate in VR training depicting a commonly experienced scenario of interacting with an aggressive patient. Participants are stratified by job experience, sex, and VR experience. Study aims are to: i) assess the feasibility and usability of VR training among this population, and ii) obtain measures of learner satisfaction and performance. Surveys measure usability, learner satisfaction, and coping with patient aggression. Pre- and post-training performance will be compared and assessed by the percentage of correct answers on the first attempt, the time to a correct answer, and the number of successful and unsuccessful attempts.

Preliminary Results (full analyses in progress): Preliminary survey results (N=14) show that 64% perceived the VR experience to be consistent with their real-world experiences; 87% agreed that the VR training would help with interactions with aggressive patients; 71% reported the training was effective in identifying de-escalation strategies; and 79% stated that the training was effective in recognizing stages of patient crisis, that it included important skills used in their job, and that they would recommend it. Finally, 100% would participate in future VR trainings.

Anticipated Conclusions: We plan to show that using VR to supplement existing training programs for high-risk situations can improve users' understanding of essential de-escalation and crisis techniques. We anticipate the results will show enhanced ability and confidence when interacting with aggressive patients. Future studies will expand on these results and examine implications for staff and patient harm.

More information is available on the VR-based Employee Safety Training and Therapeutic Crisis Intervention Simulation project pages.

UC UPRISE

XR-Lab’s project was selected to join the Undergraduates Pursuing Research in Science and Engineering (UPRISE) Program
May 6 – July 26, 2024

 

Project title: Reinforcement Learning (RL) System in a Game Engine

Student: Mikhail Nikolaenko, UC. PI: Ming Tang.

This project proposes the development of a sophisticated reinforcement learning (RL) system utilizing the robust and versatile environment of Unreal Engine 5 (UE5). The primary objective is to create a flexible and highly realistic simulation platform that can model a multitude of real-life scenarios, ranging from object recognition and urban navigation to emergency response strategies. This platform aims to significantly advance the capabilities of RL algorithms by exposing them to complex, diverse, and dynamically changing environments. Leveraging the advanced graphical and physical simulation capabilities of UE5, the project will focus on creating detailed and varied scenarios in which RL algorithms can be trained and tested. These scenarios will include, but not be limited to, urban traffic systems, natural disaster simulations, and public safety response models. The realism and intricacy of UE5's environment will provide a challenging and rich training ground for RL models, allowing them to learn and adapt to unpredictable variables akin to those in the real world.
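To make the learning loop concrete, below is a minimal, self-contained Q-learning sketch in Python on a toy grid navigation task. It is an illustration only: the grid, reward values, and all names are assumptions for this example, and in the actual platform the step function would be backed by the UE5 simulation rather than a 5x5 array.

import numpy as np

# Hypothetical 5x5 grid standing in for a simplified navigation scenario:
# states are cells, actions are the four compass moves, and the goal cell
# plays the role of, e.g., a building exit in an emergency-response drill.
N = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
GOAL = (N - 1, N - 1)

def step(state, action):
    """Apply an action, clamp to the grid, and return (next_state, reward, done)."""
    r = min(max(state[0] + action[0], 0), N - 1)
    c = min(max(state[1] + action[1], 0), N - 1)
    nxt = (r, c)
    if nxt == GOAL:
        return nxt, 1.0, True   # reward for reaching the goal
    return nxt, -0.01, False    # small step penalty favors short paths

# Tabular Q-learning: Q[row, col, action] estimates the expected return.
Q = np.zeros((N, N, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2
rng = np.random.default_rng(0)

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy: explore randomly, otherwise act greedily.
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[state[0], state[1]]))
        nxt, reward, done = step(state, ACTIONS[a])
        # Update the estimate toward the bootstrapped one-step target.
        target = reward + gamma * np.max(Q[nxt[0], nxt[1]]) * (not done)
        Q[state[0], state[1], a] += alpha * (target - Q[state[0], state[1], a])
        state = nxt

print("Greedy first move from the start cell:", ACTIONS[int(np.argmax(Q[0, 0]))])

The same loop structure carries over when the environment is a UE5 scenario; only the state, action, and reward definitions change.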

Synthetic material for AI training

Using a game engine to generate synthetic training data offers significant advantages for AI training in image classification, object detection, and animation classification. Synthetic data, created in controlled virtual environments, allows for generating large, diverse, and perfectly labeled datasets. This contrasts with human-labeled material, which is often expensive, time-consuming, and prone to errors. Synthetic data can be tailored to specific needs, ensuring comprehensive coverage of various scenarios and edge cases that might be underrepresented in real-world data. Additionally, synthetic environments facilitate the manipulation of variables, such as lighting and angles, providing a robust training ground for AI models. Overall, the use of synthetic material enhances the efficiency and accuracy of AI training compared to traditional human-labeled datasets.
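As a concrete illustration of why synthetic data is perfectly labeled, the following hypothetical Python sketch uses Pillow in place of a game engine: it renders images with randomized object placement and lighting, and the ground-truth bounding box falls out of the generation step itself, with no human annotation.

import json
import random
from PIL import Image, ImageDraw, ImageEnhance

def make_sample(idx, size=128):
    """Render one synthetic image and return its pixel-perfect label."""
    img = Image.new("RGB", (size, size), (90, 110, 90))  # flat background
    draw = ImageDraw.Draw(img)

    # Randomize what a photographer could not fully control:
    # object size, object position, and overall lighting.
    w, h = random.randint(20, 50), random.randint(20, 50)
    x, y = random.randint(0, size - w), random.randint(0, size - h)
    draw.rectangle([x, y, x + w, y + h], fill=(180, 60, 40))
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.4))

    # The bounding box is exact by construction: no human annotation step.
    label = {"image": f"sample_{idx:04d}.png", "bbox": [x, y, x + w, y + h]}
    img.save(label["image"])
    return label

labels = [make_sample(i) for i in range(100)]
with open("labels.json", "w") as f:
    json.dump(labels, f, indent=2)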

Example

The Roof AI Training project focuses on training an AI system using Unreal Engine 5 (UE5) to generate synthetic data for machine learning applications, specifically for detecting damaged asphalt shingles on roofs. It highlights the need for massive labeled datasets to train AI effectively and discusses the role of Convolutional Neural Networks (CNNs) in image recognition tasks. UE5 is utilized to create procedurally generated roofing textures and environments, providing a cost-effective and scalable solution for data generation. The research project involves generating high-quality synthetic images of roofs, including damaged areas, to train and evaluate a CNN model. The AI aspect includes using Simplex Noise and 3-tab pattern algorithms for realistic texture generation, followed by training a CNN using PyTorch on the generated data. Preliminary results indicate that the model accurately detects most missing shingles, though further improvements are needed in bounding box precision and detection accuracy. Future goals include enhancing the procedural generation algorithm, optimizing the CNN architecture for better performance, and iteratively refining the AI through continuous training and evaluation using both synthetic and real-world data.
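The training side of such a pipeline is conventional supervised learning. Below is a minimal, hypothetical PyTorch sketch of a patch classifier: the real project performs detection with bounding boxes, while this simplified version only classifies patches as intact or damaged, and random tensors stand in for the generated roof images.

import torch
import torch.nn as nn

# Hypothetical stand-in for the shingle-damage model described above; the
# architecture, input size, and data are assumptions for illustration only.
class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)  # two classes: intact / damaged

    def forward(self, x):  # x: (batch, 3, 64, 64)
        return self.head(self.features(x).flatten(1))

model = PatchCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for synthetic roof patches and their labels.
images = torch.rand(32, 3, 64, 64)
targets = torch.randint(0, 2, (32,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

Replacing the random tensors with a DataLoader over the generated images, and the classification head with a detection head, recovers the structure described above.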

 

Related projects: Building Safety Analysis with Machine Learning