Abstract
Despite intensive crisis de-escalation training, psychiatric staff continue to face high injury rates from aggressive patient interactions (APIs). New approaches are needed to enhance the application of effective strategies in managing APIs. This study explored the efficacy and feasibility of VR training for psychiatric staff in recognizing and selecting appropriate de-escalation interventions. A quasi-experimental design with psychiatric staff (N = 33) was used to test the effectiveness and feasibility of VR training depicting a common API. Effectiveness was assessed through pre-post comparisons of the Confidence in Coping with Patient Aggression (CCPA) survey, correct answer percentages, response times, and attempt success rates. Feasibility was indicated by mean scores above ‘neutral’ on usability, presence, and learner satisfaction surveys. Results showed significant improvements in response times and confidence (p < .0001), with over 75% of participants rating the training positively. VR training is effective and feasible for enhancing de-escalation skills, offering a promising approach for psychiatric facilities.
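A minimal sketch of the kind of pre-post comparison the abstract describes, assuming paired CCPA scores per participant and a paired t-test; the scores below are made up and the study's exact statistical test is not specified here.

```python
# Illustrative pre-post comparison of paired CCPA scores (hypothetical data).
import numpy as np
from scipy import stats

pre = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.3])   # made-up pre-training CCPA means
post = np.array([3.9, 3.5, 4.1, 3.8, 3.6, 4.0])  # made-up post-training CCPA means

# Paired t-test: each participant is compared against their own baseline.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```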
This design studio invites students to explore the complex relationship between architecture, human emotion, and experiential design through the conceptualization and design of a Museum of Emotion. Students will investigate how spatial design can evoke, mediate, and communicate emotional states—moving beyond functionality to create environments that resonate on a deep psychological and sensory level. By integrating neuroscience, art, culture, and digital technologies, students will develop speculative proposals for a museum that acts not only as a cultural institution but also as a space of introspection, empathy, and transformation.
Studio Objectives
Understand and interpret the spatial, sensory, and material qualities that influence human emotional responses.
Translate research on emotion into architectural language (form, light, material, scale, sequence, etc.).
Design immersive environments that express or evoke specific emotional states.
Engage interdisciplinary methods (AI, Extended Reality, digital media) to inform spatial experience.
Critically assess cultural, ethical, and therapeutic dimensions of designing for emotion.
Key Questions
How can architectural elements—light, space, materiality, proportion—evoke emotional responses?
What is the role of immersive and interactive technology (VR/AR, AI, biometric feedback) in shaping emotional experiences?
How do cultural, personal, and neurophysiological factors affect emotional perception of space?
How can architecture foster emotional literacy, empathy, and collective memory?
Museum concept by UC students Meghan Powell, Stella Shover, and Ainsley Brown. Fall 2025.
Program
Each student (or team) will design a Museum of Emotion on a site of their choice. The museum must include:
Core Zones (Required):
Emotion Lab: Interactive gallery presenting scientific and technological perspectives on emotion.
Rooms of Emotion: A minimum of three immersive emotional environments (e.g., joy, fear, sadness, awe, love, anger).
Memory Archive: A participatory or data-driven installation space where emotional memories are recorded, interpreted, and displayed.
Cultural Expressions Gallery: A rotating exhibition space focused on how different cultures represent and process emotions.
Optional Programs (Student-Defined):
Workshop or educational spaces
Performance or therapeutic spaces
Café or gathering zone
Outdoor sensory garden or emotional path
Design Tools and Methods
Precedent studies of museums, memorials, and immersive installations
Digital modeling and rendering (with emphasis on atmosphere and mood)
Use of AI-assisted simulations, AIGC, and VR walkthroughs
Assessment through user feedback surveys
Read:
The Architecture of Happiness by Alain de Botton
Neuroarchitecture: Designing with the Brain in Mind by Ann Sussman
High-Fidelity Digital Human Modeling and Visualization
This research investigates advanced methods for constructing high-fidelity digital humans, with a focus on both technical innovation and applied use in immersive environments. The project integrates generative artificial intelligence, image-based modeling, and visualization pipelines to advance realism, interactivity, and usability in human–computer interaction.
Aim 1. Conversational Digital Agents Driven by Large Language Models (LLMs). The first aim is to utilize large language models (LLMs) as the core engine for conversational digital humans. By embedding LLM-driven reasoning into virtual agents, the project seeks to create responsive, adaptive, and context-aware “talking agents.” These agents will simulate naturalistic dialogue, provide interactive guidance, and adapt to user needs across diverse scenarios such as education, healthcare training, and collaborative design.
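As a concrete illustration of the dialogue loop Aim 1 describes, here is a minimal sketch assuming the OpenAI chat API as the reasoning engine; the model name, system prompt, and training scenario are illustrative, not the project's actual configuration.

```python
# Minimal LLM-driven "talking agent" loop: the conversation history keeps the
# agent context-aware across turns, so it can adapt its guidance to the user.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a conversational digital human guiding a trainee through a "
    "healthcare training scenario. Stay in character, keep answers brief, "
    "and adapt to the user's questions."
)

def chat_loop() -> None:
    """Run a simple text-based dialogue with persistent conversation history."""
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        user_text = input("You: ")
        if not user_text.strip():
            break
        history.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print("Agent:", answer)

if __name__ == "__main__":
    chat_loop()
```

In a full digital-human pipeline, the text reply would additionally drive speech synthesis and facial animation; this sketch covers only the reasoning layer.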
Aim 2. Photorealistic Skin and Visual Fidelity Through Scanned Data. The second aim focuses on the visual accuracy of digital humans. High-resolution image scans will be processed to reconstruct human skin with detailed fidelity, including surface textures, translucency, and micro-geometric variations. The resulting models are capable of 4K photorealistic rendering, significantly enhancing realism in simulation and visualization. This fidelity is crucial for applications where nuanced perception—such as empathy, trust, or attentiveness—depends on subtle visual cues.
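One small piece of such a scan-processing pipeline can be sketched as follows, assuming a scanned height/displacement map as input and converting it to a tangent-space normal map with finite differences; the file names and the strength factor are illustrative, not the project's actual pipeline.

```python
# Convert a scanned skin height map into a normal map that preserves micro-geometry.
import numpy as np
from PIL import Image

def height_to_normal_map(height_path: str, strength: float = 2.0) -> Image.Image:
    """Derive a tangent-space normal map from a grayscale height/displacement scan."""
    height = np.asarray(Image.open(height_path).convert("L"), dtype=np.float32) / 255.0
    # Finite-difference gradients approximate the surface slope at each texel.
    dy, dx = np.gradient(height)
    # Build per-texel normals; 'strength' exaggerates fine surface detail.
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    normal = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap from [-1, 1] to [0, 255] for an 8-bit RGB normal map.
    rgb = ((normal * 0.5 + 0.5) * 255).astype(np.uint8)
    return Image.fromarray(rgb, mode="RGB")

# Hypothetical 4K skin scan in, normal map out.
height_to_normal_map("skin_height_4k.png").save("skin_normal_4k.png")
```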
Significance. By combining intelligent conversational capabilities with photorealistic appearance, this research advances the next generation of digital humans. The outcomes will support applications in extended reality (XR), therapeutic and clinical training, collaborative design education, and digital twin environments, where authenticity of both interaction and appearance directly influences user engagement and effectiveness.
Ming Tang and Mikhail Nikolaenko presented “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium, University of Cincinnati.
The day-long event explored topics around AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.
AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality
Presenters: Ming Tang and Mikhail Nikolaenko. Feb. 20, 2025, Tangeman University Center.
This presentation explores two AI-driven talking avatars developed at the UC Extended Reality (XR) Lab, leveraging large language models (LLMs) for realistic interaction in XR environments. The XRLab Bot acts as a virtual tour guide, providing real-time engagement and navigation through the lab with spatial awareness, while the P&G Bot emulates a high-fidelity human likeness, delivering product expertise within a VR setting. These bots highlight advancements in AI, LLMs, and XR, showcasing potential applications in education, customer service, and smart campuses. The session will cover:
AI-driven navigation and interaction: translating speech to text with Whisper AI, retrieving responses from ChatGPT, and interpreting real-time visitor needs and spatial data to guide users throughout the XRLab.
Multi-client, real-time architecture: managing multiple Unreal and Python clients with a central server that coordinates bot actions, face tracking, and area-specific responses in real time.
XR integration and smart campus applications: the bot's adaptability within XR platforms built on Unreal Engine and its potential for virtual and augmented reality applications in campus tours, orientations, and educational experiences.
LLM-driven conversational AI: using advanced models to power sophisticated, natural-language interactions with users.
High-fidelity 3D avatar creation: crafting detailed, lifelike avatars capable of mimicking human expressions and movements.
Customizable AI chat avatars: personalized, AI-driven avatars tailored to specific user preferences and needs.
Interactive avatars with facial animation and motion capture: avatars that exhibit dynamic facial expressions and reactions during interactions.
Metaverse creation: developing immersive, interconnected virtual worlds where users interact through their avatars.
Virtual reality (VR) and augmented reality (AR) environments and experiences: blending digital content with the physical world or creating entirely virtual spaces.
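As a concrete illustration of the speech-to-text and LLM steps listed above, here is a minimal sketch assuming the open-source Whisper package and the OpenAI chat API; the audio file name, model choices, and guide prompt are assumptions, and the actual XRLab Bot additionally incorporates spatial data and Unreal-side coordination.

```python
# Transcribe a spoken visitor question with Whisper, then generate a guide-style reply.
import whisper                # pip install openai-whisper
from openai import OpenAI

stt_model = whisper.load_model("base")
llm = OpenAI()

def answer_visitor(audio_path: str) -> str:
    """Speech in, text reply out: the core loop behind a conversational tour bot."""
    transcript = stt_model.transcribe(audio_path)["text"]
    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a lab tour guide. Answer briefly."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(answer_visitor("visitor_question.wav"))  # hypothetical recording
```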
Presenters: Ming Tang, Director of XR-Lab and Professor, DAAP, UC.
Mikhail Nikolaenko, XR-Lab Fellow, UC
Team: Aayush Kumar, Ahmad Alrefai
P&G Bot: A high-fidelity avatar modeled on a real individual, rendered with lifelike facial animations and lip synchronization within a VR environment. This bot is trained on a specialized database containing information on P&G’s shampoo products and company history. Its development process involved scanning a human model, rigging, animating, and integrating the LLM, enhanced through XR-based visualization and UI. The result is a realistic, interactive bot that combines human likeness with expert knowledge.
More information on Digital Human simulation at XR-Lab.
Virtual Reality Training on Issues of Youth Firearm Possession.
PI: Tang. 8/5/2024–8/4/2025.
Funded by the God.Restoring.Order (GRO) Community, this research project will develop two VR scenarios that simulate environments designed to educate youth on applying critical skills in risky situations.
Team: Ming Tang, XR-Lab, Aaron Mallory, GRO.
XR-Lab students: Aayush Kumar, Mario Bermejo, Jonny Peng, Ahmad Alrefai, Rohit Ramesh, Charlotte Bodie
The XR-Lab collaborated with the GRO community to leverage advanced extended reality (XR) technologies in the development of a virtual reality (VR) training application designed to strengthen the curriculum by reinforcing key competencies through immersive learning activities. In partnership, we evaluated the feasibility of integrating VR technology into the GRO training program, providing participants with an engaging narrative framework while equipping them with practical knowledge applicable to real-world contexts. The immersive VR scenarios addressed high-risk situations, including firearm possession and substance use, thereby creating a controlled environment for experiential learning and skill development.
The XR-Lab has harnessed advanced motion capture technology in this project to translate the movements of real people into lifelike digital characters. Every gesture, shift in posture, and subtle facial expression is carefully recorded and mapped to ensure authenticity and emotional depth in the virtual environment.
Our development team has worked closely and continuously with the GRO community, engaging in multiple motion studies, rehearsals, and testing sessions. This collaboration allows us to capture not just movement, but the nuance behind each action — the personality conveyed through body language, and the emotional context embedded in facial expression.
Through this process, the digital characters become more than avatars; they become authentic extensions of human experience, reflecting the stories of the people they represent. The result is an immersive, emotionally resonant experience where technology and humanity move together.
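A minimal sketch of the retargeting step implied above: copying recorded joint rotations from a motion-capture skeleton onto a digital character's rig via a name map. The joint names, frame format, and rig are illustrative assumptions, not the lab's actual motion-capture pipeline.

```python
# Map mocap joint names to a character rig and copy per-joint rotations for one frame.
from typing import Dict, List

# Assumed correspondence between mocap skeleton joints and rig bones.
JOINT_MAP: Dict[str, str] = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "Head": "head",
    "LeftArm": "upperarm_l",
    "RightArm": "upperarm_r",
}

def retarget_frame(mocap_frame: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """Copy per-joint rotations (quaternions x, y, z, w) onto the mapped rig bones."""
    rig_frame = {}
    for mocap_joint, rotation in mocap_frame.items():
        bone = JOINT_MAP.get(mocap_joint)
        if bone is not None:
            rig_frame[bone] = rotation  # a real pipeline would also correct rest-pose offsets
    return rig_frame

# One captured frame (identity rotations) applied to the character rig.
frame = {name: [0.0, 0.0, 0.0, 1.0] for name in JOINT_MAP}
print(retarget_frame(frame))
```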