Real-time Visualization, Virtual Reality & Augmented Reality
Explores interactive virtual reality (VR) and augmented reality (AR) systems and real-time rendering for architectural visualization, human-computer interaction, spatial behavior, and wayfinding studies.

Paper: VR Training to De-escalate Patient Aggressive Behavior

Journal Paper: Virtual Reality Training to De-escalate Patient Aggressive Behavior: A Pilot Study

Daraiseh, N. M., Tang, M., Macaluso, M., Aeschbury, M., Bachtel, A., Nikolaenko, M., … Vaughn, A. (2025). Virtual Reality Training to De-escalate Patient Aggressive Behavior: A Pilot Study. International Journal of Human–Computer Interaction, 1–16. https://doi.org/10.1080/10447318.2025.2576635

Abstract
Despite intensive crisis de-escalation training, psychiatric staff continue to face high injury rates from aggressive patient interactions (APIs). New approaches are needed to enhance the application of effective strategies in managing APIs. This study explored the efficacy and feasibility of VR training for psychiatric staff in recognizing and selecting appropriate de-escalation interventions. A quasi-experimental design with psychiatric staff (N = 33) tested the effectiveness and feasibility of VR training depicting a common API interaction. Effectiveness was assessed through pre-post comparisons of the Confidence in Coping with Patient Aggression (CCPA) survey, correct answer percentages, response times, and attempt success rates. Feasibility was indicated by mean scores above ‘neutral’ on usability, presence, and learner satisfaction surveys. Results showed significant improvements in response times and confidence (p<.0001), with over 75% of participants rating the training positively. VR training is effective and feasible for enhancing de-escalation skills, offering a promising approach for psychiatric facilities.
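
As a rough sketch of the pre-post comparison described in the abstract (not the study's actual analysis code; the CCPA scores below are invented placeholders), a paired comparison of confidence scores could be run in Python as follows:

```python
# Hypothetical sketch of a pre/post paired comparison of CCPA scores.
# The numbers are invented placeholders, not study data.
import numpy as np
from scipy import stats

pre_ccpa = np.array([3.1, 2.8, 3.5, 2.9, 3.2])   # pre-training confidence scores
post_ccpa = np.array([4.0, 3.9, 4.2, 3.8, 4.1])  # post-training confidence scores

# Paired t-test: each participant serves as their own control.
t_stat, p_value = stats.ttest_rel(post_ccpa, pre_ccpa)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```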

More information on the Therapeutic Crisis Intervention Simulation project: Phase 1, Phase 2.

Therapeutic Crisis Intervention Simulation, Phase 3

We are excited to announce the launch of Phase 3 of the VR-Based Employee Safety Training: Therapeutic Crisis Intervention Simulation project, building on the success of the previous two phases. This interdisciplinary collaboration brings together the Immersive Learning Lab and the Employee Safety Learning Lab at Cincinnati Children’s Hospital Medical Center (CCHMC), in partnership with the Extended Reality Lab (XR-Lab) at the University of Cincinnati.

This phase will focus on developing an advanced virtual hospital environment populated with digital patients to simulate a variety of real-world Therapeutic Crisis Intervention (TCI) scenarios. The digital twins encompass both the hospital setting and patient avatars. The project aims to design immersive training modules, capture user performance data, and conduct a rigorous evaluation of the effectiveness of VR-based training in enhancing employee safety and crisis response capabilities.

Principal Investigator: Ming Tang. Funding Amount: $38,422. Project Period: April 1, 2025 – December 1, 2026.

CCHMC Collaborators: Dr. Nancy Daraiseh, Dr. Maurizio Macaluso, Dr. Aaron Vaughn.

Research Domains: Virtual Reality, Safety Training, Therapeutic Crisis Intervention, Mental Health, Digital Twins, Digital Humans, Human Behavior Simulation.

We look forward to continuing this impactful work and advancing the role of immersive technologies in healthcare education and safety training.

Concept of Digital Twin: Digital Patient + Digital Hospital.

Paper on AI, XR, Metaverse, and Digital Twins

 

Metaverse and Digital Twins in the Age of AI and Extended Reality

Tang, Ming, Mikhail Nikolaenko, Ahmad Alrefai, and Aayush Kumar. 2025. “Metaverse and Digital Twins in the Age of AI and Extended Reality” Architecture 5, no. 2: 36. https://doi.org/10.3390/architecture5020036

 

This paper explores the evolving relationship between Digital Twins (DTs) and the Metaverse, two foundational yet often conflated paradigms in digital architecture. While DTs function as mirrored models of real-world systems, integrating IoT, BIM, and real-time analytics to support decision-making, Metaverses are typically fictional, immersive, multi-user environments shaped by social, cultural, and speculative narratives. Through several research projects, the team investigates the divergence between DTs and Metaverses through the lens of their purpose, data structure, immersion, and interactivity, while highlighting areas of convergence driven by emerging technologies in Artificial Intelligence (AI) and Extended Reality (XR). This study examines how AI, XR, and Large Language Models (LLMs) are blurring the traditional boundaries between the two. By analyzing their divergent purposes, data structures, and interactivity modes, as well as hybrid applications (e.g., data-integrated virtual environments and AI-driven collaboration), the study seeks to define the opportunities and challenges of this integration for architectural design, decision-making, and immersive user experiences.

The research spans multiple projects utilizing XR and AI to develop DTs and the Metaverse. The team assesses the capabilities of AI in DT environments, such as reality capture and smart building management, and concurrently evaluates metaverse platforms for online collaboration and architectural education, focusing on features that facilitate multi-user engagement. The paper presents evaluations of several virtual environment development pipelines, comparing traditional BIM+IoT workflows with novel approaches such as Gaussian Splatting and generative AI for content creation. The team further explores the integration of LLMs in both domains, for example as virtual agents or LLM-powered non-player characters (NPCs), enabling autonomous interaction and enhancing user engagement within spatial environments.

Finally, the paper argues that the once-distinct boundaries between DTs and the Metaverse are becoming increasingly porous. Hybrid digital spaces, such as virtual buildings with data-integrated twins and immersive, social metaverses, demonstrate this convergence. As digital environments mature, architects are uniquely positioned to shape these dual-purpose ecosystems, leveraging AI, XR, and spatial computing to fuse data-driven models with immersive, user-centered experiences.
 
Keywords: metaverse; digital twin; extended reality; AI

The paper is featured on the cover of the Architecture journal.
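
As a minimal illustration of the LLM-powered NPCs mentioned in the abstract (a sketch only, not the project's implementation; `query_llm` and the persona text are hypothetical stand-ins for a real model endpoint), an LLM-driven NPC can be reduced to a persona prompt plus a running message history:

```python
# Sketch of an LLM-powered NPC dialogue loop for a virtual environment.
# query_llm() is a hypothetical stand-in; wire it to any chat-completion API.

NPC_PERSONA = (
    "You are a hospital receptionist inside a virtual building. "
    "Answer visitor questions briefly and stay in character."
)

def query_llm(messages: list[dict]) -> str:
    # Placeholder so the sketch runs; a real project would call a
    # chat-completion endpoint here with the full message list.
    return "Sure, the radiology wing is down the hall to your left."

def npc_reply(history: list[dict], user_utterance: str) -> str:
    """Append the user's line, query the model with persona + history, record the reply."""
    history.append({"role": "user", "content": user_utterance})
    reply = query_llm([{"role": "system", "content": NPC_PERSONA}] + history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Example exchange: the history list carries context across turns.
history: list[dict] = []
print(npc_reply(history, "Where is radiology?"))
```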

AI and Emerging Technology Symposium

Ming Tang and Mikhail Nikolaenko presented “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium, University of Cincinnati.

The day-long event explored topics around AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.

AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality

Presenters: Ming Tang, Mikhail Nikolaenko. Feb. 20, 2025, Tangeman University Center.


VR Training on Issues of Youth Firearm Possession

Project: Virtual Reality Training on Issues of Youth Firearm Possession.

PI: Ming Tang. Project Period: 8/5/2024 – 8/4/2025.

 

Funded by the God.Restoring.Order (GRO) Community, this research project will develop several VR scenarios that simulate risky situations, designed to educate youth on applying critical skills in those contexts.

Team: Ming Tang, XR-Lab, Aaron Mallory, GRO.

XR-Lab students: Aayush Kumar, Mario Bermejo, Jonny Peng, Ahmad Alrefai, Rohit Ramesh, Charlotte Bodie 

The XR-Lab collaborated with the GRO community to leverage advanced extended reality (XR) technologies in the development of a virtual reality (VR) training application designed to strengthen the curriculum by reinforcing key competencies through immersive learning activities. In partnership, we evaluated the feasibility of integrating VR technology into the GRO training program, providing participants with an engaging narrative framework while equipping them with practical knowledge applicable to real-world contexts. The immersive VR scenarios addressed high-risk situations, including firearm possession and substance use, thereby creating a controlled environment for experiential learning and skill development.

The XR-Lab has harnessed advanced motion capture technology in this project to translate the movements of real people into lifelike digital characters. Every gesture, shift in posture, and subtle facial expression is carefully recorded and mapped to ensure authenticity and emotional depth in the virtual environment.
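
For readers curious what the mapping step looks like in practice, here is a toy sketch of retargeting captured joint rotations onto an avatar skeleton (the bone names and data are hypothetical illustrations, not the project's actual rig or pipeline):

```python
# Toy illustration of one step in a mocap pipeline: remapping captured joint
# rotations from a capture-skeleton naming scheme onto an avatar's skeleton.
# Bone names and the bone map are hypothetical, not the project's actual rig.

BONE_MAP = {  # capture skeleton -> avatar skeleton
    "mocap_spine_01": "avatar_spine",
    "mocap_head": "avatar_head",
    "mocap_hand_r": "avatar_rightHand",
}

def retarget_frame(frame: dict[str, tuple]) -> dict[str, tuple]:
    """Carry each captured joint rotation over to the matching avatar bone."""
    return {BONE_MAP[j]: rot for j, rot in frame.items() if j in BONE_MAP}

# Example: one captured frame of quaternion rotations (w, x, y, z).
frame = {"mocap_head": (1.0, 0.0, 0.0, 0.0), "mocap_hand_r": (0.97, 0.0, 0.24, 0.0)}
print(retarget_frame(frame))
```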

Our development team has worked closely and continuously with the GRO community, engaging in multiple motion studies, rehearsals, and testing sessions. This collaboration allows us to capture not just movement, but the nuance behind each action — the personality conveyed through body language, and the emotional context embedded in facial expression.

Through this process, the digital characters become more than avatars; they become authentic extensions of human experience, reflecting the stories of the people they represent. The result is an immersive, emotionally resonant experience where technology and humanity move together.