Posts

Paper: VR Training to De-escalate Patient Aggressive Behavior

Journal Paper: Virtual Reality Training to De-escalate Patient Aggressive Behavior: A Pilot Study

Daraiseh, N. M., Tang, M., Macaluso, M., Aeschbury, M., Bachtel, A., Nikolaenko, M., … Vaughn, A. (2025). Virtual Reality Training to De-escalate Patient Aggressive Behavior: A Pilot Study. International Journal of Human–Computer Interaction, 1–16. https://doi.org/10.1080/10447318.2025.2576635

Abstract
Despite intensive crisis de-escalation training, psychiatric staff continue to face high injury rates from aggressive patient interactions (APIs). New approaches are needed to enhance the application of effective strategies in managing APIs. This study explored the efficacy and feasibility of VR training for psychiatric staff in recognizing and selecting appropriate de-escalation interventions. A quasi-experimental design with psychiatric staff (N = 33) tested the effectiveness and feasibility of VR training depicting a common API interaction. Effectiveness was assessed through pre-post comparisons of the Confidence in Coping with Patient Aggression (CCPA) survey, correct answer percentages, response times, and attempt success rates. Feasibility was indicated by mean scores above ‘neutral’ on usability, presence, and learner satisfaction surveys. Results showed significant improvements in response times and confidence (p<.0001), with over 75% of participants rating the training positively. VR training is effective and feasible for enhancing de-escalation skills, offering a promising approach for psychiatric facilities.

More information on the project Therapeutic Crisis Intervention Simulation: P1, P2

SENSE + AI

ARCH 4001 studio. Fall 2025

SENSE: Spatial Experiences for Narrative and Sensory Emotions with AI-Assisted Design

Final Presentation: Monday, December 1, 1:30 p.m. – 4 p.m.
CGC Lab, Room 4425, College of DAAP building, UC

Session A: 1:30–3. Three groups
Session B: 3–4. Two groups

Museum concept by UC students Dwayne Carter, Emma Cek, and Courtney Reese. Fall 2025.


Human Digital Twin

High-Fidelity Digital Human Modeling and Visualization

This research investigates advanced methods for constructing high-fidelity digital humans, with a focus on both technical innovation and applied use in immersive environments. The project integrates generative artificial intelligence, image-based modeling, and visualization pipelines to advance realism, interactivity, and usability in human–computer interaction.

Aim 1. Conversational Digital Agents Driven by Large Language Models (LLMs).
The first aim is to use large language models (LLMs) as the core engine for conversational digital humans. By embedding LLM-driven reasoning into virtual agents, the project seeks to create responsive, adaptive, and context-aware “talking agents.” These agents will simulate naturalistic dialogue, provide interactive guidance, and adapt to user needs across diverse scenarios such as education, healthcare training, and collaborative design.
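As a rough illustration only, the sketch below shows the basic loop such an LLM-driven talking agent runs: keep the dialogue history, prepend a persona prompt, query the model, and hand the reply to the avatar. The `query_llm` stub and the `TalkingAgent` class are hypothetical placeholders, not the project's actual implementation or model API.

```python
# Minimal sketch of an LLM-driven "talking agent" loop (illustrative only).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


def query_llm(messages: List[Message]) -> str:
    """Hypothetical LLM call; swap in a real chat-completion API here."""
    return "Placeholder reply to: " + messages[-1]["content"]


@dataclass
class TalkingAgent:
    persona: str  # scenario- and role-specific system instructions
    llm_fn: Callable[[List[Message]], str] = query_llm
    history: List[Message] = field(default_factory=list)

    def respond(self, user_utterance: str) -> str:
        # Keep the full dialogue so the agent stays context-aware across turns.
        self.history.append({"role": "user", "content": user_utterance})
        messages = [{"role": "system", "content": self.persona}] + self.history
        reply = self.llm_fn(messages)
        self.history.append({"role": "assistant", "content": reply})
        return reply  # downstream: text-to-speech and facial animation on the avatar


if __name__ == "__main__":
    agent = TalkingAgent(persona="You are a virtual patient in a clinical training scenario.")
    print(agent.respond("Hi, can you tell me how you're feeling today?"))
```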

Aim 2. Photorealistic Skin and Visual Fidelity Through Scanned Data.
The second aim focuses on the visual accuracy of digital humans. High-resolution image scans will be processed to reconstruct human skin with detailed fidelity, including surface textures, translucency, and micro-geometric variations. The resulting models are capable of 4K photorealistic rendering, significantly enhancing realism in simulation and visualization. This fidelity is crucial for applications where nuanced perception—such as empathy, trust, or attentiveness—depends on subtle visual cues.
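As a hedged sketch of how scan-derived maps are commonly bundled into a physically based skin material (the map set, file names, and resolution below are assumptions, not the project's actual pipeline), color, micro-geometry, and translucency each come from the processed scan data:

```python
# Illustrative layering of scan-derived maps into a PBR skin material (assumed workflow).
from dataclasses import dataclass


@dataclass
class SkinMaterial:
    albedo_map: str         # base skin color reconstructed from the photo scan
    normal_map: str         # micro-geometric detail (pores, fine wrinkles)
    displacement_map: str   # larger-scale surface relief
    subsurface_map: str     # per-region translucency for subsurface scattering
    resolution: int = 4096  # 4K textures for photorealistic close-up rendering


# Hypothetical file paths for one processed head scan.
head_skin = SkinMaterial(
    albedo_map="scan_output/head_albedo_4k.png",
    normal_map="scan_output/head_normal_4k.png",
    displacement_map="scan_output/head_displacement_4k.png",
    subsurface_map="scan_output/head_sss_4k.png",
)
print(head_skin)
```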

Significance.
By combining intelligent conversational capabilities with photorealistic appearance, this research advances the next generation of digital humans. The outcomes will support applications in extended reality (XR), therapeutic and clinical training, collaborative design education, and digital twin environments, where authenticity of both interaction and appearance directly influences user engagement and effectiveness.

High-fidelity digital human.


AI and Emerging Technology Symposium

Ming Tang and Mikhail Nikolaenko presented “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium, University of Cincinnati.

The day-long event explored topics around AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.

AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality

Presenters: Ming Tang, Mikhail Nikolaenko. Feb. 20, 2025, Tangeman University Center.


VR Training on Issues of Youth Firearm Possession

 Virtual Reality Training on Issues of Youth Firearm Possession.

PI: Tang. 8/5/2024–8/4/2025.


Funded by the God.Restoring.Order (GRO) Community, this research project will develop two VR scenarios that simulate environments designed to educate youth on applying critical skills in risky situations.

Team: Ming Tang, XR-Lab, Aaron Mallory, GRO.

XR-Lab students: Aayush Kumar, Mario Bermejo, Jonny Peng, Ahmad Alrefai, Rohit Ramesh, Charlotte Bodie 

The XR-Lab collaborated with the GRO community to leverage advanced extended reality (XR) technologies in the development of a virtual reality (VR) training application designed to strengthen the curriculum by reinforcing key competencies through immersive learning activities. In partnership, we evaluated the feasibility of integrating VR technology into the GRO training program, providing participants with an engaging narrative framework while equipping them with practical knowledge applicable to real-world contexts. The immersive VR scenarios addressed high-risk situations, including firearm possession and substance use, thereby creating a controlled environment for experiential learning and skill development.

The XR-Lab has harnessed advanced motion capture technology in this project to translate the movements of real people into lifelike digital characters. Every gesture, shift in posture, and subtle facial expression is carefully recorded and mapped to ensure authenticity and emotional depth in the virtual environment.

Our development team has worked closely and continuously with the GRO community, engaging in multiple motion studies, rehearsals, and testing sessions. This collaboration allows us to capture not just movement, but the nuance behind each action — the personality conveyed through body language, and the emotional context embedded in facial expression.

Through this process, the digital characters become more than avatars; they become authentic extensions of human experience, reflecting the stories of the people who inspired them. The result is an immersive, emotionally resonant experience where technology and humanity move together.