Therapeutic Crisis Intervention Simulation, Phase 3

We are excited to announce the launch of Phase 3 of the VR-Based Employee Safety Training: Therapeutic Crisis Intervention Simulation project, building on the success of the previous two phases. This interdisciplinary collaboration brings together the Immersive Learning Lab and the Employee Safety Learning Lab at Cincinnati Children’s Hospital Medical Center (CCHMC), in partnership with the Extended Reality Lab (XR-Lab) at the University of Cincinnati.

Concept of Digital Twin: Digital Patient + Digital Hospital.

This phase will focus on developing an advanced virtual hospital environment populated with digital patients to simulate a variety of real-world Therapeutic Crisis Intervention (TCI) scenarios. The digital twins encompass both the hospital setting and patient avatars. The project aims to design immersive training modules, capture user performance data, and rigorously evaluate the effectiveness of VR-based training in enhancing employee safety and crisis response capabilities.

Principal Investigator: Ming Tang. Funding Amount: $38,422. Project Period: April 1, 2025 – December 1, 2026.

CCHMC Collaborators: Dr. Nancy Daraiseh, Dr. Maurizio Macaluso, Dr. Aaron Vaughn.

Research Domains: Virtual Reality, Safety Training, Therapeutic Crisis Intervention, Mental Health, Digital Twins, Digital Humans, Human Behavior Simulation.

We look forward to continuing this impactful work and advancing the role of immersive technologies in healthcare education and safety training.

SMAT: Scalable Multi-Agent AI for DT

SMAT: Scalable Multi-Agent Machine Learning and Collaborative AI for Digital Twin Platform of Infrastructure and Facility Operations.

Principal Investigators:

  • Prof. Sam Anand, Department of Mechanical Engineering, CEAS
  • Prof. Ming Tang, Extended Reality Lab, Digital Futures, DAAP

Grant: $40,000. UC Industry 4.0/5.0 Institute Consortium Research Project, 03/2025 – 01/2026.

In this phase, we will develop a scalable digital twin that integrates machine, factory, and city-level data with AI-driven real-time decision-making. The key questions we aim to answer are:

  • Can a high-fidelity Digital Twin (DT) be efficiently built using only image and video data?
  • How can multiple specialized Large Language Model (LLM) agents—at machine, factory, and city levels—collaborate to generate relevant insights?
  • How effective is synthetic data from a Digital Twin for object detection and process recognition?
  • Does combining traditional Machine Learning (ML) with Large Language Models (LLMs) improve decision-making in complex manufacturing operations?

The project’s primary goal is to create a scalable, cloud-based digital twin that enhances operational efficiency through AI-driven insights. Additional technical objectives include:

  • Using advanced reality capture techniques (e.g., Gaussian Splatting) to build a Digital Twin from images and videos and simulate fault scenarios at factory and data center levels.
  • Integrating an IIoT data framework to track material flow, process handling, operational metrics, and equipment status for seamless cloud-based analysis.
  • Developing a synthetic data capture system using a simulated drone within the Digital Twin to train reinforcement learning models for fault prediction.
  • Designing a multi-agent AI system combining LLMs, machine learning, and reinforcement learning to enable dynamic communication, prediction, and diagnostics in the factory (a minimal sketch follows this list).
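
As a rough illustration of the multi-agent design named in the last objective, here is a minimal sketch in Python. All names here (`Agent`, `query_llm`, `escalate`, the role prompts) are hypothetical stand-ins for illustration, not the project's actual implementation; the `query_llm` stub would be replaced by a real LLM backend.

```python
"""Hierarchical multi-agent sketch for a digital twin (illustrative only)."""
from dataclasses import dataclass

def query_llm(role_prompt: str, message: str) -> str:
    # Stub standing in for a real LLM backend; replace with an actual API call.
    return f"[{role_prompt}] insight on: {message.splitlines()[0]}"

@dataclass
class Agent:
    name: str         # hierarchy level, e.g. "machine", "factory", "city"
    role_prompt: str  # scopes the agent's expertise to that level

    def analyze(self, event: str, context: list[str]) -> str:
        # Each agent sees the raw event plus the insights from lower levels.
        message = "\n".join([event, *context])
        return query_llm(self.role_prompt, message)

def escalate(event: str, hierarchy: list[Agent]) -> list[str]:
    """Pass a fault event up machine -> factory -> city, accumulating
    each level's insight as context for the next level's analysis."""
    insights: list[str] = []
    for agent in hierarchy:
        insights.append(agent.analyze(event, insights))
    return insights

if __name__ == "__main__":
    levels = [
        Agent("machine", "CNC spindle fault specialist"),
        Agent("factory", "production-line scheduling analyst"),
        Agent("city", "utility and logistics planner"),
    ]
    for insight in escalate("vibration threshold exceeded on machine M3", levels):
        print(insight)
```

The point of the sketch is the hierarchy itself: each agent reasons only within its scope, and higher levels receive lower-level insights as context rather than raw sensor data.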

Last year’s project: IIoT for legacy and intelligent factory machines with XR and LLM feedback, plus a Digital Twin demonstration of real-time IoT for architecture/building applications using Omniverse.

P&G VISM

Project VISM (Virtual Interactive Simulation for Material Customization)
Interactive Visualization with User-Controlled, Procedural, and Physically Based Material Customization.

PI: Ming Tang.

P&G Team: Kim Jackson, Andrew Fite, Fei Wang, Allison Roman

UC Team: Ming Tang, Aayush Kumar, Yuki Hirota

Sponsor: P&G. 12/01/2024 – 5/31/2025

Amount: $28,350

GRO

Virtual Reality Training on Issues of Youth Firearm Possession.

PI: Ming Tang. 8/5/2024 – 8/4/2025.

Funded by the God.Restoring.Order (GRO) Community, this research project will develop two VR scenarios that simulate environments designed to educate youth on applying critical skills in risky situations.

Team: Ming Tang (XR-Lab) and Aaron Mallory (GRO).

XR-Lab students: Aayush Kumar, Mario Bermejo, Jonny Peng, Amad Alrefai

The XR-Lab is excited to collaborate with the GRO community, leveraging cutting-edge XR technologies to develop a virtual reality (VR) training app that enhances the curriculum by reinforcing key skills through immersive VR activities. Together, we will assess the feasibility of integrating VR technology into GRO’s training program, engaging users with a compelling narrative while equipping them with practical knowledge for real-world application.

GenAI+AR Siemens

Automatic Scene Creation for Augmented Reality Work Instructions Using Generative AI. Siemens. PI: Ming Tang. Co-PI: Tianyu Jiang. $25,000. UC. 4/1/2024 – 12/31/2024.

Students: Aayush Kumar, Mikhail Nikolaenko, Dylan Hutson.

Sponsor: Siemens through UC MME Industry 4.0/5.0 Institute

The project investigates the integration of LLM-based generative AI with HoloLens-based training.
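
As a hedged sketch of how such an integration might work, the example below has a generative model emit a structured scene description that an AR client such as a HoloLens app could bind to spatial anchors and render as holographic work instructions. The `generate_scene` stub and the JSON schema are assumptions for illustration, not the project's or Siemens' actual pipeline.

```python
"""Illustrative pipeline: generative AI -> structured AR work instructions."""
import json

def generate_scene(task: str) -> str:
    # Stub for a generative-AI call; a real system would query an LLM and
    # validate its output. The schema here is a hypothetical example.
    return json.dumps({
        "task": task,
        "steps": [
            {"id": 1, "text": "Remove the access panel",
             "anchor": "panel_bolt_A", "overlay": "arrow"},
            {"id": 2, "text": "Disconnect the cable harness",
             "anchor": "harness_clip", "overlay": "highlight"},
        ],
    })

def load_work_instructions(task: str) -> list[dict]:
    """Parse the generated scene into step records that an AR client
    could attach to spatial anchors and render as overlays."""
    scene = json.loads(generate_scene(task))
    return scene["steps"]

if __name__ == "__main__":
    for step in load_work_instructions("replace servo motor"):
        print(f'{step["id"]}. {step["text"]} -> anchor {step["anchor"]} ({step["overlay"]})')
```

Structuring the model output as machine-readable steps, rather than free text, is what would let an AR runtime place each instruction at the right physical location.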