Posts

Therapeutic Crisis Intervention Simulation, Phase 3

We are excited to announce the launch of Phase 3 of the VR-Based Employee Safety Training: Therapeutic Crisis Intervention Simulation project, building on the success of the previous two phases. This interdisciplinary collaboration brings together the Immersive Learning Lab and the Employee Safety Learning Lab at Cincinnati Children’s Hospital Medical Center (CCHMC), in partnership with the Extended Reality Lab (XR-Lab) at the University of Cincinnati.

Concept of Digital Twin: Digital Patient + Digital Hospital.

This phase will focus on developing an advanced virtual hospital environment populated with digital patients to simulate a variety of real-world Therapeutic Crisis Intervention (TCI) scenarios. The digital twins encompass both the hospital setting and patient avatars. The project aims to design immersive training modules, capture user performance data, and rigorously evaluate the effectiveness of VR-based training in enhancing employee safety and crisis response capabilities.

Principal Investigator: Ming Tang. Funding Amount: $38,422. Project Period: April 1, 2025 – December 1, 2026.

CCHMC Collaborators: Dr. Nancy Daraiseh, Dr. Maurizio Macaluso, Dr. Aaron Vaughn.

Research Domains: Virtual Reality, Safety Training, Therapeutic Crisis Intervention, Mental Health, Digital Twins, Digital Humans, Human Behavior Simulation.

We look forward to continuing this impactful work and advancing the role of immersive technologies in healthcare education and safety training.

CIC-VISTA

Following six successful phases of the Building Safety Analysis with AI / Geospatial Imagery Analytics Research project (2020–2025), funded by the Cincinnati Insurance Companies (CIC), we are pleased to announce the launch of a new research initiative: VISTA – Virtual Immersive Systems for Training AI.

VISTA – Phase 1 ($81,413) marks the continuation of the XR-Lab’s collaborative research with CIC at UC. This new track will explore advanced AI-related topics, including computer vision, synthetic imaging, procedural modeling, machine learning, and reinforcement learning.

Project Title: Virtual Immersive Systems for Training AI, Phase 1. PI: Tang. Award Amount: $81,413. Project Period: 07/01/2025 – 11/01/2026.

Bearcat AI Award

UC Bearcat AI Award Supports XR-Lab Student Innovation in 2025–2026

We are excited to share that two XR-Lab student fellows have been selected for the 2025–2026 Bearcat AI Award to support their cutting-edge research projects:

  • Mikhail Nikolaenko ($4,400) — Integrating BIM and GPT for Sustainable Building Analytics and Green Design Education in DAAP and CEAS

  • Aayush Kumar ($5,000) — INARA (Intelligent Navigation and Autonomous Response Agent): An Adaptive Indoor Navigation Assistant for UC Spaces

These projects reflect the XR-Lab’s ongoing commitment to advancing AI-driven solutions in design, education, and campus experience. Congratulations to both recipients!

SMAT: Scalable Multi-Agent AI for DT

SMAT: Scalable Multi-Agent Machine Learning and Collaborative AI for Digital Twin Platform of Infrastructure and Facility Operations.

Principal Investigators:

  • Prof. Sam Anand, Department of Mechanical Engineering, CEAS
  • Prof. Ming Tang, Extended Reality Lab, Digital Futures, DAAP

Grant: $40,000. UC Industry 4.0/5.0 Institute Consortium Research Project. Project Period: 03/2025 – 01/2026.

In this phase, we will develop a scalable digital twin that integrates machine, factory, and city-level data with AI-driven real-time decision-making. The key questions we aim to answer are:

  • Can a high-fidelity Digital Twin (DT) be efficiently built using only image and video data?
  • How can multiple specialized Large Language Model (LLM) agents—at machine, factory, and city levels—collaborate to generate relevant insights?
  • How effective is synthetic data from a Digital Twin for object detection and process recognition?
  • Does combining traditional Machine Learning (ML) with Large Language Models (LLMs) improve decision-making in complex manufacturing operations?

The project’s primary goal is to create a scalable, cloud-based digital twin that enhances operational efficiency through AI-driven insights. Additional technical objectives include:

  • Using advanced reality capture techniques (e.g., Gaussian Splatting) to build a Digital Twin from images and videos and simulate fault scenarios at factory and data center levels.
  • Integrating an IIoT data framework to track material flow, process handling, operational metrics, and equipment status for seamless cloud-based analysis.
  • Developing a synthetic data capture system using a simulated drone within the Digital Twin to train reinforcement learning models for fault prediction.
  • Designing a multi-agent AI system combining LLMs, machine learning, and reinforcement learning to enable dynamic communication, prediction, and diagnostics in the factory.

 

Last year’s project: IIoT for legacy and intelligent factory machines with XR and LLM feedback, including a Digital Twin demonstration of real-time IoT for architecture/building applications using Omniverse.

P&G VISM

Project VISM (Virtual Interactive Simulation for Material Customization)
Interactive Visualization with User-Controlled, Procedural-Based, and Physical-Based Material Customization.

PI: Ming Tang.

P&G Team: Kim Jackson, Andrew Fite, Fei Wang, Allison Roman

UC Team: Ming Tang, Aayush Kumar, Yuki Hirota

Sponsor: P&G. Project Period: 12/01/2024 – 05/31/2025.

Amount: $28,350