Posts

Human Digital Twin

High-Fidelity Digital Human Modeling and Visualization

This research investigates advanced methods for constructing high-fidelity digital humans, with a focus on both technical innovation and applied use in immersive environments. The project integrates generative artificial intelligence, image-based modeling, and visualization pipelines to advance realism, interactivity, and usability in human–computer interaction.

Aim 1. Conversational Digital Agents Driven by Large Language Models (LLMs).
The first aim is to use LLMs as the core engine for conversational digital humans. By embedding LLM-driven reasoning into virtual agents, the project seeks to create responsive, adaptive, and context-aware “talking agents.” These agents will simulate naturalistic dialogue, provide interactive guidance, and adapt to user needs across diverse scenarios such as education, healthcare training, and collaborative design.
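As a rough illustration of this aim (not the project's implementation), the sketch below shows a minimal LLM-driven agent loop that keeps a running dialogue history and a persona prompt so replies stay context-aware; the OpenAI client usage, model name, and persona text are assumptions made for the example.

```python
# Minimal sketch of an LLM-driven "talking agent" loop (illustrative only).
# Assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment;
# the persona text and model name are placeholders, not the project's setup.
from openai import OpenAI

client = OpenAI()

# Persona / context prompt that keeps the agent adaptive and scenario-aware.
SYSTEM_PROMPT = (
    "You are a virtual guide in an immersive training scenario. "
    "Answer briefly, stay in character, and adapt to the user's stated goal."
)

def agent_reply(history: list[dict], user_text: str) -> str:
    """Append the user's turn, query the LLM, and return the agent's reply."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history: list[dict] = []
    print(agent_reply(history, "Walk me through today's training scenario."))
```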

Aim 2. Photorealistic Skin and Visual Fidelity Through Scanned Data.
The second aim focuses on the visual accuracy of digital humans. High-resolution image scans will be processed to reconstruct human skin with detailed fidelity, including surface textures, translucency, and micro-geometric variations. The resulting models are capable of 4K photorealistic rendering, significantly enhancing realism in simulation and visualization. This fidelity is crucial for applications where nuanced perception—such as empathy, trust, or attentiveness—depends on subtle visual cues.
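One small, illustrative step in this kind of scan-processing pipeline is deriving a detail normal map (micro-geometry) from a scan's luminance gradients. The sketch below shows only that step, using NumPy and Pillow; the file names and bump strength are placeholders and this is not the project's actual tooling or workflow.

```python
# Minimal sketch: derive a tangent-space detail normal map from a high-resolution
# skin scan, approximating micro-geometry from luminance gradients.
# The file names and STRENGTH value are placeholders, not the project's pipeline.
import numpy as np
from PIL import Image

STRENGTH = 4.0  # assumed bump strength; tune per scan

scan = np.asarray(Image.open("skin_scan_4k.png").convert("L"), dtype=np.float32) / 255.0

# Luminance gradients approximate micro-geometric variation in the scan.
gy, gx = np.gradient(scan)

# Pack gradients into a unit normal per pixel (OpenGL-style tangent space).
normals = np.dstack((-gx * STRENGTH, -gy * STRENGTH, np.ones_like(scan)))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# Remap from [-1, 1] to [0, 255] and save as an 8-bit normal map texture.
normal_map = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
Image.fromarray(normal_map).save("skin_detail_normal.png")
```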

Significance.
By combining intelligent conversational capabilities with photorealistic appearance, this research advances the next generation of digital humans. The outcomes will support applications in extended reality (XR), therapeutic and clinical training, collaborative design education, and digital twin environments, where authenticity of both interaction and appearance directly influences user engagement and effectiveness.

High-fidelity digital human.


AI and Emerging Technology Symposium

Ming Tang and Mikhail Nikolaenko presented “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium, University of Cincinnati.

The day-long event explored topics around AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.

AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality

Presenters: Ming Tang, Mikhail Nikolaenko. Feb. 20, 2025, Tangeman University Center.

This presentation explores two AI-driven talking avatars developed at the UC Extended Reality (XR) Lab, leveraging large language models (LLMs) for realistic interaction in XR environments. The XRLab Bot acts as a virtual tour guide, providing real-time engagement and navigation through the lab with spatial awareness, while the P&G Bot emulates a high-fidelity human likeness, delivering product expertise within a VR setting. These bots highlight advancements in AI, LLMs, and XR, showcasing potential applications in education, customer service, and smart campuses.

The presentation will cover AI-driven navigation, multi-client architecture, and XR integration for immersive digital experiences. It will demonstrate how the XRLab Bot translates speech to text with Whisper AI, retrieves responses from ChatGPT, and interprets visitors’ needs and spatial data in real time to guide them through the lab. It will also examine the multi-client, real-time architecture, sharing insights on managing multiple Unreal and Python clients through a central server that coordinates bot actions, face tracking, and area-specific responses.

The discussion will further cover XR integration and smart campus applications, emphasizing the bot’s adaptability within Unreal Engine-based XR platforms and its potential for virtual and augmented reality campus tours, orientations, and educational experiences. Additional topics include LLM-driven conversational AI for natural language interaction; high-fidelity 3D avatar creation, focused on lifelike avatars that mimic human expressions and movements; customizable chat avatars tailored to specific user preferences and needs; interactive avatars with facial animation and motion capture that exhibit dynamic expressions during interactions; metaverse creation, with immersive, interconnected virtual worlds where users interact through their avatars; and VR and AR environments that blend digital content with the physical world or create entirely virtual spaces.
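As a hedged sketch of the speech pipeline described above (speech-to-text with Whisper, responses retrieved from ChatGPT, spatial context in the prompt), the example below shows one plausible single-turn flow using the OpenAI Python SDK; the audio file path, model names, and prompt wording are assumptions, not the XRLab Bot's actual code.

```python
# Illustrative sketch of the speech -> text -> LLM -> reply pipeline described above.
# Assumes the OpenAI Python SDK; "visitor_question.wav", the model names, and the
# system prompt are placeholders, not the XRLab Bot's actual configuration.
from openai import OpenAI

client = OpenAI()

def transcribe(audio_path: str) -> str:
    """Convert a visitor's spoken question to text with Whisper."""
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return result.text

def answer(question: str, location: str) -> str:
    """Ask the LLM for a reply, giving it the visitor's current location as context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": f"You are a lab tour guide. The visitor is near: {location}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    text = transcribe("visitor_question.wav")  # hypothetical recording
    print(answer(text, location="motion-capture area"))
```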

Presenters
Ming Tang, Director of XR-Lab. Professor, DAAP, UC.

Mikhail Nikolaenko, XR-Lab Fellow, UC

Team: Aayush Kumar, Ahmad Alrefai

P&G Bot: A high-fidelity avatar modeled on a real individual, rendered with lifelike facial animations and lip synchronization within a VR environment. This bot is trained on a specialized database containing information on P&G’s shampoo products and company history. Its development process involved scanning a human model, rigging, animating, and integrating the LLM, enhanced through XR-based visualization and UI. The result is a realistic, interactive bot that combines human likeness with expert knowledge.
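The “specialized database” suggests the bot grounds its answers in retrieved knowledge before the LLM responds. The sketch below illustrates that idea in a deliberately simplified form, ranking placeholder snippets by keyword overlap and assembling a grounded prompt; the snippets and scoring are illustrative stand-ins, not P&G data or the project's actual retrieval method.

```python
# Simplified sketch of grounding a chat avatar's answers in a curated knowledge base.
# The snippets and keyword-overlap scoring are placeholders for illustration only;
# they are not P&G data or the project's actual retrieval method.

# Hypothetical knowledge snippets standing in for the product/company database.
KNOWLEDGE_BASE = [
    "Placeholder: overview of the shampoo product line and intended hair types.",
    "Placeholder: key dates and milestones in the company's history.",
    "Placeholder: recommended usage and frequently asked product questions.",
]

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank snippets by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Assemble the system prompt the LLM would receive for this turn."""
    context = "\n".join(retrieve(question))
    return (
        "You are a product expert avatar. Answer using only this context:\n"
        f"{context}\n\nVisitor question: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("Which shampoo suits dry hair?"))
```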

More information on Digital Human simulation at XR-Lab. 

GRO

Virtual Reality Training on Issues of Youth Firearm Possession.

PI. Tang. 8/5/2024-8/4/2025.


Funded by the God.Restoring.Order (GRO) Community, this research project will develop two VR scenarios that simulate environments designed to educate youth on applying critical skills in risky situations.

Team: Ming Tang (XR-Lab), Aaron Mallory (GRO).

XR-Lab students: Aayush Kumar, Mario Bermejo, Jonny Peng, Ahmad Alrefai

The XR-Lab is excited to collaborate with the GRO community to leverage cutting-edge XR technologies to develop a virtual reality (VR) training app that enhances the curriculum by reinforcing key skills through immersive VR activities. Together, we will assess the feasibility of integrating VR technology into the GRO’s training program, engaging users with a compelling narrative while equipping them with practical knowledge for real-world application.


Poster at the AHFE Conference

Nancy Daraiseh, Aaron Vaughn, Ming Tang, Mikhail Nikolaenko, Madeline Aeschbury, Alycia Bachtel, Camryn Backman, Chunyan Liu, Maurizio Macaluso. Using Virtual Reality to Enhance Behavioral Staff Training for Interacting with Aggressive Psychiatric Patients. Poster. The 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). Nice, France, July 24-27, 2024.

Objective: To conduct a pilot study to enhance staff training and confidence when interacting with aggressive psychiatric patients using a virtual reality (VR) training module depicting an escalating patient scenario.

Significance: Dysregulated emotional outbursts, reactive aggression, and self-injurious behaviors are common in psychiatrically hospitalized patients. These behaviors result in aggressive patient interactions (APIs), which are associated with increased risk of harm to patients and staff. Minimal research has examined training interventions that effectively reduce or prevent API events and subsequent harm. Despite intensive, standardized training in crisis de-escalation protocols, staff continue to experience high rates of API injuries. More realistic training is needed, in a safe environment, to build competency in implementing de-escalation strategies that prevent APIs and patient harm.

Methods: Using a pre-post, quasi-experimental design, 40 Behavioral Health Specialists and Registered Nurses at a pediatric psychiatric facility will participate in VR training depicting a commonly experienced scenario involving an aggressive patient. Participants are stratified by job experience, sex, and VR experience. The study aims are to: i) assess the feasibility and usability of VR training among this population and ii) obtain measures of learner satisfaction and performance. Surveys measure usability, learner satisfaction, and coping with patient aggression. Pre- and post-training performance will be compared and assessed by the percent of correct answers on the first attempt, time to correct answer, and the number of successful and unsuccessful attempts.
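For concreteness, the stated performance measures (percent correct on the first attempt, time to correct answer, and counts of successful and unsuccessful attempts) could be computed from logged trial records roughly as sketched below; the record format and values are hypothetical, not study data.

```python
# Sketch of computing the stated performance measures from logged attempts.
# The record structure and values below are hypothetical, not study data.
from statistics import mean

# Each record: one training decision point for one participant.
trials = [
    {"participant": "P01", "attempts_to_correct": 1, "seconds_to_correct": 12.0},
    {"participant": "P01", "attempts_to_correct": 3, "seconds_to_correct": 41.5},
    {"participant": "P02", "attempts_to_correct": 1, "seconds_to_correct": 9.8},
]

# Percent of decision points answered correctly on the first attempt.
pct_first_attempt = 100 * mean(1 if t["attempts_to_correct"] == 1 else 0 for t in trials)

# Mean time taken to reach the correct answer.
mean_time_to_correct = mean(t["seconds_to_correct"] for t in trials)

# Successful attempts (one per solved decision point) vs. unsuccessful attempts.
successful = len(trials)
unsuccessful = sum(t["attempts_to_correct"] - 1 for t in trials)

print(f"{pct_first_attempt:.0f}% correct on first attempt; "
      f"{mean_time_to_correct:.1f}s mean time to correct; "
      f"{successful} successful vs. {unsuccessful} unsuccessful attempts")
```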

Preliminary Results (full analyses in progress): Preliminary survey results (N=14) show that 64% perceived the VR experience to be consistent with their real-world experiences; 87% agreed that the VR training would help with interactions with aggressive patients; 71% reported the training was effective in identifying de-escalation strategies; and 79% stated that the training was effective in recognizing stages of patient crisis, that it included important skills used in their job, and that they would recommend it. Finally, 100% would participate in future VR trainings.

Anticipated Conclusions: We plan to show that using VR to supplement existing training programs for high-risk situations can improve users’ understanding of essential de-escalation and crisis techniques. We anticipate results will show enhanced ability and confidence when interacting with aggressive patients. Future studies will expand on these results and examine implications for staff and patient harm.

See more information on the VR-based Employee Safety Training and the Therapeutic Crisis Intervention Simulation.

CVG Airport renovation

Cincinnati/Northern Kentucky International Airport (CVG) renovation project.

This dynamic course delves into designing human-centric, technologically advanced retail spaces at CVG, addressing contemporary challenges. Collaborating directly with CVG, we focus on conceptualizing the “Future CVG Experience,” exploring pivotal questions: what the future CVG will look like, how AR and VR can transform the airport experience, how the metaverse and immersive technologies can be integrated into retail, and how public art and recreational programs can enrich the traveler’s journey.

Faculty: Ming Tang, Director of XR-Lab, DAAP, UC. Thanks to Josh Edwards from CVG, and Chris Collins and Eric Camper from UC SIM, for their support.

Twelve proposed scenarios for the future CVG.

Students: ARCH 7014, Fall 2023.

Stephanie Ahmed, Ben Aidt, Thimesha Amarasena, Heather Cheng, Stephanie Circelli, Catherine D’Amico, Gabby Dashiell, Nikunj Deshpande, Carson Edwards, Olufemi Faminigba, Christopher Fultz, Emma Hausz, Jinfan He, Haley Heitkamp, Robin Jarrell, Emily Jaster, Bhaskar Jyoti Kalita, Analise Kandra, Sreya Killamshetty, Japneet Kour, Thomas Magee, Mea McCormack, Sepideh Miraba, Dan O’Neill, Shailesh Padalkar, Gaurang Pawar, Urvi Prabhu, Michael Rinaldi-Eichenberg, Kelby Rippy, Will Roberts, Chris Schalk, Miles Sletto, Lizzy Sturgeon, Shruthi Sundararajan, Erika VanSlyke, Clayton Virzi, Yue Wu

The heatmap represents fixation and gaze data.
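As an illustration of how such a heatmap can be generated, the sketch below accumulates fixation points into a screen-sized grid, weights them by dwell time, and smooths the result with a Gaussian filter; the coordinates, durations, and smoothing width are made up for the example.

```python
# Minimal sketch of turning gaze fixation points into a heatmap
# (coordinates, dwell times, and smoothing width are illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter

WIDTH, HEIGHT = 1920, 1080  # assumed screen / image resolution

# Hypothetical fixation samples: (x_pixel, y_pixel, duration_seconds).
fixations = [(960, 540, 0.8), (400, 300, 0.3), (1500, 700, 0.5)]

heatmap = np.zeros((HEIGHT, WIDTH), dtype=float)
for x, y, duration in fixations:
    heatmap[int(y), int(x)] += duration  # weight each fixation by dwell time

# Spread each fixation into a smooth "hot spot".
heatmap = gaussian_filter(heatmap, sigma=40)

# Normalize to 0..1 so it can be mapped to a color ramp and overlaid on the scene.
heatmap /= heatmap.max()
```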

See more research on eye-tracking conducted by Prof. Tang at the XR-Lab.