Posts

Human Digital Twin

High-Fidelity Digital Human Modeling and Visualization

This research investigates advanced methods for constructing high-fidelity digital humans, with a focus on both technical innovation and applied use in immersive environments. The project integrates generative artificial intelligence, image-based modeling, and visualization pipelines to advance realism, interactivity, and usability in human–computer interaction.

Aim 1. Conversational Digital Agents Driven by Large Language Models (LLMs).
The first aim is to utilize large language models (LLMs) as the core engine for conversational digital humans. By embedding LLM-driven reasoning into virtual agents, the project seeks to create responsive, adaptive, and context-aware “talking agents.” These agents will simulate naturalistic dialogue, provide interactive guidance, and adapt to user needs across diverse scenarios such as education, healthcare training, and collaborative design.
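The agent loop described above can be sketched minimally: a conversation history is threaded into each model call so replies can adapt to prior turns. This is an illustrative assumption of the design, not the project's actual implementation; `reply_fn` is a stub standing in for a real LLM chat-completion API, and `TalkingAgent` and `echo_model` are hypothetical names.

```python
# Hedged sketch of a context-aware "talking agent". The reply_fn
# parameter stands in for a real LLM call (e.g., a chat-completion
# endpoint); here it is a deterministic stub for illustration only.

class TalkingAgent:
    def __init__(self, persona: str, reply_fn):
        # The system message gives the agent its persona/role.
        self.history = [{"role": "system", "content": persona}]
        self.reply_fn = reply_fn

    def say(self, user_text: str) -> str:
        # Append the user's turn, get a reply conditioned on the
        # full history, and record the reply so later turns adapt.
        self.history.append({"role": "user", "content": user_text})
        reply = self.reply_fn(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

def echo_model(history):
    # Stand-in for an LLM: real code would send `history` to a
    # chat model and return its message text.
    turns = len([m for m in history if m["role"] == "user"])
    return f"(turn {turns}) You said: {history[-1]['content']}"

agent = TalkingAgent("You are a helpful museum guide.", echo_model)
agent.say("Hello")
print(agent.say("Tell me about this exhibit"))
# -> (turn 2) You said: Tell me about this exhibit
```

Swapping `echo_model` for a real model call is the only change needed to make the same loop context-aware in deployment, which is why the history is kept as a list of role-tagged messages.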

Aim 2. Photorealistic Skin and Visual Fidelity Through Scanned Data.
The second aim focuses on the visual accuracy of digital humans. High-resolution image scans will be processed to reconstruct human skin in detail, capturing surface textures, translucency, and micro-geometric variations. The resulting models support 4K photorealistic rendering, significantly enhancing realism in simulation and visualization. This fidelity is crucial for applications where nuanced perception, such as empathy, trust, or attentiveness, depends on subtle visual cues.

Significance.
By combining intelligent conversational capabilities with photorealistic appearance, this research advances the next generation of digital humans. The outcomes will support applications in extended reality (XR), therapeutic and clinical training, collaborative design education, and digital twin environments, where authenticity of both interaction and appearance directly influences user engagement and effectiveness.

High-fidelity digital human.

Read more

Digital Twin of Cincinnati

A real-time flythrough demo of the Digital Twin of the City of Cincinnati.

Digital Futures Building at the University of Cincinnati

An interactive demo in which players destroy alien buildings near the UC campus. Developed by students Cooper Pflaum and Nishanth Chidambaram.

Digital Twin, LLM & IIOT

IIoT for legacy and intelligent factory machines with XR and LLM feedback, alongside a Digital Twin demonstration of real-time IoT for architecture/building applications using Omniverse.

  • PIs: Sam Anand, Ming Tang.
  • Students: Anuj Gautama, Mikhail Nikolaenko, Ahmad Alrefai, Aayush Kumar, Manish Raj Aryal, Eian Bennett, Sourabh Deshpande

$40,000. UC Industry 4.0/5.0 Institute Consortium Research Project: 01.2024-01.2025

The project centers on the development of a Digital Twin (DT) and a multi-agent Large Language Model (LLM) framework designed to access and interpret real-time and historical data through an Industrial Internet of Things (IIoT) platform. Real-time data is sourced from legacy machines and smart machines, integrating Building Information Modeling (BIM) with environmental sensors. The multi-agent LLM framework comprises specialized agents and supports diverse user interfaces, including screen-based systems, Virtual Reality (VR) environments, and mobile devices, enabling versatile interaction, data visualization, and analysis.
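The coordination pattern above, where a framework of specialized agents serves queries against real-time and historical IIoT data, can be illustrated with a minimal sketch. All names here (`Coordinator`, `RealTimeAgent`, `HistoricalAgent`, the keyword router, and the canned sensor data) are hypothetical stand-ins: a real deployment would route with an LLM and read from live IIoT/BIM sources rather than in-memory stubs.

```python
# Illustrative multi-agent routing sketch: a coordinator dispatches a
# query to a specialized agent. Routing is keyword-based here; a real
# system would use an LLM classifier and live IIoT data streams.

from dataclasses import dataclass

@dataclass
class SensorReading:
    machine_id: str
    temperature_c: float

class RealTimeAgent:
    """Answers questions about current machine state (stubbed data)."""
    def __init__(self, readings):
        self.readings = {r.machine_id: r for r in readings}

    def answer(self, query: str) -> str:
        for mid, r in self.readings.items():
            if mid in query:
                return f"{mid} temperature is {r.temperature_c} C"
        return "No matching machine found."

class HistoricalAgent:
    """Answers questions about archived trends (stubbed)."""
    def answer(self, query: str) -> str:
        return "Historical trend: stable over the last 24 h."

class Coordinator:
    """Routes each query to the agent best suited to answer it."""
    def __init__(self, realtime, historical):
        self.realtime = realtime
        self.historical = historical

    def route(self, query: str) -> str:
        wants_history = "trend" in query or "history" in query
        agent = self.historical if wants_history else self.realtime
        return agent.answer(query)

coordinator = Coordinator(
    RealTimeAgent([SensorReading("CNC-01", 41.5)]),
    HistoricalAgent(),
)
print(coordinator.route("What is the current temperature of CNC-01?"))
# -> CNC-01 temperature is 41.5 C
```

Keeping each data source behind its own agent is what lets the same coordinator serve screen, VR, and mobile front ends: the interfaces only talk to the router, not to individual machines.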

The research evaluates leading DT platforms (Autodesk Tandem, NVIDIA Omniverse, and Unreal Engine), focusing on their capabilities to integrate IoT and BIM data while supporting legacy machine systems. Autodesk Tandem excelled at seamlessly combining BIM metadata with real-time IoT streams for building operations, and scaled well. NVIDIA Omniverse demonstrated unmatched rendering fidelity and collaborative features through its Universal Scene Description (USD) framework. Unreal Engine, notable for its immersive visualization, proved superior for LLM integration, leveraging 3D avatars and conversational AI to enhance user interaction.

Read more

Cloud-based Digital Twin

Clients are one click away from interacting with a Digital Twin model on their personal devices. No installation is required.

The XR-Lab’s project showcases a cloud-based Digital Twin (DT) model designed for accessibility and interaction via mobile devices. The DT model allows multiple users to engage with its features directly through touch screens, eliminating the need for app installations: clients simply open a URL in a web browser on their personal iOS or Android phones and tablets. The project is distinguished by its photorealistic renderings, which are streamed to clients at high frame rates, ensuring a visually rich and seamless experience. Furthermore, the DT model integrates a range of technologies, including Building Information Modeling (BIM), metadata, IoT sensor data, 360-degree images and videos, and Web3D content, creating a comprehensive and interactive digital environment.


More information on Future of Work: Understanding the interrelationships between humans and technology to improve the quality of work-life in smart buildings.