
XR and Gen-AI Technologies in Design

Left: VR welding training, by Samantha Frickel. Right: Cinematic universes, by Carson Edwards.

Extended Reality and Generative-AI in Human-Centered Design

UHP + Architecture Seminar

Student work from the University of Cincinnati’s Honors Seminar and Architecture Design Seminar. This video showcases innovative projects that intersect emerging technologies such as AIGC and XR with human-centered design. The projects include a wide range of demonstrations in the following two categories:

Training
The first category centers on Virtual Reality-based training applications designed to simulate real-world tasks and enhance learning through immersive experiences. These projects include simulations for welding, firefighter robotics, and driving, as well as instructional environments such as baby car seat installation. Each scenario provides a controlled, repeatable setting for learners to gain confidence and skills in safety-critical and technical domains, demonstrating the practical potential of XR technologies in professional training and education. Digital 3D content creation was augmented by various AIGC tools such as Rodin, Meshy, and Tripo.

Future Environment
This group of projects explores imaginative and speculative environments through immersive technologies. Students and researchers have developed experiences ranging from fictional music spaces, virtual zoos, and animal shelters to emotionally responsive architectural designs and future cityscapes. These environments often incorporate interactive elements, such as Augmented Reality on mobile devices or real-time simulations of natural phenomena like flooding. Advanced material simulation is also a focus, including cloth and other soft fabrics that respond dynamically to user interaction. 2D content creation was augmented by various AIGC tools such as Midjourney and Stable Diffusion.

Read more

SMAT: Scalable Multi-Agent AI for DT

SMAT: Scalable Multi-Agent Machine Learning and Collaborative AI for Digital Twin Platform of Infrastructure and Facility Operations.

Principal Investigators:

  • Prof. Sam Anand, Department of Mechanical Engineering, CEAS
  • Prof. Ming Tang, Extended Reality Lab, Digital Futures, DAAP

Students: Anuj Gautam, Manish Aryal, Aayush Kumar, Ahmad Alrefai, Rohit Ramesh, Mikhail Nikolaenko, Bozhi Peng

Grant: $40,000. UC Industry 4.0/5.0 Institute Consortium Research Project: 03.2025-01.2026

Read more

Paper in JMS & NAMRC


Anuj Gautam, Manish Raj Aryal, Sourabh Deshpande, Shailesh Padalkar, Mikhail Nikolaenko, Ming Tang, Sam Anand, IIoT-enabled digital twin for legacy and smart factory machines with LLM integration, Journal of Manufacturing Systems, Volume 80, 2025, Pages 511-523, ISSN 0278-6125

The paper was also presented at the NAMRC 2025 conference.

Anuj Gautam, Manish Raj Aryal, Sourabh Deshpande, Shailesh Padalkar, Mikhail Nikolaenko, Ming Tang, Sam Anand. IIoT-enabled Digital Twin for legacy and smart factory machines with LLM integration. 53rd SME North American Manufacturing Research Conference (NAMRC), Clemson University, June 2025.


Abstract

The recent advancement in Large Language Models (LLMs) has significantly transformed the field of natural data interpretation, translation, and user training. However, a notable gap exists when LLMs are tasked with assisting with real-time, context-sensitive machine data. This paper presents a multi-agent LLM framework capable of accessing and interpreting real-time and historical data through an Industrial Internet of Things (IIoT) platform for evidence-based inferences. The real-time data is acquired from several legacy machine artifacts (such as seven-segment displays, toggle switches, and knobs), smart machines (such as 3D printers), and building data (such as sound sensors and temperature measurement devices) through the MTConnect data streaming protocol. Further, a multi-agent LLM framework consisting of four specialized agents (a supervisor agent, a machine-expertise agent, a data-visualization agent, and a fault-diagnostic agent) is developed for context-specific manufacturing tasks. This LLM framework is then integrated into a digital twin to visualize the unstructured data in real time. The paper also explores how LLM-based digital twins can serve as real-time virtual experts through an avatar, minimizing reliance on traditional manuals or supervisor-based expertise. To demonstrate the functionality and effectiveness of this framework, we present a case study consisting of legacy machine artifacts and modern machines. The results highlight the practical application of LLMs in assisting with and inferring from real-time machine data in a digital twin environment.
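The supervisor-and-specialists pattern described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the agent names come from the abstract, but the keyword routing and the stubbed agent responses are placeholders standing in for actual LLM calls and live MTConnect data streams.

```python
# Illustrative sketch of a supervisor agent routing queries to specialists.
# The routing heuristic and agent bodies are hypothetical stand-ins.

def machine_expertise_agent(query: str) -> str:
    # Would consult machine documentation or an LLM primed with machine context.
    return "machine-expertise: interpreting machine state for: " + query

def data_visualization_agent(query: str) -> str:
    # Would generate a chart from historical IIoT sensor data.
    return "data-visualization: plotting sensor history for: " + query

def fault_diagnostic_agent(query: str) -> str:
    # Would correlate real-time readings against known fault signatures.
    return "fault-diagnostic: checking anomalies for: " + query

# Trigger words mapped to specialist agents (illustrative only).
SPECIALISTS = {
    "plot": data_visualization_agent,
    "trend": data_visualization_agent,
    "fault": fault_diagnostic_agent,
    "error": fault_diagnostic_agent,
}

def supervisor_agent(query: str) -> str:
    """Route a user query to the first specialist whose trigger word matches."""
    for keyword, agent in SPECIALISTS.items():
        if keyword in query.lower():
            return agent(query)
    return machine_expertise_agent(query)  # default to the machine expert

print(supervisor_agent("Plot the temperature trend for printer 2"))
```

In the paper's framework the supervisor would itself be an LLM deciding which specialist to invoke; the fixed keyword table above only sketches the control flow.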

Human Digital Twin

High-Fidelity Digital Human Modeling and Visualization

This research investigates advanced methods for constructing high-fidelity digital humans, with a focus on both technical innovation and applied use in immersive environments. The project integrates generative artificial intelligence, image-based modeling, and visualization pipelines to advance realism, interactivity, and usability in human–computer interaction.

Aim 1. Conversational Digital Agents Driven by Large Language Models (LLMs).
The first aim is to utilize large language models (LLMs) as the core engine for conversational digital humans. By embedding LLM-driven reasoning into virtual agents, the project seeks to create responsive, adaptive, and context-aware “talking agents.” These agents will simulate naturalistic dialogue, provide interactive guidance, and adapt to user needs across diverse scenarios such as education, healthcare training, and collaborative design.
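A context-aware "talking agent" of the kind described above can be sketched as a dialogue loop that accumulates conversation history. This is a minimal illustration, not the project's implementation: `query_llm` is a hypothetical placeholder for whatever LLM backend drives the agent, and the real system would also handle speech and avatar animation.

```python
# Minimal sketch of an LLM-driven conversational agent that keeps dialogue
# context across turns. `query_llm` is a stub so the sketch runs offline.

def query_llm(messages: list) -> str:
    # Placeholder for an actual LLM call; reports the context size
    # so the sketch stays self-contained and runnable.
    return f"(reply informed by {len(messages)} prior turns)"

class TalkingAgent:
    """Accumulates the running dialogue so each reply is context-aware."""

    def __init__(self, persona: str):
        self.history = [{"role": "system", "content": persona}]

    def respond(self, user_text: str) -> str:
        self.history.append({"role": "user", "content": user_text})
        reply = query_llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

agent = TalkingAgent("You are a patient welding instructor.")
print(agent.respond("How do I set the torch angle?"))
print(agent.respond("And the travel speed?"))
```

Because every turn is appended to `history`, a follow-up like "And the travel speed?" is interpreted in light of the earlier exchange, which is what makes the agent adaptive rather than stateless.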

Aim 2. Photorealistic Skin and Visual Fidelity Through Scanned Data.
The second aim focuses on the visual accuracy of digital humans. High-resolution image scans will be processed to reconstruct human skin with detailed fidelity, including surface textures, translucency, and micro-geometric variations. The resulting models are capable of 4K photorealistic rendering (click image to view sample output), significantly enhancing realism in simulation and visualization. This fidelity is crucial for applications where nuanced perception—such as empathy, trust, or attentiveness—depends on subtle visual cues.

Significance.
By combining intelligent conversational capabilities with photorealistic appearance, this research advances the next generation of digital humans. The outcomes will support applications in extended reality (XR), therapeutic and clinical training, collaborative design education, and digital twin environments, where authenticity of both interaction and appearance directly influences user engagement and effectiveness.

High-fidelity digital human.

Read more

AI and Emerging Technology Symposium

Ming Tang and Mikhail Nikolaenko presented “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium, University of Cincinnati.

The day-long event explored topics around AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.

AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality

Presenters: Ming Tang and Mikhail Nikolaenko. February 20, 2025, Tangeman University Center.

Read more