XR and Gen-AI Technologies in Design


Left: VR welding training, by Samantha Frickel. Right: Cinematic universes, by Carson Edwards.

Extended Reality and Generative-AI in Human-Centered Design

UHP + Architecture Seminar

Student work from the University of Cincinnati’s Honors Seminar and Architecture Design Seminar. This video showcases innovative projects at the intersection of emerging technologies, such as AI-generated content (AIGC) and XR, with human-centered design. The projects include a wide range of demonstrations in the following two categories:

Training
The first category centers on Virtual Reality-based training applications designed to simulate real-world tasks and enhance learning through immersive experiences. These projects include simulations for welding, firefighter robotics, and driving, as well as instructional environments such as baby car seat installation. Each scenario provides a controlled, repeatable setting for learners to build confidence and skills in safety-critical and technical domains, demonstrating the practical potential of XR technologies in professional training and education. Digital 3D content creation was augmented by AIGC tools such as Rodin, Meshy, and Tripo.

Future Environment
This group of projects explores imaginative and speculative environments through immersive technologies. Students and researchers developed experiences ranging from fictional music spaces, virtual zoos, and animal shelters to emotionally responsive architectural designs and future cityscapes. These environments often incorporate interactive elements, such as Augmented Reality on mobile devices or real-time simulations of natural phenomena like flooding. Advanced material simulation is also a focus, including cloth and other soft fabrics that respond dynamically to user interaction. 2D content creation was augmented by AIGC tools such as Midjourney and Stable Diffusion.

These two interdisciplinary seminars investigate the application of Extended Reality (XR) technologies, including Virtual Reality (VR) and Augmented Reality (AR), to real-world challenges. Students examined how to integrate human-computer interaction with immersive digital tools to create embodied, interactive experiences that enhance user engagement and understanding.

In parallel, the courses explored comprehensive design methodology—spanning research, ideation, prototyping, and evaluation—framed through the lens of generative AI and immersive virtual environments. Projects emphasized the role of AI-assisted content creation and immersive media in advancing human-centered design practices with either a fictional metaverse or reality-based digital twins. 

The student work presented reflects a research-driven approach to spatial design, focusing on how digital scenarios influence human perception, emotional response, and cognitive engagement. XR was explored as a medium for fostering empathy, delivering emotional impact, and enhancing the acquisition of knowledge and skills.

Credit: UHP students: Amanda Elizabeth, Logan Daugherty, Valerie Dreith, Samantha Frickel, Aakash Jeyakanthan, Aayush Kumar, TJ Mueller, Rohit Ramesh, Megan Sheth, Ayush Verma. Architecture students: Brady Bolton, Erik Mathys, Keai Perdue, Gustavo Reyes, Maria Vincenti, Nikunj Deshpande, Carson Edwards, Bhaskar Kalita, Sreya Killamshetty, Japneet Kour, Gaurang Pawar, Shruthi Sundararajan.

 


Student Projects Gallery Show in 2024

SMAT: Scalable Multi-Agent AI for DT

SMAT: Scalable Multi-Agent Machine Learning and Collaborative AI for Digital Twin Platform of Infrastructure and Facility Operations.

Principal Investigators:

  • Prof. Sam Anand, Department of Mechanical Engineering, CEAS
  • Prof. Ming Tang, Extended Reality Lab, Digital Futures, DAAP

Students: Anuj Gautam, Manish Aryal, Aayush Kumar, Ahmad Alrefai, Rohit Ramesh, Mikhail Nikolaenko, Bozhi Peng

Grant: $40,000. UC Industry 4.0/5.0 Institute Consortium Research Project: 03.2025-01.2026

In this phase, we will develop a scalable digital twin that integrates machine, factory, and city-level data with AI-driven real-time decision-making. The key questions we aim to answer are:

  • Can a high-fidelity Digital Twin (DT) be efficiently built using only image and video data?
  • How can multiple specialized Large Language Model (LLM) agents—at machine, factory, and city levels—collaborate to generate relevant insights?
  • How effective is synthetic data from a Digital Twin for object detection and process recognition?
  • Does combining traditional Machine Learning (ML) with Large Language Models (LLMs) improve decision-making in complex manufacturing operations?

The project’s primary goal is to create a scalable, cloud-based digital twin that enhances operational efficiency through AI-driven insights. Additional technical objectives include:

  • Using advanced reality capture techniques (e.g., Gaussian Splatting) to build a Digital Twin from images and videos and simulate fault scenarios at factory and data center levels.
  • Integrating an IIoT data framework to track material flow, process handling, operational metrics, and equipment status for seamless cloud-based analysis.
  • Developing a synthetic data capture system using a simulated drone within the Digital Twin to train reinforcement learning models for fault prediction.
  • Designing a multi-agent AI system combining LLMs, machine learning, and reinforcement learning to enable dynamic communication, prediction, and diagnostics in the factory.
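The multi-agent idea in the last objective can be sketched in miniature: a supervisor inspects each query and dispatches it to a specialized agent. The fragment below is a hypothetical illustration only — the agent names, the keyword-routing heuristic, and the placeholder handlers are assumptions for this sketch, not the project's actual architecture (in the real system each handler would wrap an LLM or ML model).

```python
# Hypothetical sketch of supervisor-agent routing. Agent names and the
# keyword heuristic are illustrative, not the project's actual code.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]


def make_supervisor(agents: Dict[str, Agent], routes: Dict[str, str]):
    """Return a supervisor that routes a query to the first agent whose
    route keyword appears in the query, falling back to a default expert."""
    def supervisor(query: str) -> str:
        q = query.lower()
        for keyword, agent_key in routes.items():
            if keyword in q:
                return agents[agent_key].handle(query)
        return agents["machine"].handle(query)  # default: machine expertise
    return supervisor


# Placeholder handlers; in the real system each would wrap an LLM call.
agents = {
    "machine": Agent("machine-expertise", lambda q: f"[machine] {q}"),
    "viz": Agent("data-visualization", lambda q: f"[viz] {q}"),
    "fault": Agent("fault-diagnostic", lambda q: f"[fault] {q}"),
}
routes = {"plot": "viz", "chart": "viz", "fault": "fault", "error": "fault"}

supervisor = make_supervisor(agents, routes)
print(supervisor("Plot the spindle temperature for the last hour"))   # → viz
print(supervisor("Why did the 3D printer fault during the night?"))   # → fault
```

The same dispatch shape extends naturally to the machine-, factory-, and city-level agents named in the research questions above.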

 

Last year’s project: IIoT for legacy and intelligent factory machines with XR and LLM feedback, including a digital twin demonstration of real-time IoT for architecture/building applications using Omniverse.

Paper in JMS & NAMRC

 

Anuj Gautam, Manish Raj Aryal, Sourabh Deshpande, Shailesh Padalkar, Mikhail Nikolaenko, Ming Tang, Sam Anand, IIoT-enabled digital twin for legacy and smart factory machines with LLM integration, Journal of Manufacturing Systems, Volume 80, 2025, Pages 511-523, ISSN 0278-6125

The paper was also presented at the NAMRC 2025 conference.

Anuj Gautam, Manish Raj Aryal, Sourabh Deshpande, Shailesh Padalkar, Mikhail Nikolaenko, Ming Tang, Sam Anand. IIoT-enabled Digital Twin for legacy and smart factory machines with LLM integration. 53rd SME North American Manufacturing Research Conference (NAMRC), Clemson University, June 2025.

 

Abstract

The recent advancement in Large Language Models (LLMs) has significantly transformed the field of natural data interpretation, translation, and user training. However, a notable gap exists when LLMs are tasked with assisting with real-time, context-sensitive machine data. The paper presents a multi-agent LLM framework capable of accessing and interpreting real-time and historical data through an Industrial Internet of Things (IIoT) platform for evidence-based inferences. The real-time data is acquired from several legacy machine artifacts (such as seven-segment displays, toggle switches, and knobs), smart machines (such as 3D printers), and building data (such as sound sensors and temperature measurement devices) through the MTConnect data streaming protocol. Further, a multi-agent LLM framework consisting of four specialized agents – a supervisor agent, a machine-expertise agent, a data visualization agent, and a fault-diagnostic agent – is developed for context-specific manufacturing tasks. This LLM framework is then integrated into a digital twin to visualize the unstructured data in real time. The paper also explores how LLM-based digital twins can serve as real-time virtual experts through an avatar, minimizing reliance on traditional manuals or supervisor-based expertise. To demonstrate the functionality and effectiveness of this framework, we present a case study consisting of legacy machine artifacts and modern machines. The results highlight the practical application of LLMs to assist with and infer real-time machine data in a digital twin environment.
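The abstract's data layer rests on MTConnect, which serves machine state as XML documents over HTTP. As a rough illustration of the ingestion step, the sketch below flattens a simplified stand-in for an MTConnect "current" response into a dictionary of data items. The sample XML and the helper are assumptions for illustration, not the paper's implementation; real MTConnect documents carry an MTConnectStreams namespace and a richer schema.

```python
# Illustrative only: parse a simplified stand-in for an MTConnect
# "current" document. Real responses are namespaced and more detailed.
import xml.etree.ElementTree as ET

SAMPLE = """<MTConnectStreams>
  <Streams>
    <DeviceStream name="printer-1">
      <ComponentStream component="Linear" name="X">
        <Samples>
          <Position dataItemId="xpos" timestamp="2025-01-01T00:00:00Z">12.5</Position>
        </Samples>
      </ComponentStream>
      <ComponentStream component="Controller" name="ctrl">
        <Events>
          <Execution dataItemId="exec" timestamp="2025-01-01T00:00:00Z">ACTIVE</Execution>
        </Events>
      </ComponentStream>
    </DeviceStream>
  </Streams>
</MTConnectStreams>"""


def latest_values(xml_text: str) -> dict:
    """Flatten every observation carrying a dataItemId into {id: value}."""
    root = ET.fromstring(xml_text)
    values = {}
    for item in root.iter():
        data_item_id = item.get("dataItemId")
        if data_item_id is not None:
            values[data_item_id] = (item.text or "").strip()
    return values


print(latest_values(SAMPLE))  # {'xpos': '12.5', 'exec': 'ACTIVE'}
```

A flattened snapshot like this is the kind of structured evidence an agent can cite when answering a question about current machine state.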

NCBDS conference

Paper “Designing the Future of Retail: Cross-Disciplinary Collaboration in Industrial Design and Architecture Design” published in the proceedings of the 40th National Conference on the Beginning Design Student (NCBDS). North Carolina State University, Raleigh, NC. 2025.

Yong-Gyun Ghim, Ming Tang, University of Cincinnati

 

Abstract

The significance of design’s cross-disciplinary nature has increased alongside technological advancements as emerging technologies present new opportunities and challenges for complex socio-technical systems. Systems thinking has drawn attention to design as a holistic approach to tackling complex systems by examining the interrelationships between elements. This also necessitates cross-disciplinary collaboration to address the multifaceted nature of the problems comprehensively. These aspects of systems thinking further emphasize its importance in design education to help navigate the current era of technological innovation. The future of retail exemplifies this interconnected complexity in the context of emerging technologies because introducing them – such as robotics, artificial intelligence, and mixed reality – into retail environments requires a holistic consideration of the entire system, encompassing physical spaces, service processes, and human interactions.

This study examines a 15-week collaborative studio project between industrial design and architecture. By leveraging a systems thinking approach, the project facilitated cross-disciplinary collaboration to develop future retail concepts, enabling students to integrate their expertise and address the interconnectedness of artifacts, environments, and human interactions. Both disciplines followed a structured design process encompassing research, system design, space and robot design, visualization, and validation, while collaboration was organized around four key steps: planning, learning, prototyping, and communication. The project also involved collaboration with a supermarket chain, providing opportunities for onsite observations, employee interviews, and discussions with industry professionals. Students developed futuristic concepts for retail operations and customer experiences by leveraging the integration of mobile service robots, adaptive spaces, and mixed reality. Industrial design students focused on designing a product-service system of supermarket robots based on their redefinition of customer shopping experience and employee workflow, proposing an automated grocery order fulfillment system. Architecture students designed adaptive retail spaces that seamlessly blur the boundaries between physical and digital worlds, exploring how the Metaverse and mixed-reality interfaces can augment retail spaces and shopping experiences through dynamic, immersive interactions with digital avatars and robots. This cross-disciplinary collaboration resulted in holistic and integrative solutions for complex systems, presented through immersive VR experiences or animated scenarios.

This study’s contribution to design education is threefold. First, it proposes a systems thinking approach with cross-disciplinary collaboration for designing future retail experiences, demonstrating its effectiveness in addressing and designing complex socio-technical systems. Second, it offers insights into how industrial design and architecture can be integrated to create novel user experiences in digital transformation. Lastly, by examining the design and collaboration processes and reflecting on the opportunities and challenges, this study offers insights for its application to future studio courses. Given the increased complexity and dynamics between disciplines, thorough pre-planning and flexibility are critical for success.

Keywords:

Cross-disciplinary collaboration, Design education, Industrial design, Architecture, Future of retail

Project:  Future Service, Retail, Metaverse, and Robotics

 

AI and Emerging Technology Symposium

Ming Tang and Mikhail Nikolaenko presented “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium, University of Cincinnati.

The day-long event explored topics around AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.

AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality

Presenters: Ming Tang, Mikhail Nikolaenko. Feb. 20, 2025, Tangeman University Center.

This presentation explores two AI-driven talking avatars developed at the UC Extended Reality (XR) Lab, leveraging large language models (LLMs) for realistic interaction in XR environments. The XRLab Bot acts as a virtual tour guide, providing real-time engagement and navigation through the lab with spatial awareness, while the P&G Bot emulates a high-fidelity human likeness, delivering product expertise within a VR setting. These bots highlight advancements in AI, LLMs, and XR, showcasing potential applications in education, customer service, and smart campuses.

The session will showcase AI-driven navigation and interaction, demonstrating the bot’s pipeline of translating speech to text with Whisper, retrieving responses from ChatGPT, and interpreting real-time visitor needs and spatial data to guide users throughout the XRLab. It will also examine the multi-client, real-time architecture, sharing insights on managing multiple Unreal and Python clients with a central server that coordinates bot actions, face tracking, and area-specific responses in real time.

The discussion will highlight XR integration and smart-campus applications, emphasizing the bot’s adaptability within XR platforms built on Unreal Engine and its potential for virtual and augmented reality campus tours, orientations, and educational experiences. Additional topics include LLM-driven conversational AI for sophisticated natural language interaction; high-fidelity 3D avatar creation, crafting detailed, lifelike avatars capable of mimicking human expressions and movements; customizable AI chat avatars tailored to specific user preferences and needs; and interactive avatars with facial animation and motion capture, exhibiting dynamic expressions and reactions during interactions. Finally, the session will explore metaverse creation – immersive, interconnected virtual worlds where users interact through their avatars – as well as VR and AR environments that blend digital content with the physical world or create entirely virtual spaces.
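The speech-to-navigation loop described above can be caricatured as a tiny pipeline. In the sketch below every function body is a stub – the real system calls Whisper for transcription and ChatGPT for responses – and the area names and waypoint coordinates are invented for illustration, not the lab's actual layout.

```python
# Stub pipeline for a tour-guide bot: transcribe -> reply -> navigate.
# All bodies are placeholders; areas and coordinates are hypothetical.
from typing import Optional, Tuple

AREAS = {
    "motion capture": (4.0, 1.5),   # illustrative (x, y) waypoints
    "vr stations": (0.0, 6.0),
    "entrance": (0.0, 0.0),
}


def transcribe(audio: bytes) -> str:
    # Real system: Whisper speech-to-text. Stub for illustration.
    return audio.decode("utf-8")


def llm_reply(text: str) -> str:
    # Real system: a ChatGPT call grounded in lab documentation. Stub.
    return f"Sure - let me help with: {text}"


def find_waypoint(text: str) -> Optional[Tuple[float, float]]:
    """Pick a navigation target if the visitor names a known area."""
    t = text.lower()
    for area, waypoint in AREAS.items():
        if area in t:
            return waypoint
    return None


def handle_utterance(audio: bytes) -> dict:
    text = transcribe(audio)
    return {"reply": llm_reply(text), "goto": find_waypoint(text)}


result = handle_utterance(b"Can you show me the motion capture area?")
print(result["goto"])  # (4.0, 1.5)
```

In the deployed system the central server would fan the resulting reply and waypoint out to the Unreal client driving the avatar's animation and movement.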


Presenters
Ming Tang, Director of XR-Lab. Professor, DAAP, UC.

Mikhail Nikolaenko, XR-Lab Fellow, UC

Team: Aayush Kumar, ​Ahmad Alrefai

P&G Bot: A high-fidelity avatar modeled on a real individual, rendered with lifelike facial animations and lip synchronization within a VR environment. This bot is trained on a specialized database containing information on P&G’s shampoo products and company history. Its development process involved scanning a human model, rigging, animating, and integrating the LLM, enhanced through XR-based visualization and UI. The result is a realistic, interactive bot that combines human likeness with expert knowledge.

More information on Digital Human simulation at XR-Lab.