The exhibition is supported by the Creative Asian Society and the ArtsWave Impact grant.
“Infinite Loop” reflects my interpretation of the Ohio Valley—an ever-shifting landscape shaped by both deep geological time and layers of human history. Inspired by the region’s porous limestone caves, exposed rock formations, and the powerful erosive force of the Ohio River, the sculpture evokes the continuous movement and natural evolution embedded in this terrain. The form, looping without a clear beginning or end, draws from the valley’s complex strata—both literal and metaphorical. It echoes the industrial legacy of Cincinnati: a city built along railroads, powered by migration, and continually transformed by waves of innovation, creativity, and technology. Each undulating surface captures a sense of motion and continuity, speaking to the rhythms of the river and the resilience of a city in flux. By blending references to natural erosion, flooding, and industrial infrastructure, “Infinite Loop” invites reflection on how we shape—and are shaped by—the landscapes we inhabit. It is a meditation on flow, transformation, and the unbroken cycles that define both the Ohio River and the city of Cincinnati itself.
360 video
The paper is also published in the proceedings of the NAMRC 2025 conference.
Anuj Gautam, Manish Raj Aryal, Sourabh Deshpande, Shailesh Padalkar, Mikhail Nikolaenko, Ming Tang, and Sam Anand. “IIoT-enabled Digital Twin for legacy and smart factory machines with LLM integration.” 53rd SME North American Manufacturing Research Conference (NAMRC), Clemson University, June 2025.
Abstract
The recent advancement of Large Language Models (LLMs) has significantly transformed the fields of data interpretation, translation, and user training. However, a notable gap exists when LLMs are tasked with assisting with real-time, context-sensitive machine data. This paper presents a multi-agent LLM framework capable of accessing and interpreting real-time and historical data through an Industrial Internet of Things (IIoT) platform for evidence-based inference. The real-time data is acquired from several legacy machine artifacts (such as seven-segment displays, toggle switches, and knobs), smart machines (such as 3D printers), and building sensors (such as sound sensors and temperature measurement devices) through the MTConnect data-streaming protocol. Further, a multi-agent LLM framework consisting of four specialized agents (a supervisor agent, a machine-expertise agent, a data-visualization agent, and a fault-diagnostic agent) is developed for context-specific manufacturing tasks. This LLM framework is then integrated into a digital twin to visualize the unstructured data in real time. The paper also explores how LLM-based digital twins can serve as real-time virtual experts through an avatar, minimizing reliance on traditional manuals or supervisor-based expertise. To demonstrate the functionality and effectiveness of the framework, we present a case study consisting of legacy machine artifacts and modern machines. The results highlight the practical application of LLMs in assisting with and inferring from real-time machine data in a digital twin environment.
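To make the agent structure concrete, here is a minimal sketch of a supervisor-routed multi-agent setup in Python, assuming the OpenAI chat-completions API. The agent labels mirror the abstract, but the prompts, model choice, and routing logic are illustrative assumptions, not the paper's implementation, and the live MTConnect/IIoT data access is omitted.

```python
# Minimal sketch of a supervisor agent routing queries to specialized
# worker agents. Prompts and model names are hypothetical; the paper's
# agents additionally query live IIoT data, which this sketch omits.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Worker agents reduced to plain system prompts for illustration.
AGENTS = {
    "machine_expertise": "You answer questions about machine specifications and operation.",
    "data_visualization": "You produce chart specifications for machine telemetry.",
    "fault_diagnostic": "You diagnose machine faults from sensor readings and alarms.",
}

def ask(system: str, user: str) -> str:
    """Single chat-completion call with a fixed system prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def supervisor(query: str) -> str:
    """Classify the query, then dispatch it to one specialized agent."""
    routing_prompt = (
        "Classify the user request as exactly one of: "
        + ", ".join(AGENTS) + ". Reply with the label only."
    )
    label = ask(routing_prompt, query).strip()
    system = AGENTS.get(label, AGENTS["machine_expertise"])  # fallback route
    return ask(system, query)

if __name__ == "__main__":
    print(supervisor("Spindle temperature has spiked on machine 3. Why?"))
```

In the setting the abstract describes, each worker agent would also pull real-time and historical readings from the IIoT platform before answering, so its inferences stay evidence-based rather than purely generative.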
Ming Tang and Mikhail Nikolaenko presented “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium, University of Cincinnati.
The day-long event explored topics around AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.
AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality
Presenters: Ming Tang and Mikhail Nikolaenko. Feb. 20, 2025, Tangeman University Center.
This presentation explores two AI-driven talking avatars developed at the UC Extended Reality (XR) Lab, both leveraging large language models (LLMs) for realistic interaction in XR environments. The XRLab Bot acts as a virtual tour guide, providing real-time engagement and spatially aware navigation through the lab, while the P&G Bot emulates a high-fidelity human likeness, delivering product expertise within a VR setting. Together, the bots highlight advances in AI, LLMs, and XR, with potential applications in education, customer service, and smart campuses.

The session will cover:

- AI-driven navigation and interaction: translating speech to text with Whisper AI, retrieving responses from ChatGPT, and interpreting real-time visitor needs and spatial data to guide users throughout XRLab (a minimal code sketch follows this list).
- Multi-client, real-time architecture: managing multiple Unreal and Python clients with a central server that coordinates bot actions, face tracking, and area-specific responses.
- XR integration and smart-campus applications: the bot’s adaptability within XR platforms built on Unreal Engine, and its potential for virtual and augmented reality campus tours, orientations, and educational experiences.
- LLM-driven conversational AI: using advanced models to power sophisticated, natural-language interactions with users.
- High-fidelity 3D avatar creation: crafting detailed, lifelike avatars capable of mimicking human expressions and movements.
- Customizable AI chat avatars: personalized, AI-driven avatars tailored to specific user preferences and needs.
- Interactive avatars with facial animation and motion capture: demonstrating dynamic facial expressions and reactions during interactions.
- Metaverse creation: developing immersive, interconnected virtual worlds where users interact through their avatars.
- VR and AR environments and experiences: blending digital content with the physical world or creating entirely virtual spaces.
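As a concrete illustration of the speech-to-text and response-retrieval loop mentioned above, here is a minimal Python sketch assuming OpenAI’s hosted Whisper and chat endpoints. The file name and tour-guide prompt are hypothetical, and the XR Lab’s actual system runs across multiple Unreal and Python clients coordinated by a central server, which this sketch omits.

```python
# Minimal sketch of the speech-to-text -> LLM reply loop: transcribe a
# visitor's question with Whisper, then fetch the avatar's answer from a
# chat model. Names and prompts are illustrative, not the lab's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUIDE_PROMPT = (
    "You are a virtual tour guide for an XR lab. Answer briefly and, "
    "when relevant, name the lab area the visitor should walk to."
)

def transcribe(wav_path: str) -> str:
    """Convert a visitor's recorded question to text with Whisper."""
    with open(wav_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def reply(question: str) -> str:
    """Get the avatar's spoken response from the chat model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": GUIDE_PROMPT},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    text = transcribe("visitor_question.wav")  # hypothetical recording
    print(reply(text))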
I’m excited to be part of a group exhibition of UC DAAP faculty and student work at the 2024 Cincinnati Blink. The theme of the exhibition is “Artistic Intelligence with AI.”