Posts

Human Digital Twin

High-Fidelity Digital Human Modeling and Visualization

This research investigates advanced methods for constructing high-fidelity digital humans, with a focus on both technical innovation and applied use in immersive environments. The project integrates generative artificial intelligence, image-based modeling, and visualization pipelines to advance realism, interactivity, and usability in human–computer interaction.

Aim 1. Conversational Digital Agents Driven by Large Language Models (LLMs).
The first aim is to use LLMs as the core engine for conversational digital humans. By embedding LLM-driven reasoning into virtual agents, the project seeks to create responsive, adaptive, and context-aware “talking agents.” These agents will simulate naturalistic dialogue, provide interactive guidance, and adapt to user needs across scenarios such as education, healthcare training, and collaborative design.
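The post does not specify an implementation, but a minimal sketch of such an agent loop might look like the following, assuming the OpenAI Python client as one possible backend; the persona prompt and model name are illustrative, not the project’s actual setup.

```python
# Minimal sketch of an LLM-driven "talking agent" loop.
# Assumptions: the OpenAI Python client (pip install openai) with OPENAI_API_KEY set;
# persona prompt and model name are illustrative, not the project's actual setup.
from openai import OpenAI

client = OpenAI()

# The system prompt defines the agent's persona; a real deployment would add
# scenario-specific context (e.g., a healthcare-training script).
history = [{
    "role": "system",
    "content": "You are a digital human guide. Answer briefly and adapt "
               "your tone and level of detail to the user's needs.",
}]

def ask(user_text: str) -> str:
    """Send one user turn; the running history keeps replies context-aware."""
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("How do I start the collaborative design exercise?"))
```

In an XR pipeline, the returned text would typically feed a text-to-speech and lip-sync stage that drives the avatar’s face.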

Aim 2. Photorealistic Skin and Visual Fidelity Through Scanned Data.
The second aim focuses on the visual accuracy of digital humans. High-resolution image scans will be processed to reconstruct human skin in fine detail, including surface textures, translucency, and micro-geometric variation. The resulting models support 4K photorealistic rendering, significantly enhancing realism in simulation and visualization. This fidelity is crucial for applications where responses such as empathy, trust, or attentiveness depend on subtle visual cues.
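The post does not detail the reconstruction pipeline, but as a rough sketch of the data-handling side, the snippet below loads a scanned texture set (albedo, normal, roughness) and normalizes it to 4K before a renderer consumes it; the file names and target resolution are assumptions for illustration.

```python
# Sketch: prepare a scanned PBR texture set for a 4K digital-human skin material.
# File names and the 4096x4096 target are assumptions for illustration only.
from PIL import Image  # pip install pillow

MAPS = {
    "albedo": "skin_albedo.png",        # diffuse skin color from the scan
    "normal": "skin_normal.png",        # micro-geometric surface variation
    "roughness": "skin_roughness.png",  # specular response across the face
}
TARGET = (4096, 4096)  # "4K" texture resolution

def load_texture_set(paths):
    """Load each map and resample so all maps share the target resolution."""
    textures = {}
    for name, path in paths.items():
        img = Image.open(path)
        if img.size != TARGET:
            # Scans are often captured above target size; downsample cleanly.
            img = img.resize(TARGET, Image.LANCZOS)
        textures[name] = img
    return textures

textures = load_texture_set(MAPS)
print({name: img.size for name, img in textures.items()})
```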

Significance.
By combining intelligent conversational capabilities with photorealistic appearance, this research advances the next generation of digital humans. The outcomes will support applications in extended reality (XR), therapeutic and clinical training, collaborative design education, and digital twin environments, where authenticity of both interaction and appearance directly influences user engagement and effectiveness.

High-fidelity digital human.

Read more

AI and Emerging Technology Symposium

Ming Tang and Mikhail Nikolaenko presented “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium, University of Cincinnati.

The day-long event explored topics including AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.

AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality

Presenters: Ming Tang and Mikhail Nikolaenko. Feb. 20, 2025, Tangeman University Center.

Read more

Artistic Intelligence with AI: Gallery show at Blink

I’m excited to be part of a group of UC DAAP faculty and students whose work is exhibited at the 2024 Cincinnati Blink. The theme of the exhibition is “Artistic Intelligence with AI.”

Thanks to Midwestcon, DisruptionNowDAAP, and the 1819 Innovation Hub for their support.

October 18-20, Cincinnati, OH.

Read more

Redefining Space in the Age of Spatial Computing

ARCH 4001, Fall 2024


Ming Tang
Professor; Registered Architect, NCARB, LEED AP; Director, Extended Reality Lab, UC Digital Futures; Leadership Committee, Institute for Research in Sensing (IRiS)
School of Architecture and Interior Design, College of Design, Architecture, Art, and Planning, University of Cincinnati

In the age of spatial computing, architecture transcends the physical boundaries of bricks and mortar, evolving into an expansive realm where physical space merges with computational technologies. This studio embraces the transformative potential of AI in the architectural design process, extending physical spaces into the metaverse and positioning architecture at the forefront of our society’s digital revolution. As a platform for spatial experience, architecture redefines our interaction with space and with each other.

This studio invites students to collectively harness the power of AI to create virtual architectural spaces that inspire, connect, and elevate the human spatial experience. Students design a convention center to host various activities, from large conference events like SIGGRAPH and trade shows like CES to small lectures and social events. They research the spatial layout of a convention center and explore the possibilities of migrating these activities to a fully 3D virtual space in the metaverse. The studio also explores the intersections of AI, digital twin, and XR technologies with the metaverse, enabling students to push the boundaries of traditional architectural design. Through this exploration, students gain insights into how digital advancements can enhance and transform spatial experiences, fostering innovation and creativity in the architectural field.


Read more