NCBDS conference

The paper “Designing the Future of Retail: Cross-Disciplinary Collaboration in Industrial Design and Architecture Design” has been accepted at the 40th National Conference on Beginning Design Students, North Carolina State University, Raleigh, NC, 2025.

Yong-Gyun Ghim, Ming Tang, University of Cincinnati

 

Abstract

The significance of design’s cross-disciplinary nature has increased alongside technological advancements as emerging technologies present new opportunities and challenges for complex socio-technical systems. Systems thinking has drawn attention to design as a holistic approach to tackling complex systems by examining the interrelationships between elements. This also necessitates cross-disciplinary collaboration to address the multifaceted nature of the problems comprehensively. These aspects of systems thinking further emphasize its importance in design education to help navigate the current era of technological innovation. The future of retail exemplifies this interconnected complexity in the context of emerging technologies because introducing them – such as robotics, artificial intelligence, and mixed reality – into retail environments requires a holistic consideration of the entire system, encompassing physical spaces, service processes, and human interactions.

This study examines a 15-week collaborative studio project between industrial design and architecture. By leveraging a systems thinking approach, the project facilitated cross-disciplinary collaboration to develop future retail concepts, enabling students to integrate their expertise and address the interconnectedness of artifacts, environments, and human interactions. Both disciplines followed a structured design process encompassing research, system design, space and robot design, visualization, and validation, while collaboration was organized around four key steps: planning, learning, prototyping, and communication. The project also involved collaboration with a supermarket chain, providing opportunities for onsite observations, employee interviews, and discussions with industry professionals. Students developed futuristic concepts for retail operations and customer experiences by leveraging the integration of mobile service robots, adaptive spaces, and mixed reality. Industrial design students focused on designing a product-service system of supermarket robots based on their redefinition of customer shopping experience and employee workflow, proposing an automated grocery order fulfillment system. Architecture students designed adaptive retail spaces that seamlessly blur the boundaries between physical and digital worlds, exploring how the Metaverse and mixed-reality interfaces can augment retail spaces and shopping experiences through dynamic, immersive interactions with digital avatars and robots. This cross-disciplinary collaboration resulted in holistic and integrative solutions for complex systems, presented through immersive VR experiences or animated scenarios.

This study’s contribution to design education is threefold. First, it proposes a systems thinking approach with cross-disciplinary collaboration for designing future retail experiences, demonstrating its effectiveness in addressing and designing complex socio-technical systems. Second, it offers insights into how industrial design and architecture can be integrated to create novel user experiences in digital transformation. Lastly, by examining the design and collaboration processes and reflecting on the opportunities and challenges, this study offers insights for its application to future studio courses. Given the increased complexity and dynamics between disciplines, thorough pre-planning and flexibility are critical for success.

Keywords:

Cross-disciplinary collaboration, Design education, Industrial design, Architecture, Future of retail

Project:  Future Service, Retail, Metaverse, and Robotics

 

AI and Emerging Technology Symposium

Ming Tang and Mikhail Nikolaenko will present “AI-Powered Digital Humans for Enhanced Interaction in Extended Reality” at the AI and Emerging Technology Symposium.

Please join us at the Digital Technology Solutions:  AI and Emerging Technology Symposium at the University of Cincinnati.

The day-long event will explore topics around AI and robotic process automation; smart campus innovation; and extended reality, virtual reality, and augmented reality. More on UC News.

 

AI-Powered Talking Avatars for Enhanced Interaction in Extended Reality

Presenters: Ming Tang, Mikhail Nikolaenko. Feb. 20, 2025, Tangeman University Center.

This presentation explores two AI-driven talking avatars developed at the UC Extended Reality (XR) Lab, leveraging large language models (LLMs) for realistic interaction in XR environments. The XRLab Bot acts as a virtual tour guide, providing real-time engagement and navigation through the lab with spatial awareness, while the P&G Bot emulates a high-fidelity human likeness, delivering product expertise within a VR setting. These bots highlight advancements in AI, LLMs, and XR, showcasing potential applications in education, customer service, and smart campuses.

The session will showcase AI-driven navigation and interaction, demonstrating the bot’s capabilities in translating speech to text using Whisper AI, retrieving responses from ChatGPT, and interpreting real-time visitor needs and spatial data to guide users throughout the XR Lab. It will explore the multi-client, real-time architecture, sharing insights on managing multiple Unreal and Python clients with a central server that coordinates bot actions, face tracking, and area-specific responses in real time. The discussion will highlight XR integration and smart campus applications, emphasizing the bot’s adaptability within XR platforms built on Unreal Engine and its potential for virtual and augmented reality applications in campus tours, orientations, and educational experiences.

Additionally, the session will cover LLM-driven conversational AI, using advanced models to power sophisticated, natural-language interactions with users; high-fidelity 3D avatar creation, focusing on crafting detailed, lifelike avatars capable of mimicking human expressions and movements; and customizable chat avatars, enabling personalized, AI-driven avatars tailored to specific user preferences and needs. Interactive avatars with facial animation and motion capture will be demonstrated, showing how avatars can exhibit dynamic facial expressions and reactions during interactions. Finally, the discussion will turn to metaverse creation, showcasing the development of immersive, interconnected virtual worlds where users interact through their avatars, and to virtual reality (VR) and augmented reality (AR) environments and experiences, highlighting their ability to blend digital content with the physical world or create entirely virtual spaces.
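The multi-client coordination described above can be sketched as a minimal dispatcher. This is an illustrative sketch only, not the XR Lab’s actual code: the class and area names are hypothetical, and the Whisper speech-to-text and ChatGPT calls are stubbed out, since in the real system those run as external services.

```python
# Minimal sketch of a central server coordinating bot clients.
# CentralServer and AREA_INFO are hypothetical names for illustration;
# Whisper transcription and the LLM call are stubbed for clarity.

AREA_INFO = {
    "entrance": "Welcome to the XR Lab. The motion-capture stage is to your left.",
    "mocap": "This stage records full-body motion for avatar animation.",
    "vr_bay": "Headsets here run the Unreal-based campus tour demo.",
}

class CentralServer:
    """Routes visitor utterances to area-specific responses and notifies
    registered clients (Unreal renderers, Python loggers) of bot actions."""

    def __init__(self):
        self.clients = []  # in a real system: sockets to Unreal/Python clients

    def register(self, client_name):
        self.clients.append(client_name)

    def locate_area(self, position):
        # Stand-in for real spatial awareness: map an (x, y) position to an area.
        x, y = position
        if x < 2:
            return "entrance"
        return "mocap" if y < 5 else "vr_bay"

    def handle_utterance(self, text, position):
        # In the full pipeline, audio would first pass through Whisper
        # (speech-to-text) and the reply would come from an LLM such as ChatGPT.
        area = self.locate_area(position)
        reply = AREA_INFO[area]
        # Broadcast the bot action (speech, face-tracking target) to all clients.
        return {"area": area, "say": reply, "clients_notified": list(self.clients)}

server = CentralServer()
server.register("unreal_avatar")
server.register("python_logger")
msg = server.handle_utterance("What happens here?", position=(5, 3))
print(msg["area"], "->", msg["say"])
```

The design point this sketch illustrates is the hub-and-spoke topology the talk describes: clients stay thin, while one server owns spatial state and decides which area-specific response each bot should deliver.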

High-fidelity digital human.


P&G VISM

Project VISM (Virtual Interactive Simulation for Material Customization)
Interactive Visualization with User-Controlled, Procedural, and Physically Based Material Customization.

PI. Ming Tang. Amount: $28,350

Sponsor: P&G. 12/01/2024 – 5/31/2025

 

 

Emotional Architecture in the Digital Age

Emotional Architecture in the Digital Age: Generative AI and Virtual Environments for Human-Centric Design

ARCH 7036-04 / ARCH 5051-04, Elective Theory Seminar, SAID, DAAP, Spring 2024

Class time: Monday, 10:00 am–12:50 pm. Classroom: 4425-E, CGC Lab, DAAP building.

Faculty: Ming Tang, Professor in Architecture and Interior Design, DAAP; Director of the Extended Reality Lab (XR-Lab).

Gen-AI with Stable Diffusion, by Ming Tang

In his articles The Case for a Feeling Architecture and Architecture Is a Deeply Emotional Experience, Jacob DiCrescenzo emphasizes the importance of designing architecture that prioritizes human emotions over abstract concepts. He advocates for a paradigm shift toward emotional responsiveness in design, creating spaces that enrich occupants’ well-being and experience. By integrating insights from neuroscience, architects can craft impactful, memorable environments, harnessing emotion as a core element of human cognition to make buildings more meaningful and engaging. In Emotional Architecture, Arushi Malhotra describes emotional architecture as a two-fold interaction: cognitive, where users interpret space, and emotional, where they react to it. By carefully manipulating elements such as light, color, and sound, architects can create environments that foster positive emotional responses, thereby enhancing user experience and potentially influencing mood and productivity.

On the other hand, the architecture of illusion, as explored by Richard Haas [1] and M.C. Escher and echoed in AI-generated images at Craiyon, creates a world where perspective and spatial perception are manipulated to challenge the viewer’s understanding of reality. These artists employ techniques that distort conventional architectural forms, constructing scenes that defy gravity, continuity, and logic. Their work invites viewers into spaces that feel simultaneously familiar and impossible, where each element appears coherent individually, yet together they form an intricate, mind-bending puzzle. Through the use of optical illusions, they create a captivating experience that blurs the line between the real and the surreal.

Course Brief:

This seminar focuses on the intersection of emotional architecture, generative AI, and immersive virtual environments to understand how spatial design impacts human perception and emotional response. Students will leverage cutting-edge technologies to examine how elements like color, light, sound, and spatial layout shape neural responses, eye fixation patterns, and overall emotional experiences within architectural spaces.

Through AI-driven simulations and virtual reality, participants will study how specific architectural features influence occupants’ psychological and physiological reactions. Using eye-tracking technology and biometric feedback, students will gain valuable insights into user interactions and emotional responses to various spatial configurations, developing skills to design spaces that resonate emotionally and enhance human well-being.

This course will explore how architectural experiences can evoke and enhance a range of emotions, including happiness, sadness, fear, surprise, love, excitement, calmness, awe, confusion, trust, pride, relief, compassion, loneliness, and anticipation. Additionally, students will build technical skills in visualizing architectural spaces in VR and assessing emotional responses, preparing them to create impactful, emotion-centered environments.

 

Gen-AI with Stable Diffusion, by Ming Tang

Skills covered:

Unreal Engine 5, Generative AI tools

Key Topics:

  • Theories of emotional architecture and human-centered design
  • Exploration of architectural drawings by Lebbeus Woods and Zaha Hadid, relating style to human emotion
  • Applications of generative AI in creating adaptive, responsive spatial features
  • Using Unreal Engine to create virtual reality and immersive media for simulating and testing architectural impact
  • Techniques in eye-tracking and biometric measurement to assess engagement and emotional response
  • Analyzing the influence of spatial elements (color, light, sound, material textures) on mood and neural activity
  • Ethical considerations and implications of designing for emotional impact

Eye-tracking heatmap of CVG airport. Tobii VR. 
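The gaze heatmaps used in this course can be sketched as a simple binning step: fixation samples from the eye tracker are accumulated into a grid over the scene. This is an illustrative sketch, assuming gaze points are exported as normalized (x, y) coordinates in [0, 1], which mirrors typical eye-tracker output; the function name and grid size are hypothetical.

```python
# Sketch: binning VR gaze samples into a fixation heatmap grid.
# Assumes gaze points normalized to [0, 1]; names are illustrative,
# not the Tobii SDK's actual API.

def gaze_heatmap(samples, grid_w=4, grid_h=4):
    """Count gaze samples per cell of a grid_w x grid_h grid.

    samples: iterable of (x, y) gaze points normalized to [0, 1].
    Returns a row-major list of lists: heatmap[row][col].
    """
    heatmap = [[0] * grid_w for _ in range(grid_h)]
    for x, y in samples:
        col = min(int(x * grid_w), grid_w - 1)  # clamp x == 1.0 into last cell
        row = min(int(y * grid_h), grid_h - 1)
        heatmap[row][col] += 1
    return heatmap

# A cluster of fixations near the upper-left (e.g., a signage panel)
samples = [(0.1, 0.1), (0.12, 0.15), (0.11, 0.12), (0.9, 0.9)]
hm = gaze_heatmap(samples)
print(hm[0][0], hm[3][3])  # 3 samples in the top-left cell, 1 in the bottom-right
```

In practice the counts would be smoothed and rendered as a color overlay, which is what produces heatmap images like the CVG wayfinding study above.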

Learning Outcomes: By the end of this course, students will:

  1. Understand the foundational theories of emotional architecture and its relevance in contemporary design.
  2. Apply generative AI tools to create and adapt architectural designs based on emotional and perceptual data.
  3. Conduct virtual reality, eye-tracking, and biometrics experiments to gauge human responses to spatial features.
  4. Critically analyze and discuss the emotional impact of environmental factors in architecture.
  5. Produce a generative AI-based design project incorporating findings on human emotional responses to enhance user experience in virtual and real spaces.

Assessment:
Assessment will be based on a combination of participation in VR-based labs, a midterm report on emotional architecture theory, and a final project integrating generative AI with VR to present an emotionally engaging spatial design prototype.

Course Prerequisites:
This seminar is designed for advanced undergraduate or graduate students in architecture, design, or related fields. Familiarity with basic architectural design concepts and introductory experience in 3D modeling or AI is recommended but not required.

Tools, with descriptions and example projects:

  • Unreal Engine 5: used for creating immersive walkthrough and drive-through experiences in virtual environments. Examples: CVG Airport Renovation; 2023 elective seminar.
  • Gen-AI: utilizes Stable Diffusion with ComfyUI, ControlNet, and 360-degree LoRA for generating design concepts and spatial visuals. Examples: Blink Exhibition; Spatial Computing ARCH Design Studio; AI and Architecture.
  • Eye-Tracking: Tobii VR system for tracking user gaze and interaction within virtual environments to assess wayfinding and emotional response. Example: eye-tracking study on the signage and wayfinding at CVG.

Weekly Schedule

  • Week 1: Architecture & Emotions. Introduction to emotional architecture theory and its impact on design; exploration of case studies.
  • Week 2: Gen-AI. Using generative AI tools like Stable Diffusion, ControlNet, and ComfyUI for creating spatial concepts based on emotional responses.
  • Week 3: Virtual Reality. Introduction to VR fundamentals, Unreal Engine UI, and GitHub for project collaboration.
  • Week 4: VR Content. Building VR content in Unreal Engine, focusing on modeling, materials, and lighting to enhance emotional impact. Deliverable: Concept.
  • Week 5: VR Interactions. Developing interactive elements in VR that respond to user movements and actions, enhancing immersion and engagement.
  • Week 6: Eye-Tracking. Introduction to eye-tracking technology to analyze user focus and interaction within VR environments.
  • Weeks 7–15: Working Time. Independent project work to refine VR environments, incorporating feedback loops with eye-tracking data to optimize emotional impact. Deliverable: Prototype.
  • Week 16: Final Presentation. Presentation of the final VR environment, demonstrating emotional and interactive elements to a panel or peers. Deliverable: Final Presentation.

Reading

Appendix

Unreal student projects. CVG

Stable Diffusion with ControlNet

Stable Fast 3D with ComfyUI