SENSE + AI
ARCH 4001 studio. Fall 2025. DAAP. University of Cincinnati
SENSE: Spatial Experiences for Narrative and Sensory Emotions with AI-Assisted Design

Museum concept by UC students Dwayne Carter, Emma Cek, and Courtney Reese. Fall 2025.
Metaverse and Digital Twins in the Age of AI and Extended Reality
Tang, Ming, Mikhail Nikolaenko, Ahmad Alrefai, and Aayush Kumar. 2025. “Metaverse and Digital Twins in the Age of AI and Extended Reality.” Architecture 5, no. 2: 36. https://doi.org/10.3390/architecture5020036
Special Issue: Shaping Architecture with Computation
The paper is featured on the cover of the Architecture journal.

This research investigates advanced methods for constructing high-fidelity digital humans, with a focus on both technical innovation and applied use in immersive environments. The project integrates generative artificial intelligence, image-based modeling, and visualization pipelines to advance realism, interactivity, and usability in human–computer interaction.
Aim 1. Conversational Digital Agents Driven by Large Language Models (LLMs).
The first aim is to use LLMs as the core engine for conversational digital humans. By embedding LLM-driven reasoning into virtual agents, the project seeks to create responsive, adaptive, and context-aware “talking agents.” These agents will simulate naturalistic dialogue, provide interactive guidance, and adapt to user needs across diverse scenarios such as education, healthcare training, and collaborative design.
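As a rough illustration of Aim 1 (a sketch, not the project's actual agent pipeline), the loop below keeps a running conversation history and queries an LLM for each user turn. It assumes an OpenAI-compatible chat API; the persona prompt, model name, and `ask` helper are hypothetical.

```python
# Sketch of an LLM-driven "talking agent" loop. Illustrative only: the
# persona prompt, model choice, and helper are assumptions, not the
# project's pipeline. Assumes an OpenAI-compatible chat API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Persona prompt framing the agent as a context-aware virtual guide.
history = [{"role": "system",
            "content": "You are a virtual museum guide. Answer visitors "
                       "concisely and adapt to their stated interests."}]

def ask(user_text: str) -> str:
    """Send one user turn and return the agent's reply, keeping context."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("What should I see first in the emotion gallery?"))
```

Because the full message history is resent on every turn, the agent stays context-aware across a session; a production agent would layer speech, facial animation, and domain retrieval on top of this loop.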
Aim 2. Photorealistic Skin and Visual Fidelity Through Scanned Data.
The second aim focuses on the visual accuracy of digital humans. High-resolution image scans will be processed to reconstruct human skin with detailed fidelity, including surface textures, translucency, and micro-geometric variations. The resulting models are capable of 4K photorealistic rendering, significantly enhancing realism in simulation and visualization. This fidelity is crucial for applications where nuanced perception—such as empathy, trust, or attentiveness—depends on subtle visual cues.
Significance.
By combining intelligent conversational capabilities with photorealistic appearance, this research advances the next generation of digital humans. The outcomes will support applications in extended reality (XR), therapeutic and clinical training, collaborative design education, and digital twin environments, where the authenticity of both interaction and appearance directly influences user engagement and effectiveness.
High-fidelity digital human.
Faculty: Ming Tang, Professor of Architecture and Interior Design, DAAP; Director of the Extended Reality Lab (XR-Lab).
Gen-AI with Stable Diffusion, by Ming Tang

In his articles The Case for a Feeling Architecture and Architecture Is a Deeply Emotional Experience, Jacob DiCrescenzo emphasizes the importance of designing architecture that prioritizes human emotions over abstract concepts. He advocates for a paradigm shift toward emotional responsiveness in design, creating spaces that enrich occupants’ well-being and experience. By integrating insights from neuroscience, architects can craft impactful, memorable environments, harnessing emotion as a core element of human cognition to make buildings more meaningful and engaging. In Emotional Architecture, Arushi Malhotra describes emotional architecture as a two-fold interaction: cognitive, where users interpret space, and emotional, where they react to it. By carefully manipulating elements such as light, color, and sound, architects can create environments that foster positive emotional responses, thereby enhancing user experience and potentially influencing mood and productivity.
On the other hand, the architecture of illusion, as explored by Richard Haas [1], M.C. Escher, and AI-generated images at Craiyon, creates a world where perspective and spatial perception are manipulated to challenge the viewer’s understanding of reality. These artists employ techniques that distort conventional architectural forms, constructing scenes that defy gravity, continuity, and logic. Their work invites viewers into spaces that feel simultaneously familiar and impossible, where each element appears coherent individually, yet together, they form an intricate, mind-bending puzzle. Through the use of optical illusions, they create a captivating experience that blurs the line between the real and the surreal.
Course Brief:
This seminar focuses on the intersection of emotional architecture, generative AI, and immersive virtual environments to understand how spatial design impacts human perception and emotional response. Students will leverage cutting-edge technologies to examine how elements like color, light, sound, and spatial layout shape neural responses, eye fixation patterns, and overall emotional experiences within architectural spaces.
Through AI-driven simulations and virtual reality, participants will study how specific architectural features influence occupants’ psychological and physiological reactions. Using eye-tracking technology and biometric feedback, students will gain valuable insights into user interactions and emotional responses to various spatial configurations, developing skills to design spaces that resonate emotionally and enhance human well-being.
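To make the eye-tracking analysis step concrete, the sketch below aggregates fixation points into a smoothed heatmap, the same kind of visualization used in the CVG study referenced later. It uses synthetic gaze data with NumPy, SciPy, and Matplotlib; the Tobii VR export format is not assumed here.

```python
# Sketch: turn gaze fixations into a heatmap. The fixation points, screen
# resolution, and blur radius are made-up examples; a real study would
# export fixations from the Tobii VR toolchain instead.
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

W, H = 1920, 1080  # assumed render resolution
rng = np.random.default_rng(0)
# Synthetic fixations clustered around one signage location (placeholder).
gaze = rng.normal(loc=(0.6 * W, 0.4 * H), scale=(90, 60), size=(500, 2))

heat = np.zeros((H, W))
for x, y in gaze:
    xi = int(np.clip(x, 0, W - 1))
    yi = int(np.clip(y, 0, H - 1))
    heat[yi, xi] += 1  # accumulate fixation counts per pixel

heat = gaussian_filter(heat, sigma=25)  # smooth counts into a heatmap

plt.imshow(heat, cmap="hot")
plt.title("Fixation heatmap (synthetic data)")
plt.axis("off")
plt.show()
```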
This course will explore how architectural experiences can evoke and enhance a range of emotions, including happiness, sadness, fear, surprise, love, excitement, calmness, awe, confusion, trust, pride, relief, compassion, loneliness, and anticipation. Additionally, students will build technical skills in visualizing architectural spaces in VR and assessing emotional responses, preparing them to create impactful, emotion-centered environments.
Gen-AI with Stable Diffusion, by Ming Tang
Skills covered:
Unreal Engine 5, Generative AI tools
Key Topics:
Eye-tracking heatmap of the CVG airport, captured with the Tobii VR system.
Learning Outcomes:
Assessment:
Assessment will be based on a combination of participation in VR-based labs, a midterm report on emotional architecture theory, and a final project integrating generative AI with VR to present an emotionally engaging spatial design prototype.
Course Prerequisites:
This seminar is designed for advanced undergraduate or graduate students in architecture, design, or related fields. Familiarity with basic architectural design concepts and introductory experience in 3D modeling or AI is recommended but not required.
| Tool | Description | Example |
|---|---|---|
| Unreal Engine 5 | Used for creating immersive walkthrough and drive-through experiences in virtual environments | CVG Airport Renovation; 2023 elective seminar |
| Gen-AI | Utilizes Stable Diffusion with ComfyUI, ControlNet, and a 360-degree LoRA for generating design concepts and spatial visuals (see the sketch after this table) | Blink Exhibition: Spatial Computing; ARCH Design Studio: AI and Architecture |
| Eye-Tracking | Tobii VR system for tracking user gaze and interaction within virtual environments to assess wayfinding and emotional response | Eye-tracking study on the signage and wayfinding at CVG |
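As a companion to the Gen-AI row above, here is a minimal sketch of the Stable Diffusion + ControlNet workflow using the Hugging Face diffusers library instead of ComfyUI (which expresses the same pipeline as a node graph). The model IDs, prompt, and input edge map are placeholders, not course materials.

```python
# Sketch: ControlNet-conditioned Stable Diffusion via Hugging Face
# diffusers. Model IDs, the prompt, and the input file are placeholders.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A Canny edge map (e.g., extracted from a hand sketch of a gallery
# interior) constrains composition while the prompt sets mood.
edges = Image.open("gallery_canny.png")  # hypothetical input file

image = pipe(
    prompt="serene museum atrium, soft diffuse light, warm wood, calm mood",
    image=edges,
    num_inference_steps=30,
).images[0]
image.save("atrium_concept.png")
```

The edge map plays the same role as a ControlNet node in ComfyUI: it fixes the spatial composition so that prompt changes vary only atmosphere, material, and mood.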
| Week | Topic | Skills Covered | Deliverable |
|---|---|---|---|
| 1 | Architecture & Emotions | Introduction to emotional architecture theory and its impact on design; exploration of case studies | – |
| 2 | Gen-AI | Using Generative AI tools like Stable Diffusion, ControlNet, and ComfyUI for creating spatial concepts based on emotional responses | – |
| 3 | Virtual Reality | Introduction to VR fundamentals, Unreal Engine UI, and GitHub for project collaboration | – |
| 4 | VR Content | Building VR content in Unreal Engine, focusing on modeling, materials, and lighting to enhance emotional impact | Concept |
| 5 | VR Interactions | Developing interactive elements in VR that respond to user movements and actions, enhancing immersion and engagement | – |
| 6 | Eye-Tracking | Introduction to eye-tracking technology to analyze user focus and interaction within VR environments | – |
| 7-15 | Working Time | Independent project work to refine VR environments, incorporating feedback loops with eye-tracking data to optimize emotional impact | Prototype |
| 16 | Final Presentation | Presentation of final VR environment, demonstrating emotional and interactive elements to a panel or peers | Final Presentation |
Reading
Stable Diffusion with ControlNet

