Please join us at the 2026 UC AI & Emerging Technology Symposium on 02/18/2026 at UC's Tangeman University Center (TUC).
AI-Based Spatial Computing with BIM: Performance, Sustainability, and Wayfinding on the UC Campus
This project introduces an AI-enhanced spatial computing framework that integrates building-scale digital twins with intelligent autonomous navigation. Using BIM-derived geometry and utility metadata, the system combines LLM-assisted building-performance analytics with predictive modeling to support sustainable operations across an interactive, campus-scale digital twin environment. In parallel, we present INARA, a ROS 2–based indoor navigation platform that merges BIM-accurate simulation environments with a hybrid deep-reinforcement-learning and classical-control architecture, enabling safe, adaptive mobile-robot navigation within UC facilities.
Together, these systems advance AI-driven spatial computing by unifying building analytics, embodied intelligence, and digital–physical interoperability—laying the foundation for next-generation smart-building management and autonomous robotic applications.
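To make the hybrid control idea concrete, the sketch below shows how a learned policy might propose a velocity command that a classical safety layer then bounds near obstacles. This is a minimal, ROS-free illustration in plain Python, not INARA's actual ROS 2 interfaces; all names (VelocityCommand, drl_policy, safety_filter) and thresholds are assumptions.

```python
from dataclasses import dataclass

# Illustrative only: a toy hybrid DRL + classical-control arbiter in plain
# Python, not INARA's actual ROS 2 nodes. Names and thresholds are assumed.

@dataclass
class VelocityCommand:
    linear: float   # forward speed, m/s
    angular: float  # turn rate, rad/s

def drl_policy(goal_bearing: float) -> VelocityCommand:
    """Stand-in for the learned policy: steer toward the goal bearing."""
    turn = max(-1.0, min(1.0, goal_bearing))  # clamp the turn rate
    return VelocityCommand(linear=0.5, angular=turn)

def safety_filter(cmd: VelocityCommand, min_obstacle_dist: float,
                  stop_dist: float = 0.4, slow_dist: float = 1.0) -> VelocityCommand:
    """Classical safety layer: slow down near obstacles, stop when too close."""
    if min_obstacle_dist <= stop_dist:
        return VelocityCommand(0.0, cmd.angular)  # rotate in place only
    if min_obstacle_dist < slow_dist:
        scale = (min_obstacle_dist - stop_dist) / (slow_dist - stop_dist)
        return VelocityCommand(cmd.linear * scale, cmd.angular)
    return cmd

# One control step: the DRL policy proposes, the classical layer bounds it.
proposed = drl_policy(goal_bearing=0.3)
safe = safety_filter(proposed, min_obstacle_dist=0.8)
print(safe)  # forward speed scaled down to ~0.33 m/s near the obstacle
```

One appeal of such a split is that the learned component stays free to adapt its behavior while the classical layer enforces hard safety limits regardless of what the policy proposes.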
The project is based on two XR-Lab student fellows’ research funded by the 2025–2026 Bearcat AI Award:
- Mikhail Nikolaenko ($4,400) — Integrating Digital Twin and GPT for Sustainable Building Analytics and Green Design Education in DAAP and CEAS
- Aayush Kumar ($5,000) — INARA (Intelligent Navigation and Autonomous Response Agent): An Adaptive Indoor Navigation Assistant for UC Spaces
Supervisor: Ming Tang. Collaborator: Sid Thatham. Team support: Semere Abraha.

The project centers on the development of a Digital Twin (DT) and a multi-agent Large Language Model (LLM) framework designed to access and interpret real-time and historical data through an Industrial Internet of Things (IIoT) platform. Real-time data is sourced from both legacy and smart machines, integrating Building Information Modeling (BIM) with environmental sensors. The multi-agent LLM framework comprises specialized agents and supports diverse user interfaces, including screen-based systems, Virtual Reality (VR) environments, and mobile devices, enabling versatile interaction, data visualization, and analysis.
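As a rough illustration of the multi-agent routing idea, here is a minimal sketch assuming a keyword-based dispatcher over a mock IIoT feed. In the described framework an LLM would perform the routing and the data would come from live BIM and sensor streams; every name and field below is a hypothetical stand-in.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative only: keyword-based dispatch over a mock IIoT feed. In the
# described framework an LLM routes queries to specialized agents and data
# arrives from live BIM and sensor streams; all names here are assumed.

@dataclass
class SensorReading:
    building: str
    metric: str    # e.g., "kWh" or "co2_ppm"
    value: float

IIOT_FEED = [
    SensorReading("DAAP", "kWh", 1240.5),
    SensorReading("DAAP", "co2_ppm", 615.0),
]

def performance_agent(query: str) -> str:
    """Specialized agent for building-performance (energy) questions."""
    latest = [r for r in IIOT_FEED if r.metric == "kWh"][-1]
    return f"Latest energy use in {latest.building}: {latest.value} kWh"

def environment_agent(query: str) -> str:
    """Specialized agent for indoor-environment questions."""
    latest = [r for r in IIOT_FEED if r.metric == "co2_ppm"][-1]
    return f"Latest CO2 in {latest.building}: {latest.value} ppm"

AGENTS: dict[str, Callable[[str], str]] = {
    "energy": performance_agent,
    "co2": environment_agent,
}

def dispatch(query: str) -> str:
    """Route a user query to the first agent whose keyword matches."""
    for keyword, agent in AGENTS.items():
        if keyword in query.lower():
            return agent(query)
    return "No specialized agent matched this query."

print(dispatch("What is the current energy use in DAAP?"))
```

Because the dispatch layer only exchanges text, the same back end could sit behind any of the front ends mentioned above, whether screen-based, VR, or mobile.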