Ming3D http://ming3d.com/new Ming Tang, Architect, LEED AP

P&G Metaverse http://ming3d.com/new/2024/02/28/pg-metaverse/ Wed, 28 Feb 2024 16:00:18 +0000 http://ming3d.com/new/?p=9882 Title: Leveraging Metaverse Platforms for Enhanced Global Professional Networking – Phase 1

Ming Tang, Principal Investigator

Amount: $32,416

Funding: P&G Digital Accelerator.

Goal: metaverse technologies for global engagement.

 

Fluid Sim in VR http://ming3d.com/new/2024/02/12/fluid-sim-in-vr/ Mon, 12 Feb 2024 02:16:54 +0000 http://ming3d.com/new/?p=9875 Fluid simulation in Virtual Reality, built in Unreal Engine, with hand-collision testing.

Digital Twin, LLM & IIOT http://ming3d.com/new/2024/01/24/digital-twin-with-iiot/ Wed, 24 Jan 2024 18:55:40 +0000 http://ming3d.com/new/?p=9865 IIoT for legacy and intelligent factory machines, with AR and LLM feedback, plus a Digital Twin demonstration of real-time IoT for architecture/building applications using Omniverse.

  • PI: Prof. Sam Anand (Director of Smart-Manufacturing Lab, Dept. of Mechanical Engineering, CEAS)
  • co-PI:  Prof. Ming Tang (Director of XR-Lab, School of Architecture & Interior Design, DAAP)

$40,000. UC Industry 4.0/5.0 Institute Consortium Research Project: 01.2024-01.2025

Primary Objective: To develop a conversational large language model (LLM) system that acquires data from legacy machines, digital machines, and environmental, real-time, and historical sources within an IIoT environment to create a digital twin that assists with real-time maintenance (application use case: the Digital Futures Building).
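As a rough illustration of the conversational layer described above (a hedged sketch, not the project's implementation; the Reading type, the sample field names, and the model choice are all assumptions), recent IIoT sensor readings could be flattened into text context for an LLM maintenance query:

```python
# Hypothetical sketch: surface recent IIoT readings to an LLM as context
# for a maintenance question. Names and data are illustrative only.
from dataclasses import dataclass
from openai import OpenAI


@dataclass
class Reading:
    machine_id: str      # e.g., a legacy CNC mill or an HVAC unit
    sensor: str          # e.g., "spindle_temp_C", "vibration_rms"
    value: float
    timestamp: str


def build_context(readings: list[Reading]) -> str:
    """Flatten recent sensor readings into plain text the model can cite."""
    return "\n".join(
        f"{r.timestamp} {r.machine_id} {r.sensor}={r.value}" for r in readings
    )


def ask_maintenance_question(readings: list[Reading], question: str) -> str:
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    context = build_context(readings)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are a maintenance assistant for a digital twin. "
                        "Answer only from the sensor log provided."},
            {"role": "user",
             "content": f"Sensor log:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```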

Students: Manish Raj Aryal, Sourabh Deshpande, Mikhail Nikolaenko.

AI in Architecture http://ming3d.com/new/2023/12/09/9811/ Sat, 09 Dec 2023 22:09:37 +0000 http://ming3d.com/new/?p=9811 Preface

In recent years, Artificial Intelligence (AI) has made significant inroads into the creative industries, notably in fields such as architectural design. The advent of AI-driven image creation tools and advanced language models has opened new frontiers in ideation, enabling a unique synergy between human creativity and generative AI imaging. This independent study delves into the potential impacts of AI on the discipline of architecture and its integration into the design process. Three architecture students from the University of Cincinnati, Alex, Jackman, and Nathan, have embarked on a comprehensive investigation, utilizing cutting-edge AI tools to enhance their design capabilities and to challenge and redefine traditional methodologies. Throughout their journey, the three students have explored various dimensions of AI applications in architecture, from conceptualization and visualization to more complex aspects such as representation and style. We invite you to explore the fruits of their experiments, which demonstrate their skillful application of AI in architecture and offer a glimpse into a future of architectural design where human ingenuity and AI collaborate to create extraordinary works.

Ming Tang

AI in architecture design

by Alex Hamilton, Jackman Land, and Nathan Rosenberger

Year in review

Through various experiments and trials of multiple proprietary software packages, we have findings and processes to help students navigate the new and ever-changing world of AI. The software we have primarily focused on includes Midjourney, OpenAI’s ChatGPT, Prome AI, and Adobe’s AI tools. Our goal in this semester of study was to identify how AI affects the design process, both currently and in the future. Our findings have also helped us determine whether AI can close the gap in design ability that a college education offers.

Our Experiment

Our primary experiment dove into whether a college education in architecture would yield better results when writing prompts for Midjourney renderings. There is an in-depth report on this experiment; here is a summary. Three architecture students and three other people of varying ages and educational backgrounds each wrote four prompts aiming to achieve four different outcomes: an abstract piece of art, an outdoor rendering, an indoor rendering, and a concept sketch of a building. The prompts were then loaded into Midjourney, and the resulting images were assembled randomly and shown to volunteer survey subjects to see if they could pick which three images in each category were developed by the architecture students.

     

Our Findings

Here is what we found from this experiment: while the complexity and length of the prompt do not seem to have a drastic effect on final production quality, you are able to achieve a much more targeted result by understanding design. The more specific you get, the more controlled the outcome. So while Midjourney closes the gap on quality as it relates to realism, graphic quality, or stylized approaches, the final production will vary drastically if it is not given very specific prompts driven by key design words.

We also found that, to the untrained eye, there is little to no discernible difference in final production quality between architecture students and anyone else with access to Midjourney. We found this by surveying over 30 subjects of varying ages and backgrounds and asking them to identify which images were produced by the design students. Even though some images were picked slightly more often than others, the answers were not consistent, and when asked how confident they were in their answers on a scale of 1-5, nobody answered above a 3. This goes to show that most of the subjects were taking their best guess and could not tell a difference. Lastly, when the complexity of the prompt is considered, it becomes much easier to identify the design students. With almost every prompt doubling the length of its untrained counterpart and drawing on a vast vocabulary of design and architectural principles, the prompts, when laid out side by side, are easily attributed to their authors. This further proves the point that while final production quality remains consistent regardless of who writes the prompts, the design students tend to narrow the window and achieve a specific goal with their tailored prompts.

Those results raise the question: what components create a strong architectural prompt? Here is what we found to be the important factors in crafting a prompt for AI generators such as Midjourney.

2D/3D
First, it is important to establish whether you want a 3D rendering or a 2D drawing. This will drastically change the outcome of your prompt.

Medium/Material
After you have chosen 2D or 3D, it is important to include how you want the image to be represented. If 2D, include a medium such as paint, pencil, canvas, paper, or a realistic image.

Architectural Element
This is where you define the subject of the image. It could be a single-family home, a bridge, a painting, or any other subject.

In The Style Of…
This is where you establish a known designer and include “in the style of (designer)”. This is one of the most powerful parts of the prompt that will really pinpoint the style you are aiming to achieve. Finding a designer that has the style you want to emulate is arguably the most impactful portion of the prompt.

Architectural Style
This is similar to the previous component but refers to a design style rather than a specific designer. It could include a design principle, time period, or architectural style.

Description of Design Elements
This portion of the prompt is where you get into the details you want included in your design: windows, structure, colors, or other design elements all belong here.

Example Prompt
Create a photorealistic 3D Rendering of a bridge in the style of Zaha Hadid. This should be a suspension bridge that is primarily white and utilizes unique lighting across a beautiful river. The bridge should be futuristic and kinetic. The season is Fall, the time of day is dusk, make it look dramatic.
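For readers who want to generate many variations, the same components can be assembled programmatically. Below is a minimal sketch (not part of the study; the field names are ours, and no Midjourney API is involved, the output is simply a string to paste into the generator):

```python
# Illustrative only: compose a text-to-image prompt from the components
# discussed above. The field names are ours, not part of any tool's API.
from dataclasses import dataclass


@dataclass
class PromptSpec:
    dimension: str           # "2D drawing" or "3D rendering"
    medium: str              # e.g., "photorealistic", "pencil on paper"
    subject: str             # the architectural element
    in_style_of: str         # a known designer
    architectural_style: str # design principle, period, or style
    design_elements: str     # details: materials, lighting, season, mood


def compose_prompt(spec: PromptSpec) -> str:
    return (
        f"Create a {spec.medium} {spec.dimension} of {spec.subject} "
        f"in the style of {spec.in_style_of}. {spec.architectural_style}. "
        f"{spec.design_elements}"
    )


bridge = PromptSpec(
    dimension="3D rendering",
    medium="photorealistic",
    subject="a suspension bridge over a river",
    in_style_of="Zaha Hadid",
    architectural_style="Futuristic, kinetic forms",
    design_elements="Primarily white, dramatic lighting, fall season at dusk.",
)
print(compose_prompt(bridge))
```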

Our Research

The research portion of our study consisted of testing multiple software packages and exploring how they can be used in the field of design. As AI continues to develop, more and more tools are being built specifically for design purposes. One that has caught the eyes of many designers is Prome AI, a platform specifically designed to create renders and drawings from sketches, or, vice versa, to create sketches from photos or renderings. This growing platform cuts the design visualization and production phase down to a fraction of the time usually needed to create a high-quality rendering. Prome has established itself as a design-driven AI tool alongside others such as Maket AI and the Artistic Inspiration Renderer (AIR) for SketchUp. These tools and others continue to bleed into steps of the design phase beyond visualization. Maket prides itself on its ability to create floor plans from a list of room types and dimensions. AIR for SketchUp can take simple blocking diagrams and transform them into beautifully designed buildings. This allows the visualization step to occur almost anywhere in the design process.

Prome AI Experiment

Thinking back to our first year in architecture, we wanted to see if this newly developed software would have been beneficial within the design process. Replicating a project done early in our academic careers, we took an abstract painting and extruded the shapes to form a design, using the geometric painting as a guideline. While this was certainly an abridged version, we were curious to see what AI software could do with simple extruded shapes. We asked the AI to create “urban skyscrapers” from our blocking design. Prome AI features a setting that controls how much creative freedom the software has, meaning it can stick strictly to the geometry you provide or branch out for a more concept-driven piece. Below are the two experiments we conducted: first the abstract piece of art, followed by two views of the blocking diagram and extruded shapes derived from the painting. Using Prome AI, we inputted these simple screenshots to see what result we could achieve using simple line drawings as a base. Attached are the three options for each view that we felt were the strongest, ranging from staying as precise to the geometry as possible to giving the software more creative freedom.

Our Findings

While the renderings produced were certainly not as strong as they could be, we were shocked by the quality given that no prompt was provided and no post-production editing was done. We have continued research with this platform, and simple changes, such as specifying a material or site or adding more detail to the initial sketch, would likely produce stronger results; however, for this experiment, we wanted to see what the software could produce with as little information as possible. We can see this being used in a number of different ways, whether to generate a concept, as shown in the experiment above, or to produce final renderings by uploading a 2D floor plan or more developed screenshots from your modeling software. We believe this to be one of the stronger AI platforms and expect it to be utilized more frequently in the future, given that it is one of the few free AI-based design websites.

How Can Architecture Students Learn Through AI

Those who are familiar with AI understand how quick and relatively accurate these programs can be when given a prompt. Many students, regardless of their major, use ChatGPT to answer test questions and write their essays. It is a quick and easy way to obtain work that would otherwise take a student hours. Design students are familiar with DALL·E and Midjourney, AI programs that work much like ChatGPT: a prompt describing the intended image is given, and the AI produces a quality image to that liking. Most students use these programs to produce work or an idea. However, what if these AI programs were used to better understand the design of architecture? Architecture is a vast field of design with different styles and vocabulary. First-year students are introduced to architecture and interior design through a series of hand-drawing exercises. They are then given some history of architecture, and eventually they design their first structure. One issue is that they don’t get a taste of the vocabulary and styles of architecture that might interest them. A solution is to allow and promote the use of AI programs like ChatGPT to assist in learning these architectural styles and terms. We believe an early introduction to AI could assist students and benefit their design careers. Here is an example of what the combination of ChatGPT and Midjourney could achieve for students:

Exercises like this can allow the student to understand what makes a building follow the brutalist style of architecture. Instead of giving Midjourney a basic prompt like “Create a brutalist pavilion”, the student can learn and practice different architectural styles and terms and start to understand different components of a building. Here is an example of how a student can start to understand the spatial layout of a building.
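As a concrete illustration, here is a minimal sketch of how a student might script the ChatGPT half of such an exercise, assuming the official OpenAI Python client and an API key; the model name is an assumption, and the Midjourney step remains manual, pasting the generated prompt into the generator:

```python
# Hedged sketch, not a required workflow: ask ChatGPT to explain a style and
# its spatial logic, then draft a Midjourney prompt the student pastes in by hand.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def explain_and_prompt(style: str, building_type: str) -> tuple[str, str]:
    # Step 1: a short explanation of the style and how it shapes spatial layout.
    explanation = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{
            "role": "user",
            "content": f"In three sentences, what defines {style} architecture, "
                       f"and how does it shape the spatial layout of a {building_type}?",
        }],
    ).choices[0].message.content

    # Step 2: turn that explanation into a single image-generation prompt.
    image_prompt = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write a one-sentence Midjourney prompt for a {building_type} "
                       f"that clearly expresses these {style} qualities: {explanation}",
        }],
    ).choices[0].message.content
    return explanation, image_prompt


notes, mj_prompt = explain_and_prompt("brutalist", "pavilion")
print(notes)
print(mj_prompt)
```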

Questions like these might be common sense to those with experience in architecture. However, they could be concepts that first-year architecture students never fully grasp until later in their educational careers. Personally, there are things that school didn’t teach me that I later learned by working in an architecture office. They seem like simple things, yet we were never taught them for whatever reason. Allowing a student to learn architecture through AI can boost their learning and prepare them for real work outside of the classroom. It will be interesting to see how AI can influence an eager student who is interested in design.

What Does the Future Hold

As AI becomes more advanced, and especially as it is developed with specific design intent, it will change the industry more and more. As far as we can tell, AI will never replace the designer completely; however, it can aid in the process and significantly speed it up. It is difficult to say how much AI will be adopted into the professional field of work, especially at this early stage where it is changing so quickly. We strongly believe that at least one person in every office, regardless of size, should keep up to date on AI news in design and on the changes happening now and coming in the future. This is a tool that is far too powerful and full of potential to go unused.

We understand that this topic is uncomfortable for a lot of people who feel their jobs may be at risk down the line as AI expands. We urge you not to think of it as a replacement but to use it as a tool. Take this as an example: say you are in a firm and you specialize in conceptual design and design visualization. Without the use of AI, you are able to produce 3 conceptual designs and 3 renderings in a day. With the use of AI, you are able to increase that tenfold and create 30 conceptual designs and 30 renderings in the same amount of time. Even if half of the AI-generated or AI-aided designs do not meet the standards you need them to, you have still increased your output by a multiple of 5. As you get better at crafting prompts and using new design-based AI, you may be able to push that success rate closer to 100 percent, which means you have increased your output by a multiple of 10. You, as a worker who understands how to implement AI into certain parts of your work and manipulate and massage it into the design you envision, have become many times more valuable to the company you work for. This technology is unavoidable, so we recommend getting on the right side of it and understanding it from the beginning, because no matter how many people say AI is going to take over the world, the much more likely outcome is that it revolutionizes the way we work and the output we can achieve.

Where Should I Start

There are plenty of social media pages, blogs, and news outlets constantly covering developments in artificial intelligence. We recommend finding one or two that you enjoy and following them to stay up to date on the latest news; this could be as little as five minutes a day glancing over new articles.

Next, experiment with the larger, more fine-tuned, and widely available AI resources such as ChatGPT, Bard (Google), DALL·E, Siri, and Midjourney. These more developed versions of AI are easy to use and understand and are largely free to the public.

Lastly, after you have familiarized yourself with some of the capabilities of the more mainstream tools, start to research your line of work or a niche that interests you and find what AI software is being developed for that specific purpose. In our case of architectural design, we recommend you begin experimenting with some of the more novel software we spoke of, such as Prome AI, Maket, or AIR (just to name a few). Don’t just test these for the sake of testing them; actively think about whether and how they could be implemented into your workflow or design process.

Alex Hamilton

University of Cincinnati, DAAP | Bachelor of Architecture | 2024

Jackman Land

University of Cincinnati, DAAP | Bachelor of Architecture | 2024

Nathan Rosenberger

University of Cincinnati, DAAP | Bachelor of Architecture | 2024

 

More information on AI and Prof. Ming Tang’s research: 

Ming Tang has conducted previous AI investigations, including Building Safety Analysis with Machine Learning, AI-driven design automation, and human-machine symbiosis for creative arts.

paper SpaceXR in HCI 2024 http://ming3d.com/new/2023/12/09/paper-in-hci-2024/ Sat, 09 Dec 2023 21:19:23 +0000 http://ming3d.com/new/?p=9802 Our paper “SpaceXR: Virtual Reality and Data Mining for Astronomical Visualization” has been accepted at the 26th HCI International Conference. Washington DC, USA. 29 June – 4 July 2024.
Authors: Mikhail Nikolaenko, Ming Tang

Abstract

This paper presents the “SpaceXR” project, which integrates data science, astronomy, and Virtual Reality (VR) technology to deliver an immersive and interactive educational tool. It is designed to cater to a diverse audience, including students, academics, space enthusiasts, and professionals, offering an easily accessible platform through VR headsets. The VR application offers a data-driven representation of celestial bodies, including the planets and the sun within our solar system, guided by data from the NASA and Gaia databases. It empowers users with interactive capabilities encompassing scaling, time manipulation, and object highlighting. The potential applications span from elementary educational contexts, such as teaching the star system in astronomy courses, to advanced astronomical research scenarios, such as analyzing spectral data of celestial objects identified by Gaia and NASA. By adhering to emerging software development practices and employing a variety of conceptual frameworks, this project yields a fully immersive, precise, and user-friendly 3D VR application that relies on real, publicly available databases to map celestial objects.
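To illustrate the kind of data-driven mapping the abstract describes (a hedged sketch, not the SpaceXR codebase; the query, catalog table, and coordinate handling are assumptions), star positions can be pulled from the public Gaia archive and converted into Cartesian coordinates for placement in a 3D scene:

```python
# Hedged sketch (not the SpaceXR implementation): pull nearby stars from the
# Gaia archive and convert them to Cartesian coordinates that a VR engine
# could place as objects. Requires astroquery and numpy.
import numpy as np
from astroquery.gaia import Gaia

# Stars with parallax > 50 mas lie within roughly 20 parsecs of the Sun.
job = Gaia.launch_job(
    "SELECT TOP 100 source_id, ra, dec, parallax "
    "FROM gaiadr3.gaia_source "
    "WHERE parallax > 50"
)
stars = job.get_results()

ra = np.radians(np.asarray(stars["ra"], dtype=float))          # degrees -> radians
dec = np.radians(np.asarray(stars["dec"], dtype=float))
dist_pc = 1000.0 / np.asarray(stars["parallax"], dtype=float)  # mas -> parsecs

# Equatorial spherical coordinates -> Cartesian (parsecs), e.g. for scene placement.
x = dist_pc * np.cos(dec) * np.cos(ra)
y = dist_pc * np.cos(dec) * np.sin(ra)
z = dist_pc * np.sin(dec)

for sid, xi, yi, zi in zip(stars["source_id"], x, y, z):
    print(f"{sid}: ({xi:.2f}, {yi:.2f}, {zi:.2f}) pc")
```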

Check more project details on Solar Systems in VR. 

CVG Airport renovation http://ming3d.com/new/2023/11/21/cvg-airport-renovation/ Tue, 21 Nov 2023 02:15:38 +0000 http://ming3d.com/new/?p=9747 Cincinnati/Northern Kentucky International Airport (CVG) renovation project.

This dynamic course delves into designing human-centric, technologically advanced retail spaces at CVG, addressing contemporary challenges. Collaborating directly with CVG, we focus on conceptualizing the “Future CVG Experience,” exploring pivotal questions: envisioning the future look of CVG, the transformative impact of AR and VR on airport experiences, integrating the metaverse and immersive technologies into retail, and the potential for public art and recreational programs to enrich the traveler’s journey.

Faculty: Ming Tang, Director of XR-Lab, DAAP, UC. Thanks for the support from Josh Edwards at CVG, and from Chris Collins and Eric Camper at UC SIM.

Twelve proposed scenarios of future CVG. 

Students: ARCH 7014, Fall 2023.

Stephanie Ahmed, Ben Aidt, Thimesha Amarasena, Heather Cheng, Stephanie Circelli, Catherine D’Amico, Gabby Dashiell, Nikunj Deshpande, Carson Edwards, Olufemi Faminigba, Christopher Fultz, Emma Hausz, Jinfan He, Haley Heitkamp, Robin Jarrell, Emily Jaster, Bhaskar Jyoti Kalita, Analise Kandra, Sreya Killamshetty, Japneet Kour, Thomas Magee, Mea McCormack, Sepideh Miraba, Dan O’Neill, Shailesh Padalkar, Gaurang Pawar, Urvi Prabhu, Michael Rinaldi-Eichenberg, Kelby Rippy, Will Roberts, Chris Schalk, Miles Sletto, Lizzy Sturgeon, Shruthi Sundararajan, Erika VanSlyke, Clayton Virzi, Yue Wu

The heatmap represents fixation and gaze data.
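For readers curious how such a heatmap is typically produced (a hedged sketch on synthetic data, not the XR-Lab pipeline), gaze samples can be binned into a 2D histogram and smoothed:

```python
# Hedged sketch: turn normalized gaze samples (x, y in 0..1) into a fixation
# heatmap with a 2D histogram and Gaussian blur. Illustrative only; requires
# numpy, scipy, and matplotlib.
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Stand-in data: gaze clustered around two areas of interest on the image.
gaze = np.vstack([
    rng.normal([0.35, 0.55], 0.05, size=(400, 2)),
    rng.normal([0.70, 0.40], 0.04, size=(250, 2)),
]).clip(0, 1)

h, w = 720, 1280  # resolution of the stimulus image
counts, _, _ = np.histogram2d(
    gaze[:, 1] * h, gaze[:, 0] * w, bins=(h // 8, w // 8), range=[[0, h], [0, w]]
)
heatmap = gaussian_filter(counts, sigma=3)  # smooth raw counts into a heatmap

plt.imshow(heatmap, cmap="hot", extent=[0, w, h, 0])
plt.title("Gaze fixation heatmap (synthetic data)")
plt.colorbar(label="relative fixation density")
plt.show()
```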

Check more research on eye-tracking conducted by Prof. Tang at XR-Lab. >>

paper on JEC http://ming3d.com/new/2023/11/19/paper-on-jec/ Sun, 19 Nov 2023 14:32:37 +0000 http://ming3d.com/new/?p=9759

Paper accepted in the Journal of Experimental Criminology.

Cory P. Haberman, Ming Tang, JC Barnes, Clay Driscoll, Bradley J. O’Guinn, Calvin Proffit, The Effect of Checklists on Evidence Collection During Initial Investigations: A Randomized Controlled Trial in Virtual Reality. Journal of Experimental Criminology.

Objective: To examine the impact of an investigative checklist on evidence collection by police officers responding to a routine burglary investigation.

Methods: A randomized controlled trial was conducted in virtual reality to test the effectiveness of an investigative checklist. Officers in the randomly assigned treatment group (n = 25) were provided with a checklist during the simulated investigation. Officers in the control group (n = 26) did not have access to the checklist at any time. The checklist included five evidence items commonly associated with burglary investigations.

Results: Officers who were randomly provided with an investigative checklist were significantly more likely to collect two evidence items located outside of the virtual victim’s home. Treatment and control officers were about equally likely to collect the three evidence items located inside the residence.

Conclusions: Investigative checklists represent a promising new tool officers can use to improve evidence collection during routine investigations. More research is needed, however, to determine whether checklists improve evidence collection or case clearances in real-life settings. Virtual reality simulations provide a promising tool for collecting data on situations that are otherwise difficult or complex to simulate.

Keywords: Investigations, Burglary, Checklists, Policing, Experiment, Randomized controlled trial
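The headline comparison in the Results is a difference in collection rates between the checklist and no-checklist groups. Below is a minimal sketch of that kind of two-proportion comparison on made-up counts (the paper's actual data and analysis are not reproduced here):

```python
# Hedged sketch with hypothetical counts, not the paper's data: compare how
# often treatment (checklist) and control officers collected one evidence item.
from scipy.stats import fisher_exact

collected_treatment, n_treatment = 18, 25   # checklist group (n = 25)
collected_control, n_control = 9, 26        # no-checklist group (n = 26)

table = [
    [collected_treatment, n_treatment - collected_treatment],
    [collected_control, n_control - collected_control],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```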

More information on this VR police training project is available here.

UHP Spring 2024 http://ming3d.com/new/2023/08/28/uhp-spring-2024/ Mon, 28 Aug 2023 17:21:33 +0000 http://ming3d.com/new/?p=9694 ARCH3051. UHP Spring 2024 course: 

Human-Computer Interaction in the Age of Extended Reality & Metaverse

Ming Tang. Professor, Director of the Extended Reality Lab (XR-Lab), UC Digital Futures.

Institute for Research in Sensing (IRiS); Industry 4.0 & 5.0 Institute (I45I)
School of Architecture and Interior Design, College of Design Architecture Art and Planning
Class Time: T. TH 12:30pm – 1:50pm. Spring Semester. 2024. 

Location: 4425 E CGC Lab @ DAAP

Note: This course is open to undergraduate students in the UC Honors Program.

Prerequisite: none. No prior skills are needed for this introductory-level course on XR.

Course Description:

This seminar course focuses on real-world problem solving using extended reality applications, with an in-depth investigation of the intersection of human-computer interaction (HCI) with immersive visualization technologies, including Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), the Metaverse, and Digital Twins. The class will explore new immersive experiences in the virtual realm and analyze human perception through various sensory inputs. Students will learn both the theoretical framework and hands-on skills for XR development. The course will provide students with exposure to cutting-edge AR/VR technologies at the Digital Futures building. Students are encouraged to propose their own individual or group research on the subject of future HCI with XR.

Course Goals:

Students completing this course will be able to:

  • Identify the real-world problems which might be solved through XR technology.
  • Identify the opportunities and challenges of building an inclusive metaverse accessible to everyone.
  • Identify theories and research methods related to XR and Metaverse.

Students will join a series of lectures hosted at the XR-Lab in the UC Digital Futures building:

  • History and theory related to XR (VR+AR+MR)
  • Emerging trends related to the Metaverse and its impact on society
  • Theories related to Simulation & Simulacrum
  • Digital Human: perception and spatial experience
  • Biometric data and sensory technologies

Beyond the theories, students will gain hands-on skills and demonstrate XR development techniques:

  • VR development or
  • AR development

Required Text(s): none

Recommended Text(s): various online lectures and readings

Students are strongly encouraged to look at Metaverse-related products and publications. As students develop their own research, these resources become invaluable in terms of inspiration and motivation. The instructor supports all attempts by students to experiment with techniques that interest and challenge them above and beyond what is taught in the classroom.

Required Materials: 

  • Access to a computer at CGC Lab at DAAP.
  • VR headset (Optional, provided by XR-Lab)

Software: Unreal Engine 5, available at the DAAP CGC Lab.

Weekly schedule:

  1. History and the workflow of metaverse development 
  2. Project management at GitHub (Inspiration from the film due) 
  3. Modeling ( XR vignette proposal due)
  4. Material & Lighting (Project development start) 
  5. Blueprint, node-based programming 
  6. Character and Animation
  7. Interaction and UX; deploy to XR
  8. Project development time
  9. Project development time
  10. Project development time
  11. Project development time
  12. Project development time
  13. Project development time
  14. Project development time
  15. Project development time
  16. End-of-semester gallery show at the Digital Futures building

Examples of student projects on XR development

“Reality is just the beginning.”

Protected: MIND VR http://ming3d.com/new/2023/07/29/1819-venture-lab/ Sat, 29 Jul 2023 14:56:48 +0000 http://ming3d.com/new/?p=9671

This content is password protected.

Digital Human http://ming3d.com/new/2023/07/06/meta-human/ Thu, 06 Jul 2023 01:22:19 +0000 http://ming3d.com/new/?p=9661 Facial motion animation experiments at XR-Lab. The digital human project explores high-fidelity digital human modeling and animation powered by ChatGPT and OpenAI.

We are also developing full-body motion capture through a VR tracking system and Unreal’s MetaHuman.

 

 
