VR for Police Training

Active Shooter Simulation

Develop several fully immersive 3D VR active shooter scenarios that can run on cost-effective commercially available VR hardware.

Final Report for OCJS Project

Develop and Assess Active Shooter Virtual Reality Training for Ohio Law Enforcement. PI: J.C. Barnes. Co-PI: Ming Tang. Office of Criminal Justice Services. $50,000 ($29,608). Period: 09.2020-09.2021.

Development of a Virtual Reality Augmented Violence Reduction Training System for Active and Mass Shooting Incidents. PI: Ed Latessa. Co-PIs: J.C. Barnes, Ming Tang, Cory Haberman, Dan Gerard, Tim Sabransky. $10,000. Start-up fund. UC Digital Futures anchor tenant cohort.

A Shimmer GSR sensor is used to measure physiological stress.
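As a rough illustration of how skin-conductance data of this kind can be summarized (this is a hypothetical sketch, not the project's actual analysis pipeline; the sampling rate and response threshold are illustrative assumptions):

```python
def scr_peaks(samples, threshold=0.05):
    """Count skin-conductance responses (SCRs) in a series of conductance
    samples (microsiemens): each rise of at least `threshold` above the
    preceding local minimum counts as one response.

    Note: threshold and logic are illustrative, not the study's method.
    """
    peaks = 0
    trough = samples[0]   # most recent local minimum
    rising = False        # currently inside a counted response?
    for s in samples[1:]:
        if s < trough:
            trough = s
            rising = False
        elif not rising and s - trough >= threshold:
            peaks += 1    # conductance rose enough above the trough
            rising = True
        elif rising and s <= trough + threshold / 2:
            trough = s    # response decayed; look for the next rise
            rising = False
    return peaks
```

A higher SCR count over a scenario is commonly read as a sign of greater physiological arousal, which is the kind of stress signal the sensor is used to capture here.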

Checklist

Using Checklists and Virtual Reality to Improve Police Investigations. Collaborative Research Advancement Grants. UC. $25,000. PI: Haberman. Co-PI: Tang, Barnes. Period: 07.2020-01.2022.

Team:

Ming Tang, Cory Haberman, J.C. Barnes, Cheryl Jonson, Dongrui Zhu, Heejin Lee, Jillian Desmond, Ruby Qiu, Snigdha Bhattiprolu, Rishyak Kommineni

Design Process

To create simulated human behavior, both during the active shooting event and in casual human dialogue, the team designed an AI system built on decision trees. Please watch the technique breakdown demo.
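A decision tree of this kind can be sketched in a few lines. The node conditions and action names below are hypothetical, invented for illustration; they are not the project's actual behavior logic:

```python
class Decision:
    """One internal node of a behavior decision tree: a test on the
    simulated world state, with a subtree (or leaf action) per branch."""
    def __init__(self, test, if_true, if_false):
        self.test, self.if_true, self.if_false = test, if_true, if_false

    def run(self, state):
        branch = self.if_true if self.test(state) else self.if_false
        # Leaves are plain action names; internal nodes recurse.
        return branch.run(state) if isinstance(branch, Decision) else branch

# Hypothetical NPC tree: react to gunfire first, otherwise handle dialogue.
npc_behavior = Decision(
    lambda s: s["shots_heard"],
    Decision(lambda s: s["exit_visible"], "flee_to_exit", "hide_under_cover"),
    Decision(lambda s: s["player_nearby"], "start_dialogue", "idle_walk"),
)
```

Each simulation tick, the engine would evaluate the tree against the current world state and play the animation or dialogue the returned action names.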

Safety Vests

Grant: Assess the effectiveness of Type 2 and Type 3 safety vests for day and night use-Phase. Ohio Department of Transportation. PI: John Ash. Co-PIs: Ming Tang, Julian Wang. $337,366.31 ($191,458.16 in FY2020 and $145,908.15 in FY2021). Period: 02.2020-02.2022.

Ming Tang led the modeling team that constructed the virtual reality driving simulation and conducted eye-tracking data collection to measure drivers' perception of the construction zone and of various vests, signage, and vehicles.

Work zones are an essential component of any state transportation agency’s construction and maintenance operations. As such, agencies apply numerous practices to keep their workers safe during construction operations. The Ohio Department of Transportation (ODOT) recently invested in several more advanced items to improve worker safety (and traveler safety, by hopefully reducing the number of crashes overall). Specifically, ODOT invested in Type 2 and 3 safety vests, halo lights, and reflectors on the back of dump trucks. In 2020, a team of researchers from the University of Cincinnati (UC) worked with the Ohio Department of Transportation to assess the effectiveness of safety vests for day and night use.

The simulation-based evaluation used measurements to create realistic retroreflective vests, lights, and other safety equipment in virtual scenarios. These items were then placed in different virtual work zone environments, each of which had different work zone setup conditions, traffic control, vests worn by workers, time of day/ambient lighting, etc. Through an eye-tracking experiment measuring participants’ gaze on workers in different virtual work zone scenarios and a driving simulator experiment in which participants drove through virtual work zones and were asked follow-up questions on worker conspicuity, subjective and objective measures of worker visibility were obtained.
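One simple objective measure of the kind described above is total dwell time on a worker's area of interest (AOI). The sketch below is an assumption about how such a metric can be computed from raw gaze samples; the AOI rectangle and 60 Hz sampling rate are illustrative, not the study's actual parameters:

```python
def dwell_time_ms(gaze_points, aoi, rate_hz=60):
    """Total gaze dwell time on a rectangular AOI, in milliseconds.

    gaze_points: iterable of (x, y) screen/scene coordinates, sampled
                 at a fixed rate (rate_hz is an assumed 60 Hz here).
    aoi: (xmin, ymin, xmax, ymax) bounding box around the worker.
    """
    xmin, ymin, xmax, ymax = aoi
    # Count samples that land inside the AOI, then convert to time.
    hits = sum(1 for x, y in gaze_points
               if xmin <= x <= xmax and ymin <= y <= ymax)
    return hits * 1000.0 / rate_hz
```

Comparing dwell times on the same worker across vest types, lighting, and work zone setups gives an objective conspicuity measure to set beside the drivers' subjective follow-up answers.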

Using virtual reality and eye-tracking to evaluate the visibility of safety vests on highway construction sites.

To access copies of the final report, visit:  https://www.transportation.ohio.gov/programs/research-program/research-projects/02-research-projects  

This research was sponsored by the Ohio Department of Transportation and the Federal Highway Administration.

Computer Vision of Building Roof Analysis

Open source shading device design tool

Developed by Mark Landis as part of his M.Arch thesis work, 2019, University of Cincinnati.

Thesis Chair: Ming Tang, RA, LEED AP

Committee Members: Pravin Bhiwapurkar, Amanda Webb.

This free, open-source tool introduces building simulations and some of their applications in architecture through plugins for Rhino. It helps designers create shading devices that save the building as much energy as possible while granting the designer enough freedom to determine what form the shades take. The idea is to ensure a shade performs functionally while allowing the designer to prioritize other aspects of the shading device that matter to them and their project, such as how the shades look, how they form a system on the building, how they are built, how much they cost, and more. The tool uses EnergyPlus for energy simulations of the space, and Radiance and Daysim for solar gain and solar position calculations.

Users can download the simplified tool and use it to create and document shading device iterations that all perform similarly. Each iteration should represent different design goals, aesthetics, material choices, etc., and be justified through a thoughtful narrative.


Credit: projects from ARCH3014. Spring semester. 2019. DAAP.UC

Prerequisite:

Watch Tutorial:  Performance-driven design Step 1

Tools to download and install:

The ghpython2.gha file needs to be unlocked and placed into the "special folder/components folder".

Make your own shading device and create an energy model

Watch Tutorial:  Performance driven design Step 2

Install: all source files and weather data are available on GitHub. Please choose the right file based on your Rhino version.

The notes below, Steps 1-15, are already covered in the video tutorial for Step 2 above.

Step 1 – Install the required plugins and programs referenced above or mentioned in the videos provided. Move the provided EPW file to your Documents or Downloads folder (not in a subfolder inside these areas), or simply to C:/ladybug/.

Step 2 – Open the Rhino file provided. Do not change any of the layer names or delete anything on the “Baseline” Model layer.

Step 3 – Open the provided Grasshopper script. If you get a message about any components not being recognized, take note of which plugin they are associated with and attempt to reinstall that plugin. Upon successfully opening the script, you will be prompted to open an EPW file. Select the EPW provided to you.

Step 4 – You should see a dialog box called "Shade Designer" with five steps under tabs and a debug tab. You do not need to interact with the Grasshopper script from this point on. Instructions are stated in the tool but are reiterated here as well.

Step 5 – Run the baseline model simulation by toggling the switch in the Step 1 tab. You should briefly see a command prompt with scrolling numbers. The process should take less than a minute. If this does not occur, make sure you installed OpenStudio on your C drive as instructed in the tutorial videos, and that you are using the provided EPW file without changing the file name.

Step 6 – Advance to the Step 2 tab on the dialog box. On the "set profile to shade" layer, draw one closed, planar, non-curvilinear polyline representing the area you want to shade. It should cover the whole window, but this can be approached in a number of ways: you could draw an outline around the whole window and shade it all at once, or draw a shape that can be tessellated across the window, such as a smaller square that is copied to shade the whole window. Every unique shape must be shaded separately, and you should only run one polyline at a time. If you intend to shade the window with a pattern involving more than one shape, return to this step after solving each portion of the window. Once you have drawn your profile, toggle the switch in the Step 2 tab. Be patient; this will take 30 seconds to a minute to update. For more clarification, or to see this process in action, see the provided tutorial video.

Step 7 – Once your computer has updated after Step 6, you should see the preview of a red form attached to the profile you drew. This is referred to as the shading volume. It represents the volume of sunlight that the design tool has determined should not be allowed into the space, for the purpose of saving energy. If you do not see this form, try incrementally changing the division numbers in the debug tab. If this still does not work after a number of attempts, your profile to shade may be too complex, non-planar, curvilinear, or not closed, or there may be more than one curve. Check that none of these is the case, and as a last resort make your profile simpler. Toggle on and off the option to bake the shade volume, on the Step 3 tab, to make future steps simpler.
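Conceptually, a shading volume like this can be formed by extruding the window profile back along the directions of the unwanted sun rays. The sketch below is a simplified illustration of that idea only; the actual tool derives sun positions from the EPW weather data via its Ladybug/Radiance components, and the sun vector here is a made-up example:

```python
def extrude_profile(profile, sun_dir, depth=2.0):
    """Project each vertex of a window profile outward against an
    incoming sun-direction vector to form one face of a shading volume.

    profile: list of (x, y, z) vertices of the profile to shade.
    sun_dir: unit vector pointing from the sun toward the window
             (illustrative; the real tool computes this from the EPW).
    depth:   how far to extrude, in model units.
    """
    return [(x - sun_dir[0] * depth,
             y - sun_dir[1] * depth,
             z - sun_dir[2] * depth)
            for (x, y, z) in profile]
```

Repeating this for each problematic sun position and taking the union of the results yields the red volume the tool previews; any shade geometry that cuts that volume off from the window blocks the same set of rays.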

Step 8 – Use the shade volume you baked to create a shading device that blocks the entire shade volume from the window. Any form that completely or mostly cuts this volume off will perform similarly to any other iteration, allowing you freedom to design. You could use the baked shade volume to trim an extrusion that goes up and over the window, leaving just the intersection to block negative solar gains, or you could draw your own interpretation that closely follows the form of the shade volume but is not literally trimmed from it. Below is a series of designs that all do the same thing without looking the same. If your profile to shade was a shape meant to tessellate across the whole window, you can design a trimmed extrusion for just that shape and then copy the shading device to cover the window. If there are any other unique shapes on the screen, return to the set-profile step and repeat until your whole window has been covered. See the tutorial videos for additional clarification.

Step 9 – Set the shading device or devices to the "shading devices" layer, toggle the option to set them in the energy model, and then toggle the switch to run the second simulation. You will see the dialog box appear again, and the program will gather results for the baseline model with the addition of your shading devices.

Step 10 – You should now see results in the "Step 5" tab. Record these numbers in the provided Excel sheet. The Excel sheet will give you the numbers you need for the assignment you will turn in, and it helps me gather data on all of your projects more easily.

Step 11 – Once you have your basic design, think of your shade in a realistic context. Add elements (structural, material, or otherwise) to make your design more architecturally feasible. Think about what the shade is made of: is it wood, metal, concrete, or plastic? How big are the wood pieces or metal panels, and how thick is the structure? What type of wood, what color finish, what texture? Is it spanning too far to exist without supports? What does the structure look like? Model these elements and materials to take your paper-thin design and give it some reality. Set materials in Rhino and model elements with appropriate thicknesses. Create your own layers for these designs, apart from the ones provided, so as not to interfere with any future simulations.

Step 12 – Set your view to "render view" in the named views provided (see image below). Render this view; the presets are already established. If for some reason your design gets cut off in this view, you may adjust the camera to make it fit. Save this image as a PNG.

Step 13 – Use the Make2D command on your design and baseline model from the render view, and export the generated linework as a DWG.

Step 14 – Open the Illustrator file provided and replace the images and text with your own. Go to "File" and then "Place" to insert your PNG and DWG. The PNG should be imported at 100% of original size, with the bounding box aligned to the left side and touching the top and bottom edges. The DWG should sit on top of the image, outlining your design, and be imported at 39.61% to match the exact size of the image. Change the line weights of all the lines in the DWG to 2 pt. Use the numbers in the Excel file to add performance data in the bottom right of each sheet. Write a brief narrative for each design explaining what it is and why you designed it the way you did.

Step 15 – Repeat Steps 6 to 14 so that you have a total of three unique iterations. Use this opportunity to test different patterns, shading types, and material qualities, or to play with different scenarios concerning building program (should a shade differ in form for an office versus a store versus an apartment?) or any number of other conditions.


Additional Resources

Q&A and Troubleshooting

Q: Why is my HB simulation not running (the component is red in GH)?

A: Check your weather data EPW file. Move it to the C:/ladybug folder, relaunch Rhino and GH, then point to the new location.

Check your HB and LB files in the "special folder/user object folder" and make sure they are unlocked.

Check your ghpython2.gha. This file needs to be unlocked and placed into the "special folder/components folder".

Q: Why are there strange horizontal projections in my 3D mask?

A: If you are using Rhino 5, open the Rhino 6 version of Grasshopper. 

For other questions, please reach out to Mark Landis at Solconsults.

Article in IJSW Journal

Ming Tang's paper, "Analysis of Signage using Eye-Tracking Technology," is published in the Interdisciplinary Journal of Signage and Wayfinding, February 2020.

Abstract

Signs, in all their forms and manifestations, provide visual communication for wayfinding, commerce, and public dialogue and expression. Yet, how effectively a sign communicates and ultimately elicits a desired reaction begins with how well it attracts the visual attention of prospective viewers. This is especially the case for complex visual environments, both outside and inside of buildings. This paper presents the results of an exploratory research design to assess the use of eye-tracking (ET) technology to explore how placement and context affect the capture of visual attention. Specifically, this research explores the use of ET hardware and software in real-world contexts to analyze how visual attention is impacted by location and proximity to geometric edges, as well as elements of contrast, intensity against context, and facial features. Researchers also used data visualization and interpretation tools in augmented reality environments to anticipate human responses to alternative placement and design. Results show that ET methods, supported by the screen-based and wearable eye-tracking technologies, can provide results that are consistent with previous research of signage performance using static images in terms of cognitive load and legibility, and ET technologies offer an advanced dynamic tool for the design and placement of signage.


ACKNOWLEDGMENT
The research project is supported by the Strategic Collaborative/Interdisciplinary Award of the University of Cincinnati. Thanks for the support from Professor Christopher Auffrey and students from ARCH7014 (Fall 2019), ARCH8001 (Spring 2019), and ARCH4001 (Fall 2018) at the University of Cincinnati.

For more information on the wearable ET, screen-based ET, and VR-ET, please check out our research website, or contact Prof. Tang.