Open-source shading device design tool

Developed by Mark Landis as part of his M.Arch thesis work, 2019, at the University of Cincinnati.

Thesis Chair: Ming Tang, RA, LEED AP

Committee Members: Pravin Bhiwapurkar, Amanda Webb.

This free, open-source tool introduces you to building simulation and some of its applications in architecture through plugins for Rhino. It helps designers create shading devices that save the building as much energy as possible while granting the designer enough freedom to determine what form the shades take. The idea is to ensure a shade will perform functionally while allowing the designer to prioritize other aspects of the shading device that are important to them and their project, such as how the shades look, how they form a system on the building, how they are built, how much they cost, and more. The tool uses EnergyPlus for energy simulation of the space, and Radiance and Daysim for solar-gain and solar-position calculations.

Users can download and use the simplified tool provided to create and document shading device iterations that all perform in a similar manner. Each iteration should represent different design goals, aesthetics, material choices, etc., and be justified through a thoughtful narrative.

 

Credit: projects from ARCH3014, Spring semester 2019, DAAP, UC.

Prerequisite:

Watch Tutorial: Performance-driven design Step 1

Tools that need to be downloaded and installed:

The ghpython2.gha file needs to be unlocked and placed in the “special folder/components folder”.
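A hedged sketch of that unlock step on Windows, runnable in any Python 3: copy ghpython2.gha into Grasshopper's components folder (normally %APPDATA%\Grasshopper\Libraries) and remove the Zone.Identifier stream that marks downloaded files as blocked. The source path below is hypothetical; adjust it to where you saved the file.

    import os
    import shutil

    # Hypothetical download location; substitute where you actually saved the file.
    src = os.path.join(os.path.expanduser("~"), "Downloads", "ghpython2.gha")
    lib = os.path.join(os.environ["APPDATA"], "Grasshopper", "Libraries")

    dst = os.path.join(lib, os.path.basename(src))
    shutil.copy2(src, dst)
    try:
        # Deleting this NTFS stream is what "unblocks" a downloaded file.
        os.remove(dst + ":Zone.Identifier")
        print("Unblocked", dst)
    except OSError:
        print("No block marker found on", dst)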

Making your own shading device and creating an energy model

Watch Tutorial: Performance-driven design Step 2

Install: All source files and weather data are available on GitHub. Please choose the correct file based on your Rhino version.

Note: the following Steps 1-15 are already covered in the video tutorial for Step 2 above.

Step 1 – Install the required plugins and programs referenced above or mentioned in the videos provided. Then move the EPW file provided to your Documents or Downloads folder (not inside a subfolder within these areas), or simply to C:/ladybug/.
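A minimal sketch of that file move, runnable in any Python 3 on Windows (the EPW file name here is hypothetical; keep your real file name unchanged):

    import os
    import shutil

    # Hypothetical source location and file name; substitute your actual EPW.
    src = os.path.join(os.path.expanduser("~"), "Downloads", "USA_OH_Cincinnati.epw")
    dst_dir = "C:/ladybug"

    # Create C:/ladybug if it does not exist, then copy the EPW into it.
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    shutil.copy2(src, dst_dir)
    print("EPW copied to", os.path.join(dst_dir, os.path.basename(src)))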

Step 2 – Open the Rhino file provided. Do not change any of the layer names or delete anything on the “Baseline Model” layer.

Step 3 – Open the provided Grasshopper script. If you get a message about any components not being recognized, take note of which plugin they are associated with and attempt to reinstall that plugin. Upon successfully opening the script, you will be prompted to open an EPW file. Select the EPW provided to you.

Step 4 – You should see a dialog box called “Shade Designer” with five step tabs and a debug tab. You do not need to interact with the Grasshopper script from this point on. Instructions are stated in the tool itself but are reiterated here as well.

Step 5 – Run the baseline model simulation by toggling the switch in the step 1 tab. You should see a command prompt with scrolling numbers appear briefly; the process should take less than a minute. If this does not occur, make sure you installed OpenStudio on your C: drive as instructed in the tutorial videos provided, and ensure you are using the EPW file provided without changing the file name.

Step 6 – Advance to the step 2 tab on the dialog box. On the “set profile to shade” layer, draw one closed, planar, non-curvilinear polyline representing the area you want to shade. This should account for the whole window, but it can be approached a number of different ways: you could draw an outline around the whole window and shade it all at once, or you could draw a shape that can be tessellated across the whole window, such as a smaller square that is copied to shade the whole window. Every unique shape must be shaded separately, so only run one polyline at a time. If you intend to shade the window using a pattern involving more than one shape, return to this step after solving each portion of the window. Once you have drawn your profile, toggle the switch in the step 2 tab. Be patient; this will take 30 seconds to a minute to update. For more clarification, or to see this process at work, see the provided tutorial video. A quick validity check for your profile is sketched below.
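A hedged sanity check, run from Rhino's Python editor (_EditPythonScript), that your profile meets the Step 6 requirements (one curve, closed, planar, polyline). It is an optional helper, not part of the tool itself:

    import rhinoscriptsyntax as rs

    # Select the candidate profile curve(s) drawn on the "set profile to shade" layer.
    ids = rs.GetObjects("Select the profile to shade", rs.filter.curve)
    if not ids:
        print("Nothing selected.")
    elif len(ids) > 1:
        print("More than one curve selected; run only one polyline at a time.")
    else:
        crv = ids[0]
        # All three checks should print True for a valid profile.
        print("Closed:", rs.IsCurveClosed(crv))
        print("Planar:", rs.IsCurvePlanar(crv))
        print("Polyline (non-curvilinear):", rs.IsPolyline(crv))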

Step 7 – Once your computer has updated after step 6, you should see a preview of a red form attached to the profile you drew. This is referred to as the shading volume. It represents the volume of sunlight that the design tool has determined should not be allowed into the space, for the purpose of saving energy. If you do not see this form, try incrementally changing the division numbers in the debug tab. If this still does not work after a number of attempts, your profile to shade may be too complex, non-planar, curvilinear, not closed, or made up of more than one curve. Check that none of these is the case, and as a last resort, make your profile simpler. Toggle on and off the option to bake the shade volume, on the step 3 tab, to make future steps simpler.

Step 8 – Use the shade volume you baked to create a shading device that blocks the entire shade volume from the window. Any form that completely or mostly cuts this volume off will perform similarly to any other iteration, allowing you the freedom to design. You could use the baked shade volume to trim an extrusion that goes up and over the window, leaving just the intersection to block negative solar gains, or you could draw your own interpretation that closely follows the form of the shade volume but is not literally trimmed from it. Below is a series of designs that are all doing the same thing without looking the same. If your profile to shade was a shape meant to tessellate across the whole window, you can design a trimmed extrusion for just that shape and then copy the shading device to cover the window. If there are any other unique shapes in the screen, return to the set profile step and repeat until your whole window has been covered. See the tutorial videos for additional clarification. A sketch of the trimmed-extrusion approach follows.
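A minimal sketch of the trimmed-extrusion approach using rhinoscriptsyntax: intersect a closed extrusion you modeled over the window with the baked shade volume, keeping only the overlap. Both inputs must be closed solids; the prompts and workflow here are an assumption, not part of the provided tool:

    import rhinoscriptsyntax as rs

    extrusion = rs.GetObject("Select your extrusion (a closed solid)")
    shade_vol = rs.GetObject("Select the baked shade volume")
    if extrusion and shade_vol:
        # delete_input=False keeps the originals, so the shade volume can be
        # reused for other profile shapes; delete the leftovers manually.
        result = rs.BooleanIntersection([extrusion], [shade_vol], delete_input=False)
        if result:
            print("Trimmed shading device created.")
        else:
            print("Intersection failed; check that both inputs are closed solids.")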

Step 9 – Set the shading device or devices to the “shading devices” layer, toggle the option to set them in the energy model, and then toggle the switch to run the second simulation. You will see the dialog box appear again, and the program will gather results for the baseline model with the addition of your shading devices.
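An optional helper for the layer step, assuming the layer in the provided Rhino file is named exactly “shading devices”:

    import rhinoscriptsyntax as rs

    objs = rs.GetObjects("Select your shading device geometry")
    if objs:
        # The layer should already exist in the provided file; create it only
        # as a fallback, and make sure the name matches the file exactly.
        if not rs.IsLayer("shading devices"):
            rs.AddLayer("shading devices")
        for obj in objs:
            rs.ObjectLayer(obj, "shading devices")
        print(len(objs), "object(s) moved to the 'shading devices' layer.")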

Step 10 – You should now see results in the “step 5” tab. Record these numbers in the provided Excel sheet. The Excel sheet will give you the numbers you need for the assignment you will turn in, and it will help me gather data on all of your projects more easily.

Step 11 – Once you have your basic design, we are asking you to think of your shade in a realistic context. Add elements, structural, material, or otherwise, to make your design more architecturally feasible. Think about what the shade is made of: is it wood, metal, concrete, plastic? How big are the wood pieces or metal panels, and how thick is the structure? What type of wood, what color finish, what texture? Is it spanning too far to exist without supports? What does the structure look like? Model these elements and materials to take your paper-thin design and give it some reality. Set materials in Rhino and model elements with appropriate thicknesses. Create your own layers for these designs, apart from the ones provided, so as not to interfere with any future simulations.

Step 12 – Set your view to “render view” in the named views provided (see image below). Render this view; the presets are already established. If for some reason your design gets cut off in this view, you may adjust the camera to make it fit. Save this image as a PNG.

Step 13 – Use the Make2D command on your design and baseline model from the render view, and export the linework generated as a DWG.

Step 14 – Open the Illustrator file provided and replace the images and text with your own. Go to “File” and then “Place” to insert your PNG and DWG. The PNG should be imported at 100% of its original size, with its bounding box aligned to the left side and touching the top and bottom edges. The DWG should sit on top of the image, outlining your design, and be imported at 39.61% to match the exact size of the image. Change the line weights of all the lines in the DWG to 2 pt. Use the numbers in the Excel file to add performance data to the bottom right side of each sheet. Write a brief narrative for each design explaining what it is and why you designed it the way you did.

Step 15 – Repeat steps 6 to 14 so that you have a total of 3 unique iterations. You can use this opportunity to test different patterns, shading types, or material qualities; to play with different scenarios concerning building program (should a shade take a different form for an office vs. a store vs. an apartment?); or to explore any number of other conditions.

Credit: projects from ARCH3014, Spring semester 2019, DAAP, UC.

Additional Resources

Q&A and Troubleshooting

Q: Why is my HB simulation not running? (The component turns red in GH.)

A: Check your weather data (EPW) file. Move it to the C:/ladybug folder, relaunch Rhino and GH, then point to the new location. A quick path check is sketched after this answer.

Check your HB and LB files in the “special folder/user object folder” and make sure they are “unlocked”.

Check your ghpython2.gha file. It needs to be unlocked and placed in the “special folder/components folder”.
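A quick diagnostic for the EPW issue above, runnable in any Python: confirm the file really is in C:/ladybug and that the path contains no spaces (spaces in file paths are a common cause of failed EnergyPlus runs). The file name below is hypothetical:

    import os

    epw = "C:/ladybug/your_weather_file.epw"  # replace with your actual EPW name
    print("File exists:", os.path.isfile(epw))
    print("Path has no spaces:", " " not in epw)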

Q: Why are there strange horizontal projections in my 3D mask?

A: If you are using Rhino 5, open the Rhino 6 version of Grasshopper. 

For other questions, please reach out to Mark Landis at Solconsults.

Article in the IJSW journal

Ming Tang’s paper “Analysis of Signage Using Eye-Tracking Technology” is published in the Interdisciplinary Journal of Signage and Wayfinding, 02.2020.

Abstract

Signs, in all their forms and manifestations, provide visual communication for wayfinding, commerce, and public dialogue and expression. Yet, how effectively a sign communicates and ultimately elicits a desired reaction begins with how well it attracts the visual attention of prospective viewers. This is especially the case for complex visual environments, both outside and inside of buildings. This paper presents the results of an exploratory research design to assess the use of eye-tracking (ET) technology to explore how placement and context affect the capture of visual attention. Specifically, this research explores the use of ET hardware and software in real-world contexts to analyze how visual attention is impacted by location and proximity to geometric edges, as well as elements of contrast, intensity against context, and facial features. Researchers also used data visualization and interpretation tools in augmented reality environments to anticipate human responses to alternative placement and design. Results show that ET methods, supported by the screen-based and wearable eye-tracking technologies, can provide results that are consistent with previous research of signage performance using static images in terms of cognitive load and legibility, and ET technologies offer an advanced dynamic tool for the design and placement of signage.


ACKNOWLEDGMENT
The research project is supported by the Strategic Collaborative/Interdisciplinary Award of the University of Cincinnati. Thanks for the support from Professor Christopher Auffrey and students from ARCH7014 (Fall 2019), ARCH8001 (Spring 2019), and ARCH4001 (Fall 2018) at the University of Cincinnati.

For more information on the wearable ET, screen-based ET, and VR-ET, please check out our research website, or contact Prof. Tang.

 

Digital Twins of Ohio Highways: Training Simulation for Snowplow Drivers

Ming Tang led the team that constructed six large-scale digital twins of Ohio highways for snowplow drivers. The road models, covering 70 miles across three Ohio counties, were based on real-site GIS and TOPO data.

“Evaluate Opportunities to Provide Training Simulation for ODOT Snow and Ice Drivers”. Phase 2. Ohio Department of Transportation. PI: John Ash. Co-PI: Ming Tang, Frank Zhou, Mehdi Norouzi, $952,938. Grant #: 1014440. ODOT 32391 / FHWA Period: 01.2019-03.2022.

“Evaluate Opportunities to Provide Training Simulation for ODOT Snow and Ice Drivers”. Phase 1. Ohio Department of Transportation. PI: Jiaqi Ma. Co-PI: Ming Tang, Julian Wang. $39,249. ODOT 32391 / FHWA. Period: 01.2018-01.2019.

 

 

Lorain County. District 3.

Hocking Mountain. District 10.

City of Columbus-South. A

City of Columbus-West. B

City of Columbus-North. C

 

Publications

Raman, M., Tang, M. 3D Visualization Development of Urban Environments for Simulated Driving Training and VR Development in Transportation Systems. ASCE ICTD 2023 Conference. Austin, TX. 06.2023.

Publication in the Urban Rail Transit journal

Paper published in the Urban Rail Transit journal.

This paper describes an innovative integration of eye-tracking (ET) with virtual reality (VR), and details the application of these combined technologies for the adaptive reuse redesign of the Wudaokou rail station in Beijing. The objective of the research is to develop a hybrid approach, combining ET and VR technologies, as part of an experimental study of how to improve wayfinding and pedestrian movement in crowded environments such as those found in urban subway stations during peak hours. Using ET analysis, design features such as edges and color contrast are used to evaluate several proposed rail station redesigns. Through VR and screen-based ET, visual attention and related spatial responses are tracked and analyzed for the selected redesign elements. This paper assesses the potential benefits of using ET and VR to assist identification of station design elements that will improve wayfinding and pedestrian movement, and describes how the combination of VR and ET can influence the design process. The research concludes that the combination of VR and ET offers unique advantages for modeling how the design of rail transit hub interiors can influence the visual attention and movement behavior of those using the redesigned station. This is especially true for crowded conditions in complex interior spaces. The use of integrated ET and VR technology is shown to inform innovative design approaches for facilitating improved wayfinding and pedestrian movement within redesigned rail stations.

Full paper: download PDF, read HTML

Check out Tang’s eye-tracking research with transit hub design studio ARCH4002, Spring 2018.

Mixed Reality for medical data

Grant: “Magic School Bus for Computational Cell”, the Arts, Humanities, & Social Sciences (AHSS) and Integrated Research Advancement Grants, UC. $8,550. PI: Tang. Co-PI: Zhang, T. 2016.

The AR & VR project for medical models: an animated heart for the Magic School Bus project at the University of Cincinnati.

 

Funded by the 2017 AHSS and Integrated Research Advancement Grant at UC, the “Magic School Bus for Computational Cell” project constructed a mixed reality visualization at the College of DAAP and College of Medicine by integrating virtual reality (VR) and augmented reality (AR) for molecular and cellular physiology research. The project employed state-of-the-art VR and AR software and hardware, which allow for creative approaches using holographic imaging and computer simulation. This project expanded our cutting-edge research in space modeling and architecture visualization into the new computational cell field, including the creation of 3D models of intestinal tubes and the envisioning of cell changes through agent-based simulation.

PI: Ming Tang, Associate Professor, School of Architecture & Interior Design, College of DAAP.

Co-PI: Tongli Zhang, PhD, Assistant Professor, Department of Molecular and Cellular Physiology, College of Medicine.

Data Management: Tiffany Grant, PhD, Research Informationist, Health Sciences Library, College of Medicine.

The web3D model is here.