Description
Sources of video and photo content are rapidly increasing within construction. In addition to mobile phones and tablets, drones and even wearables are starting to generate their own views of "reality" from the job site. How can this information best be put to use during construction, and managed afterward for future retrieval? In this case-study-driven class, Oliver Smith of Skanska and Josh Kanner of Smartvid.io will explore how a new set of technologies is helping teams cope with the content proliferation, and is going further to elevate the power and value of this visual data. New techniques in machine learning make it possible to mine the native content, helping to tag and search the ever-growing set of videos and photos coming from the field. Once tagged, the content can be integrated with Building Information Modeling (BIM) for ongoing progress updates during the project and for other workflows. We will present examples of drone-generated and mobile-phone-generated field-progress capture. This session features Navisworks Manage, AutoCAD, and Revit. AIA Approved
Key Learnings
- Understand the types of visual documentation coming from the field
- Comprehend the basics of machine learning for speech and vision
- Describe BIM-driven reality-capture workflows during construction
- Identify the different types of visual-capture devices used in the field today