Deep learning-based volumetric damage quantification using an inexpensive depth camera

Date
2018
Authors
Gomes, Gustavo
Abstract

The aging of infrastructure in North America has driven the investigation of new structural health monitoring (SHM) solutions. Visual inspections are commonly performed for SHM, but they can be time-consuming and often rely on the inspector’s experience. Complex, expensive sensor setups are also used for SHM. Computer vision provides efficient alternatives to these procedures, allowing low-cost, remote data acquisition. In this study, a Faster Region-based Convolutional Neural Network (Faster R-CNN)-based damage detection method coupled with an inexpensive depth sensor is proposed. A database of 1091 images with a resolution of 853 × 1440 pixels, labeled for volumetric damage, is developed, and the deep learning network is modified, trained, and validated using this database. The output of the Faster R-CNN is used as a starting point to identify the surface of the member and to segment and quantify the damage. The methodology is validated using a polystyrene test rig with damage of known volumes, as well as reinforced concrete members. The trained Faster R-CNN achieved an average precision (AP) of 90.79%. Volume quantification shows a mean precision error (MPE) of 9.45% for distances of 100 cm to 250 cm between the element and the sensor, and an MPE of 3.24% was obtained for maximum damage depth measurements over the same distance range. Damage is detected, segmented, and quantified regardless of the distance between the member and the sensor, which allows the system to be deployed on unmanned vehicles for safe data acquisition in hazardous scenarios.
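The abstract describes a pipeline in which a detected damage region and an aligned depth map are combined to estimate spall volume and maximum damage depth. The sketch below is a minimal illustration of that general idea, not the thesis's implementation: the function name estimate_damage_volume, the border-ring surface fit, and the use of camera focal lengths fx and fy to convert pixel footprints to metric area are all assumptions introduced here for clarity.

```python
# Minimal sketch (assumed pipeline, not the thesis implementation):
# given a detected bounding box and an aligned depth map, estimate the
# spall volume and maximum damage depth relative to the member surface.
import numpy as np

def estimate_damage_volume(depth_m, box, fx, fy, border=5):
    """depth_m: HxW depth map in metres, aligned to the RGB frame.
    box: (x0, y0, x1, y1) bounding box from the detector.
    fx, fy: camera focal lengths in pixels (sensor intrinsics).
    border: width of the ring inside the box used to fit the surface depth.
    Returns (volume_m3, max_depth_m)."""
    x0, y0, x1, y1 = box
    roi = depth_m[y0:y1, x0:x1]

    # Approximate the undamaged surface as the median depth of a ring just
    # inside the box edges (assumes the box slightly overshoots the spall).
    ring = np.concatenate([
        roi[:border, :].ravel(), roi[-border:, :].ravel(),
        roi[:, :border].ravel(), roi[:, -border:].ravel(),
    ])
    surface = np.median(ring[ring > 0])  # ignore invalid (zero) readings

    # Per-pixel damage depth: how far each measured point lies behind
    # the fitted surface (spalled material is farther from the camera).
    valid = roi > 0
    d = np.clip(roi - surface, 0.0, None) * valid

    # The metric footprint of one pixel at depth z is (z / fx) * (z / fy),
    # so each pixel contributes depth * footprint to the volume integral.
    pixel_area = (roi / fx) * (roi / fy)
    volume = float(np.sum(d * pixel_area))
    return volume, float(d.max())
```

In the study, the bounding box would come from the trained Faster R-CNN and the damage would additionally be segmented within it, so the summation would be restricted to segmented damage pixels rather than the whole box as in this simplified sketch.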

Keywords
Convolutional neural network, Concrete spalling, Volume quantification, Depth sensor, Deep learning