
    Deep learning-based volumetric damage quantification using an inexpensive depth camera

    View/Open
    gomes_gustavo.pdf (7.023Mb)
    Date
    2018
    Author
    Gomes, Gustavo
    Abstract
    The aging of infrastructure in North America has driven the investigation of new structural health monitoring (SHM) solutions. Visual inspections are commonly performed for SHM, but they can be extensive and often rely on the inspector’s experience. Complex, expensive sensor setups are also used for SHM. Computer vision provides efficient alternatives to these procedures, allowing low-cost, remote data acquisition. In this study, a Faster Region-based Convolutional Neural Network (Faster R-CNN)-based damage detection method coupled with an inexpensive depth sensor is proposed. A database composed of 1091 images with a resolution of 853 × 1440 pixels, labeled for volumetric damage, is developed, and the deep learning network is modified, trained, and validated using the proposed database. The output from the Faster R-CNN is used as a starting point to identify the surface of the member and to segment and quantify damage. The methodology is validated using a polystyrene test rig with damage of known volumes, as well as reinforced concrete members. The trained Faster R-CNN achieved an average precision (AP) of 90.79%. Volume quantifications show a mean precision error (MPE) of 9.45% for distances from 100 cm to 250 cm between the element and the sensor. An MPE of 3.24% was also obtained for maximum damage depth measurements across the same distance range. Damage is detected, segmented, and quantified regardless of the distance between the member and the sensor, which allows the system to be implemented in unmanned vehicles for safe data acquisition in hazardous scenarios.
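    The volumetric quantification and the MPE metric described in the abstract can be sketched roughly as follows. This is a simplified illustration, not the thesis's actual pipeline: the function names, the flat reference surface, and the fixed per-pixel footprint are assumptions (in the real method the damage region comes from Faster R-CNN segmentation and the per-pixel footprint varies with sensor distance).

    ```python
    def damage_volume_cm3(depth_map, reference_depth_cm, pixel_area_cm2, mask):
        """Estimate damage volume by summing depth deviations beyond a
        reference surface over the masked (damaged) pixels.

        depth_map: 2D list of sensor-to-surface distances in cm.
        reference_depth_cm: distance to the undamaged (assumed planar) surface.
        pixel_area_cm2: real-world footprint of one pixel (assumed constant here).
        mask: 2D list of booleans marking pixels inside the damage region.
        """
        volume = 0.0
        for row_depths, row_mask in zip(depth_map, mask):
            for d, inside in zip(row_depths, row_mask):
                if inside and d > reference_depth_cm:
                    volume += (d - reference_depth_cm) * pixel_area_cm2
        return volume


    def mean_precision_error(measured, true):
        """MPE: mean absolute relative error between measured and true
        values, expressed as a percentage."""
        return 100.0 * sum(abs(m - t) / t for m, t in zip(measured, true)) / len(measured)
    ```

    For example, a 2×2 depth map with two pixels 2 cm and 3 cm deeper than a reference surface at 100 cm, with a 1 cm² pixel footprint, yields an estimated volume of 5 cm³.
    
    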
    URI
    http://hdl.handle.net/1993/33222
    Collections
    • FGS - Electronic Theses and Practica [25522]

    DSpace software copyright © 2002-2016  DuraSpace
    Contact Us | Send Feedback
    Theme by Atmire NV