A 3D vision framework for generalized workpiece localization in robotic manufacturing

dc.contributor.authorHernandez Villa, Alejandro
dc.contributor.examiningcommitteeKhoshdel, Vahab (Electrical and Computer Engineering)
dc.contributor.examiningcommitteeLiang, Xihui (Mechanical Engineering)
dc.contributor.supervisorKhoshdarregi, Matt
dc.date.accessioned2024-08-15T21:21:17Z
dc.date.available2024-08-15T21:21:17Z
dc.date.issued2024-08-14
dc.date.submitted2024-08-11T20:09:01Zen_US
dc.date.submitted2024-08-14T21:53:36Zen_US
dc.degree.disciplineMechanical Engineering
dc.degree.levelMaster of Science (M.Sc.)
dc.description.abstractAutomated workpiece localization through CAD-based Point Cloud Registration (PCR) plays a key role in enabling adaptive robotic tasks, e.g., mapping toolpaths from a CAD model to the current workpiece location. However, existing approaches often lack the robustness required for full autonomy and need fine-tuning and operator intervention, especially in the presence of data corrupted by scene complexity, missing workpiece information, or sensor inaccuracies. These shortcomings become more evident when generalizing to the diversity of objects in highly variable scenarios such as high-mix manufacturing. In this work, we address this problem by solving the CAD-to-workpiece alignment as a partial-to-partial PCR on 3D vision data, from a deep learning and optimization standpoint. Our registration pipeline consists of three modules. First, we leverage pre-trained computer vision models to provide workpiece-agnostic segmentation. Then, a coarse image-guided alignment step is applied, followed by a novel registration architecture with joint global feature embedding and multi-head cross-attention. Additionally, we introduce a novel gradient-based optimization method for fine point cloud registration. Registration quality is evaluated primarily through registration success rate, rotation and translation accuracy via isotropic and anisotropic errors, and runtime. Registration robustness is demonstrated by testing across three distinct datasets that assess the proposed solution with 1) real data of industrial objects, 2) a diversity of industrial-looking objects produced by a proposed rule-based synthetic data generation engine with common point cloud corruptions, and 3) an object-centric dataset that is a standard benchmark in the point cloud research community.
Our optimization-based method achieves a 99.8% success rate and significantly improves localization robustness compared to existing techniques, demonstrating its potential for real-world scenarios without parameter fine-tuning. Moreover, the proposed optimization-based strategy can register partially overlapping point clouds in a coarse-to-fine manner without an initial guess. Our findings suggest that our pipeline, whether hybrid or purely optimization-based, marks a step towards fully autonomous systems in manufacturing.
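The abstract mentions gradient-based optimization for fine point cloud registration. As an illustration only (not the thesis method, whose architecture and loss are not given here), a minimal sketch of the general idea is gradient descent over a rigid transform, parameterized as an axis-angle vector `w` and a translation `t`, minimizing a point-to-point error; correspondences are assumed known for simplicity, which real PCR must estimate:

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def gradient_register(src, dst, iters=1000, lr=0.1, eps=1e-6):
    """Recover (R, t) aligning src to dst by plain gradient descent on the
    mean squared point-to-point error. Gradient in t is analytic; gradient
    in the rotation parameters uses central finite differences for clarity."""
    def loss(w, t):
        return np.mean(np.sum((src @ rodrigues(w).T + t - dst) ** 2, axis=1))

    w, t = np.zeros(3), np.zeros(3)
    for _ in range(iters):
        resid = src @ rodrigues(w).T + t - dst
        grad_t = 2.0 * resid.mean(axis=0)   # analytic gradient w.r.t. t
        grad_w = np.zeros(3)                # finite differences w.r.t. w
        for i in range(3):
            dw = np.zeros(3)
            dw[i] = eps
            grad_w[i] = (loss(w + dw, t) - loss(w - dw, t)) / (2.0 * eps)
        w -= lr * grad_w
        t -= lr * grad_t
    return rodrigues(w), t
```

A practical pipeline would replace the fixed correspondences with nearest-neighbor search (as in ICP) or learned matches, and would typically use an automatic-differentiation framework rather than finite differences.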
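The isotropic rotation and translation errors cited in the evaluation are standard PCR metrics; a minimal sketch of how they are commonly computed (the thesis may define them differently) uses the geodesic angle between rotation matrices and the Euclidean norm of the translation difference:

```python
import numpy as np

def isotropic_errors(R_gt, t_gt, R_est, t_est):
    """Isotropic rotation error (geodesic angle between rotations, in
    degrees) and isotropic translation error (Euclidean norm)."""
    cos_angle = np.clip((np.trace(R_gt.T @ R_est) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(t_gt - t_est)
    return rot_err, trans_err
```

Anisotropic variants report the error per axis (e.g., differences of Euler angles and of each translation component) instead of a single scalar.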
dc.description.noteOctober 2024
dc.identifier.urihttp://hdl.handle.net/1993/38387
dc.language.isoeng
dc.rightsopen accessen_US
dc.subjectHigh-mix low volume manufacturing
dc.subjectAutomated localization
dc.subjectPoint cloud registration
dc.titleA 3D vision framework for generalized workpiece localization in robotic manufacturing
dc.typemaster thesisen_US
local.subject.manitobano
oaire.awardNumberRGPIN-2019-05873
oaire.awardURIhttps://www.nserc-crsng.gc.ca/ase-oro/Details-Detailles_eng.asp?id=760204
project.funder.identifierhttp://dx.doi.org/10.13039/501100000038
project.funder.nameNatural Sciences and Engineering Research Council of Canada
Files
Original bundle
Name: hernandez_villa_alejandro.pdf
Size: 21.69 MB
Format: Adobe Portable Document Format
Description: A 12-month embargo is requested to allow for the preparation, revision, and publication of at least one journal article.
License bundle
Name: license.txt
Size: 770 B
Format: Item-specific license agreed to upon submission