Meta-Learning for Point Cloud Analysis
Point clouds have attracted considerable attention from the research community due to their numerous applications in 3D computer vision. While learning-based approaches to point cloud problems have achieved impressive progress, generalization to unseen testing environments remains a major challenge because of the large discrepancies among data captured by different 3D sensors. Existing methods typically train a generic model and apply the same trained model to every test instance. This can be sub-optimal, since it is difficult for a single fixed model to handle all the variations encountered at test time. In this thesis, we propose novel frameworks for point cloud problems that adapt the model in an instance-specific manner during inference. The model is trained with a meta-learning scheme that equips it with the ability to adapt quickly and effectively at test time. First, we consider the problem of point cloud registration, where the objective is to estimate the 3D transformation that aligns a pair of partially overlapping point clouds. Next, we investigate the point cloud upsampling problem, where the goal is to generate high-resolution point clouds from sparse inputs. Experimental results demonstrate the effectiveness of our proposed frameworks in improving the performance of state-of-the-art models and achieving superior results.
Computer Vision, Deep Learning, Domain Adaptation, Meta Learning, Multi-Task Learning, Point Cloud Registration, Point Cloud Upsampling