The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
Copyright notice
Kenshiro TAMATA
Osaka University
Tomohiro MASHITA
Osaka University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kenshiro TAMATA, Tomohiro MASHITA, "Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders" in IEICE TRANSACTIONS on Information,
vol. E105-D, no. 1, pp. 134-140, January 2022, doi: 10.1587/transinf.2021EDP7082.
Abstract: A typical approach to reconstructing a 3D environment model is scanning the environment with a depth sensor and fitting the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However, in some practical uses this assumption is unacceptable. Thus, a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in the feature point detection because a point cloud is basically a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore make the assumption that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep-learning-based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether a feature point pair is close enough to be considered a match, even when the feature point registration errors are large, and that our model estimates with higher accuracy than methods such as FPFH or 3DMatch. In addition, we conducted experiments on combinations of input point clouds, including local or global point clouds, both types of point cloud, and encoders.
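The paper's actual network is not reproduced here; as a rough, hypothetical illustration of the idea the abstract describes, the sketch below combines a PointNet-style local encoder (a shared per-point MLP followed by max pooling over a neighborhood around the feature point) with a global encoder over the whole cloud, and scores a descriptor pair with a contrastive margin loss in the spirit of metric learning. All function names, dimensions, and weights are invented for illustration; the weights are random and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def pointnet_encode(points, W1, W2):
    # Shared per-point MLP (two linear layers with ReLU), then max pooling:
    # a PointNet-style, permutation-invariant encoder.
    h = np.maximum(points @ W1, 0.0)   # (N, 64)
    h = np.maximum(h @ W2, 0.0)        # (N, 128)
    return h.max(axis=0)               # (128,) pooled feature

def describe(feature_point, cloud, k, W1l, W2l, W1g, W2g):
    # Local branch: encode the k nearest neighbours of the feature point,
    # centred on it, so the descriptor captures local shape.
    d = np.linalg.norm(cloud - feature_point, axis=1)
    patch = cloud[np.argsort(d)[:k]] - feature_point
    f_local = pointnet_encode(patch, W1l, W2l)
    # Global branch: encode the entire (centred) point cloud for context.
    f_global = pointnet_encode(cloud - cloud.mean(axis=0), W1g, W2g)
    return np.concatenate([f_local, f_global])         # (256,) descriptor

def contrastive_loss(f1, f2, is_match, margin=1.0):
    # Metric-learning objective: pull matching descriptors together,
    # push non-matching ones at least `margin` apart.
    dist = np.linalg.norm(f1 - f2)
    return dist**2 if is_match else max(0.0, margin - dist)**2

# Toy usage with random weights and clouds (no training, illustration only).
W1l, W2l = rng.normal(size=(3, 64)), rng.normal(size=(64, 128))
W1g, W2g = rng.normal(size=(3, 64)), rng.normal(size=(64, 128))
cloud_a = rng.normal(size=(500, 3))
cloud_b = cloud_a + rng.normal(scale=0.01, size=(500, 3))   # noisy rescan
p = cloud_a[0]
q = cloud_b[0] + rng.normal(scale=0.05, size=3)  # simulated registration error
fa = describe(p, cloud_a, 32, W1l, W2l, W1g, W2g)
fb = describe(q, cloud_b, 32, W1l, W2l, W1g, W2g)
loss = contrastive_loss(fa, fb, is_match=True)
```

Training would minimize this loss over many erroneous feature point pairs, which is what pushes the learned descriptor to tolerate registration error.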
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7082/_p
@ARTICLE{e105-d_1_134,
author={Kenshiro TAMATA and Tomohiro MASHITA},
journal={IEICE TRANSACTIONS on Information},
title={Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders},
year={2022},
volume={E105-D},
number={1},
pages={134-140},
abstract={A typical approach to reconstructing a 3D environment model is scanning the environment with a depth sensor and fitting the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However in some practical uses, this assumption is unacceptable. Thus, a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in the feature point detection because a point cloud is basically a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore make the assumption that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep learning based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether the feature point pair is close enough to be considered a match or not even when the feature point registration errors are large, and our model can estimate with higher accuracy in comparison to methods such as FPFH or 3DMatch. In addition, we conducted experiments for combinations of input point clouds, including local or global point clouds, both types of point cloud, and encoders.},
keywords={},
doi={10.1587/transinf.2021EDP7082},
ISSN={1745-1361},
month={January},
}
TY - JOUR
TI - Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders
T2 - IEICE TRANSACTIONS on Information
SP - 134
EP - 140
AU - Kenshiro TAMATA
AU - Tomohiro MASHITA
PY - 2022
DO - 10.1587/transinf.2021EDP7082
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 1
JA - IEICE TRANSACTIONS on Information
Y1 - January 2022
AB - A typical approach to reconstructing a 3D environment model is scanning the environment with a depth sensor and fitting the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However in some practical uses, this assumption is unacceptable. Thus, a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in the feature point detection because a point cloud is basically a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore make the assumption that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep learning based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether the feature point pair is close enough to be considered a match or not even when the feature point registration errors are large, and our model can estimate with higher accuracy in comparison to methods such as FPFH or 3DMatch. In addition, we conducted experiments for combinations of input point clouds, including local or global point clouds, both types of point cloud, and encoders.
ER -