Comparison Study of 3D Object Matching Techniques for UV Projection Mapping: Image Feature-based and Neural Network-based Approaches

Narawich Maitreejitr
Chantana Chantrapornchai

Abstract

This work presents a method for matching 3D objects in an image via their UV maps: the UV map extracted from the frame of interest is compared against a reference UV map, or against a frame-derived UV map that has been verified as valid by a human. Two comparison strategies were implemented: pixel-by-pixel comparison and feature matching. The results of the comparison were then used to compute the positional discrepancy of the 3D objects. The primary objective is to apply the UV map comparison algorithm to 3D object matching in images in order to reduce the time spent on, and increase the accuracy of, the tracking step in visual effects (VFX) work. We studied two families of algorithms: image feature-based and neural network-based approaches. In total, five UV map comparison methods were evaluated: 1) pixel-by-pixel comparison with plain subtraction (Normal Subtract), 2) pixel-by-pixel comparison with the absolute value of the difference (Absdiff), 3) feature matching with SIFT, 4) feature matching with SIFT and the ratio test, and 5) feature matching with SuperPoint and SuperGlue. When comparing the entire UV map, these methods achieved accuracies of 50%, 100%, 33.33%, 16.67%, and 91.67%, respectively; when comparing a specific side, they achieved 67.5%, 100%, 25%, 8.33%, and 100%, respectively.
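To illustrate the two non-neural comparison families mentioned above, the sketch below shows how the Absdiff pixel comparison (method 2) and SIFT matching with the ratio test (method 4) could be implemented with OpenCV. This is a minimal sketch, not the authors' implementation: the function names, the difference threshold, and the 0.75 ratio are assumptions chosen for illustration.

```python
# Minimal sketch of two of the UV map comparison methods, assuming both UV maps
# are grayscale images of the same size. The function names, difference
# threshold, and 0.75 ratio are illustrative assumptions, not the paper's values.
import cv2
import numpy as np

def absdiff_score(uv_ref: np.ndarray, uv_frame: np.ndarray, threshold: int = 25) -> float:
    """Pixel-by-pixel Absdiff: fraction of pixels whose absolute difference exceeds `threshold`."""
    diff = cv2.absdiff(uv_ref, uv_frame)           # |ref - frame| at every pixel
    return np.count_nonzero(diff > threshold) / diff.size

def sift_ratio_matches(uv_ref: np.ndarray, uv_frame: np.ndarray, ratio: float = 0.75):
    """SIFT feature matching with the ratio test; returns keypoints and surviving matches."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(uv_ref, None)
    kp_frame, des_frame = sift.detectAndCompute(uv_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)           # brute-force L2 matcher for SIFT descriptors
    knn = matcher.knnMatch(des_ref, des_frame, k=2)
    # Keep a match only if it is clearly better than the second-best candidate.
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp_ref, kp_frame, good
```

The displacement between matched keypoints in `kp_ref` and `kp_frame` could then feed the positional-discrepancy calculation described in the abstract; the SuperPoint and SuperGlue variant follows the same matching pattern but requires pretrained network weights.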

Article Details

Section: Engineering Research Articles

References

B. Lévy, S. Petitjean, N. Ray, and J. Maillot, “Least squares conformal maps for automatic texture atlas generation,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 362–371, 2002.

D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

D. DeTone, T. Malisiewicz, and A. Rabinovich, “SuperPoint: Self-supervised interest point detection and description,” 2018, arXiv:1712.07629. [Online]. Available: https://arxiv.org/pdf/1712.07629.pdf

K. M. Yi, E. Trulls, V. Lepetit, and P. Fua, “LIFT: Learned invariant feature transform,” 2016, arXiv:1603.06645. [Online]. Available: https://arxiv.org/pdf/1603.06645.pdf

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” in Proceedings of the 2011 International Conference on Computer Vision, 2011, pp. 2564–2571.

P.-E. Sarlin, D. DeTone, T. Malisiewicz, and A. Rabinovich, “SuperGlue: Learning feature matching with graph neural networks,” 2020, arXiv:1911.11763. [Online]. Available: https://arxiv.org/pdf/1911.11763.pdf