    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/60255


    Title: 基於多視角幾何萃取精確影像對應之研究
    Accurate image matching based on multiple view geometry
    Authors: 謝明龍
    Hsieh, Ming Lung
    Contributors: 何瑁鎧
    Hor, Maw Kae
    謝明龍
    Hsieh, Ming Lung
    Keywords: 多視角影像
    對應點匹配
    補綴面
    點雲
    三維模型重建
    multi-view images
    corresponding point matching
    patch
    point cloud
    3D model reconstruction
    Date: 2010
    Issue Date: 2013-09-04 17:09:06 (UTC+8)
    Abstract: 近年來諸多學者專家致力於從多視角影像獲取精確的點雲資訊,並藉由點雲資訊進行三維模型重建等研究,然而透過多視角影像求取三維資訊的精確度仍然有待提升,其中萃取影像對應與重建三維資訊方法,是多視角影像重建三維資訊的關鍵核心,決定點雲資訊的形成方式與成效。

    本論文中,我們提出了一套新的方法,由多視角影像之間的幾何關係出發,萃取多視角影像對應與重建三維點,可以有效地改善對應點與三維點的精確度。首先,在萃取多視角影像對應的部份,我們以相互支持轉換、動態高斯濾波法與綜合性相似度評估函數,改善補綴面為基礎的比對方法,提高相似度測量值的辨識力與可信度,可從多視角影像中獲得精確的對應點。其次,在重建三維點的部份,我們使用K均值分群演算法與線性內插法發掘潛在的三維點,讓求出的三維點更貼近三維空間真實物體表面,能在多視角影像中獲得更精確的三維點。

    實驗結果顯示,採用本研究所提出的方法進行改善後,在對應點精確度的提升上有很好的成效,所獲得的點雲資訊存在數萬個精確的三維點,而且僅有少數的離群點。
    In recent years, many researchers have focused on obtaining accurate point cloud data from multi-view images and on using these data for 3D model reconstruction. However, the accuracy of the 3D information recovered from multi-view images still needs to be improved. Among these efforts, the methods for extracting corresponding points and for computing 3D point information are the most critical: they largely determine how the point cloud data are formed and how good the reconstructed 3D models turn out to be.

    In this thesis, we propose new approaches, based on multi-view geometry, to improve the accuracy of both the corresponding points and the reconstructed 3D points. For multi-view correspondence, we use mutual support transformation, dynamic Gaussian filtering, and a composite similarity evaluation function to improve patch-based matching; these mechanisms increase the discriminative power and reliability of the similarity measure and, hence, the accuracy of the extracted corresponding points. For 3D reconstruction, we use the K-means clustering algorithm and linear interpolation to discover better 3D point candidates, so that the computed 3D points lie much closer to the surface of the actual object. Together, these mechanisms yield highly accurate 3D points.
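    The abstract does not give implementation details, so the sketches below are only minimal, generic illustrations in Python of the two kinds of ingredients it mentions; the function names and parameters (window shape, sigma, k, alpha) are assumptions made for illustration, not the thesis's own "dynamic Gaussian filtering", composite similarity function, or candidate-selection procedure. The first sketch shows a Gaussian-weighted normalized cross-correlation between two grey-level patches, one common way to build a centre-weighted patch similarity measure.

```python
import numpy as np

def gaussian_weights(shape, sigma):
    """Gaussian weight mask centred on a patch of the given (h, w) shape."""
    h, w = shape
    ys = np.arange(h) - (h - 1) / 2.0
    xs = np.arange(w) - (w - 1) / 2.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def weighted_ncc(patch_a, patch_b, sigma=2.0):
    """Gaussian-weighted normalized cross-correlation of two grey-level patches.

    Returns a score in [-1, 1]; higher means more similar. Weighting pixels
    toward the patch centre is one simple way to make a patch-based
    similarity measure less sensitive to clutter near the patch border.
    """
    assert patch_a.shape == patch_b.shape and patch_a.ndim == 2
    w = gaussian_weights(patch_a.shape, sigma)
    a = patch_a.astype(np.float64)
    b = patch_b.astype(np.float64)
    mu_a, mu_b = (w * a).sum(), (w * b).sum()      # weighted means
    da, db = a - mu_a, b - mu_b
    cov = (w * da * db).sum()                      # weighted covariance
    var_a = (w * da * da).sum()                    # weighted variances
    var_b = (w * db * db).sum()
    return cov / (np.sqrt(var_a * var_b) + 1e-12)
```

    The second sketch shows one simple way K-means clustering and linear interpolation could be combined to clean up noisy 3D point candidates: candidates are clustered, and each candidate is interpolated toward its cluster centre. This is only an assumed, illustrative pipeline, not the procedure used in the thesis.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means on an (N, 3) array of candidate 3D points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign every candidate to its nearest cluster centre
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its members (keep it if the cluster is empty)
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

def refine_candidates(candidates, k=8, alpha=0.5):
    """Pull each noisy 3D candidate toward its cluster centre.

    alpha = 1 keeps the raw candidate, alpha = 0 snaps it to the centre;
    values in between are a plain linear interpolation of the two.
    """
    centers, labels = kmeans(candidates, k)
    return alpha * candidates + (1.0 - alpha) * centers[labels]
```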

    Experimental results show that the proposed mechanisms improve the accuracy of the corresponding points as well as of the resulting 3D point cloud data. The generated point clouds contain tens of thousands of accurate 3D points with only a few outliers.
    Description: Master's thesis
    國立政治大學 (National Chengchi University)
    資訊科學學系 (Department of Computer Science)
    98753034
    99
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0098753034
    Data Type: thesis
    Appears in Collections: [資訊科學系] Theses

    Files in This Item:

    File          Size       Format
    303401.pdf    6163 KB    Adobe PDF

