National Chengchi University Institutional Repository (NCCUR): Item 140.119/152568
    Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/152568


    Title: 人機協作與視覺導航應用於無人機河川巡檢任務研究
    The Study of Human-Robot Collaboration and Visual Navigation Applied to Unmanned Aerial Vehicle River Patrol Missions
    Author: 陳佳彣
    Chen, Chia-Wen
    Contributors: 劉吉軒
    Liu, Jyi Shane
    陳佳彣
    Chen, Chia-Wen
    Keywords: UAV
    Human-Robot Collaboration
    River Patrol
    Autonomous Following
    Mission Control
    Graphical User Interface
    Date: 2024
    Upload time: 2024-08-05 12:45:05 (UTC+8)
    Abstract: In recent years, advances in UAV technology have extended its applications from their original military uses to civilian services and public utilities, including important tasks such as river patrol and personnel search and rescue. With their low deployment cost and high mobility, UAVs are a flexible and cost-effective choice for these scenarios. However, relying solely on traditional joystick control burdens operators in several ways, including operational complexity, limited degrees of freedom, and the inability to watch the UAV's video feed while flying. Furthermore, operators may need specialized training to acquire the skills required for mission execution, which raises the operational threshold and reduces the system's accessibility.
    To build an intuitive and easy-to-operate UAV control system, this study proposes a vision-based human-robot collaboration method that performs real-time, semi-automated flight control through a visual platform, and applies it to river patrol and personnel search-and-rescue missions.
    During mission execution, the UAV serves as the tool for environmental sensing and task execution, with the river as its following target: it uses GPS as the basis for broad-range positioning and fuses visual navigation and semantic segmentation modules to build an accurate and stable river-following model. Meanwhile, operators can monitor the UAV's flight data and images in real time through the interface and make timely decisions as mission needs arise. The human-robot collaboration system proposed in this study is not limited to river patrol; it also extends to other river-related tasks such as water level tracking, water pollution detection, and waste detection.
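    As a rough illustration of the river-following idea the abstract describes, the minimal sketch below reduces a per-frame semantic segmentation mask to a lateral offset and turns it into a body-frame velocity command, hovering (and handing control back to the operator) when the river is lost. All names, gains, and frame conventions here are illustrative assumptions, not the thesis implementation.

    # Hypothetical sketch: map a binary river-segmentation mask to a
    # velocity command. Gains and sign conventions are assumptions.
    import numpy as np

    def river_offset(mask: np.ndarray):
        """Horizontal offset of the segmented river from image center, in [-1, 1].

        `mask` is a binary (H, W) array where 1 marks river pixels; returns
        None when no river pixels are visible in the frame.
        """
        _, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        center_x = mask.shape[1] / 2.0
        return (xs.mean() - center_x) / center_x

    def follow_command(mask, forward_speed=2.0, k_lat=1.5):
        """Return a (vx, vy) body-frame velocity command in m/s.

        The UAV holds a constant forward speed along the river and applies a
        proportional lateral correction to stay centered over it (positive vy
        is assumed to mean "translate right"). When the river is lost, it
        hovers so the operator can take over: the human-in-the-loop role
        described in the abstract.
        """
        offset = river_offset(mask)
        if offset is None:
            return 0.0, 0.0  # hover and wait for operator input
        return forward_speed, k_lat * offset

    # Usage with a synthetic 640x480 frame whose river lies right of center:
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[:, 400:500] = 1
    print(follow_command(mask))  # -> (2.0, ~0.61): move forward, drift right

    In the full system, a GPS waypoint plan would bound this local visual correction and the operator interface could override the command at any time; those layers are omitted from the sketch.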
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    111753119
    Source: http://thesis.lib.nccu.edu.tw/record/#G0111753119
    Data type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File          Description    Size         Format       Views
    311901.pdf                   110750 KB    Adobe PDF    0


    All items in NCCUR are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for non-commercial uses such as academic research and public education. Please use it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. NCCU Institutional Repository is made to protect the interests of copyright owners. If you believe that any material on the website infringes copyright, please contact our staff (nccur@nccu.edu.tw). We will remove the work from the repository and investigate your claim.