National Chengchi University Institutional Repository (NCCUR): Item 140.119/149467


    Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/149467


    Title: 深度偽造的投毒防禦技術研究 (Poison Defense on Deepfake Attacks)
    Author: 郭丞堯 (Guo, Cheng-Yao)
    Contributors: 郁方 (Yu, Fang); 郭丞堯 (Guo, Cheng-Yao)
    Keywords: Poison defense; Deepfake; Adversarial attack
    Date: 2024
    Upload time: 2024-02-01 10:56:20 (UTC+8)
    Abstract: The application of deepfake face-swapping technology has matured and become widespread on the internet, posing a significant threat to personal privacy and security. In response to this growing concern, we introduce a novel defense strategy named "sugar-coated poison." This approach involves strategically perturbing latent vectors during the learning process of generative models. By design, our method treats visual effects as the "poison," inducing intentional disruptions, while minimizing reconstruction loss acts as the "sugar," misleading the deepfake model. This dual-purpose strategy effectively defends against deepfake attacks and improves the rate at which the resulting manipulations are detected. We present a systematic mechanism for generating video patches to mitigate the risks associated with deepfake manipulation, implementing and validating our approach specifically in the context of deepfake face-swapping applications.
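As a concrete illustration of the abstract's two-part objective, the sketch below crafts a bounded patch for face frames with a PGD-style loop: one term keeps the generator's reconstruction loss small (the "sugar") while the other pulls the reconstruction toward a visible artifact pattern (the "poison"). This is only a minimal PyTorch sketch under stated assumptions, not the thesis's implementation: the tiny autoencoder stands in for a face-swapping generator, and the stripe-pattern poison_target together with the hyperparameters (steps, alpha, eps, lam) are hypothetical placeholders.

```python
# Minimal PGD-style sketch of the "sugar-coated poison" objective described above.
# Everything here is an illustrative assumption rather than the thesis's code:
# `autoencoder` is a toy stand-in for a deepfake generator's reconstruction path,
# and `poison_target` is a hypothetical artifact pattern.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy convolutional autoencoder standing in for the face-swapping generator.
autoencoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)

def sugar_coated_poison(x, poison_target, steps=40, alpha=1e-2, eps=8 / 255, lam=1.0):
    """Craft a bounded patch delta for face frames x.

    "Sugar": keep the reconstruction loss on x + delta small, so the patched
    frames still look like easy, well-behaved data to the generator.
    "Poison": pull the reconstruction toward `poison_target`, a visible artifact,
    so the disruption is carried into the deepfake model's output.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        recon = autoencoder(x + delta)
        sugar = F.mse_loss(recon, x + delta)        # stay easy to reconstruct
        poison = F.mse_loss(recon, poison_target)   # but drift toward the artifact
        loss = sugar + lam * poison
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()      # signed gradient step on the patch
            delta.clamp_(-eps, eps)                 # L-infinity budget keeps it subtle
        delta.grad = None
    return delta.detach()

# Toy usage: "protect" a batch of 64x64 frames with a striped artifact target.
frames = torch.rand(4, 3, 64, 64)
artifact = torch.zeros_like(frames)
artifact[..., ::8] = 1.0                            # hypothetical stripe pattern
patch = sugar_coated_poison(frames, artifact)
protected_frames = (frames + patch).clamp(0, 1)
```

The L-infinity budget eps is what keeps the patch unobtrusive on the protected frames; following the abstract's intent, the artifact is meant to surface in the deepfake model's reconstructions rather than in the published frames themselves.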
    Description: Master's thesis
    National Chengchi University
    Department of Management Information Systems (資訊管理學系)
    110356024
    Source: http://thesis.lib.nccu.edu.tw/record/#G0110356024
    Data type: thesis
    Appears in collections: [Department of Management Information Systems] Theses

    Files in this item:

    602401.pdf (2557 KB, Adobe PDF)


    All items in NCCUR are protected by copyright, with all rights reserved.

