    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/159087


    Title: ChatGPT對資安風險分析之應用 : 以智慧瓦斯平台為例
    Application of ChatGPT to Information Security Risk Analysis: A Case Study of a Smart Gas Platform
    Authors: 陳蕙鈞
    Chen, Hui-Jun
    Contributors: 洪為璽
    Hung, Wei-Hsi
    陳蕙鈞
    Chen, Hui-Jun
    Keywords: Generative AI
    ChatGPT
    Smart Gas Platform
    Information Security
    IoT Security
    Action Research
    Date: 2025
    Issue Date: 2025-09-01 15:03:15 (UTC+8)
    Abstract: This study investigates the application value and effectiveness of generative artificial intelligence, particularly ChatGPT, in information security analysis for smart gas platforms. With the growing integration of IoT technologies into energy management, ensuring the cybersecurity of such platforms has become increasingly critical. However, traditional security analysis methods are often time-consuming, labor-intensive, and reliant on expert knowledge, making them insufficient for rapidly evolving systems.
    Using an action research methodology, this study systematically evaluates ChatGPT’s capabilities in identifying and analyzing risks, focusing on performance optimization under human-AI collaboration. Results show that ChatGPT can effectively detect potential vulnerabilities across five domains: application security, network security, data security, operating system security, and regulatory compliance. It also offers concrete suggestions for improvement. Nevertheless, limitations such as insufficient understanding of business context, lack of risk prioritization, and absence of cost evaluation underscore the need for expert oversight and hybrid approaches.
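
    To make the five-domain structure and the expert-oversight requirement above concrete, the following minimal Python sketch shows one way such AI-generated findings could be recorded for human review. The Domain enumeration mirrors the five areas named in the abstract; the RiskFinding schema and the sample entry are illustrative assumptions, not the thesis's actual data model.

from dataclasses import dataclass, field
from enum import Enum

class Domain(Enum):
    # The five analysis domains named in the abstract.
    APPLICATION = "application security"
    NETWORK = "network security"
    DATA = "data security"
    OPERATING_SYSTEM = "operating system security"
    COMPLIANCE = "regulatory compliance"

@dataclass
class RiskFinding:
    """One AI-generated finding awaiting human review (hypothetical schema)."""
    domain: Domain
    description: str
    suggested_fix: str
    needs_expert_review: bool = True  # a human validates every finding before action
    notes: list[str] = field(default_factory=list)

# Illustrative example, not taken from the thesis:
finding = RiskFinding(
    domain=Domain.DATA,
    description="Meter readings are transmitted without transport encryption.",
    suggested_fix="Enable TLS between the IoT gateways and the platform API.",
)
print(f"[{finding.domain.value}] {finding.suggested_fix}")
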
    Importantly, the study distills three key prompt design principles from hands-on practice: task-oriented instructions with background information, step-by-step questioning, and inclusion of critical contextual data. These principles significantly improve the accuracy and relevance of AI-generated responses, yielding security recommendations that better fit the system's actual needs and context. Two action research cycles and multi-model cross-validation confirmed the reliability of the findings. Subsequent improvements in database connection error handling, IoT data transmission monitoring, and configuration management further enhanced the system's stability, reliability, and security.
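
    As an illustration of how these principles could translate into practice, the sketch below builds a security-review prompt with the OpenAI Python client. It is a minimal example under stated assumptions: the model name, the platform details embedded in the prompt, and the review_component helper are hypothetical and are not the prompts actually used in the study.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def review_component(component_description: str) -> str:
    """Ask about one component at a time (principle 2: step-by-step questioning)."""
    # Principle 1: task-oriented instruction with background information.
    system_msg = (
        "You are assisting a security review of a smart gas platform that collects "
        "meter readings from IoT gateways and stores them in a relational database."
    )
    # Principle 3: include the critical context for this specific question.
    user_msg = (
        "Review the following component for security risks and suggest concrete fixes. "
        "Limit your answer to this component only.\n\n" + component_description
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    )
    return response.choices[0].message.content

print(review_component("REST endpoint that receives meter readings from field gateways"))

    In the workflow described above, output of this kind is then reviewed by human experts and cross-checked against other models before any change is applied to the platform.
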
    In conclusion, this study not only expands the potential of generative AI in cybersecurity analysis but also presents a novel research framework that combines action research with multi-model validation. It offers practical guidance for securing smart IoT systems, shows how to leverage AI's strengths while compensating for its limitations through human-AI collaboration, and points to valuable directions for future work in both academia and industry.
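
    A hedged sketch of the multi-model cross-validation idea: the same question is posed to several models, and only findings reported by at least two of them are carried forward for expert review in the next action research cycle. The stand-in model answers and the agreement threshold below are assumptions for illustration; the abstract does not name the specific models compared.

from collections import Counter

# Hypothetical answers from three different models (stand-ins, not thesis data):
answers = {
    "model_a": ["unencrypted IoT traffic", "weak database credentials"],
    "model_b": ["unencrypted IoT traffic", "missing audit logging"],
    "model_c": ["weak database credentials", "unencrypted IoT traffic"],
}

def cross_validate(answers: dict[str, list[str]], min_agreement: int = 2) -> list[str]:
    """Keep only findings reported by at least min_agreement models."""
    counts = Counter()
    for findings in answers.values():
        counts.update(set(findings))  # de-duplicate within a single model's answer
    return [finding for finding, n in counts.items() if n >= min_agreement]

agreed = cross_validate(answers)
print(agreed)  # findings confirmed by two or more models go forward for expert review
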
    Description: Master's degree
    National Chengchi University
    Department of Management Information Systems
    111356055
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0111356055
    Data Type: thesis
    Appears in Collections: [Department of Management Information Systems] Theses

    Files in This Item:

    File: 605501.pdf (3,243 KB, Adobe PDF)

