    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/122134


    Title: 應用深度學習架構於社群網路資料分析:以Twitter圖文資料為例
    Analyzing Social Network Data Using Deep Neural Networks: A Case Study Using Twitter Posts
    Authors: 楊子萲 (Yang, Tzu-Hsuan)
    Contributors: 廖文宏 (Liao, Wen-Hung); 楊子萲 (Yang, Tzu-Hsuan)
    Keywords: 推特 (Twitter)
    圖文分析 (graphical and text analysis)
    Word2Vec
    深度學習 (deep learning)
    社群網路 (social networks)
    Date: 2018
    Issue Date: 2019-01-23 14:59:44 (UTC+8)
    Abstract: Social media platforms are growing rapidly, and users share updates not only through text but also, very commonly, by pairing their posts with images. Text or images alone, however, sometimes fail to convey what a user actually intends to communicate. Building on image and text analysis techniques, this study therefore aims to exploit the diverse information available on social platforms to analyze the relationship between pictures and text.
    Because Twitter's character limit encourages users to state the main point of a post explicitly, this study collected tweets from 2017 that contain keywords related to Taiwan. After data cleaning, the tweets were analyzed to determine which belong to the travel category and which do not; a deep learning framework was used to integrate the image and text information, and the results were then clustered to examine the characteristics of each category.
    Through this research we can explore how images and text complement each other in a post, understand the distribution of post types on social platforms, deepen our understanding of these platforms, and, through the proposed framework, provide qualitative researchers with the information they need.
    Interaction on various social networking platforms has become an important part of our daily life. Apart from text messages, images are also a popular media format for online communication. Text or images alone, however, cannot fully convey the ideas that users wish to express. In this thesis, we employ computer vision and word embedding techniques to analyze the relationship between image content and text messages and to explore the rich information they jointly carry.
    The limit on the total number of characters compels Twitter users to compose their messages more succinctly, suggesting a stronger association between text and image. In this study, we collected all tweets posted during 2017 that include keywords related to Taiwan. After data cleaning, we apply machine learning techniques to classify tweets into 'travel' and 'non-travel' types. This is achieved by employing deep neural networks to process and integrate text and image information. Within each class, we use hierarchical clustering to further partition the data into clusters and investigate their characteristics.
    Through this research, we expect to identify the relationship between the text and images in a tweet and to gain a better understanding of the properties of tweets on social networking platforms. The proposed framework and corresponding analytical results should also prove useful for qualitative research.
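    The abstract describes integrating image and text information with deep neural networks only at a high level. The following is a minimal, illustrative sketch of one way such a fusion classifier could be set up; it is not the thesis's actual architecture. It assumes an ImageNet-pretrained InceptionV3 as the image feature extractor, mean-pooled Word2Vec vectors for the tweet text, and a small dense network for the travel / non-travel decision; all layer sizes, dimensions, and names are hypothetical.

    import numpy as np
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import InceptionV3
    from gensim.models import Word2Vec

    IMG_FEAT_DIM = 2048   # pooled InceptionV3 feature size
    TXT_FEAT_DIM = 100    # assumed Word2Vec vector size

    # Image branch: frozen CNN used purely as a feature extractor.
    cnn = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
    cnn.trainable = False

    def image_features(images: np.ndarray) -> np.ndarray:
        # images: float array of shape (n, 299, 299, 3), already preprocessed
        return cnn.predict(images, verbose=0)

    def text_features(tokenized_tweets, w2v: Word2Vec) -> np.ndarray:
        # Mean-pool the Word2Vec vectors of each tweet's tokens;
        # fall back to a zero vector when no token is in the vocabulary.
        feats = []
        for tokens in tokenized_tweets:
            vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
            feats.append(np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size))
        return np.stack(feats)

    # Fusion classifier: concatenate image and text features, predict travel vs. non-travel.
    img_in = layers.Input(shape=(IMG_FEAT_DIM,), name="image_features")
    txt_in = layers.Input(shape=(TXT_FEAT_DIM,), name="text_features")
    x = layers.Concatenate()([img_in, txt_in])
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid", name="is_travel")(x)
    classifier = models.Model(inputs=[img_in, txt_in], outputs=out)
    classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    Keeping the CNN frozen and fusing fixed feature vectors is just the simplest possible design for such a pipeline; end-to-end fine-tuning or attention-based fusion would be natural alternatives.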
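    The within-class hierarchical clustering step mentioned in the abstract can be sketched similarly. In the snippet below, the average-linkage method, the cosine distance, and the number of clusters are illustrative assumptions, not the settings used in the thesis, and the input is assumed to be the fused per-tweet feature vectors.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def cluster_tweets(features: np.ndarray, n_clusters: int = 5) -> np.ndarray:
        # features: (n_tweets, feature_dim) array of fused image-and-text features
        Z = linkage(features, method="average", metric="cosine")   # bottom-up merge tree
        return fcluster(Z, t=n_clusters, criterion="maxclust")     # flat labels in 1..n_clusters

    # Usage with placeholder random features.
    labels = cluster_tweets(np.random.rand(200, 256), n_clusters=5)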
    Description: 碩士 (Master's thesis)
    國立政治大學 (National Chengchi University)
    資訊科學系 (Department of Computer Science)
    105753041
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0105753041
    Data Type: thesis
    DOI: 10.6814/THE.NCCU.CS.002.2019.B02
    Appears in Collections: [資訊科學系 Department of Computer Science] 學位論文 (Theses)

    Files in This Item:

    304101.pdf — 4928 KB, Adobe PDF

