National Chengchi University Institutional Repository (NCCUR): Item 140.119/70998
    Permanent URL for citing or linking to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/70998


    Title: 基於 RGBD 影音串流之肢體表情語言表現評估
    Estimation and Evaluation of Body Language Using RGBD Data
    Author: Wu, Yi Chieh (吳怡潔)
    Contributors: Liao, Wen Hung (廖文宏); Wu, Yi Chieh (吳怡潔)
    Keywords: body language; RGBD Kinect sensor; performance evaluation; audio processing; pattern classification
    Date: 2013
    Upload time: 2014-11-03 10:11:57 (UTC+8)
    Abstract: This thesis uses the Kinect sensor, an RGBD audio/video streaming device capable of capturing image depth, to record a presenter's body movements, facial expressions, and speech patterns in a presentation setting. We first propose that the behavior exhibited within a given time span can be rated by an audience as liked or disliked; we name such spans Period of Like (POL) and Period of Dislike (POD), respectively. Using three kinds of image features provided by the Kinect SDK (animation units, skeletal joints, and 3D face vertices), together with ratings collected from 35 evaluators, we analyze whether the feature patterns extracted from POL/POD segments are consistent and whether they can be used for prediction on future data. Finally, we apply the findings in a prototype program, in the hope that such a prediction system can point out strengths and weaknesses to people troubled by poor presentation performance, as a basis for subsequent improvement.
    In this thesis, we capture body movements, facial expressions, and voice data of subjects in a presentation scenario using the RGBD-capable Kinect sensor. The acquired videos were assessed by a group of reviewers to indicate their preferences/aversions toward the presentation style. We denote the two classes of ratings as Period of Like (POL) and Period of Dislike (POD), respectively. We then employ three types of image features, namely animation units (AU), skeletal joints, and 3D face vertices, to analyze the consistency of the evaluation results, as well as the ability to classify unseen footage based on the training data supplied by 35 evaluators. Finally, we develop a prototype program to help users identify their strengths and weaknesses during a presentation so that they can improve their skills accordingly.
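    The classification step sketched in the abstract can be illustrated in code. The following Python fragment is a minimal hypothetical sketch, not the thesis implementation: it assumes per-frame Kinect features (animation units, skeletal joints, flattened 3D face vertices) have already been extracted into an array, averages them over fixed-length periods, and trains an RBF-kernel SVM via scikit-learn's SVC (a wrapper around LIBSVM) to label unseen periods as POL or POD. The synthetic data, array shapes, and function names are all assumptions for illustration.

```python
# Hypothetical sketch of the POL/POD classification step; not the thesis code.
# Assumes per-frame Kinect features (animation units, skeletal joints,
# flattened 3D face vertices) are already extracted into a frames-by-features array.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

def aggregate_periods(frames, period_len):
    """Average per-frame feature vectors over consecutive fixed-length periods."""
    n_periods = len(frames) // period_len
    trimmed = frames[: n_periods * period_len]
    return trimmed.reshape(n_periods, period_len, -1).mean(axis=1)

# Synthetic stand-in data: 6000 frames x 120 features; labels 1 = POL, 0 = POD
# (in the thesis, such labels would come from the evaluators' ratings).
rng = np.random.default_rng(0)
frames = rng.normal(size=(6000, 120))
X = aggregate_periods(frames, period_len=30)      # one feature vector per period
y = rng.integers(0, 2, size=len(X))               # placeholder evaluator labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)           # SVC wraps LIBSVM internally
print(classification_report(y_te, clf.predict(X_te), target_names=["POD", "POL"]))
```

    In practice the period labels would come from the 35 evaluators' like/dislike judgments rather than random integers, and each feature vector would concatenate the three Kinect SDK feature types before aggregation.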
    Description: Master's thesis, National Chengchi University, Department of Computer Science (student ID 101971004, academic year 102)
    Source: http://thesis.lib.nccu.edu.tw/record/#G0101971004
    Data type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    100401.pdf (9611 KB, Adobe PDF)

