National Chengchi University Institutional Repository (NCCUR): Item 140.119/144596


    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/144596


    Title: 波形時間序列經由小波轉換應用HuBERT於外匯市場預測
    Time Series Data as Waveform Input with Wavelet Transform Using HuBERT on Foreign Exchange Market Prediction
    Author: 黃乾唐 (Huang, Chien-Tang)
    Contributors: 蔡炎龍 (Tsai, Yen-Lung); 黃乾唐 (Huang, Chien-Tang)
    Keywords: Transformer; Deep Learning; Speech Recognition; HuBERT; Currency Pair; Wavelet Transform; Foreign Exchange Market
    Date: 2023
    Uploaded: 2023-05-02 15:04:55 (UTC+8)
    Abstract: Transformer, a self-attention-based architecture, has become the state-of-the-art model for much of AI research. Initially developed for natural language processing, it has since been applied to other fields such as computer vision and speech recognition, with models built on it, such as BERT and GPT, exhibiting exceptional performance. Because manual data labeling is costly, self-supervised learning has emerged as an important approach in deep learning: representation learning lets a model learn latent vectors of the data without end-to-end labeling targets. In this thesis, we explore the use of Hidden-Unit BERT (HuBERT), a speech representation model trained with a self-supervised approach, on numerical foreign exchange data. We also apply the wavelet transform to the price data to test whether it provides a similar denoising benefit there, comparing performance on the denoised and raw numerical inputs. Our results show that HuBERT not only performs exceptionally well on speech tasks but also yields promising results on numerical time series data.
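    The pipeline the abstract describes (denoise a currency-pair price series with a wavelet transform, then hand the result to a pretrained HuBERT encoder as if it were a raw audio waveform) can be sketched in a few lines of Python. This is a minimal illustration only, not the thesis's actual code: the db4 wavelet, the soft-threshold rule, the rescaling of prices to [-1, 1], the synthetic price series, and the facebook/hubert-base-ls960 checkpoint loaded through the PyWavelets and transformers libraries are all assumptions made for the example.

import numpy as np
import pywt
import torch
from transformers import HubertModel

def wavelet_denoise(series, wavelet="db4", level=3):
    # Decompose, soft-threshold the detail coefficients, and reconstruct.
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale from the finest details
    thresh = sigma * np.sqrt(2.0 * np.log(len(series)))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

# Toy stand-in for a currency-pair price series (e.g. minute-level EUR/USD quotes).
prices = 1.10 + 0.001 * np.cumsum(np.random.randn(4000))
denoised = wavelet_denoise(prices)

# Rescale to [-1, 1] and treat the series as a mono waveform, one price per "sample".
waveform = 2.0 * (denoised - denoised.min()) / (denoised.max() - denoised.min()) - 1.0
input_values = torch.tensor(waveform, dtype=torch.float32).unsqueeze(0)  # shape (1, samples)

# Pretrained HuBERT encoder; its convolutional front end turns the waveform into frames.
model = HubertModel.from_pretrained("facebook/hubert-base-ls960")
model.eval()
with torch.no_grad():
    features = model(input_values).last_hidden_state     # shape (1, frames, 768)
print(features.shape)

    In a setup like this, a small downstream head (for example, a linear layer over the frame-level features) would be trained to predict subsequent price movement, and the same pipeline would be run with and without the denoising step to make the comparison the abstract mentions.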
    References:
    [1] Mohammad Zoynul Abedin, Mahmudul Hasan Moon, M. Kabir Hassan, and Petr Hajek. Deep learning-based exchange rate prediction during the COVID-19 pandemic. Annals of Operations Research, pages 1–52, 2021.
    [2] Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In International Conference on Machine Learning, pages 1691–1703. PMLR, 2020.
    [3] Alexander Jakob Dautel, Wolfgang Karl Härdle, Stefan Lessmann, and Hsin-Vonn Seow. Forex exchange rate forecasting using deep recurrent neural networks. Digital Finance, 2:69–96, 2020.
    [4] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2018.
    [5] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020.
    [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
    [7] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
    [8] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units, 2021.
    [9] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training, 2018.
    [10] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
    [11] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks, 2014.
    [12] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017.
    [13] Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Ko-tik Lee, Da-Rong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, and Hung-yi Lee. SUPERB: Speech processing universal performance benchmark, 2021.
    Description: Master's thesis
    National Chengchi University
    Department of Applied Mathematics (應用數學系)
    109751001
    Source: http://thesis.lib.nccu.edu.tw/record/#G0109751001
    Data type: thesis
    Appears in Collections: [Department of Applied Mathematics] Theses

    Files in This Item:

    File: index.html (0 Kb, HTML, 2146 views)

