Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/144596
Title: Time Series Data as Waveform Input with Wavelet Transform Using HuBERT on Foreign Exchange Market Prediction
Authors: Huang, Chien-Tang
Contributors: Tsai, Yen-Lung; Huang, Chien-Tang
Keywords: Transformer; Deep Learning; Speech Recognition; HuBERT; Currency Pair; Foreign Exchange Market; Wavelet Transform
Date: 2023
Issue Date: 2023-05-02 15:04:55 (UTC+8)
Abstract: Transformer, a self-attention-based architecture, has become the state-of-the-art model for much of AI research. Initially developed for natural language processing, it has since been applied to other fields such as computer vision and speech recognition, with models built on it, such as BERT and GPT, exhibiting exceptional performance. Because manual data labeling is costly, self-supervised learning has emerged as an important approach in deep learning, enabling models to learn latent representations of data without labeled targets. In this thesis, we apply Hidden-Unit BERT (HuBERT), a speech representation model trained with a self-supervised approach, to foreign exchange market prediction, treating numerical price series as waveform input. In data preprocessing, we also apply the Wavelet Transform to examine whether denoising, as it does for audio, benefits price data, comparing the results against the raw numerical series. Our results show that HuBERT not only performs exceptionally well on speech tasks but also yields promising results on numerical time series data.
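A minimal sketch of the pipeline the abstract describes, assuming PyWavelets (pywt) and the Hugging Face transformers library with the facebook/hubert-base-ls960 checkpoint; the wavelet family (db4), the soft-threshold rule, the synthetic price series, and the sequence length are illustrative assumptions, not the thesis' actual configuration:

import numpy as np
import pywt
import torch
from transformers import HubertModel

def wavelet_denoise(prices: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Soft-threshold the detail coefficients and reconstruct the series."""
    coeffs = pywt.wavedec(prices, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(prices)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(prices)]

# Toy currency-pair series standing in for real exchange-rate quotes.
prices = np.cumsum(np.random.randn(4000) * 1e-3) + 1.10
denoised = wavelet_denoise(prices)

# Treat the z-scored series as a waveform so HuBERT's CNN front end can frame it.
wave = (denoised - denoised.mean()) / (denoised.std() + 1e-8)
input_values = torch.tensor(wave, dtype=torch.float32).unsqueeze(0)  # (batch, samples)

model = HubertModel.from_pretrained("facebook/hubert-base-ls960")
model.eval()
with torch.no_grad():
    hidden = model(input_values).last_hidden_state  # (1, frames, hidden_size)

# The frame-level representations can then be pooled and passed to a small
# prediction head for direction or next-price forecasting.
print(hidden.shape)

The point of the sketch is only the data path: denoise in the wavelet domain, rescale the price series into a waveform-like range, and let the pretrained encoder produce frame-level embeddings for a downstream forecasting head.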
Description: Master's degree, National Chengchi University, Department of Applied Mathematics, 109751001
Source URI: http://thesis.lib.nccu.edu.tw/record/#G0109751001
Data Type: thesis
Appears in Collections: [Department of Applied Mathematics] Theses