Reference: | [1] Y. Bengio, A. C. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 1798–1828, 2013. [2] K. P. F.R.S., “Liii. on lines and planes of closest fit to systems of points in space,” Philosophical Magazine Series 1, vol. 2, pp. 559–572. [3] M. J. Greenacre and J. Blasius, “Multiple correspondence analysis and related methods,” 2006. [4] W.-J. Li, D.-Y. Yeung, and Z. Zhang, “Probabilistic relational pca,” in NIPS, 2009. [5] M. Germain, K. Gregor, I. Murray, and H. Larochelle, “Made: Masked autoencoder for distribution estimation,” in ICML, 2015. [6] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio, “A recurrent latent variable model for sequential data,” in NIPS, 2015. [7] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2015. [8] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016. [9] T. Mikolov, K. Chen, G. S. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” in ICLR, 2013. [10] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in NIPS, 2013. [11] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in NIPS, 2014. [12] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep contextualized word representations,” in NAACL, 2018. [13] Google, “The wordpiece algorithm in open source bert,” Oct 2018. [14] T. Kudo and J. Richardson, “Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing,” in EMNLP, 2018. [15] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by backpropagating errors,” Nature, vol. 323, pp. 533–536, 1986. [16] A. Vaswani, N. M. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” ArXiv, vol. abs/1706.03762, 2017. [17] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” ArXiv, vol. abs/1810.04805, 2019. [18] A. Radford and K. Narasimhan, “Improving language understanding by generative pre-training, 2018. [19] C. Raffel, N. M. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” ArXiv, vol. abs/1910.10683, 2020. [20] K. Clark, U. Khandelwal, O. Levy, and C. D. Manning, “What does bert look at? an analysis of bert’s attention,” in BlackboxNLP@ACL, 2019. [21] N. Kitaev, L. Kaiser, and A. Levskaya, “Reformer: The efficient transformer,” ArXiv, vol. abs/2001.04451, 2020. [22] I. Beltagy, M. E. Peters, and A. Cohan, “Longformer: The long-document transformer,” ArXiv, vol. abs/2004.05150, 2020. [23] Z. Dai, Z. Yang, Y. Yang, J. G. Carbonell, Q. V. Le, and R. Salakhutdinov, “Transformer-xl: Attentive language models beyond a fixed-length context,” ArXiv, vol. abs/1901.02860, 2019. [24] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, “Albert: A lite bert for self-supervised learning of language representations,” ArXiv, vol. abs/1909.11942, 2020. [25] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, “Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,” in ACL, 2020. [26] S. Kobayashi, “Homemade bookcorpus.” https://github.com/BIGBALLON/cifar-10-cnn, 2018. [27] W. Foundation, “Wikimedia downloads.” https://dumps.wikimedia.org, 2020. [28] K. Song, X. Tan, T. Qin, J. Lu, and T.-Y. Liu, “Mass: Masked sequence to sequence pre-training for language generation,” in ICML, 2019. [29] A. Andoni, P. Indyk, T. Laarhoven, I. P. Razenshteyn, and L. Schmidt, “Practical and optimal lsh for angular distance,” ArXiv, vol. abs/1509.02897, 2015. [30] A. N. Gomez, M. Ren, R. Urtasun, and R. B. Grosse, “The reversible residual network: Backpropagation without storing activations,” in NIPS, 2017. [31] GoodfellowIan, Pouget-AbadieJean, MirzaMehdi, Xubing, Warde-FarleyDavid, OzairSherjil, CourvilleAaron, and BengioYoshua, “Generative adversarial networks,” Communications of The ACM, 2020. [32] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” ArXiv, vol. abs/1701.07875, 2017. [33] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” CoRR, vol. abs/1312.6114, 2014. [34] A. Makhzani, J. Shlens, N. Jaitly, and I. J. Goodfellow, “Adversarial autoencoders,” ArXiv, vol. abs/1511.05644, 2015. [35] M. A. Kramer, “Nonlinear principal component analysis using autoassociative neural networks,” Aiche Journal, vol. 37, pp. 233–243, 1991. [36] A. B. L. Larsen, S. K. Sonderby, H. Larochelle, and O. Winther, “Autoencoding beyond pixels using a learned similarity metric,” ArXiv, vol. abs/1512.09300, 2016. [37] A. Plumerault, H. L. Borgne, and C. Hudelot, “Avae: Adversarial variational auto encoder,” 2020 25th International Conference on Pattern Recognition (ICPR), pp. 8687–8694, 2021. [38] A. Pagnoni, K. Liu, and S. Li, “Conditional variational autoencoder for neural machine translation,” ArXiv, vol. abs/1812.04405, 2018. [39] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” ArXiv, vol. abs/1411.1784, 2014. [40] P. Bhargava, A. Drozd, and A. Rogers, “Generalization in nli: Ways (not) to go beyond simple heuristics,” 2021. [41] I. Turc, M. Chang, K. Lee, and K. Toutanova, “Well-read students learn better: The impact of student initialization on knowledge distillation,” CoRR, vol. abs/1908.08962, 2019. [42] B. Klimt and Y. Yang, “The enron corpus: A new dataset for email classification research,” in ECML, 2004. [43] mrm8488, “Mrm8488/fake-news · datasets at hugging face,” Oct 2021. [44] F. O. Catak and A. F. Yazi, “A benchmark api call dataset for windows pe malware classification,” ArXiv, vol. abs/1905.01999, 2019. [45] D. Dua and C. Graff, “UCI machine learning repository,” 2017. [46] F. O. Catak, J. Ahmed, K. Sahinbas, and Z. H. Khand, “Data augmentation based malware detection using convolutional neural networks,” PeerJ Computer Science, vol. 7, p. e346, Jan. 2021. [47] A. F. Yazi, F. O. Catak, and E. G¨ul, “Classification of methamorphic malware with deep learning(lstm),” 2019 27th Signal Processing and Communications Applications Conference (SIU), pp. 1–4, 2019. [48] P. Gage, “A new algorithm for data compression,” The C Users Journal archive, vol. 12, pp. 23– 38, 1994. [49] S. Reese, G. Boleda, M. Cuadros, L. Padr´o, and G. Rigau, “Wikicorpus: A word-sense disambiguated multilingual Wikipedia corpus,” in Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), (Valletta, Malta), European Language Resources Association (ELRA), May 2010. [50] Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler, “Aligning books and movies: Towards story-like visual explanations by watching movies and reading books,” in The IEEE International Conference on Computer Vision (ICCV), December 2015. [51] Q. Lhoest, A. Villanova del Moral, Y. Jernite, A. Thakur, P. von Platen, S. Patil, J. Chaumond, M. Drame, J. Plu, L. Tunstall, J. Davison, M. ˇSaˇsko, G. Chhablani, B. Malik, S. Brandeis, T. Le Scao, V. Sanh, C. Xu, N. Patry, A. McMillan-Major, P. Schmid, S. Gugger, C. Delangue, T. Matussi"ere, L. Debut, S. Bekman, P. Cistac, T. Goehringer, V. Mustar, F. Lagunas, A. Rush, and T. Wolf, “Datasets: A community library for natural language processing,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, (Online and Punta Cana, Dominican Republic), pp. 175–184, Association for Computational Linguistics, Nov. 2021. [52] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush, “Transformers: State-of-the-art natural language processing,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, (Online), pp. 38–45, Association for Computational Linguistics, Oct. 2020. |