National Chengchi University Institutional Repository (NCCUR): Item 140.119/142128


Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/142128


Title: 基於孿生網絡之正則化對比式遷移學習於醫療影像
Contrastive Transfer Learning for Regularization with Triplet Network on Medical Imaging
Authors: Yu, Chin-Feng (游勤葑)
Contributors: Chiu, Shu-I (邱淑怡); Yu, Chin-Feng (游勤葑)
Keywords: 黃斑部病變 (Macular degeneration); 對比式學習 (Contrastive learning); 遷移式學習 (Transfer learning); 正則化 (Regularization)
Date: 2022
Date Uploaded: 2022-10-05 09:15:57 (UTC+8)
Abstract: This thesis proposes a novel transfer learning architecture for color fundus photography medical images, called Contrastive Transfer Learning for Regularization with Triplet Network (CTLRT). CTLRT combines a transfer learning backbone with three contrastive regularization loss terms. Evaluated on three fundus photography datasets and across multiple transfer learning backbones, CTLRT not only achieves higher accuracy than conventional transfer learning, but its contrastive regularization losses also mitigate the overfitting caused by complex models, improving the model's generalization ability. Visualizing the regions the model attends to shows that CTLRT correctly focuses on the diseased areas.
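The abstract describes the objective only at a high level; the exact form of the three contrastive regularization terms is given in the thesis full text, not here. As a rough illustration of the general pattern (a supervised classification loss on a pretrained backbone, regularized by a triplet-style contrastive term), a minimal PyTorch sketch follows. The ResNet-50 backbone, the single triplet term standing in for the three CTLRT terms, and the values of margin and lambda_reg are all assumptions for illustration, not the thesis's actual configuration.

    # Minimal sketch, not the thesis code: cross-entropy on a pretrained
    # backbone plus ONE standard triplet loss as a contrastive regularizer.
    # CTLRT itself uses three contrastive regularization terms whose exact
    # definitions are given only in the full text.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class TripletRegularizedClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            # ImageNet-pretrained backbone, as in conventional transfer
            # learning (ResNet-50 is an assumption; the thesis evaluates
            # multiple backbones).
            self.backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
            feat_dim = self.backbone.fc.in_features
            self.backbone.fc = nn.Identity()        # expose the embedding
            self.classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, x):
            z = self.backbone(x)                    # shared embedding
            return self.classifier(z), z

    ce_loss = nn.CrossEntropyLoss()
    triplet_loss = nn.TripletMarginLoss(margin=1.0)  # margin: illustrative guess
    lambda_reg = 0.1                                 # regularizer weight: guess

    def training_step(model, anchor, positive, negative, labels):
        # Classify the anchor image; pull its embedding toward a same-class
        # (positive) image and away from a different-class (negative) one.
        logits, z_a = model(anchor)
        _, z_p = model(positive)
        _, z_n = model(negative)
        return ce_loss(logits, labels) + lambda_reg * triplet_loss(z_a, z_p, z_n)

The point of such a regularizer, per the abstract's claim, is that the embedding must satisfy a geometric constraint (same-class images close, different-class images far) in addition to fitting the labels, which restricts how freely a large pretrained model can overfit comparatively small fundus datasets.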
Description: Master's thesis
National Chengchi University
Department of Computer Science
110753205
Source: http://thesis.lib.nccu.edu.tw/record/#G0110753205
Type: thesis
DOI: 10.6814/NCCU202201567
Appears in Collections: [Department of Computer Science] Theses

Files in This Item:

File          Description   Size       Format      Views
320501.pdf                  54061 KB   Adobe PDF   2151


All items in NCCUR are protected by copyright, with all rights reserved.



Copyright Announcement
1. The digital content of this website is part of the National Chengchi University Institutional Repository, provided free of charge for non-commercial uses such as academic research and public education. Please use it in a proper and reasonable manner that respects the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

2. This website has made every effort to avoid infringing the rights of copyright owners. If you believe that any material in the repository infringes copyright, please notify the site staff (nccur@nccu.edu.tw), who will promptly remove the work and investigate your claim.