References

[1] M. Ashikhmin, “Synthesizing natural textures,” in ACM Symposium on Interactive 3D Graphics and Games, 2001.
[2] L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style,” ArXiv, vol. abs/1508.06576, 2015.
[3] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
[4] B. Li, C. Xiong, T. Wu, Y. Zhou, L. Zhang, and R. Chu, “Neural abstract style transfer for Chinese traditional painting,” 2018.
[5] A. Xue, “End-to-end Chinese landscape painting creation using generative adversarial networks,” 2020.
[6] S. Luo, S. Liu, J. Han, and T. Guo, “Multimodal fusion for traditional Chinese painting generation,” in Pacific Rim Conference on Multimedia, 2018.
[7] B. He, F. Gao, D. Ma, B. Shi, and L.-Y. Duan, “ChipGAN: A generative adversarial network for Chinese ink wash painting style transfer,” in Proceedings of the 26th ACM International Conference on Multimedia, 2018, pp. 1172–1180.
[8] A. A. Efros and T. K. Leung, “Texture synthesis by non-parametric sampling,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, 1999, pp. 1033–1038.
[9] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. Salesin, “Image analogies,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, 2001.
[10] V. Dumoulin, J. Shlens, and M. Kudlur, “A learned representation for artistic style,” ArXiv, vol. abs/1610.07629, 2016.
[11] X. Huang and S. J. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1510–1519.
[12] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio, “Generative adversarial nets,” in NIPS, 2014.
[13] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, “High-resolution image synthesis and semantic manipulation with conditional GANs,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 8798–8807.
[14] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232.
[15] H. Zhang, I. J. Goodfellow, D. N. Metaxas, and A. Odena, “Self-attention generative adversarial networks,” ArXiv, vol. abs/1805.08318, 2018.
[16] D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” 2014.
[17] C. Godard, O. M. Aodha, and G. J. Brostow, “Unsupervised monocular depth estimation with left-right consistency,” 2017.
[18] C. Chan, F. Durand, and P. Isola, “Learning to generate line drawings that convey geometry and semantics,” 2022.
[19] R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V. Koltun, “Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer,” 2020.
[20] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” 2015.
[21] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever, “Learning transferable visual models from natural language supervision,” 2021.
[22] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[23] Q. Huynh-Thu, “Scope of validity of PSNR in image/video quality assessment,” Electronics Letters, vol. 44, pp. 800–801, June 2008. [Online]. Available: https://digital-library.theiet.org/content/journals/10.1049/el_20080522