References

Alenezi, R. and Ludwig, S. A. (2021). Explainability of cybersecurity threats data using SHAP. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI), pages 1–10. IEEE.
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):e0130140.
Ball, T. and Daniel, J. (2015). Deconstructing dynamic symbolic execution. In Dependable Software Systems Engineering, pages 26–41. IOS Press.
Brummayer, R. and Biere, A. (2009). Boolector: An efficient SMT solver for bit-vectors and arrays. In Tools and Algorithms for the Construction and Analysis of Systems: 15th International Conference, TACAS 2009, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2009, York, UK, March 22–29, 2009. Proceedings 15, pages 174–177. Springer.
Cadar, C., Ganesh, V., Pawlowski, P. M., Dill, D. L., and Engler, D. R. (2008). EXE: Automatically generating inputs of death. ACM Transactions on Information and System Security (TISSEC), 12(2):1–38.
Cen, L.-P., Ji, J., Lin, J.-W., Ju, S.-T., Lin, H.-J., Li, T.-P., Wang, Y., Yang, J.-F., Liu, Y.-F., Tan, S., et al. (2021). Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nature Communications, 12(1):4828.
Chen, H., Lundberg, S. M., and Lee, S.-I. (2022). Explaining a series of models by propagating Shapley values. Nature Communications, 13(1):4512.
Chen, Y.-F., Tsai, W.-L., Wu, W.-C., Yen, D.-D., and Yu, F. (2021). PyCT: A Python concolic tester. In Programming Languages and Systems: 19th Asian Symposium, APLAS 2021, Chicago, IL, USA, October 17–18, 2021, Proceedings 19, pages 38–46. Springer.
De Moura, L. and Bjørner, N. (2008). Z3: An efficient SMT solver. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 337–340. Springer.
Elboher, Y. Y., Gottschlich, J., and Katz, G. (2020). An abstraction-based framework for neural network verification. In Computer Aided Verification: 32nd International Conference, CAV 2020, Los Angeles, CA, USA, July 21–24, 2020, Proceedings, Part I 32, pages 43–65. Springer.
Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., and Vechev, M. (2018). AI2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE.
Godefroid, P., Klarlund, N., and Sen, K. (2005). DART: Directed automated random testing. In Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 213–223.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. MIT Press.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Gopinath, D., Zhang, M., Wang, K., Kadron, I. B., Pasareanu, C., and Khurshid, S. (2019). Symbolic execution for importance analysis and adversarial generation in neural networks. In 2019 IEEE 30th International Symposium on Software Reliability Engineering (ISSRE), pages 313–322. IEEE.
Guo, C., Sablayrolles, A., Jégou, H., and Kiela, D. (2021). Gradient-based adversarial attacks against text transformers. arXiv preprint arXiv:2104.13733.
He, L., Aouf, N., and Song, B. (2021). Explainable deep reinforcement learning for UAV autonomous path planning. Aerospace Science and Technology, 118:107052.
Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., et al. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97.
Huang, X., Kroening, D., Ruan, W., Sharp, J., Sun, Y., Thamo, E., Wu, M., and Yi, X. (2020). A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37:100270.
Katz, G., Barrett, C., Dill, D. L., Julian, K., and Kochenderfer, M. J. (2017). Reluplex: An efficient SMT solver for verifying deep neural networks. In Computer Aided Verification: 29th International Conference, CAV 2017, Heidelberg, Germany, July 24–28, 2017, Proceedings, Part I 30, pages 97–117. Springer.
Katz, G., Huang, D. A., Ibeling, D., Julian, K., Lazarus, C., Lim, R., Shah, P., Thakoor, S., Wu, H., Zeljić, A., et al. (2019). The Marabou framework for verification and analysis of deep neural networks. In Computer Aided Verification: 31st International Conference, CAV 2019, New York City, NY, USA, July 15–18, 2019, Proceedings, Part I 31, pages 443–452. Springer.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.
Li, Z., Ma, X., Xu, C., and Cao, C. (2019). Structural coverage criteria for neural networks could be misleading. In 2019 IEEE/ACM 41st International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER), pages 89–92. IEEE.
Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
Ma, L., Juefei-Xu, F., Zhang, F., Sun, J., Xue, M., Li, B., Chen, C., Su, T., Li, L., Liu, Y., et al. (2018a). DeepGauge: Multi-granularity testing criteria for deep learning systems. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 120–131.
Ma, L., Zhang, F., Sun, J., Xue, M., Li, B., Juefei-Xu, F., Xie, C., Li, L., Liu, Y., Zhao, J., et al. (2018b). DeepMutation: Mutation testing of deep learning systems. In 2018 IEEE 29th International Symposium on Software Reliability Engineering (ISSRE), pages 100–111. IEEE.
Mirman, M., Gehr, T., and Vechev, M. (2018). Differentiable abstract interpretation for provably robust neural networks. In International Conference on Machine Learning, pages 3578–3586. PMLR.
Narodytska, N., Kasiviswanathan, S., Ryzhyk, L., Sagiv, M., and Walsh, T. (2018). Verifying properties of binarized deep neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Nguyen, A., Yosinski, J., and Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436.
Odena, A., Olsson, C., Andersen, D., and Goodfellow, I. (2019). TensorFuzz: Debugging neural networks with coverage-guided fuzzing. In International Conference on Machine Learning, pages 4901–4911. PMLR.
Pei, K., Cao, Y., Yang, J., and Jana, S. (2017). DeepXplore: Automated whitebox testing of deep learning systems. In Proceedings of the 26th Symposium on Operating Systems Principles, pages 1–18.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
Ruan, W., Huang, X., and Kwiatkowska, M. (2018). Reachability analysis of deep neural networks with provable guarantees. arXiv preprint arXiv:1805.02242.
Sen, K., Marinov, D., and Agha, G. (2005). CUTE: A concolic unit testing engine for C. ACM SIGSOFT Software Engineering Notes, 30(5):263–272.
Shrikumar, A., Greenside, P., and Kundaje, A. (2017). Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145–3153. PMLR.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.
Singh, G., Gehr, T., Mirman, M., Püschel, M., and Vechev, M. (2018). Fast and effective robustness certification. Advances in Neural Information Processing Systems, 31.
Singh, G., Gehr, T., Püschel, M., and Vechev, M. (2019). An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages, 3(POPL):1–30.
Su, J., Vargas, D. V., and Sakurai, K. (2019). One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5):828–841.
Sun, Y., Huang, X., Kroening, D., Sharp, J., Hill, M., and Ashmore, R. (2019). DeepConcolic: Testing and debugging deep neural networks. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), pages 111–114. IEEE.
Sun, Y., Wu, M., Ruan, W., Huang, X., Kwiatkowska, M., and Kroening, D. (2018). Concolic testing for deep neural networks. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 109–119.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, S., Pei, K., Whitehouse, J., Yang, J., and Jana, S. (2018). Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18), pages 1599–1614.
Xie, X., Li, T., Wang, J., Ma, L., Guo, Q., Juefei-Xu, F., and Liu, Y. (2022). NPC: Neuron path coverage via characterizing decision logic of deep neural networks. ACM Transactions on Software Engineering and Methodology (TOSEM), 31(3):1–27.
Yu, F., Chi, Y.-Y., and Chen, Y.-F. (2024). Constraint-based adversarial example synthesis.
Zhang, H., Weng, T.-W., Chen, P.-Y., Hsieh, C.-J., and Daniel, L. (2018). Efficient neural network robustness certification with general activation functions. Advances in Neural Information Processing Systems, 31.