References

Alemdar, H., Leroy, V., Prost-Boucle, A., & Pétrot, F. (2017). Ternary neural networks for resource-efficient AI applications. 2017 International Joint Conference on Neural Networks (IJCNN), 2547–2554.
Andri, R., Cavigelli, L., Rossi, D., & Benini, L. (2017). YodaNN: An architecture for ultra-low power binary-weight CNN acceleration. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 37(1), 48–60.
Audemard, G., Koriche, F., & Marquis, P. (2020). On tractable XAI queries based on compiled representations. Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, 17(1), 838–849.
Bhosekar, A., & Ierapetritou, M. (2018). Advances in surrogate based modeling, feasibility analysis, and optimization: A review. Computers & Chemical Engineering, 108, 250–267. https://doi.org/10.1016/j.compchemeng.2017.09.017
Bulat, A., & Tzimiropoulos, G. (2019). XNOR-Net++: Improved binary neural networks. arXiv preprint arXiv:1909.13863.
Chen, Y.-F., Tsai, W.-L., Wu, W.-C., Yen, D.-D., & Yu, F. (2021). PyCT: A Python concolic tester. Programming Languages and Systems: 19th Asian Symposium, APLAS 2021, Chicago, IL, USA, October 17–18, 2021, Proceedings 19, 38–46.
Conti, F., Schiavone, P. D., & Benini, L. (2018). XNOR neural engine: A hardware accelerator IP for 21.6-fJ/op binary neural network inference. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 37(11), 2940–2951.
Courbariaux, M., Hubara, I., Soudry, D., El-Yaniv, R., & Bengio, Y. (2016). Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or −1. arXiv preprint arXiv:1602.02830.
Darvish Rouhani, B., Lo, D., Zhao, R., Liu, M., Fowers, J., Ovtcharov, K., Vinogradsky, A., Massengill, S., Yang, L., Bittner, R., et al. (2020). Pushing the limits of narrow precision inferencing at cloud scale with Microsoft floating point. Advances in Neural Information Processing Systems, 33, 10271–10281.
Fortz, S., Mesnard, F., Payet, E., Perrouin, G., Vanhoof, W., & Vidal, G. (2020). An SMT-based concolic testing tool for logic programs. International Symposium on Functional and Logic Programming, 215–219.
Godefroid, P., Levin, M. Y., & Molnar, D. (2012). SAGE: Whitebox fuzzing for security testing. Communications of the ACM, 55(3), 40–44.
Harel-Canada, F., Wang, L., Gulzar, M. A., Gu, Q., & Kim, M. (2020). Is neuron coverage a meaningful measure for testing deep neural networks? Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 851–862. https://doi.org/10.1145/3368089.3409754
Huang, W., Liu, Y., Qin, H., Li, Y., Zhang, S., Liu, X., Magno, M., & Qi, X. (2024). BiLLM: Pushing the limit of post-training quantization for LLMs. arXiv preprint arXiv:2402.04291.
Huang, W., Sun, Y., Huang, X., & Sharp, J. (2019). testRNN: Coverage-guided testing on recurrent neural networks. CoRR, abs/1906.08557. http://arxiv.org/abs/1906.08557
Huang, X., Kroening, D., Ruan, W., Sharp, J., Sun, Y., Thamo, E., Wu, M., & Yi, X. (2020). A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37, 100270.
Hubara, I., Nahshan, Y., Hanani, Y., Banner, R., & Soudry, D. (2021). Accurate post training quantization with small calibration sets. International Conference on Machine Learning, 4466–4475.
Hwang, K., & Sung, W. (2014). Fixed-point feedforward deep neural network design using weights +1, 0, and −1. 2014 IEEE Workshop on Signal Processing Systems (SiPS), 1–6.
Ignatiev, A., Narodytska, N., & Marques-Silva, J. (2019). Abduction-based explanations for machine learning models. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 1511–1519.
Irlbeck, M., et al. (2015). Deconstructing dynamic symbolic execution. Dependable Software Systems Engineering, 40(2015), 26.
Katz, G., Barrett, C., Dill, D. L., Julian, K., & Kochenderfer, M. J. (2017). Reluplex: An efficient SMT solver for verifying deep neural networks. Computer Aided Verification: 29th International Conference, CAV 2017, Heidelberg, Germany, July 24–28, 2017, Proceedings, Part I 30, 97–117.
Kim, Y., Hong, S., & Kim, M. (2019). Target-driven compositional concolic testing with function summary refinement for effective bug detection. Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 16–26.
Li, F., Liu, B., Wang, X., Zhang, B., & Yan, J. (2016). Ternary weight networks. arXiv preprint arXiv:1605.04711.
Li, R., Wang, Y., Liang, F., Qin, H., Yan, J., & Fan, R. (2019). Fully quantized network for object detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2810–2819.
Li, Z., Ma, X., Xu, C., & Cao, C. (2019). Structural coverage criteria for neural networks could be misleading. 2019 IEEE/ACM 41st International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER), 89–92.
Lu, X., Zhou, A., Lin, Z., Liu, Q., Xu, Y., Zhang, R., Wen, Y., Ren, S., Gao, P., Yan, J., et al. (2024). TerDiT: Ternary diffusion models with transformers. arXiv preprint arXiv:2405.14854.
Luckow, K., Dimjašević, M., Giannakopoulou, D., Howar, F., Isberner, M., Kahsai, T., Rakamarić, Z., & Raman, V. (2016). JDart: A dynamic symbolic analysis framework. Tools and Algorithms for the Construction and Analysis of Systems: 22nd International Conference, TACAS 2016, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016, Eindhoven, The Netherlands, April 2–8, 2016, Proceedings 22, 442–459.
Ma, H., Qiu, H., Gao, Y., Zhang, Z., Abuadbba, A., Xue, M., Fu, A., Zhang, J., Al-Sarawi, S. F., & Abbott, D. (2024). Quantization backdoors to deep learning commercial frameworks. IEEE Transactions on Dependable and Secure Computing.
Ma, S., Wang, H., Ma, L., Wang, L., Wang, W., Huang, S., Dong, L., Wang, R., Xue, J., & Wei, F. (2024). The era of 1-bit LLMs: All large language models are in 1.58 bits. arXiv preprint arXiv:2402.17764.
Martinez, B., Yang, J., Bulat, A., & Tzimiropoulos, G. (2020). Training binary neural networks with real-to-binary convolutions. arXiv preprint arXiv:2003.11535.
Mellempudi, N., Kundu, A., Mudigere, D., Das, D., Kaul, B., & Dubey, P. (2017). Ternary neural networks with fine-grained quantization. arXiv preprint arXiv:1705.01462.
Meng, X., Kundu, S., Kanuparthi, A. K., & Basu, K. (2021). RTL-ConTest: Concolic testing on RTL for detecting security vulnerabilities. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 41(3), 466–477.
Nagel, M., Amjad, R. A., Van Baalen, M., Louizos, C., & Blankevoort, T. (2020). Up or down? Adaptive rounding for post-training quantization. International Conference on Machine Learning, 7197–7206.
Narodytska, N., Kasiviswanathan, S., Ryzhyk, L., Sagiv, M., & Walsh, T. (2018). Verifying properties of binarized deep neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
Park, E., Ahn, J., & Yoo, S. (2017). Weighted-entropy-based quantization for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5456–5464.
PyCT-QS: Python Concolic Testing and Simplification Tool. (2024). https://github.com/PyCTsimplify/PyCT_Quantization_Simplification.git
Qin, H., Gong, R., Liu, X., Bai, X., Song, J., & Sebe, N. (2020). Binary neural networks: A survey. Pattern Recognition, 105, 107281.
Qin, H., Gong, R., Liu, X., Shen, M., Wei, Z., Yu, F., & Song, J. (2020). Forward and backward information retention for accurate binary neural networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2250–2259.
Qiu, H., Ma, H., Zhang, Z., Gao, Y., Zheng, Y., Fu, A., Zhou, P., Abbott, D., & Al-Sarawi, S. F. (2022). RBNN: Memory-efficient reconfigurable deep binary neural network with IP protection for Internet of Things. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 42(4), 1185–1198.
Rastegari, M., Ordonez, V., Redmon, J., & Farhadi, A. (2016). XNOR-Net: ImageNet classification using binary convolutional neural networks. Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV, 525–542.
Riccio, V., Jahangirova, G., Stocco, A., Humbatova, N., Weiss, M., & Tonella, P. (2020). Testing machine learning based systems: A systematic mapping. Empirical Software Engineering, 25, 5193–5254.
Sen, K., Kalasapur, S., Brutch, T., & Gibbs, S. (2013). Jalangi: A tool framework for concolic testing, selective record-replay, and dynamic analysis of JavaScript. Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, 615–618.
Sen, K., Marinov, D., & Agha, G. (2005). CUTE: A concolic unit testing engine for C. ACM SIGSOFT Software Engineering Notes, 30(5), 263–272.
Sen, K., & Zheng, D. (2024). Dynamic inference of likely symbolic tensor shapes in Python machine learning programs.
Sun, Y., Wu, M., Ruan, W., Huang, X., Kwiatkowska, M., & Kroening, D. (2018). Concolic testing for deep neural networks. Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, 109–119.
Tang, W., Hua, G., & Wang, L. (2017). How to train a compact binary neural network with high accuracy? Proceedings of the AAAI Conference on Artificial Intelligence, 31(1).
Tihanyi, N., Bisztray, T., Jain, R., Ferrag, M. A., Cordeiro, L. C., & Mavroeidis, V. (2023). The FormAI dataset: Generative AI in software security through the lens of formal verification. Proceedings of the 19th International Conference on Predictive Models and Data Analytics in Software Engineering, 33–43.
Wang, E., Davis, J. J., Zhao, R., Ng, H.-C., Niu, X., Luk, W., Cheung, P. Y., & Constantinides, G. A. (2019). Deep neural network approximation for custom hardware: Where we’ve been, where we’re going. ACM Computing Surveys (CSUR), 52(2), 1–39.
Wang, P., Hu, Q., Zhang, Y., Zhang, C., Liu, Y., & Cheng, J. (2018). Two-step quantization for low-bit neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4376–4384.
Xu, S., Li, Y., Ma, T., Zeng, B., Zhang, B., Gao, P., & Lv, J. (2022). TerViT: An efficient ternary vision transformer. arXiv preprint arXiv:2201.08050.
Xue, M., Yuan, C., Wu, H., Zhang, Y., & Liu, W. (2020). Machine learning security: Threats, countermeasures, and evaluations. IEEE Access, 8, 74720–74742. https://doi.org/10.1109/ACCESS.2020.2987435
Yao, W., Chen, X., Huang, Y., & van Tooren, M. (2014). A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy. Optimization Methods and Software, 29(2), 406–429.
Zhang, J. M., Harman, M., Ma, L., & Liu, Y. (2022). Machine learning testing: Survey, landscapes and horizons. IEEE Transactions on Software Engineering, 48(1), 1–36. https://doi.org/10.1109/TSE.2019.2962027
Zhou, Z., Dou, W., Liu, J., Zhang, C., Wei, J., & Ye, D. (2021). DeepCon: Contribution coverage testing for deep learning systems. 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 189–200.
Zhu, C., Han, S., Mao, H., & Dally, W. J. (2016). Trained ternary quantization. arXiv preprint arXiv:1612.01064.