We’ve recently been working on the quality assurance of deep learning-powered software systems, such as automated image and speech recognition applications. These systems are well known to be vulnerable to many kinds of adversarial attacks. We’ve made some progress on the systematic testing of both feedforward and recurrent neural networks (FNNs and RNNs).
With the wonderful NTU cyber security team, we’ve developed two testing tools, DeepHunter [1] and DeepStellar [2], [3]. We also had two papers accepted at the ASE’19 Demonstration track. You can find the video demonstrations of the tools below.
DeepHunter Demo Video
DeepStellar Demo Video
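To give a flavor of the coverage-guided fuzzing idea behind DeepHunter, here is a minimal, illustrative sketch. The toy two-layer network, the neuron-coverage metric, and the mutation strategy below are all simplified stand-ins chosen for brevity, not the tool's actual implementation: a mutated input is kept in the corpus only if it activates a hidden neuron that no earlier input has activated.

```python
# Minimal sketch of coverage-guided fuzzing for a feedforward network.
# The network, coverage metric, and mutation operator are illustrative
# simplifications, not DeepHunter's actual design.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer feedforward network with fixed random weights.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x):
    """Return hidden-layer activations and the network output."""
    h = np.maximum(0.0, x @ W1)  # ReLU hidden layer
    return h, h @ W2

def fuzz(seeds, iterations=200, threshold=0.0):
    """Coverage-guided loop: a mutant survives only if it covers
    (activates above `threshold`) a previously uncovered neuron."""
    covered = np.zeros(W1.shape[1], dtype=bool)
    corpus = list(seeds)
    for x in corpus:
        h, _ = forward(x)
        covered |= h > threshold
    for _ in range(iterations):
        parent = corpus[rng.integers(len(corpus))]
        mutant = parent + rng.normal(scale=0.5, size=parent.shape)
        h, _ = forward(mutant)
        if ((h > threshold) & ~covered).any():  # new coverage: keep it
            covered |= h > threshold
            corpus.append(mutant)
    return covered.mean(), len(corpus)

coverage, corpus_size = fuzz([np.zeros(4)])
```

In DeepHunter the loop is driven by richer coverage criteria and metamorphic mutation strategies for images, but the skeleton is the same: mutate seeds, measure coverage, retain inputs that reach new behavior.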
References
- Xie, X., Chen, H., Li, Y., Ma, L., Liu, Y., & Zhao, J. (2019). Coverage-Guided Fuzzing for Feedforward Neural Networks. Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), 1162–1165.
- Du, X., Xie, X., Li, Y., Ma, L., Liu, Y., & Zhao, J. (2019). A Quantitative Analysis Framework for Recurrent Neural Network. Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), 1062–1065.
- Du, X., Xie, X., Li, Y., Ma, L., Liu, Y., & Zhao, J. (2019). DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems. Proceedings of the 27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (FSE), 477–487.