Mr. Yuzhe Ma, Prof. Bei Yu and Their Collaborators Received ICTAI 2019 Best Student Paper Award

Congratulations to Mr. Yuzhe Ma, Prof. Bei Yu and their collaborators for receiving the ICTAI 2019 Best Student Paper Award for their paper titled “A Unified Approximation Framework for Compressing and Accelerating Deep Neural Networks”.

The IEEE International Conference on Tools with Artificial Intelligence (ICTAI) is a leading IEEE Computer Society conference on artificial intelligence, providing a major international forum that fosters the creation and exchange of AI-related ideas among academia, industry, and government agencies. The conference facilitates the cross-fertilization of AI ideas and promotes their transfer into practical tools for developing intelligent systems and pursuing artificial intelligence applications.


Abstract:

Deep neural networks (DNNs) have achieved significant success in a variety of real-world applications, e.g., image classification. However, the enormous number of parameters in these networks limits their efficiency due to the large model size and intensive computation. To address this issue, various approximation techniques have been investigated that seek a lightweight network with little performance degradation in exchange for a smaller model size or faster inference. Both low-rankness and sparsity are appealing properties for network approximation. In this paper, we propose a unified framework to compress convolutional neural networks (CNNs) by combining these two properties while taking the nonlinear activation into consideration. Each layer in the network is approximated by the sum of a structured sparse component and a low-rank component, which is formulated as an optimization problem. Then, an extended version of the alternating direction method of multipliers (ADMM) with guaranteed convergence is presented to solve the relaxed optimization problem. Experiments are carried out on VGG-16, AlexNet, and GoogLeNet with large image classification datasets. The results outperform previous work in terms of accuracy degradation, compression rate, and speedup ratio. The proposed method is able to remarkably compress the model (with up to 4.9× reduction in parameters) at the cost of little or no loss in accuracy.
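
To illustrate the core idea of approximating each layer as the sum of a sparse component and a low-rank component, the following is a minimal NumPy sketch. It is not the authors' implementation: the paper formulates this as a relaxed optimization problem solved by an extended ADMM with convergence guarantees, whereas this toy version simply alternates a truncated SVD with magnitude-based thresholding; the function name and parameters (rank, sparsity, n_iters) are illustrative assumptions.

import numpy as np

def sparse_plus_low_rank(W, rank, sparsity, n_iters=50):
    # Toy alternating decomposition W ~ L + S, where L is rank-limited
    # and S keeps only the largest-magnitude residual entries.
    # Illustrative stand-in for the paper's ADMM formulation.
    L = np.zeros_like(W)
    S = np.zeros_like(W)
    for _ in range(n_iters):
        # Low-rank update: best rank-r approximation of the residual W - S.
        U, sigma, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * sigma[:rank]) @ Vt[:rank, :]
        # Sparse update: keep the `sparsity` fraction of largest entries of W - L.
        R = W - L
        k = int(sparsity * R.size)
        threshold = np.partition(np.abs(R).ravel(), -k)[-k] if k > 0 else np.inf
        S = np.where(np.abs(R) >= threshold, R, 0.0)
    return L, S

# Example: decompose a random "layer" weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
L, S = sparse_plus_low_rank(W, rank=32, sparsity=0.05)
print("relative reconstruction error:", np.linalg.norm(W - L - S) / np.linalg.norm(W))

Storing the factorized low-rank part plus the few nonzero sparse entries is what yields the model-size reduction and faster inference reported in the paper; the actual framework additionally accounts for the nonlinear activation when forming the per-layer approximation.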