Standaert, Baptiste [UCL]
De Vleeschouwer, Christophe [UCL]
Jacques, Laurent [UCL]
The recent growth in the performance of deep neural networks on image classification tasks comes with an increase in inference complexity. For embedded applications, compression techniques become necessary to reduce the inference cost of these networks. One such compression technique is activation quantization. However, this compression entails an accuracy loss, which can be explained by the quantization making training suboptimal. We introduce a dither during the quantization of activations, used to control the distribution of the quantization error. We show that this dither improves the generalization of quantized-activation networks by correcting the training. We also introduce a dithered set-based approach, which exploits the stochastic inference of this method to further improve accuracy.
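To make the idea concrete, the following is a minimal NumPy sketch of dithered uniform quantization of activations, together with a set-based inference that averages over several independent dither draws. The function names, the uniform dither distribution, the quantization step, and the ensemble-averaging form are illustrative assumptions, not the thesis' exact scheme.

```python
import numpy as np

def dithered_quantize(x, step, rng, subtractive=True):
    """Uniformly quantize activations x with step size `step`,
    adding a dither d ~ U(-step/2, step/2) before rounding.

    With subtractive dithering, the quantization error is uniformly
    distributed and independent of the signal (Schuchman's condition).
    Sketch only; the dither scheme in the thesis may differ.
    """
    d = rng.uniform(-step / 2, step / 2, size=x.shape)
    q = step * np.round((x + d) / step)
    return q - d if subtractive else q

def dithered_set_inference(x, step, rng, n_draws=8):
    """Hypothetical set-based inference: average the quantized
    activations over several independent dither realizations,
    exploiting the stochasticity of the dithered quantizer."""
    draws = [dithered_quantize(x, step, rng) for _ in range(n_draws)]
    return np.mean(draws, axis=0)

# Usage: quantize a batch of activations with step 0.25.
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 16)).astype(np.float32)
q_single = dithered_quantize(acts, 0.25, rng)
q_set = dithered_set_inference(acts, 0.25, rng)
```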


Bibliographic reference
Standaert, Baptiste. Improving generalization of quantized activation neural networks using dither. Ecole polytechnique de Louvain, Université catholique de Louvain, 2021. Prom.: De Vleeschouwer, Christophe; Jacques, Laurent.
Permalink
http://hdl.handle.net/2078.1/thesis:33122