Quantized convolutional neural networks through the lens of partial differential equations

Moshe Eliasof, Eran Treister & Ido Ben-Yair

Quantization of convolutional neural networks (CNNs) is a common approach to ease the computational burden involved in deploying CNNs, especially on low-resource edge devices. However, fixed-point arithmetic is not natural to the type of computations involved in neural networks. In this work, we explore ways to improve quantized CNNs using a PDE-based perspective and analysis. First, we harness the total variation (TV) approach to apply edge-aware smoothing to the feature maps throughout the network. This aims to reduce outliers in the distribution of values and promote piecewise-constant maps, which are more suitable for quantization. Second, we consider symmetric and stable variants of common CNNs for image classification, and of graph convolutional networks for graph node classification. We demonstrate through several experiments that the property of forward stability preserves the action of a network under different quantization rates. As a result, stable quantized networks behave similarly to their non-quantized counterparts even though they rely on fewer parameters. We also find that, at times, stability even aids in improving accuracy. These properties are of particular interest for sensitive, resource-constrained, low-power, or real-time applications such as autonomous driving.
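To make the two ingredients of the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation: `tv_smooth` takes explicit descent steps on a Charbonnier-smoothed TV energy, `quantize_uniform` applies uniform symmetric fake-quantization, and `SymmetricLayer` is a forward-stable residual block of the known form x - h·Kᵀσ(Kx) (in the spirit of Ruthotto & Haber's PDE-motivated architectures). All names, hyperparameters, and the particular discretization are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def tv_smooth(x, step=0.1, eps=1e-3, iters=1):
    """Explicit descent steps on a Charbonnier-smoothed TV energy.

    Pushes feature maps x of shape (N, C, H, W) toward piecewise-constant
    values while preserving edges, narrowing the value distribution
    before quantization. An illustrative discretization, not the paper's.
    """
    for _ in range(iters):
        # Forward differences, zero-padded at the far boundary (Neumann-like).
        dx = F.pad(x[:, :, :, 1:] - x[:, :, :, :-1], (0, 1))
        dy = F.pad(x[:, :, 1:, :] - x[:, :, :-1, :], (0, 0, 0, 1))
        mag = torch.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
        nx, ny = dx / mag, dy / mag
        # Divergence of the normalized gradient field (backward differences).
        divx = nx - F.pad(nx, (1, 0))[:, :, :, :-1]
        divy = ny - F.pad(ny, (0, 0, 1, 0))[:, :, :-1, :]
        x = x + step * (divx + divy)
    return x

def quantize_uniform(x, bits=4):
    """Uniform symmetric 'fake' quantization of a tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

class SymmetricLayer(torch.nn.Module):
    """Residual step x - h * K^T sigma(K x); its Jacobian is negative
    semidefinite for a monotone sigma, which gives forward stability."""

    def __init__(self, channels, h=0.1):
        super().__init__()
        self.K = torch.nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.h = h

    def forward(self, x):
        z = torch.relu(self.K(x))
        # conv_transpose2d with the same weights applies the adjoint K^T.
        z = F.conv_transpose2d(z, self.K.weight, padding=1)
        return x - self.h * z

# Illustrative usage on random feature maps.
x = torch.randn(2, 8, 32, 32)
layer = SymmetricLayer(8)
y = quantize_uniform(tv_smooth(layer(x), iters=3), bits=4)
```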
Journal: Research in Mathematical Sciences
State: Published - 1 Dec 2022
Keywords
- Convolutional neural networks
- Neural ODEs
- Stable symmetric architectures
- Total variation
ASJC Scopus subject areas
- Theoretical Computer Science
- Mathematics (miscellaneous)
- Computational Mathematics
- Applied Mathematics