GradFreeBits: Gradient Free Bit Allocation for Dynamic Low Precision Neural Networks.

Benjamin J. Bodner, Gil Ben Shalom, Eran Treister

Research output: Working paper / Preprint


Quantized neural networks (QNNs) are among the main approaches for deploying deep neural networks on low-resource edge devices. Training QNNs with different levels of precision throughout the network (dynamic quantization) typically achieves superior trade-offs between performance and computational load. However, optimizing the precision levels of a QNN is complicated, as the bit allocations are discrete and difficult to differentiate with respect to. Moreover, adequately accounting for the dependencies between the bit allocations of different layers is not straightforward. To meet these challenges, in this work we propose GradFreeBits: a novel joint optimization scheme for training dynamic QNNs, which alternates between gradient-based optimization of the weights and gradient-free optimization of the bit allocation. Our method achieves performance on par with or better than current state-of-the-art low-precision neural networks on CIFAR-10/100 and ImageNet classification. Furthermore, our approach can be extended to a variety of other applications in which neural networks are used in conjunction with parameters that are difficult to optimize for.
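To make the alternating scheme concrete, below is a minimal PyTorch sketch, not the authors' implementation: a toy model with per-layer fake-quantized weights, where gradient-based weight training alternates with a gradient-free search over the discrete per-layer bit widths. The names (`QuantLinear`, `fake_quantize`), the 2-8 bit range, and the toy data are illustrative assumptions, and the simple coordinate-wise local search stands in for whatever gradient-free optimizer the method actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w, bits):
    # Uniform symmetric fake quantization of a weight tensor to `bits` bits.
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

class QuantLinear(nn.Linear):
    # Linear layer whose weights are quantized to a per-layer bit width.
    def __init__(self, in_f, out_f, bits=8):
        super().__init__(in_f, out_f)
        self.bits = bits  # discrete, not differentiable

    def forward(self, x):
        # Straight-through estimator: quantized forward pass,
        # full-precision gradient for the weight update.
        w_q = self.weight + (fake_quantize(self.weight, self.bits) - self.weight).detach()
        return F.linear(x, w_q, self.bias)

def val_loss(model, x_val, y_val):
    with torch.no_grad():
        return F.cross_entropy(model(x_val), y_val).item()

# Toy data and model (hypothetical, for illustration only).
torch.manual_seed(0)
x, y = torch.randn(256, 20), torch.randint(0, 4, (256,))
x_val, y_val = torch.randn(128, 20), torch.randint(0, 4, (128,))
model = nn.Sequential(QuantLinear(20, 64), nn.ReLU(), QuantLinear(64, 4))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
quant_layers = [m for m in model if isinstance(m, QuantLinear)]

for epoch in range(10):
    # Phase 1: gradient-based updates of the weights at the current bit allocation.
    for _ in range(20):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    # Phase 2: gradient-free local search over the discrete bit widths,
    # keeping a perturbation only if it improves the validation loss.
    best = val_loss(model, x_val, y_val)
    for layer in quant_layers:
        for candidate in (layer.bits - 1, layer.bits + 1):
            if not 2 <= candidate <= 8:
                continue
            old, layer.bits = layer.bits, candidate
            loss = val_loss(model, x_val, y_val)
            if loss < best:
                best = loss
            else:
                layer.bits = old
```

In a realistic setting the gradient-free phase would use a stronger optimizer and an objective that also penalizes computational cost, which is what couples the bit allocations of different layers; the sketch only shows the alternation itself.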
Original language: English
State: Published - 18 Feb 2021
