Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks

Moshe Eliasof, Benjamin J. Bodner, Eran Treister

Research output: Contribution to journal › Article › peer-review

3 Scopus citations


Graph convolutional networks (GCNs) are widely used in a variety of applications and can be seen as an unstructured version of standard convolutional neural networks (CNNs). As in CNNs, the computational cost of GCNs for large input graphs (such as large point clouds or meshes) can be high and can inhibit the use of these networks, especially in environments with low computational resources. To ease these costs, quantization can be applied to GCNs. However, aggressive quantization of the feature maps can lead to significant performance degradation. At the same time, the Haar wavelet transform is known to be one of the most effective and efficient approaches to signal compression. Therefore, instead of applying aggressive quantization to the feature maps, we propose to use Haar wavelet compression together with light quantization to reduce the computations involved in the network. We demonstrate that this approach surpasses aggressive feature quantization by a significant margin on a variety of problems, ranging from node classification to point cloud classification and both part and semantic segmentation.
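The idea in the abstract can be illustrated with a small sketch: instead of aggressively quantizing node features, first shrink them with a Haar transform (keeping only the coarse averaging coefficients) and then apply a light, e.g. 8-bit, uniform quantization. The paper uses graph Haar wavelets defined on the graph structure; the pairwise averaging below, the function names, and the node ordering are simplifying assumptions for illustration only.

```python
import numpy as np

def haar_compress(x, levels=1):
    # One level of a 1-D Haar analysis along the node axis: keep the
    # coarse (average) coefficients and discard the detail (difference)
    # coefficients -- a lossy 2x compression per level. The real method
    # orders/pairs nodes using the graph structure; here we just pair
    # consecutive rows as a stand-in.
    for _ in range(levels):
        if x.shape[0] % 2:              # pad to an even number of rows
            x = np.vstack([x, x[-1:]])
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # coarse Haar coefficients
    return x

def quantize(x, bits=8):
    # Simulated uniform symmetric quantization to `bits` bits:
    # scale to the integer grid, round, clip, and scale back.
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    q = np.round(x / scale).clip(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale                    # dequantized (float) values

# Toy node-feature matrix: 8 nodes, 4 channels.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4))

compressed = haar_compress(feat, levels=1)  # 8 nodes -> 4 coarse rows
recovered = quantize(compressed, bits=8)    # lightly quantized coarse features
```

The point of the combination is that the Haar step removes most of the data volume cheaply, so the remaining coefficients can be kept at a mild bit width rather than pushed to an aggressive (and lossy) low-bit quantization.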

Original language: English
Pages (from-to): 4542-4553
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 4
State: Published - 28 Jun 2023


Keywords

  • Graph convolutional networks (GCNs)
  • graph wavelet transform
  • network compression
  • quantized neural networks

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications


