Gradual Learning of Recurrent Neural Networks

Ziv Aharoni, Gal Rattner, Haim Permuter

Research output: Working paper/Preprint



Recurrent Neural Networks (RNNs) achieve state-of-the-art results in many sequence-to-sequence modeling tasks. However, RNNs are difficult to train and tend to suffer from overfitting. Motivated by the Data Processing Inequality (DPI), we formulate the multi-layered network as a Markov chain and introduce a training method that trains the network gradually, layer by layer, combined with layer-wise gradient clipping. We found that applying our methods, together with previously introduced regularization and optimization techniques, improved state-of-the-art architectures on language modeling tasks.
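To illustrate the layer-wise gradient clipping mentioned in the abstract, here is a minimal sketch that clips each layer's gradient vector to its own norm threshold. The function name, data layout, and thresholds are illustrative assumptions, not taken from the paper:

```python
import math

def clip_layerwise(grads_by_layer, max_norms):
    """Rescale each layer's gradients so their L2 norm does not
    exceed that layer's own threshold (hypothetical helper).

    grads_by_layer: one list of gradient values per layer
    max_norms: per-layer clipping thresholds
    """
    clipped = []
    for grads, max_norm in zip(grads_by_layer, max_norms):
        norm = math.sqrt(sum(g * g for g in grads))
        if norm > max_norm:
            scale = max_norm / norm
            grads = [g * scale for g in grads]
        clipped.append(grads)
    return clipped

# Layer 1 has norm 5.0 and is rescaled to norm 1.0;
# layer 2 has norm 0.5 and is left unchanged.
result = clip_layerwise([[3.0, 4.0], [0.3, 0.4]], [1.0, 1.0])
```

Clipping per layer (rather than globally) lets each layer's update magnitude be bounded independently, which is the distinction the abstract draws.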
Original language: English (GB)
State: Published - 29 Aug 2017


  • stat.ML
  • cs.IT
  • cs.LG
  • math.IT


