Improve Robustness of Deep Neural Networks by Coding

Kunping Huang, Netanel Raviv, Siddharth Jain, Pulakesh Upadhyaya, Jehoshua Bruck, Paul H. Siegel, Anxiao Andrew Jiang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep neural networks (DNNs) typically have many weights, which are usually stored in non-volatile memories. When errors appear in these stored weights, the network's performance can degrade significantly. We review two recently presented approaches that improve the robustness of DNNs in complementary ways. In the first approach, error-correcting codes serve as external redundancy that protects the weights from errors, and a deep reinforcement learning algorithm optimizes the resulting redundancy-performance tradeoff. In the second approach, internal redundancy is added to the neurons themselves via coding, enabling them to perform robust inference in noisy environments.
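As a minimal illustration of the external-redundancy idea (a sketch only, not the paper's actual scheme, which selects the code via deep reinforcement learning), the Python snippet below protects groups of 4 stored weight bits with a fixed Hamming(7,4) code: every 4 data bits gain 3 parity bits, and any single bit flip within a 7-bit codeword is corrected on read-back. The matrices, function names, and the choice of code are illustrative assumptions.

```python
import numpy as np

# Generator and parity-check matrices for a systematic Hamming(7,4) code.
# The first 4 codeword positions carry the data bits; the last 3 are parity.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data_bits):
    """Map 4 weight bits to a 7-bit codeword (mod-2 arithmetic)."""
    return data_bits @ G % 2

def decode(received):
    """Correct up to one flipped bit, then return the 4 data bits."""
    syndrome = H @ received % 2
    if syndrome.any():
        # For a single error, the syndrome equals the column of H at
        # the error position, so matching it locates the flipped bit.
        err_pos = np.where((H.T == syndrome).all(axis=1))[0][0]
        received = received.copy()
        received[err_pos] ^= 1
    return received[:4]

# Example: a quantized weight's 4-bit representation survives one bit flip
# in non-volatile memory.
weight_bits = np.array([1, 0, 1, 1])
stored = encode(weight_bits)
stored[5] ^= 1                      # simulate a memory error
assert np.array_equal(decode(stored), weight_bits)
```

A stronger or weaker code would trade storage overhead against protection; deciding how much redundancy to spend is exactly the redundancy-performance tradeoff that the paper's deep reinforcement learning algorithm optimizes.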

Original language: English
Title of host publication: 2020 Information Theory and Applications Workshop, ITA 2020
Publisher: Institute of Electrical and Electronics Engineers
ISBN (Electronic): 9781728141909
State: Published - 2 Feb 2020
Externally published: Yes
Event: 2020 Information Theory and Applications Workshop, ITA 2020 - San Diego, United States
Duration: 2 Feb 2020 – 7 Feb 2020

Publication series

Name: 2020 Information Theory and Applications Workshop, ITA 2020

Conference

Conference: 2020 Information Theory and Applications Workshop, ITA 2020
Country/Territory: United States
City: San Diego
Period: 2/02/20 – 7/02/20

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computational Theory and Mathematics
  • Computer Science Applications
  • Information Systems and Management
  • Control and Optimization
