## Abstract

In this brief paper, we study the size and width of autoencoders consisting of Boolean threshold functions, where an autoencoder is a layered neural network whose structure can be viewed as consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector exactly (or approximately). We focus on the decoder part and show that $\Omega(\sqrt{Dn/d})$ nodes are required and $O(\sqrt{Dn})$ nodes are sufficient to transform $n$ vectors in $d$-dimensional binary space to $D$-dimensional binary space. We also show that the width can be reduced if small errors are allowed, where the error is defined as the average Hamming distance between each vector input to the encoder and the corresponding vector output by the decoder.
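The two building blocks of the model described above, Boolean threshold units and the average-Hamming-distance error measure, can be sketched in a few lines. This is a minimal illustration, not the paper's construction; the function names and the example weights are ours.

```python
import numpy as np

def threshold_unit(x, w, t):
    # Boolean threshold function: outputs 1 iff the weighted sum of the
    # binary input x (with integer weights w) reaches the threshold t.
    return int(np.dot(w, x) >= t)

def layer(x, W, T):
    # One layer of Boolean threshold units applied to a binary input vector;
    # decoders in this setting are compositions of such layers.
    return np.array([threshold_unit(x, w, t) for w, t in zip(W, T)])

def avg_hamming_error(X, X_hat):
    # Error measure from the abstract: the average Hamming distance between
    # each encoder input and the corresponding decoder output.
    X, X_hat = np.asarray(X), np.asarray(X_hat)
    return float(np.mean(np.sum(X != X_hat, axis=1)))
```

For instance, a decoder that reconstructs one bit wrongly in one of two 3-dimensional vectors has an average Hamming error of 0.5.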

| | |
---|---
Original language | English
Pages (from-to) | 1-8
Number of pages | 8
Journal | IEEE Transactions on Neural Networks and Learning Systems
DOIs | |
State | Accepted/In press - 1 Jan 2023

## Keywords

- Autoencoders
- Boolean functions
- Decoding
- Hamming distances
- Image coding
- Learning systems
- Neural networks
- Transforms
- Upper bound
- threshold functions

## ASJC Scopus subject areas

- Software
- Artificial Intelligence
- Computer Networks and Communications
- Computer Science Applications