The maximum entropy principle is a well-established approach to unsupervised optimization. Entropy-maximization learning algorithms for single-layer neural networks already exist for the case in which the number of output neurons is greater than or equal to the number of input neurons. These models have been successfully employed in various applications, most notably independent component analysis. In this work, we generalize the maximum entropy principle to a single-layer neural network with fewer output neurons than input neurons. The proposed learning algorithm finds a low-dimensional representation of the data and identifies the independent components within it. In general, such a model must incorporate prior knowledge of the input distribution; we overcome this difficulty using a variational approach. We illustrate the performance of the model through several examples and compare it with other algorithms. While our model achieves results comparable to those of the state-of-the-art algorithm for overdetermined independent component analysis within a similar convergence time, its main advantage lies in its ability to be trained efficiently online.