TY - GEN
T1 - Matrix norms in data streams
T2 - 35th International Conference on Machine Learning, ICML 2018
AU - Braverman, Vladimir
AU - Chestnut, Stephen
AU - Krauthgamer, Robert
AU - Li, Yi
AU - Woodruff, David
AU - Yang, Lin
N1 - Publisher Copyright:
© 2018 by the Authors. All rights reserved.
PY - 2018/1/1
Y1 - 2018/1/1
N2 - Given the prevalence of large-scale linear algebra problems in machine learning, there has recently been considerable effort in characterizing which functions of a matrix can be approximated efficiently in the data stream model. We study a number of aspects of estimating matrix norms - an important class of matrix functions - in a stream that have not previously been considered: (1) multi-pass algorithms, (2) algorithms that see the underlying matrix one row at a time, and (3) time-efficient algorithms. Our multi-pass and row-order algorithms use less memory than what is provably required in the single-pass and entrywise-update models, and thus give separations between these models (in terms of memory). Moreover, all of our algorithms are considerably faster than previous ones. We also prove a number of lower bounds and obtain, for instance, a near-complete characterization of the memory required of row-order algorithms for estimating Schatten p-norms of sparse matrices. We complement our results with numerical experiments.
AB - Given the prevalence of large-scale linear algebra problems in machine learning, there has recently been considerable effort in characterizing which functions of a matrix can be approximated efficiently in the data stream model. We study a number of aspects of estimating matrix norms - an important class of matrix functions - in a stream that have not previously been considered: (1) multi-pass algorithms, (2) algorithms that see the underlying matrix one row at a time, and (3) time-efficient algorithms. Our multi-pass and row-order algorithms use less memory than what is provably required in the single-pass and entrywise-update models, and thus give separations between these models (in terms of memory). Moreover, all of our algorithms are considerably faster than previous ones. We also prove a number of lower bounds and obtain, for instance, a near-complete characterization of the memory required of row-order algorithms for estimating Schatten p-norms of sparse matrices. We complement our results with numerical experiments.
UR - https://www.scopus.com/pages/publications/85057287168
M3 - Conference contribution
AN - SCOPUS:85057287168
T3 - 35th International Conference on Machine Learning, ICML 2018
SP - 1036
EP - 1045
BT - 35th International Conference on Machine Learning, ICML 2018
A2 - Dy, Jennifer
A2 - Krause, Andreas
PB - International Machine Learning Society (IMLS)
Y2 - 10 July 2018 through 15 July 2018
ER -