Abstract
Transformers pretrained on diverse tasks exhibit remarkable in-context learning (ICL) capabilities, enabling them to solve unseen tasks solely based on input contexts without adjusting model parameters. In this paper, we study ICL in one of its simplest setups: pretraining a linearly parameterized single-layer linear attention model for linear regression with a Gaussian prior. We establish a statistical task complexity bound for the attention model pretraining, showing that effective pretraining only requires a small number of independent tasks. Furthermore, we prove that the pretrained model closely matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by achieving nearly Bayes optimal risk on unseen tasks under a fixed context length. These theoretical findings complement prior experimental research and shed light on the statistical foundations of ICL.
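For concreteness, the following is a minimal numpy sketch of the data-generating setup described in the abstract: linear regression tasks drawn from a Gaussian prior, with the Bayes optimal in-context predictor given by optimally tuned ridge regression. The dimension `d`, context length `n`, noise level `sigma`, and the helper `ridge_predict` are illustrative choices, not taken from the paper; the paper analyzes the pretrained linear attention model itself, which this sketch does not implement.

```python
import numpy as np

# Illustrative sketch (not the authors' code): linear regression tasks with a
# Gaussian prior on the task vector, evaluated by the Bayes optimal baseline,
# i.e., ridge regression with regularization matched to the noise/prior ratio.
rng = np.random.default_rng(0)
d, n, sigma = 8, 32, 0.5      # placeholder dimension, context length, noise level
n_tasks = 2000                # number of evaluation tasks

def ridge_predict(X, y, x_query, lam):
    """Ridge (posterior-mean) prediction for the query input."""
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return x_query @ w_hat

errs = []
for _ in range(n_tasks):
    w = rng.normal(size=d)                    # task vector drawn from N(0, I_d)
    X = rng.normal(size=(n, d))               # in-context inputs
    y = X @ w + sigma * rng.normal(size=n)    # noisy in-context labels
    x_q = rng.normal(size=d)                  # query input
    # With w ~ N(0, I) and noise variance sigma^2, the Bayes optimal algorithm
    # is ridge regression with lambda = sigma^2.
    pred = ridge_predict(X, y, x_q, lam=sigma**2)
    errs.append((pred - x_q @ w) ** 2)

print("avg. squared error of optimally tuned ridge:", np.mean(errs))
```

The printed average squared error is the fixed-context-length Bayes optimal risk that, per the abstract, the pretrained attention model nearly attains on unseen tasks.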
| Original language | English |
|---|---|
| State | Published - 1 Jan 2024 |
| Externally published | Yes |
| Event | 12th International Conference on Learning Representations, ICLR 2024 - Hybrid, Vienna, Austria |
| Duration | 7 May 2024 → 11 May 2024 |
Conference
| Conference | 12th International Conference on Learning Representations, ICLR 2024 |
|---|---|
| Country/Territory | Austria |
| City | Hybrid, Vienna |
| Period | 7/05/24 → 11/05/24 |
ASJC Scopus subject areas
- Language and Linguistics
- Computer Science Applications
- Education
- Linguistics and Language