How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?

  • Jingfeng Wu
  • Difan Zou
  • Zixiang Chen
  • Vladimir Braverman
  • Quanquan Gu
  • Peter L. Bartlett

Research output: Contribution to conference › Paper › peer-review

21 Scopus citations

Abstract

Transformers pretrained on diverse tasks exhibit remarkable in-context learning (ICL) capabilities, enabling them to solve unseen tasks solely based on input contexts without adjusting model parameters. In this paper, we study ICL in one of its simplest setups: pretraining a linearly parameterized single-layer linear attention model for linear regression with a Gaussian prior. We establish a statistical task complexity bound for the attention model pretraining, showing that effective pretraining only requires a small number of independent tasks. Furthermore, we prove that the pretrained model closely matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by achieving nearly Bayes optimal risk on unseen tasks under a fixed context length. These theoretical findings complement prior experimental research and shed light on the statistical foundations of ICL.
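The data model in the abstract admits a compact illustration. Below is a minimal sketch, assuming the standard Gaussian setup the abstract describes: each task draws a weight vector w from a Gaussian prior, and context labels are noisy linear measurements. The variable names and the values of d, n, tau2, and sigma2 are illustrative choices, not the authors' code. The sketch computes the Bayes optimal in-context predictor the paper refers to, i.e., ridge regression with regularization lambda = sigma^2 / tau^2, which is the posterior mean of w under this prior.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 32              # input dimension and context length (hypothetical values)
tau2, sigma2 = 1.0, 0.25  # prior variance of w and label-noise variance

# Sample one task: w ~ N(0, tau^2 I), then n in-context examples
# (x_i, y_i) with x_i ~ N(0, I) and y_i = <w, x_i> + N(0, sigma^2) noise.
w = rng.normal(scale=np.sqrt(tau2), size=d)
X = rng.normal(size=(n, d))
y = X @ w + rng.normal(scale=np.sqrt(sigma2), size=n)
x_query = rng.normal(size=d)

# Bayes optimal prediction under the Gaussian prior: optimally tuned ridge
# regression, with lambda = sigma^2 / tau^2 giving the posterior mean of w.
lam = sigma2 / tau2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
y_pred = x_query @ w_ridge

print(f"query prediction: {y_pred:.3f}, noiseless target: {x_query @ w:.3f}")
```

Averaging the squared error of y_pred over many independently sampled tasks approximates the Bayes risk at this fixed context length, which is the benchmark the pretrained linear attention model is shown to nearly match.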

Original language: English
State: Published - 1 Jan 2024
Externally published: Yes
Event: 12th International Conference on Learning Representations, ICLR 2024 - Hybrid, Vienna, Austria
Duration: 7 May 2024 – 11 May 2024

Conference

Conference: 12th International Conference on Learning Representations, ICLR 2024
Country/Territory: Austria
City: Hybrid, Vienna
Period: 7/05/24 – 11/05/24

ASJC Scopus subject areas

  • Language and Linguistics
  • Computer Science Applications
  • Education
  • Linguistics and Language
