Abstract
In-context learning (ICL) has garnered significant attention for its ability to grasp functions and tasks from demonstrations. Recent empirical studies suggest the presence of a Task/Function Vector in the latent geometry of large language models (LLMs) during ICL. Merullo et al. (2024) showed that LLMs leverage this vector alongside the residual stream to perform Word2Vec-like vector arithmetic when solving factual-recall ICL tasks. Additionally, recent work empirically highlighted the key role of Question-Answer data in enhancing factual-recall capabilities. Despite these insights, a theoretical explanation remains elusive. As a step toward such an explanation, this work provides a theoretical framework built on empirically grounded hierarchical concept modeling. We develop an optimization theory showing how nonlinear residual transformers trained via gradient descent on the cross-entropy loss perform factual-recall ICL tasks via task vector arithmetic. We prove convergence of the 0-1 loss and show that the trained transformers generalize well, adeptly handling concept recombinations and data shifts. These findings underscore the advantages of transformers over static word embedding methods. Empirical simulations corroborate our theoretical insights.
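To make the task-vector arithmetic described in the abstract concrete, the following is a minimal, self-contained NumPy sketch. It is illustrative only and not the paper's construction: the toy embedding geometry, the "capital-of" relation vector, and all names are assumptions chosen so that a task vector estimated from in-context demonstrations, added to the query token's residual-stream representation, recovers the answer by nearest-neighbor decoding.

```python
# Minimal illustrative sketch (not the paper's construction): Word2Vec-style
# task-vector arithmetic for a factual-recall ICL query. All names, dimensions,
# and the toy embedding geometry below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding / residual-stream dimension

countries = ["France", "Japan", "Canada"]
capitals = ["Paris", "Tokyo", "Ottawa"]

# Toy geometry: each capital embedding is its country embedding shifted by a
# shared "capital-of" relation vector, plus small noise.
relation = rng.normal(size=d)
country_emb = {c: rng.normal(size=d) for c in countries}
capital_emb = {cap: country_emb[c] + relation + 0.05 * rng.normal(size=d)
               for c, cap in zip(countries, capitals)}
vocab = {**country_emb, **capital_emb}

# Estimate a task vector from the in-context demonstrations
# (France -> Paris, Japan -> Tokyo); it approximates the shared relation.
demos = [("France", "Paris"), ("Japan", "Tokyo")]
task_vec = np.mean([vocab[a] - vocab[q] for q, a in demos], axis=0)

# Query "Canada": add the task vector to the query representation and decode
# by the highest inner product over the vocabulary (query word excluded).
query_state = vocab["Canada"]
scores = {w: float(v @ (query_state + task_vec))
          for w, v in vocab.items() if w != "Canada"}
print("prediction:", max(scores, key=scores.get))  # expected: Ottawa
```

Under this toy geometry, the averaged demonstration offset approximates the shared relation vector, so the shifted query representation lands closest to "Ottawa", mirroring the Word2Vec-style arithmetic on the residual stream that the abstract refers to.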
Original language | English |
---|---|
Publication status | Accepted/In press/Filed - 1 May 2025 |
Event | 42nd International Conference on Machine Learning, ICML 2025 - Vancouver Convention Center, Vancouver, Canada. Duration: 13 Jul 2025 → 19 Jul 2025. https://icml.cc/Conferences/2025 |
Conference
Conference | 42nd International Conference on Machine Learning, ICML 2025 |
---|---|
Abbreviated title | ICML 2025 |
Country/Territory | Canada |
City | Vancouver |
Period | 13/07/25 → 19/07/25 |
Internet address | https://icml.cc/Conferences/2025 |