U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models

Abstract

Large language models (LLMs) have been shown to exhibit emergent abilities in some downstream tasks, where model performance stagnates at first and then improves sharply and unpredictably with scale beyond a threshold. In this work, we investigate the phenomenon by grouping questions based on difficulty level and provide a possible explanation for emergent abilities. Specifically, we observe U-shaped scaling for hard questions and inverted-U scaling followed by steady improvement for easy questions. The two scaling patterns initially offset each other, causing stagnant overall performance. The performance starts to soar when the scaling pattern of easy questions reverts from inverse to standard scaling, leading to emergent abilities. Based on this finding, we propose a simple yet effective pipeline, called Slice-and-Sandwich, to predict the emergence threshold and model performance beyond the threshold. Our code is publicly available at https://github.com/tony10101105/ExpEmergence.
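To make the mechanism in the abstract concrete, below is a minimal, self-contained sketch of the Slice-and-Sandwich idea: slice questions by difficulty, fit each slice's performance curve separately, and estimate the emergence threshold as the scale at which the easy-question fit reverts from inverse to standard scaling. The data values, polynomial fit, and threshold criterion here are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

# Illustrative data only: model scale (log10 training FLOPs) and accuracy
# per difficulty slice, shaped to mimic the trends described in the abstract:
# inverted-U then steady improvement for easy questions, U-shape for hard.
log_flops = np.linspace(20, 24, 9)
easy_acc = np.array([0.55, 0.52, 0.48, 0.45, 0.44, 0.50, 0.62, 0.78, 0.90])
hard_acc = np.array([0.20, 0.17, 0.15, 0.14, 0.15, 0.18, 0.25, 0.40, 0.60])

def fit_slice(x, y, deg=3):
    """'Slice' step: fit one difficulty group with a low-degree polynomial."""
    return np.polynomial.Polynomial.fit(x, y, deg)

easy_fit = fit_slice(log_flops, easy_acc)
hard_fit = fit_slice(log_flops, hard_acc)

# Threshold estimate: the smallest scale at which the easy-question curve
# reverts from inverse to standard scaling, i.e. its fitted slope turns
# positive (a simplified stand-in for the paper's criterion).
grid = np.linspace(log_flops[0], log_flops[-1], 500)
easy_slope = easy_fit.deriv()(grid)
threshold = grid[np.argmax(easy_slope > 0)]

# 'Sandwich' step (simplified): forecast overall performance past the
# threshold by recombining the per-slice fits (equal group sizes assumed).
overall_forecast = 0.5 * easy_fit(grid) + 0.5 * hard_fit(grid)

print(f"estimated emergence threshold: 10^{threshold:.2f} training FLOPs")
print(f"forecast overall accuracy at largest scale: {overall_forecast[-1]:.2f}")
```

In practice the difficulty split, the regression family, and the set of model scales used for fitting would all be task-dependent choices; the sketch keeps each of them trivial for readability.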
