The Lock-in Hypothesis: Stagnation by Algorithm

5 citations · #987 of 3,340 papers in ICML 2025 · 4 top authors · 4 data points

Abstract

The training and deployment of large language models (LLMs) create a feedback loop with human users: models learn human beliefs from data, reinforce these beliefs with generated content, reabsorb the reinforced beliefs, and feed them back to users again and again. This dynamic resembles an echo chamber. We hypothesize that this feedback loop entrenches the existing values and beliefs of users, leading to a loss of diversity in human ideas and potentially the lock-in of false beliefs. We formalize this hypothesis and test it empirically with agent-based LLM simulations and real-world GPT usage data. Analysis reveals sudden but sustained drops in diversity after the release of new GPT iterations, consistent with the hypothesized human-AI feedback loop. Website: https://thelockinhypothesis.com
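To make the hypothesized feedback loop concrete, here is a minimal toy sketch of an agent-based simulation in the spirit the abstract describes. All names, parameters, and the diversity measure (count of distinct beliefs) are illustrative assumptions, not the paper's actual model or code.

```python
import random
from collections import Counter

def simulate_lock_in(n_agents=100, n_beliefs=10, n_rounds=50,
                     adoption_rate=0.3, seed=0):
    """Toy model of the human-AI feedback loop: each round, the 'model'
    absorbs the population's majority belief, and a fraction of agents
    adopt the model's belief, gradually shrinking belief diversity."""
    rng = random.Random(seed)
    beliefs = [rng.randrange(n_beliefs) for _ in range(n_agents)]
    diversity_trace = []
    for _ in range(n_rounds):
        # "Training": the model reabsorbs the current majority belief.
        model_belief = Counter(beliefs).most_common(1)[0][0]
        # "Deployment": some agents adopt the belief the model feeds back.
        for i in range(n_agents):
            if rng.random() < adoption_rate:
                beliefs[i] = model_belief
        # Diversity here is simply the number of distinct beliefs left.
        diversity_trace.append(len(set(beliefs)))
    return diversity_trace

if __name__ == "__main__":
    trace = simulate_lock_in()
    print(trace[:5], "...", trace[-5:])  # diversity collapses toward 1
```

Under these assumptions the diversity trace falls quickly and then stays low, a simplified analogue of the sudden but sustained diversity drops the paper reports after new GPT releases.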

Citation History

Jan 28, 2026: 0
Feb 13, 2026: 5