When Splitting Makes Stronger: A Theoretical and Empirical Analysis of Divide-and-Conquer Prompting in LLMs

COLM 2025

Abstract

Foundation models, particularly Large Language Models (LLMs), have attracted broad interest because of their wide range of applications. However, these models show notable weaknesses on tasks involving repetitive sub-problems or deliberately misleading content, such as large-integer arithmetic and fake news evaluation, where standard instructional prompting often produces flawed outputs. While prior work has shown that advanced prompting strategies such as Chain-of-Thought and Least-to-Most can substantially improve LLM performance, recent evidence suggests that a simpler divide-and-conquer (DaC) strategy, which explicitly partitions the input sequence into discrete components, can yield marked gains on particular problem classes such as misinformation detection. We rigorously examine the efficacy of DaC prompting and characterize the task properties that benefit most from it. Through theoretical analysis, we establish formal performance-improvement guarantees for the identified task categories. We validate this framework with experiments on large-integer multiplication and fact verification, where the results confirm our analysis and demonstrate DaC's practical advantage in these challenging settings.
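As a rough illustration of the divide-and-conquer prompting idea described above, the sketch below applies it to claim verification: the evidence document is partitioned into chunks, each chunk is judged independently, and the per-chunk verdicts are merged in a final prompt. The `query_llm` helper, the chunk size, and the SUPPORT/REFUTE/NOTHING labels are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of divide-and-conquer (DaC) prompting for claim verification.
# `query_llm` is a hypothetical callable standing in for any chat-completion API;
# the chunking and merging steps are illustrative, not the authors' exact recipe.

from typing import Callable, List


def split_into_chunks(document: str, max_chars: int = 2000) -> List[str]:
    """Partition the input into non-overlapping pieces of bounded size."""
    return [document[i:i + max_chars] for i in range(0, len(document), max_chars)]


def dac_verify(claim: str, document: str, query_llm: Callable[[str], str]) -> str:
    # Divide: evaluate the claim against each chunk independently.
    verdicts = []
    for chunk in split_into_chunks(document):
        prompt = (
            f"Claim: {claim}\n"
            f"Evidence excerpt:\n{chunk}\n"
            "Does this excerpt SUPPORT, REFUTE, or say NOTHING about the claim? "
            "Answer with one word."
        )
        verdicts.append(query_llm(prompt).strip().upper())

    # Conquer: aggregate the per-chunk verdicts into a final answer.
    merge_prompt = (
        f"Claim: {claim}\n"
        f"Per-excerpt verdicts: {verdicts}\n"
        "Given these verdicts, is the claim overall SUPPORTED or REFUTED? "
        "Answer with one word."
    )
    return query_llm(merge_prompt).strip().upper()
```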
