Exploring Large Language Model Agents for Piloting Social Experiments
Abstract
Computational social experiments, which typically employ agent-based modeling to create testbeds for piloting social experiments, not only offer a computational solution to the major challenges facing traditional experimental methods but have also gained widespread attention across research fields. Despite this significance, their broader impact remains limited by the underdeveloped intelligence of their core component, i.e., the agents. To address this limitation, we develop a framework grounded in well-established social science theories and practices, consisting of three key elements: (i) large language model (LLM)-driven experimental agents that serve as "silicon participants"; (ii) methods for implementing various interventions or treatments; and (iii) tools for collecting behavioral, survey, and interview data. We evaluate the framework's effectiveness by replicating three representative experiments, with results demonstrating strong quantitative and qualitative alignment with real-world evidence. This work provides the first framework for designing LLM-driven agents to pilot social experiments, underscoring the transformative potential of LLMs and their agents in computational social science.