Yejin Choi
15 Papers · 660 Total Citations

Papers (15)

Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory
ICLR 2024 · 158 citations

WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild
ICLR 2025 · arXiv · 142 citations

ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models
NeurIPS 2025 · 96 citations

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
AAAI 2024 · arXiv · 91 citations

One-Minute Video Generation with Test-Time Training
CVPR 2025 · 65 citations

Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement
ICLR 2025 · 42 citations

AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text
ICLR 2025 · arXiv · 32 citations

Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence
ICML 2025 · 16 citations

SafetyAnalyst: Interpretable, Transparent, and Steerable Safety Moderation for AI Behavior
ICML 2025 · 7 citations

PlaSma: Procedural Knowledge Models for Language-based Planning and Re-Planning
ICLR 2024 · 6 citations

Broken Tokens? Your Language Model can Secretly Handle Non-Canonical Tokenizations
NeurIPS 2025 · 5 citations

Position: A Roadmap to Pluralistic Alignment
ICML 2024 · 0 citations

Bias in Gender Bias Benchmarks: How Spurious Features Distort Evaluation
ICCV 2025 · 0 citations

Structured Chemistry Reasoning with Large Language Models
ICML 2024 · 0 citations

Synthetic Visual Genome
CVPR 2025 · 0 citations