Most Cited COLM "geometric context encoding" Papers

418 papers found • Page 2 of 3

#201

Verifying the Verifiers: Unveiling Pitfalls and Potentials in Fact Verifiers

Wooseok Seo, Seungju Han, Jaehun Jung et al.

COLM 2025 • paper • arXiv:2506.13342 • 3 citations
#202

Adaptive Computation Pruning for the Forgetting Transformer

Zhixuan Lin, Johan Obando-Ceron, Xu Owen He et al.

COLM 2025 • paper • arXiv:2504.06949 • 3 citations
#203

Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation

Ziqiao Ma, Jing Ding, Xuejun Zhang et al.

COLM 2025 • paper • arXiv:2504.16060 • 3 citations
#204

Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation

Amanda Myntti, Erik Henriksson, Veronika Laippala et al.

COLM 2025 • paper • arXiv:2504.01542 • 3 citations
#205

MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding

Mohan Jiang, Jin Gao, Jiahao Zhan et al.

COLM 2025 • paper • arXiv:2508.15802 • 3 citations
#206

Energy-Based Reward Models for Robust Language Model Alignment

Anamika Lochab, Ruqi Zhang

COLM 2025 • paper • arXiv:2504.13134 • 3 citations
#207

AutoScale: Scale-Aware Data Mixing for Pre-Training LLMs

Feiyang Kang, Yifan Sun, Bingbing Wen et al.

COLM 2025 • paper • arXiv:2407.20177 • 3 citations
#208

Post-training for Efficient Communication via Convention Formation

Yilun Hua, Evan Wang, Yoav Artzi

COLM 2025 • paper • arXiv:2508.06482 • 3 citations
#209

Language models align with brain regions that represent concepts across modalities

Maria Ryskina, Greta Tuckute, Alexander Fung et al.

COLM 2025 • paper • arXiv:2508.11536 • 3 citations
#210

Text Speaks Louder than Vision: ASCII Art Reveals Textual Biases in Vision-Language Models

Zhaochen Wang, Bryan Hooi, Yiwei Wang et al.

COLM 2025 • paper • arXiv:2504.01589 • 3 citations
#211

Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited GPU Memory

Liangyu Wang, Jie Ren, Hang Xu et al.

COLM 2025 • paper • arXiv:2503.12668 • 3 citations
#212

Resona: Improving Context Copying in Linear Recurrence Models with Retrieval

Xinyu Wang, Linrui Ma, Jerry Huang et al.

COLM 2025 • paper • arXiv:2503.22913 • 3 citations
#213

ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data

Tong Chen, Faeze Brahman, Jiacheng Liu et al.

COLM 2025 • paper • arXiv:2504.14452 • 3 citations
#214

Improving Fisher Information Estimation and Efficiency for LoRA-based LLM Unlearning

Yejin Kim, Eunwon Kim, Buru Chang et al.

COLM 2025 • paper • arXiv:2508.21300 • 3 citations
#215

Pretraining on the Test Set Is No Longer All You Need: A Debate-Driven Approach to QA Benchmarks

Linbo Cao, Jinman Zhao

COLM 2025 • paper • arXiv:2507.17747 • 3 citations
#216

On the Effectiveness and Generalization of Race Representations for Debiasing High-Stakes Decisions

Dang Nguyen, Chenhao Tan

COLM 2025 • paper • arXiv:2504.06303 • 3 citations
#217

Impact-driven Context Filtering For Cross-file Code Completion

Yanzhou Li, Shangqing Liu, Kangjie Chen et al.

COLM 2025 • paper • arXiv:2508.05970 • 2 citations
#218

Probing Syntax in Large Language Models: Successes and Remaining Challenges

Pablo J. Diego Simon, Emmanuel Chemla, Jean-Remi King et al.

COLM 2025 • paper • arXiv:2508.03211 • 2 citations
#219

Context-Adaptive Multi-Prompt Embedding with Large Language Models for Vision-Language Alignment

Dahun Kim, Anelia Angelova

COLM 2025 • paper • arXiv:2508.02762 • 2 citations
#220

Imagine All The Relevance: Scenario-Profiled Indexing with Knowledge Expansion for Dense Retrieval

Sangam Lee, Ryang Heo, SeongKu Kang et al.

COLM 2025 • paper • arXiv:2503.23033 • 2 citations
#221

Approximating Language Model Training Data from Weights

John Xavier Morris, Junjie Oscar Yin, Woojeong Kim et al.

COLM 2025 • paper • arXiv:2506.15553 • 2 citations
#222

From Queries to Criteria: Understanding How Astronomers Evaluate LLMs

Alina Hyk, Kiera McCormick, Mian Zhong et al.

COLM 2025 • paper • arXiv:2507.15715 • 2 citations
#223

AIR: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset

Bingxiang He, Wenbin Zhang, Jiaxi Song et al.

COLM 2025 • paper • arXiv:2504.03612 • 2 citations
#224

When Does Metadata Conditioning (NOT) Work for Language Model Pre-Training? A Study with Context-Free Grammars

Rei Higuchi, Ryotaro Kawata, Naoki Nishikawa et al.

COLM 2025 • paper • arXiv:2504.17562 • 2 citations
#225

Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality

Sewoong Lee, Adam Davies, Marc E. Canby et al.

COLM 2025 • paper • arXiv:2503.24277 • 2 citations
#226

CLIPPER: Compression enables long-context synthetic data generation

Chau Minh Pham, Yapei Chang, Mohit Iyyer

COLM 2025 • paper • arXiv:2502.14854 • 2 citations
#227

RARe: Retrieval Augmented Retrieval with In-Context Examples

Atula Tejaswi, Yoonsang Lee, sujay sanghavi et al.

COLM 2025 • paper • arXiv:2410.20088 • 2 citations
#228

Deep Binding of Language Model Virtual Personas: a Study on Approximating Political Partisan Misperceptions

Minwoo Kang, Suhong Moon, Seung Hyeong Lee et al.

COLM 2025 • paper • arXiv:2504.11673 • 2 citations
#229

ADAPT: Actively Discovering and Adapting to Preferences for any Task

Maithili Patel, Xavier Puig, Ruta Desai et al.

COLM 2025 • paper • arXiv:2504.04040 • 2 citations
#230

LLM Unlearning Without an Expert Curated Dataset

Xiaoyuan Zhu, Muru Zhang, Ollie Liu et al.

COLM 2025 • paper • arXiv:2508.06595 • 2 citations
#231

MixAssist: An Audio-Language Dataset for Co-Creative AI Assistance in Music Mixing

Michael Paul Clemens, Ana Marasovic

COLM 2025 • paper • arXiv:2507.06329 • 2 citations
#232

MS-SSM: A Multi-Scale State Space Model for Efficient Sequence Modeling

Mahdi Karami, Ali Behrouz, Peilin Zhong et al.

COLM 2025 • paper • arXiv:2512.23824 • 2 citations
#233

The Surprising Effectiveness of Membership Inference with Simple N-Gram Coverage

Skyler Hallinan, Jaehun Jung, Melanie Sclar et al.

COLM 2025 • paper • arXiv:2508.09603 • 2 citations
#234

Agree to Disagree? A Meta-Evaluation of LLM Misgendering

Arjun Subramonian, Vagrant Gautam, Preethi Seshadri et al.

COLM 2025 • paper • arXiv:2504.17075 • 2 citations
#235

A Taxonomy of Transcendence

Natalie Abreu, Edwin Zhang, Eran Malach et al.

COLM 2025 • paper • arXiv:2508.17669 • 2 citations
#236

Correctness-Guaranteed Code Generation via Constrained Decoding

Lingxiao Li, salar rahili, Yiwei Zhao

COLM 2025 • paper • arXiv:2508.15866 • 2 citations
#237

Sharpe Ratio-Guided Active Learning for Preference Optimization in RLHF

Syrine Belakaria, Joshua Kazdan, Charles Marx et al.

COLM 2025 • paper • arXiv:2503.22137 • 2 citations
#238

Noiser: Bounded Input Perturbations for Attributing Large Language Models

Mohammad Reza Ghasemi Madani, Aryo Pradipta Gema, Yu Zhao et al.

COLM 2025 • paper • arXiv:2504.02911 • 2 citations
#239

Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?

Anthony GX-Chen, Dongyan Lin, Mandana Samiei et al.

COLM 2025 • paper • 2 citations
#240

Guided Reasoning in LLM-Driven Penetration Testing Using Structured Attack Trees

Katsuaki Nakano, Reza Fayyazi, Shanchieh Yang et al.

COLM 2025 • paper • arXiv:2509.07939 • 2 citations
#241

In-Context Occam’s Razor: How Transformers Prefer Simpler Hypotheses on the Fly

Puneesh Deora, Bhavya Vasudeva, Tina Behnia et al.

COLM 2025 • paper • 2 citations
#242

The Zero Body Problem: Probing LLM Use of Sensory Language

Rebecca M. M. Hicke, Sil Hamilton, David Mimno

COLM 2025 • paper • arXiv:2504.06393 • 2 citations
#243

Humans overrely on overconfident language models, across languages

Neil Rathi, Dan Jurafsky, Kaitlyn Zhou

COLM 2025 • paper • arXiv:2507.06306 • 2 citations
#244

RankAlign: A Ranking View of the Generator-Validator Gap in Large Language Models

Juan Diego Rodriguez, Wenxuan Ding, Katrin Erk et al.

COLM 2025 • paper • 2 citations
#245

Visual Representations inside the Language Model

Benlin Liu, Amita Kamath, Madeleine Grunde-McLaughlin et al.

COLM 2025 • paper • 2 citations
#246

Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance

Takuya Tamura, Taro Yano, Masafumi Enomoto et al.

COLM 2025 • paper • arXiv:2504.19811 • 1 citation
#247

Fleurs-SLU: A Massively Multilingual Benchmark for Spoken Language Understanding

Fabian David Schmidt, Ivan Vulić, Goran Glavaš et al.

COLM 2025 • paper • arXiv:2501.06117 • 1 citation
#248

Hyperparameter Loss Surfaces Are Simple Near their Optima

Nicholas Lourie, He He, Kyunghyun Cho

COLM 2025 • paper • 1 citation
#249

Navigating the Rabbit Hole: Emergent Biases in LLM-Generated Attack Narratives Targeting Mental Health Groups

Rijul Magu, Arka Dutta, Sean Kim et al.

COLM 2025 • paper • arXiv:2504.06160 • 1 citation
#250

BiXSE: Improving Dense Retrieval via Probabilistic Graded Relevance Distillation

Christos Tsirigotis, Vaibhav Adlakha, Joao Monteiro et al.

COLM 2025 • paper • arXiv:2508.06781 • 1 citation
#251

The Negation Bias in Large Language Models: Investigating bias reflected in linguistic markers

Yishan Wang, Pia Sommerauer, Jelke Bloem

COLM 2025 • paper • 1 citation
#252

Layerwise Importance Analysis of Feed-Forward Networks in Transformer-based Language Models

Wataru Ikeda, Kazuki Yano, Ryosuke Takahashi et al.

COLM 2025 • paper • arXiv:2508.17734 • 1 citation
#253

Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation

Anirban Saha Anik, Xiaoying Song, Elliott Wang et al.

COLM 2025 • paper • arXiv:2507.07307 • 1 citation
#254

Task-Circuit Quantization: Leveraging Knowledge Localization and Interpretability for Compression

Hanqi Xiao, Yi-Lin Sung, Elias Stengel-Eskin et al.

COLM 2025 • paper • arXiv:2504.07389 • 1 citation
#255

Learning Effective Language Representations for Sequential Recommendation via Joint Embedding Predictive Architecture

Nguyen Anh Minh, Dung D. Le

COLM 2025 • paper • arXiv:2504.10512 • 1 citation
#256

SEAM: Semantically Equivalent Across Modalities Benchmark for Vision-Language Models

Zhenwei Tang, Difan Jiao, Blair Yang et al.

COLM 2025 • paper • arXiv:2508.18179 • 1 citation
#257

Implicit In-Context Learning: Evidence from Artificial Language Experiments

Xiaomeng Ma, Qihui Xu

COLM 2025 • paper • arXiv:2503.24190 • 1 citation
#258

Exploring Large Language Model Agents for Piloting Social Experiments

Jinghua Piao, Yuwei Yan, Nian Li et al.

COLM 2025 • paper • arXiv:2508.08678 • 1 citation
#259

Do Large Language Models Have a Planning Theory of Mind? Evidence from MindGames: a Multi-Step Persuasion Task

Jared Moore, Ned Cooper, Rasmus Overmark et al.

COLM 2025 • paper • arXiv:2507.16196 • 1 citation
#260

URANIA: Differentially Private Insights into AI Use

Daogao Liu, Edith Cohen, Badih Ghazi et al.

COLM 2025 • paper • arXiv:2506.04681 • 1 citation
#261

OpinioRAG: Towards Generating User-Centric Opinion Highlights from Large-scale Online Reviews

Mir Tafseer Nayeem, Davood Rafiei

COLM 2025 • paper • arXiv:2509.00285 • 1 citation
#262

Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution

Falaah Arif Khan, Nivedha Sivakumar, Yinong Oliver Wang et al.

COLM 2025 • paper • arXiv:2508.07111 • 1 citation
#263

Privately Learning from Graphs with Applications in Fine-tuning Large Language Models

Haoteng Yin, Rongzhe Wei, Eli Chien et al.

COLM 2025 • paper • arXiv:2410.08299 • 1 citation
#264

Exploring Sparse Adapters for Scalable Merging of Parameter Efficient Experts

Samin Yeasar Arnob, Zhan Su, Minseon Kim et al.

COLM 2025 • paper • arXiv:2507.07140 • 1 citation
#265

The World According to LLMs: How Geographic Origin Influences LLMs' Entity Deduction Capabilities

Harsh Nishant Lalai, Raj Sanjay Shah, Jiaxin Pei et al.

COLM 2025 • paper • arXiv:2508.05525 • 1 citation
#266

Self-Rewarding PPO: Aligning Large Language Models with Demonstrations Only

Qingru Zhang, Liang Qiu, Ilgee Hong et al.

COLM 2025 • paper • arXiv:2510.21090 • 1 citation
#267

UTF-8 Plumbing: Byte-level Tokenizers Unavoidably Enable LLMs to Generate Ill-formed UTF-8

Preston Firestone, Shubham Ugare, Gagandeep Singh et al.

COLM 2025 • paper • arXiv:2511.05578 • 1 citation
#268

Resource-efficient Inference with Foundation Model Programs

Lunyiu Nie, Zhimin Ding, Kevin Yu et al.

COLM 2025 • paper • arXiv:2504.07247 • 1 citation
#269

Teach Old SAEs New Domain Tricks with Boosting

Nikita Koriagin, Yaroslav Aksenov, Daniil Laptev et al.

COLM 2025 • paper • arXiv:2507.12990 • 1 citation
#270

ALOPE: Adaptive Layer Optimization for Translation Quality Estimation using Large Language Models

Archchana Sindhujan, Shenbin Qian, Chan Chi Chun Matthew et al.

COLM 2025 • paper • arXiv:2508.07484 • 1 citation
#271

Detecting and Pruning Prominent but Detrimental Neurons in Large Language Models

Ameen Ali Ali, Shahar Katz, Lior Wolf et al.

COLM 2025 • paper • arXiv:2507.09185 • 1 citation
#272

Meta-Learning for Speeding Up Large Model Inference in Decentralized Environments

Yipeng Du, Zihao Wang, Ahmad Farhan et al.

COLM 2025 • paper • arXiv:2410.21340 • 1 citation
#273

X-EcoMLA: Upcycling Pre-Trained Attention into MLA for Efficient and Extreme KV Compression

Guihong Li, Mehdi Rezagholizadeh, Mingyu Yang et al.

COLM 2025 • paper • arXiv:2503.11132 • 1 citation
#274

Mitigating Modal Imbalance in Multimodal Reasoning

Chen Henry Wu, Neil Kale, Aditi Raghunathan

COLM 2025 • paper • arXiv:2510.02608 • 1 citation
#275

RRO: LLM Agent Optimization Through Rising Reward Trajectories

Zilong Wang, Jingfeng Yang, Sreyashi Nag et al.

COLM 2025 • paper • arXiv:2505.20737 • 1 citation
#276

Single-Pass Document Scanning for Question Answering

Weili Cao, Jianyou Wang, Youze Zheng et al.

COLM 2025 • paper • arXiv:2504.03101 • 1 citation
#277

Customize Multi-modal RAI Guardrails with Precedent-based predictions

Cheng-Fu Yang, Thanh Tran, Christos Christodoulopoulos et al.

COLM 2025 • paper • arXiv:2507.20503 • 1 citation
#278

Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses

Bin HAN, Robert Wolfe, Anat Caspi et al.

COLM 2025 • paper • arXiv:2508.05009 • 1 citation
#279

Elucidating the Design Space of Decay in Linear Attention

Zhen Qin, Xuyang Shen, Yiran Zhong

COLM 2025 • paper • arXiv:2509.05282 • 1 citation
#280

CONCAP: Seeing Beyond English with Concepts Retrieval-Augmented Captioning

George Ibrahim, Rita Ramos, Yova Kementchedjhieva

COLM 2025 • paper • arXiv:2507.20411 • 1 citation
#281

Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation

Shiven Sinha, Shashwat Goel, Ponnurangam Kumaraguru et al.

COLM 2025 • paper • arXiv:2502.19414 • 1 citation
#282

Limitations of refinement methods for weak to strong generalization

Seamus Somerstep, Yaacov Ritov, Mikhail Yurochkin et al.

COLM 2025 • paper • 1 citation
#283

DualEdit: Dual Editing for Knowledge Updating in Vision-Language Models

Zhiyi Shi, Binjie Wang, Chongjie Si et al.

COLM 2025 • paper • 1 citation
#284

Ensemble Debiasing Across Class and Sample Levels for Fairer Prompting Accuracy

Ruixi Lin, Ziqiao Wang, Yang You

COLM 2025 • paper • arXiv:2503.05157 • 1 citation
#285

MapIQ: Evaluating Multimodal Large Language Models for Map Question Answering

Varun Srivastava, Fan Lei, Srija Mukhopadhyay et al.

COLM 2025 • paper • arXiv:2507.11625
#286

Can Test-Time Scaling Improve World Foundation Model?

Wenyan Cong, Hanqing Zhu, Peihao Wang et al.

COLM 2025 • paper
#287

SQuat: Subspace-orthogonal KV Cache Quantization

Hao Wang, Ligong Han, Kai Xu et al.

COLM 2025 • paper
#288

G1yphD3c0de: Towards Safer Language Models on Visually Perturbed Texts

Yejinchoi, Yejin Yeo, Yejin Son et al.

COLM 2025 • paper
#289

Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation

Yi Lu, Wanxu Zhao, Xin Zhou et al.

COLM 2025 • paper • arXiv:2504.18857
#290

Agents Are All You Need for LLM Unlearning

Debdeep Sanyal, Murari Mandal

COLM 2025 • paper
#291

The Unlearning Mirage: A Dynamic Framework for Evaluating LLM Unlearning

Raj Sanjay Shah, Jing Huang, Keerthiram Murugesan et al.

COLM 2025 • paper
#292

SmolLM2: When Smol Goes Big — Data-Centric Training of a Fully Open Small Language Model

Loubna Ben allal, Anton Lozhkov, Elie Bakouch et al.

COLM 2025 • paper
#293

HIPPO-VIDEO: Simulating Watch Histories with Large Language Models for History-Driven Video Highlighting

Jeongeun Lee, Youngjae Yu, Dongha Lee

COLM 2025 • paper
#294

Reverse-engineering NLI: A study of the meta-inferential properties of Natural Language Inference

Rasmus Blanck, Bill Noble, Stergios Chatzikyriakidis

COLM 2025 • paper • arXiv:2601.05170
#295

Improving LLMs' Generalized Reasoning Abilities by Graph Problems

Qifan Zhang, Nuo Chen, Zehua Li et al.

COLM 2025 • paper • arXiv:2507.17168
#296

Society of Mind Meets Real-Time Strategy: A Hierarchical Multi-Agent Framework for Strategic Reasoning

Daechul Ahn, San Kim, Jonghyun Choi

COLM 2025 • paper • arXiv:2508.06042
#297

BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity

Chenyang Song, Weilin Zhao, Xu Han et al.

COLM 2025 • paper • arXiv:2507.08771
#298

Overfill: Two-Stage Models for Efficient Language Model Decoding

Woojeong Kim, Junxiong Wang, Jing Nathan Yan et al.

COLM 2025 • paper • arXiv:2508.08446
#299

Analyzing Multilingualism in Large Language Models with Sparse Autoencoders

Ikhyun Cho, Julia Hockenmaier

COLM 2025 • paper
#300

Towards Compute-Optimal Many-Shot In-Context Learning

Shahriar Golchin, Yanfei Chen, Rujun Han et al.

COLM 2025 • paper • arXiv:2507.16217
#301

CodeXEmbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval

Ye Liu, Rui Meng, Shafiq Joty et al.

COLM 2025 • paper
#302

Evaluating LLMs on Chinese Idiom Translation

Cai Yang, Yao Dou, David Heineman et al.

COLM 2025 • paper • arXiv:2508.10421
#303

EvidenceBench: A Benchmark for Extracting Evidence from Biomedical Papers

Jianyou Wang, Weili Cao, Kaicheng Wang et al.

COLM 2025 • paper • arXiv:2504.18736
#304

When Splitting Makes Stronger: A Theoretical and Empirical Analysis of Divide-and-Conquer Prompting in LLMs

Yizhou Zhang, Defu Cao, Lun Du et al.

COLM 2025 • paper
#305

HyperINF: Unleashing the HyperPower of Schulz's Method for Data Influence Estimation

Xinyu Zhou, Simin Fan, Martin Jaggi

COLM 2025 • paper
#306

Beyond Blanket Masking: Examining Granularity for Privacy Protection in Images Captured by Blind and Low Vision Users

Jeffri Murrugarra-Llerena, Haoran Niu, K. Suzanne Barber et al.

COLM 2025 • paper • arXiv:2508.09245
#307

Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models

Hyunwoo Kim, Melanie Sclar, Tan Zhi-Xuan et al.

COLM 2025 • paper
#308

2 OLMo 2 Furious (COLM’s Version)

Evan Pete Walsh, Luca Soldaini, Dirk Groeneveld et al.

COLM 2025 • paper
#309

CRABS: A syntactic-semantic pincer strategy for bounding LLM interpretation of Python notebooks

Meng Li, Timothy M. McPhillips, Dingmin Wang et al.

COLM 2025 • paper • arXiv:2507.11742
#310

LM Agents May Fail to Act on Their Own Risk Knowledge

Yuzhi Tang, Tianxiao Li, Elizabeth Li et al.

COLM 2025 • paper • arXiv:2508.13465
#311

CALLME: Call Graph Augmentation with Large Language Models for Javascript

Michael Wang, Kexin Pei, Armando Solar-Lezama

COLM 2025 • paper
#312

MeMAD: Structured Memory of Debates for Enhanced Multi-Agent Reasoning

Shuai Ling, Lizi Liao, Dongmei Jiang et al.

COLM 2025 • paper
#313

VaPR - Vision-language Preference alignment for Reasoning

Rohan Wadhawan, Fabrice Y Harel-Canada, Zi-Yi Dou et al.

COLM 2025 • paper • arXiv:2510.01700
#314

Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education

Xinming Yang, Haasil Pujara, Jun Li

COLM 2025 • paper • arXiv:2508.05979
#315

Do Language Models Agree with Human Perceptions of Suspense in Stories?

Glenn Matlin, Devin Zhang, Rodrigo Barroso Loza et al.

COLM 2025 • paper • arXiv:2508.15794
#316

SlimMoE: Structured Compression of Large MoE Models via Expert Slimming and Distillation

Zichong Li, Chen Liang, Zixuan Zhang et al.

COLM 2025 • paper • arXiv:2506.18349
#317

CoLa: Learning to Interactively Collaborate with Large Language Models

Abhishek Sharma, Dan Goldwasser

COLM 2025 • paper • arXiv:2504.02965
#318

Reinforcement Learning Enhanced Full-Duplex Spoken Dialogue Language Models for Conversational Interactions

Chen Chen, Ke Hu, Chao-Han Huck Yang et al.

COLM 2025 • paper
#319

$\mu$KE: Matryoshka Unstructured Knowledge Editing of Large Language Models

Zian Su, Ziyang Huang, Kaiyuan Zhang et al.

COLM 2025 • paper
#320

GenerationPrograms: Fine-grained Attribution with Executable Programs

David Wan, Eran Hirsch, Elias Stengel-Eskin et al.

COLM 2025 • paper • arXiv:2506.14580
#321

HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Interactive AI Agents

Xuhui Zhou, Hyunwoo Kim, Faeze Brahman et al.

COLM 2025 • paper
#322

Hawkeye: Model Collaboration for Efficient Reasoning

Jianshu She, Zhuohao Li, Zhemin Huang et al.

COLM 2025 • paper
#323

Building Instruction-Tuning Datasets from Human-Written Instructions with Open-Weight Large Language Models

Youmi Ma, Sakae Mizuki, Kazuki Fujii et al.

COLM 2025 • paper • arXiv:2503.23714
#324

REM: Evaluating LLM Embodied Spatial Reasoning through Multi-Frame Trajectories

Jacob Thompson, Emiliano Garcia-Lopez, Yonatan Bisk

COLM 2025 • paper • arXiv:2512.00736
#325

Phased Training for LLM-powered Text Retrieval Models Beyond Data Scaling

Xin Zhang, Yanzhao Zhang, Wen Xie et al.

COLM 2025 • paper
#326

JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model

Yi Nian, Shenzhe Zhu, Yuehan Qin et al.

COLM 2025 • paper • arXiv:2504.03770
#327

IMPersona: Evaluating Individual Level LLM Impersonation

Quan Shi, Carlos E Jimenez, Stephen Dong et al.

COLM 2025 • paper
#328

ProsodyLM: Uncovering the Emerging Prosody Processing Capabilities in Speech Language Models

Kaizhi Qian, Xulin Fan, Junrui Ni et al.

COLM 2025 • paper • arXiv:2507.20091
#329

Bootstrapping Visual Assistant Modeling with Situated Interaction Simulation

Yichi Zhang, Run Peng, Yinpei Dai et al.

COLM 2025 • paper
#330

On Mechanistic Circuits for Extractive Question-Answering

Samyadeep Basu, Vlad I Morariu, Ryan A. Rossi et al.

COLM 2025 • paper • arXiv:2502.08059
#331

Partial Perspectives: How LLMs Handle Logically Inconsistent Knowledge in Reasoning Tasks

Zichao Li, Ines Arous, Jackie CK Cheung

COLM 2025 • paper
#332

Teaching Models to Understand (but not Generate) High-risk Data

Ryan Yixiang Wang, Matthew Finlayson, Luca Soldaini et al.

COLM 2025 • paper • arXiv:2505.03052
#333

Rethinking Associative Memory Mechanism in Induction Head

Shuo Wang, Issei Sato

COLM 2025 • paper
#334

QAPyramid: Fine-grained Evaluation of Content Selection for Text Summarization

Shiyue Zhang, David Wan, Arie Cattan et al.

COLM 2025 • paper
#335

Rhapsody: A Dataset for Highlight Detection in Podcasts

Younghan Park, Anuj Diwan, David Harwath et al.

COLM 2025 • paper • arXiv:2505.19429
#336

Exposing and Patching the Flaws of Large Language Models in Social Character Simulation

Yue Huang, Zhengqing Yuan, Yujun Zhou et al.

COLM 2025 • paper
#337

Breakpoint: Stress-testing systems-level reasoning in LLM agents

Kaivalya Hariharan, Uzay Girit, Zifan Wang et al.

COLM 2025 • paper
#338

VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information

Ryo Kamoi, Yusen Zhang, Sarkar Snigdha Sarathi Das et al.

COLM 2025 • paper
#339

Impact of LLM Alignment on Impression Formation in Social Interactions

Ala N. Tak, Anahita Bolourani, Daniel B. Shank et al.

COLM 2025 • paper
#340

Hell or High Water: Evaluating Agentic Recovery from External Failures

Andrew Wang, Sophia Hager, Adi Asija et al.

COLM 2025 • paper • arXiv:2508.11027
#341

StagFormer: Time Staggering Decoder only Transformers

Dylan J Cutler, Arun Kandoor, Nishanth Dikkala et al.

COLM 2025 • paper
#342

Reasoning Models Know When They’re Right: Probing Hidden States for Self-Verification

Anqi Zhang, Yulin Chen, Jane Pan et al.

COLM 2025 • paper
#343

DeepRetrieval: Hacking Real Search Engines and Retrievers with Large Language Models via Reinforcement Learning

Pengcheng Jiang, Jiacheng Lin, Lang Cao et al.

COLM 2025 • paper • arXiv:2503.00223
#344

Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning

Justin Lovelace, Christian K Belardi, Sofian Zalouk et al.

COLM 2025 • paper
#345

Do Biased Models Have Biased Thoughts?

Swati Rajwal, Shivank Garg, Reem Abdel-Salam et al.

COLM 2025 • paper
#346

Improving Table Understanding with LLMs and Entity-Oriented Search

Thi-Nhung Nguyen, Hoang Ngo, Dinh Phung et al.

COLM 2025 • paper
#347

Estimating Optimal Context Length for Hybrid Retrieval-augmented Multi-document Summarization

Adithya Pratapa, Teruko Mitamura

COLM 2025 • paper • arXiv:2504.12972
#348

MSRS: Evaluating Multi-Source Retrieval-Augmented Generation

Rohan Phanse, Ej Zhou, Kejian Shi et al.

COLM 2025 • paper • arXiv:2508.20867
#349

Readability ≠ Learnability: Rethinking the Role of Simplicity in Training Small Language Models

Ivan Lee, Taylor Berg-Kirkpatrick

COLM 2025 • paper
#350

Truth-value judgment in language models: ‘truth directions’ are context sensitive

Stefan F. Schouten, Peter Bloem, Ilia Markov et al.

COLM 2025 • paper
#351

Synthetic Data Generation and Multi-Step Reinforcement Learning for Reasoning and Tool Use

Anna Goldie, Azalia Mirhoseini, Hao Zhou et al.

COLM 2025 • paper
#352

Cutting the Root of Hallucination: Structural Trimming for Vulnerability Mitigation in Code LLMs

Yage Zhang

COLM 2025 • paper
#353

REFA: Reference Free Alignment with Fine-Grained Length Control

Taneesh Gupta, Rahul Madhavan, Xuchao Zhang et al.

COLM 2025 • paper
#354

TRELLIS: Learning to Compress Key-Value Memory in Attention Models

Mahdi Karami, Ali Behrouz, Praneeth Kacham et al.

COLM 2025 • paper
#355

Understanding and Improving Noisy Embedding Techniques in Instruction Finetuning

Abhay Yadav

COLM 2025 • paper
#356

Traceable and Explainable Multimodal Large Language Models: An Information-Theoretic View

Zihan Huang, Junda Wu, Rohan Surana et al.

COLM 2025 • paper
#357

LawFlow: Collecting and Simulating Lawyers’ Thought Processes on Business Formation Case Studies

Debarati Das, Khanh Chi Le, Ritik Sachin Parkar et al.

COLM 2025 • paper
#358

Yourbench: Dynamic Evaluation Set Generation with LLMs

Sumuk Shashidhar, Clémentine Fourrier, Alina Lozovskaya et al.

COLM 2025 • paper
#359

How does Watermarking Affect Visual Language Models in Document Understanding?

Chunxue Xu, Yiwei Wang, Bryan Hooi et al.

COLM 2025 • paper
#360

Evaluating Large Language Models as Expert Annotators

Yu-Min Tseng, Wei-Lin Chen, Chung-Chi Chen et al.

COLM 2025 • paper • arXiv:2508.07827
#361

NoWag: A Unified Framework for Shape Preserving Compression of Large Language Models

Lawrence Ray Liu, Inesh Chakrabarti, Yixiao Li et al.

COLM 2025 • paper
#362

E$^2$-RAG: Towards Editable Efficient RAG by Editing Compressed KV Caches

Tongxu Luo, Wenyu Du, HanWen Hao et al.

COLM 2025 • paper
#363

Robo-Instruct: Simulator-Augmented Instruction Alignment For Finetuning Code LLMs

Zichao Hu, Junyi Jessy Li, Arjun Guha et al.

COLM 2025 • paper
#364

Off-Policy Corrected Reward Modeling for Reinforcement Learning from Human Feedback

Johannes Ackermann, Takashi Ishida, Masashi Sugiyama

COLM 2025 • paper
#365

Language Model Uncertainty Quantification with Attention Chain

Yinghao Li, Rushi Qiang, Lama Moukheiber et al.

COLM 2025 • paper
#366

C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing

Zhongyang Li, Ziyue Li, Tianyi Zhou

COLM 2025 • paper
#367

KVSink: Understanding and Enhancing the Preservation of Attention Sinks in KV Cache Quantization for LLMs

Zunhai Su, Kehong Yuan

COLM 2025 • paper
#368

The Devil is in the EOS: Sequence Training for Detailed Image Captioning

Abdelrahman Mohamed, Yova Kementchedjhieva

COLM 2025 • paper
#369

Style over Substance: Distilled Language Models Reason Via Stylistic Replication

Philip Lippmann, Jie Yang

COLM 2025 • paper
#370

One ruler to measure them all: Benchmarking multilingual long-context language models

Yekyung Kim, Jenna Russell, Marzena Karpinska et al.

COLM 2025 • paper
#371

$100K or 100 Days: Trade-offs when Pre-Training with Academic Resources

Apoorv Khandelwal, Tian Yun, Nihal V. Nayak et al.

COLM 2025 • paper
#372

Inside-Out: Hidden Factual Knowledge in LLMs

Zorik Gekhman, Eyal Ben-David, Hadas Orgad et al.

COLM 2025 • paper
#373

D3: A Dataset for Training Code LMs to Act Diff-by-Diff

Ulyana Piterbarg, Kanishk Gandhi, Lerrel Pinto et al.

COLM 2025 • paper
#374

FineMedLM-o1: Enhancing Medical Knowledge Reasoning Ability of LLM from Supervised Fine-Tuning to Test-Time Training

hongzhou yu, Tianhao Cheng, Yingwen Wang et al.

COLM 2025 • paper
#375

SmolVLM: Redefining small and efficient multimodal models

Andrés Marafioti, Orr Zohar, Miquel Farré et al.

COLM 2025 • paper
#376

PredGen: Accelerated Inference of Large Language Models through Input-Time Speculation for Real-Time Speech Interaction

Shufan Li, Aditya Grover

COLM 2025 • paper
#377

LLM-based Multi-Agents System Attack via Continuous Optimization with Discrete Efficient Search

Weichen Yu, Kai Hu, Tianyu Pang et al.

COLM 2025 • paper
#378

The Dual-Route Model of Induction

Sheridan Feucht, Eric Todd, Byron C Wallace et al.

COLM 2025 • paper
#379

Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models

Qing Yao, Kanishka Misra, Leonie Weissweiler et al.

COLM 2025 • paper
#380

SEAL: Steerable Reasoning Calibration of Large Language Models for Free

Runjin Chen, Zhenyu Zhang, Junyuan Hong et al.

COLM 2025 • paper
#381

LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

Juzheng Zhang, Jiacheng You, Ashwinee Panda et al.

COLM 2025 • paper
#382

CASCADE Your Datasets for Cross-Mode Knowledge Retrieval of Language Models

Runlong Zhou, Yi Zhang

COLM 2025 • paper
#383

Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation

Songjun Tu, Jiahao Lin, Xiangyu Tian et al.

COLM 2025 • paper
#384

SpectR: Dynamically Composing LM Experts with Spectral Routing

William Fleshman, Benjamin Van Durme

COLM 2025 • paper
#385

Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach

Shijian Deng, Wentian Zhao, Yu-Jhe Li et al.

COLM 2025 • paper
#386

CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis

Anjiang Wei, Tarun Suresh, Jiannan Cao et al.

COLM 2025 • paper
#387

LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K

Tao Yuan, Xuefei Ning, Dong Zhou et al.

COLM 2025 • paper
#388

Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback

Runlong Zhou, Maryam Fazel, Simon Shaolei Du

COLM 2025 • paper
#389

Hidden in plain sight: VLMs overlook their visual representations

Stephanie Fu, tyler bonnen, Devin Guillory et al.

COLM 2025 • paper
#390

Language Models Fail to Introspect About Their Knowledge of Language

Siyuan Song, Jennifer Hu, Kyle Mahowald

COLM 2025 • paper
#391

Overflow Prevention Enhances Long-Context Recurrent LLMs

Assaf Ben-Kish, Itamar Zimerman, Muhammad Jehanzeb Mirza et al.

COLM 2025 • paper
#392

News is More than a Collection of Facts: Moral Frame Preserving News Summarization

Enrico Liscio, Michela Lorandi, Pradeep K. Murukannaiah

COLM 2025 • paper
#393

EuroBERT: Scaling Multilingual Encoders for European Languages

Nicolas Boizard, Hippolyte Gisserot-Boukhlef, Duarte Miguel Alves et al.

COLM 2025 • paper
#394

Shared Global and Local Geometry of Language Model Embeddings

Andrew Lee, Melanie Weber, Fernanda Viégas et al.

COLM 2025 • paper
#395

CUPID: Evaluating Personalized and Contextualized Alignment of LLMs from Interactions

Tae Soo Kim, Yoonjoo Lee, Yoonah Park et al.

COLM 2025 • paper
#396

LLMs as Research Tools: A Large Scale Survey of Researchers’ Usage and Perceptions

Zhehui Liao, Maria Antoniak, Inyoung Cheong et al.

COLM 2025 • paper
#397

Beyond the Reported Cutoff: Where Large Language Models Fall Short on Financial Knowledge

Agam Shah, Liqin Ye, Sebastian Jaskowski et al.

COLM 2025 • paper
#398

BEARCUBS: A benchmark for computer-using web agents

Yixiao Song, Katherine Thai, Chau Minh Pham et al.

COLM 2025 • paper
#399

Gating is Weighting: Understanding Gated Linear Attention through In-context Learning

Yingcong Li, Davoud Ataee Tarzanagh, Ankit Singh Rawat et al.

COLM 2025 • paper
#400

Supposedly Equivalent Facts That Aren’t? Entity Frequency in Pre-training Induces Asymmetry in LLMs

Yuan He, Bailan He, Zifeng Ding et al.

COLM 2025 • paper