Poster Papers

24,624 papers found • Page 149 of 493

Language Models over Canonical Byte-Pair Encodings

Tim Vieira, Tianyu Liu, Clemente Pasti et al.

ICML 2025 • arXiv:2506.07956 • 7 citations

Language models scale reliably with over-training and on downstream tasks

Samir Yitzhak Gadre, Georgios Smyrnis, Vaishaal Shankar et al.

ICLR 2025 • arXiv:2403.08540 • 79 citations

Language Ranker: A Lightweight Ranking Framework for LLM Decoding

Chenheng Zhang, Tianqi Du, Jizhe Zhang et al.

NEURIPS 2025 • arXiv:2510.21883

Language Representations Can be What Recommenders Need: Findings and Potentials

Leheng Sheng, An Zhang, Yi Zhang et al.

ICLR 2025 • arXiv:2407.05441 • 26 citations

LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding

Doohyuk Jang, Sihwan Park, June Yong Yang et al.

ICLR 2025 • arXiv:2410.03355 • 30 citations

Laplace Sample Information: Data Informativeness Through a Bayesian Lens

Johannes Kaiser, Kristian Schwethelm, Daniel Rueckert et al.

ICLR 2025 • arXiv:2505.15303

Laplace Transform Based Low-Complexity Learning of Continuous Markov Semigroups

Vladimir Kostic, Karim Lounici, Hélène Halconruy et al.

ICML 2025 • arXiv:2410.14477 • 4 citations

LapSum - One Method to Differentiate Them All: Ranking, Sorting and Top-k Selection

Łukasz Struski, Michal Bednarczyk, Igor Podolak et al.

ICML 2025 • arXiv:2503.06242 • 2 citations

LaRA: Benchmarking Retrieval-Augmented Generation and Long-Context LLMs – No Silver Bullet for LC or RAG Routing

Kuan Li, Liwen Zhang, Yong Jiang et al.

ICML 2025 • arXiv:2502.09977 • 20 citations

LaRender: Training-Free Occlusion Control in Image Generation via Latent Rendering

Xiaohang Zhan, Dingming Liu

ICCV 2025 • arXiv:2508.07647 • 2 citations

LaRes: Evolutionary Reinforcement Learning with LLM-based Adaptive Reward Search

Pengyi Li, Hongyao Tang, Jinbin Qiao et al.

NEURIPS 2025

Large Continual Instruction Assistant

Jingyang Qiao, Zhizhong Zhang, Xin Tan et al.

ICML 2025 • arXiv:2410.10868 • 2 citations

Large Convolutional Model Tuning via Filter Subspace

Wei Chen, Zichen Miao, Qiang Qiu

ICLR 2025 • arXiv:2403.00269 • 10 citations

Large Displacement Motion Transfer with Unsupervised Anytime Interpolation

Guixiang Wang, Jianjun Li

ICML 2025

Large Language Bayes

Justin Domke

NEURIPS 2025 • arXiv:2504.14025 • 2 citations

Large Language-Geometry Model: When LLM meets Equivariance

Zongzhao Li, Jiacheng Cen, Bing Su et al.

ICML 2025 • arXiv:2502.11149 • 13 citations

Large Language Models are Demonstration Pre-Selectors for Themselves

Jiarui Jin, Yuwei Wu, Haoxuan Li et al.

ICML 2025 • arXiv:2506.06033 • 2 citations

Large Language Models are Interpretable Learners

Ruochen Wang, Si Si, Felix Yu et al.

ICLR 2025 • arXiv:2406.17224 • 6 citations

Large Language Models as End-to-end Combinatorial Optimization Solvers

Xia Jiang, Yaoxin Wu, Minshuo Li et al.

NEURIPS 2025 • arXiv:2509.16865 • 8 citations

Large Language Models as Model Organisms for Human Associative Learning

Camila Kolling, Vy Vo, Mariya Toneva

NEURIPS 2025 • arXiv:2510.21408

Large Language Models Assume People are More Rational than We Really are

Ryan Liu, Jiayi Geng, Joshua Peterson et al.

ICLR 2025 • arXiv:2406.17055 • 43 citations

Large Language Models can Become Strong Self-Detoxifiers

Ching-Yun Ko, Pin-Yu Chen, Payel Das et al.

ICLR 2025 • 3 citations

Large language models can learn and generalize steganographic chain-of-thought under process supervision

Robert McCarthy, Joey Skaf, Luis Ibanez-Lissen et al.

NEURIPS 2025 • arXiv:2506.01926 • 13 citations

Large Language Models for Lossless Image Compression: Next-Pixel Prediction in Language Space is All You Need

Kecheng Chen, Pingping Zhang, Hui Liu et al.

NEURIPS 2025 • arXiv:2411.12448 • 7 citations

Large Language Models Meet Symbolic Provers for Logical Reasoning Evaluation

Chengwen Qi, Ren Ma, Bowen Li et al.

ICLR 2025 • arXiv:2502.06563 • 25 citations

Large Language Models Miss the Multi-agent Mark

Emanuele La Malfa, Gabriele La Malfa, Samuele Marro et al.

NEURIPS 2025 • arXiv:2505.21298 • 13 citations

Large Language Models Often Say One Thing and Do Another

Ruoxi Xu, Hongyu Lin, Xianpei Han et al.

ICLR 2025 • arXiv:2503.07003 • 4 citations

Large Language Models Think Too Fast To Explore Effectively

Lan Pan, Hanbo Xie, Robert Wilson

NEURIPS 2025 • arXiv:2501.18009 • 6 citations

Large Language Models to Diffusion Finetuning

Edoardo Cetin, Tianyu Zhao, Yujin Tang

ICML 2025 • arXiv:2501.15781 • 5 citations

Large Learning Rates Simultaneously Achieve Robustness to Spurious Correlations and Compressibility

Melih Barsbey, Lucas Prieto, Stefanos Zafeiriou et al.

ICCV 2025 • arXiv:2507.17748

Large Multi-modal Models Can Interpret Features in Large Multi-modal Models

Kaichen Zhang, Yifei Shen, Bo Li et al.

ICCV 2025 • arXiv:2411.14982 • 10 citations

Larger or Smaller Reward Margins to Select Preferences for LLM Alignment?

Kexin Huang, Junkang Wu, Ziqian Chen et al.

ICML 2025

Large-scale and Fine-grained Vision-language Pre-training for Enhanced CT Image Understanding

Zhongyi Shui, Jianpeng Zhang, Weiwei Cao et al.

ICLR 2025 • arXiv:2501.14548 • 29 citations

Large Scale Knowledge Washing

Yu Wang, Ruihan Wu, Zexue He et al.

ICLR 2025 • arXiv:2405.16720 • 14 citations

Large-scale Multi-view Tensor Clustering with Implicit Linear Kernels

Jiyuan Liu, Xinwang Liu, Chuankun Li et al.

CVPR 2025

Large-scale Pre-training for Grounded Video Caption Generation

Evangelos Kazakos, Cordelia Schmid, Josef Sivic

ICCV 2025 • arXiv:2503.10781 • 3 citations

Large-Scale Text-to-Image Model with Inpainting is a Zero-Shot Subject-Driven Image Generator

Chaehun Shin, Jooyoung Choi, Heeseung Kim et al.

CVPR 2025 • arXiv:2411.15466 • 37 citations

Large Scene Generation with Cube-Absorb Discrete Diffusion

Qianjiang Hu, Wei Hu

ICCV 2025

Large Self-Supervised Models Bridge the Gap in Domain Adaptive Object Detection

Marc-Antoine Lavoie, Anas Mahmoud, Steven L. Waslander

CVPR 2025 • arXiv:2503.23220 • 6 citations

Large Stepsizes Accelerate Gradient Descent for Regularized Logistic Regression

Jingfeng Wu, Pierre Marion, Peter Bartlett

NEURIPS 2025 • arXiv:2506.02336

Large (Vision) Language Models are Unsupervised In-Context Learners

Artyom Gadetsky, Andrei Atanov, Yulun Jiang et al.

ICLR 2025 • arXiv:2504.02349 • 3 citations

LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs

Ran Li, Hao Wang, Chengzhi Mao

NEURIPS 2025 • arXiv:2505.10838 • 4 citations

Lark: Low-Rank Updates After Knowledge Localization for Few-shot Class-Incremental Learning

Jinxin Shi, Jiabao Zhao, Yifan Yang et al.

ICCV 2025

LARM: Large Auto-Regressive Model for Long-Horizon Embodied Intelligence

Zhuoling Li, Xiaogang Xu, Zhenhua Xu et al.

ICML 2025 • arXiv:2405.17424 • 9 citations

La RoSA: Enhancing LLM Efficiency via Layerwise Rotated Sparse Activation

Kai Liu, Bowen Xu, Shaoyu Wu et al.

ICML 2025 • arXiv:2507.01299 • 1 citation

LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior

Hanyu Wang, Saksham Suri, Yixuan Ren et al.

ICLR 2025 • arXiv:2410.21264 • 31 citations

LASER: Attention with Exponential Transformation

Sai Surya Duvvuri, Inderjit Dhillon

ICML 2025 • arXiv:2411.03493 • 3 citations

LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits

Duy Nguyen, Archiki Prasad, Elias Stengel-Eskin et al.

NEURIPS 2025 • arXiv:2410.01735 • 6 citations

LASeR: Towards Diversified and Generalizable Robot Design with Large Language Models

Junru Song, Yang Yang, Huan Xiao et al.

ICLR 2025 • 7 citations

Lasso Bandit with Compatibility Condition on Optimal Arm

Harin Lee, Taehyun Hwang, Min-hwan Oh

ICLR 2025 • arXiv:2406.00823 • 4 citations