Poster "large language models" Papers

740 papers found • Page 4 of 15

Dynamic Bundling with Large Language Models for Zero-Shot Inference on Text-Attributed Graphs

Yusheng Zhao, Qixin Zhang, Xiao Luo et al.

NEURIPS 2025 • arXiv:2505.17599 • 2 citations

Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining

Daouda Sow, Herbert Woisetschläger, Saikiran Bulusu et al.

ICLR 2025 • arXiv:2502.06733 • 13 citations

DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation

Jiashuo Sun, Xianrui Zhong, Sizhe Zhou et al.

NEURIPS 2025 • arXiv:2505.07233 • 6 citations

Dynamic Updates for Language Adaptation in Visual-Language Tracking

Xiaohai Li, Bineng Zhong, Qihua Liang et al.

CVPR 2025 • arXiv:2503.06621 • 7 citations

EAGLE-3: Scaling up Inference Acceleration of Large Language Models via Training-Time Test

Yuhui Li, Fangyun Wei, Chao Zhang et al.

NEURIPS 2025 • arXiv:2503.01840 • 115 citations

EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models

Jialiang Cheng, Ning Gao, Yun Yue et al.

ICLR 2025 • arXiv:2412.07210 • 1 citation

Effective Interplay between Sparsity and Quantization: From Theory to Practice

Simla Harma, Ayan Chakraborty, Elizaveta Kostenok et al.

ICLR 2025 • arXiv:2405.20935 • 19 citations

Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition

Aliyah Hsu, Georgia Zhou, Yeshwanth Cherapanamjeri et al.

ICLR 2025 • arXiv:2407.00886 • 15 citations

Efficient Jailbreak Attack Sequences on Large Language Models via Multi-Armed Bandit-Based Context Switching

Aditya Ramesh, Shivam Bhardwaj, Aditya Saibewar et al.

ICLR 2025 • 3 citations

Efficient Long Context Fine-tuning with Chunk Flow

Xiulong Yuan, Hongtao Xu, Wenting Shen et al.

ICML 2025 • arXiv:2503.02356 • 3 citations

Efficient Reinforcement Learning with Large Language Model Priors

Xue Yan, Yan Song, Xidong Feng et al.

ICLR 2025 • arXiv:2410.07927 • 21 citations

Efficient stagewise pretraining via progressive subnetworks

Abhishek Panigrahi, Nikunj Saunshi, Kaifeng Lyu et al.

ICLR 2025 • arXiv:2402.05913 • 8 citations

EffiCoder: Enhancing Code Generation in Large Language Models through Efficiency-Aware Fine-tuning

Dong Huang, Guangtao Zeng, Jianbo Dai et al.

ICML 2025 • arXiv:2410.10209 • 9 citations

EgoLM: Multi-Modal Language Model of Egocentric Motions

Fangzhou Hong, Vladimir Guzov, Hyo Jin Kim et al.

CVPR 2025 • arXiv:2409.18127 • 12 citations

ELICIT: LLM Augmentation Via External In-context Capability

Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.

ICLR 2025 • arXiv:2410.09343 • 6 citations

Embracing Trustworthy Brain-Agent Collaboration as Paradigm Extension for Intelligent Assistive Technologies

Yankai Chen, Xinni Zhang, Yifei Zhang et al.

NEURIPS 2025 • arXiv:2510.22095 • 1 citation

Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models

Rui Ye, Jingyi Chai, Xiangrui Liu et al.

ICLR 2025 • arXiv:2406.10630 • 18 citations

Empowering LLMs to Understand and Generate Complex Vector Graphics

XiMing Xing, Juncheng Hu, Guotao Liang et al.

CVPR 2025 • arXiv:2412.11102 • 33 citations

Enhancing Graph Of Thought: Enhancing Prompts with LLM Rationales and Dynamic Temperature Control

Sunguk Shin, Youngjoon Kim

ICLR 2025 • 3 citations

Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward

Yanming Wan, Jiaxing Wu, Marwa Abdulhai et al.

NEURIPS 2025 • arXiv:2504.03206 • 13 citations

Enhancing Safety in Reinforcement Learning with Human Feedback via Rectified Policy Optimization

Xiyue Peng, Hengquan Guo, Jiawei Zhang et al.

NEURIPS 2025 • arXiv:2410.19933 • 5 citations

EvaLearn: Quantifying the Learning Capability and Efficiency of LLMs via Sequential Problem Solving

Shihan Dou, Ming Zhang, Chenhao Huang et al.

NEURIPS 2025 • arXiv:2506.02672 • 4 citations

Evaluating Program Semantics Reasoning with Type Inference in System F

Yifeng He, Luning Yang, Christopher Gonzalo et al.

NEURIPS 2025 • arXiv:2509.23686 • 1 citation

Every Rollout Counts: Optimal Resource Allocation for Efficient Test-Time Scaling

Xinglin Wang, Yiwei Li, Shaoxiong Feng et al.

NEURIPS 2025 • arXiv:2506.15707 • 5 citations

Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models

Jingcheng Deng, Zihao Wei, Liang Pang et al.

ICLR 2025 • arXiv:2405.15349 • 8 citations

Evidential Knowledge Distillation

Liangyu Xiang, Junyu Gao, Changsheng Xu

ICCV 2025 • arXiv:2507.18366 • 1 citation

Exploring CLIP's Dense Knowledge for Weakly Supervised Semantic Segmentation

Zhiwei Yang, Yucong Meng, Kexue Fu et al.

CVPR 2025 • arXiv:2503.20826 • 8 citations

Exploring the limits of strong membership inference attacks on large language models

Jamie Hayes, I Shumailov, Christopher A. Choquette-Choo et al.

NEURIPS 2025 • arXiv:2505.18773 • 12 citations

Factorio Learning Environment

Jack Hopkins, Mart Bakler, Akbir Khan

NEURIPS 2025 • arXiv:2503.09617 • 2 citations

Federated Residual Low-Rank Adaption of Large Language Models

Yunlu Yan, Chun-Mei Feng, Wangmeng Zuo et al.

ICLR 2025 • 8 citations

Few-Shot Knowledge Distillation of LLMs With Counterfactual Explanations

Faisal Hamman, Pasan Dissanayake, Yanjun Fu et al.

NEURIPS 2025 • arXiv:2510.21631 • 1 citation

Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models

Keisuke Kamahori, Tian Tang, Yile Gu et al.

ICLR 2025 • arXiv:2402.07033 • 48 citations

Finding and Reactivating Post-Trained LLMs' Hidden Safety Mechanisms

Mingjie Li, Wai Man Si, Michael Backes et al.

NEURIPS 2025 • 1 citation

Fine-tuning can Help Detect Pretraining Data from Large Language Models

Hengxiang Zhang, Songxin Zhang, Bingyi Jing et al.

ICLR 2025 • arXiv:2410.10880 • 4 citations

Fleet of Agents: Coordinated Problem Solving with Large Language Models

Lars Klein, Nearchos Potamitis, Roland Aydin et al.

ICML 2025 • arXiv:2405.06691 • 3 citations

FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference

Xunhao Lai, Jianqiao Lu, Yao Luo et al.

ICLR 2025 • arXiv:2502.20766 • 62 citations

Flick: Empowering Federated Learning with Commonsense Knowledge

Ran Zhu, Mingkun Yang, Shiqiang Wang et al.

NEURIPS 2025

FlowerTune: A Cross-Domain Benchmark for Federated Fine-Tuning of Large Language Models

Yan Gao, Massimo R. Scamarcia, Javier Fernandez-Marques et al.

NEURIPS 2025 • arXiv:2506.02961 • 4 citations

FlowPrune: Accelerating Attention Flow Calculation by Pruning Flow Network

Shuo Xu, Yu Chen, Shuxia Lin et al.

NEURIPS 2025

FoGE: Fock Space inspired encoding for graph prompting

Takis Chytas, Rudrasis Chakraborty, Vikas Singh

NEURIPS 2025 • arXiv:2507.02937

ForecastBench: A Dynamic Benchmark of AI Forecasting Capabilities

Ezra Karger, Houtan Bastani, Chen Yueh-Han et al.

ICLR 2025 • arXiv:2409.19839 • 34 citations

Forking Paths in Neural Text Generation

Eric Bigelow, Ari Holtzman, Hidenori Tanaka et al.

ICLR 2025 • arXiv:2412.07961 • 20 citations

From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data

Zheyang Xiong, Vasilis Papageorgiou, Kangwook Lee et al.

ICLR 2025 • arXiv:2406.19292 • 19 citations

From Attention to Activation: Unraveling the Enigmas of Large Language Models

Prannay Kaul, Chengcheng Ma, Ismail Elezi et al.

ICLR 2025 • arXiv:2410.17174 • 8 citations

From Euler to AI: Unifying Formulas for Mathematical Constants

Tomer Raz, Michael Shalyt, Elyasheev Leibtag et al.

NEURIPS 2025 • arXiv:2502.17533 • 1 citation

From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories, and Applications

Ajay Jaiswal, Yifan Wang, Lu Yin et al.

ICML 2025 • arXiv:2407.11239 • 20 citations

From Programs to Poses: Factored Real-World Scene Generation via Learned Program Libraries

Joy Hsu, Emily Jin, Jiajun Wu et al.

NEURIPS 2025 • arXiv:2510.10292 • 1 citation

Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks

Zi Wang, Divyam Anshumaan, Ashish Hooda et al.

ICLR 2025 • arXiv:2410.04234 • 4 citations

General-Reasoner: Advancing LLM Reasoning Across All Domains

Xueguang Ma, Qian Liu, Dongfu Jiang et al.

NEURIPS 2025 • arXiv:2505.14652 • 86 citations

General Scene Adaptation for Vision-and-Language Navigation

Haodong Hong, Yanyuan Qiao, Sen Wang et al.

ICLR 2025 • arXiv:2501.17403 • 10 citations