Poster "large language models" Papers

455 papers found • Page 4 of 10

Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering

Ziyu Zhao, Tao Shen, Didi Zhu et al.

ICLR 2025 poster • arXiv:2409.16167 • 33 citations

MindSearch: Mimicking Human Minds Elicits Deep AI Searcher

Zehui Chen, Kuikun Liu, Qiuchen Wang et al.

ICLR 2025 poster • arXiv:2407.20183 • 53 citations

Min-K%++: Improved Baseline for Pre-Training Data Detection from Large Language Models

Jingyang Zhang, Jingwei Sun, Eric Yeats et al.

ICLR 2025 poster • 24 citations

Mixture Compressor for Mixture-of-Experts LLMs Gains More

Wei Huang, Yue Liao, Jianhui Liu et al.

ICLR 2025 poster • arXiv:2410.06270 • 22 citations

MLZero: A Multi-Agent System for End-to-end Machine Learning Automation

Haoyang Fang, Boran Han, Nick Erickson et al.

NeurIPS 2025 poster • arXiv:2505.13941 • 7 citations

Model Provenance Testing for Large Language Models

Ivica Nikolic, Teodora Baluta, Prateek Saxena

NeurIPS 2025 poster • arXiv:2502.00706 • 8 citations

MODEL SHAPLEY: Find Your Ideal Parameter Player via One Gradient Backpropagation

Chu Xu, Xinke Jiang, Rihong Qiu et al.

NeurIPS 2025 poster

ModuLM: Enabling Modular and Multimodal Molecular Relational Learning with Large Language Models

Zhuo Chen, Yizhen Zheng, Huan Yee Koh et al.

NeurIPS 2025 poster • arXiv:2506.00880 • 1 citation

More of the Same: Persistent Representational Harms Under Increased Representation

Jennifer Mickel, Maria De-Arteaga, Liu Leqi et al.

NeurIPS 2025 poster • arXiv:2503.00333 • 3 citations

More RLHF, More Trust? On The Impact of Preference Alignment On Trustworthiness

Aaron J. Li, Satyapriya Krishna, Hima Lakkaraju

ICLR 2025 poster • arXiv:2404.18870 • 10 citations

Multi-Agent Collaboration via Evolving Orchestration

Yufan Dang, Chen Qian, Xueheng Luo et al.

NeurIPS 2025 poster • arXiv:2505.19591 • 25 citations

Neural Interactive Proofs

Lewis Hammond, Sam Adam-Day

ICLR 2025 poster • arXiv:2412.08897 • 5 citations

No Loss, No Gain: Gated Refinement and Adaptive Compression for Prompt Optimization

Wenhang Shi, Yiren Chen, Shuqing Bian et al.

NeurIPS 2025 poster • arXiv:2509.23387

Offline RL by Reward-Weighted Fine-Tuning for Conversation Optimization

Subhojyoti Mukherjee, Viet Lai, Raghavendra Addanki et al.

NeurIPS 2025 poster • arXiv:2506.06964 • 2 citations

One Filters All: A Generalist Filter For State Estimation

Shiqi Liu, Wenhan Cao, Chang Liu et al.

NeurIPS 2025 poster • arXiv:2509.20051 • 2 citations

On Large Language Model Continual Unlearning

Chongyang Gao, Lixu Wang, Kaize Ding et al.

ICLR 2025 poster • arXiv:2407.10223 • 26 citations

On the Role of Attention Heads in Large Language Model Safety

Zhenhong Zhou, Haiyang Yu, Xinghua Zhang et al.

ICLR 2025 poster • arXiv:2410.13708 • 40 citations

Open-Source vs Close-Source: The Context Utilization Challenge

Litu Ou

ICLR 2025 poster

OSDA Agent: Leveraging Large Language Models for De Novo Design of Organic Structure Directing Agents

Zhaolin Hu, Yixiao Zhou, Zhongan Wang et al.

ICLR 2025 poster • 6 citations

PANORAMA: A Dataset and Benchmarks Capturing Decision Trails and Rationales in Patent Examination

Hyunseung Lim, Sooyohn Nam, Sungmin Na et al.

NeurIPS 2025 poster • arXiv:2510.24774

Param$\Delta$ for Direct Mixing: Post-Train Large Language Model At Zero Cost

Sheng Cao, Mingrui Wu, Karthik Prasad et al.

ICLR 2025 poster

Parameter and Memory Efficient Pretraining via Low-rank Riemannian Optimization

Zhanfeng Mo, Long-Kai Huang, Sinno Jialin Pan

ICLR 2025 poster

ParamMute: Suppressing Knowledge-Critical FFNs for Faithful Retrieval-Augmented Generation

Pengcheng Huang, Zhenghao Liu, Yukun Yan et al.

NeurIPS 2025 poster • arXiv:2502.15543 • 4 citations

Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos

Weifeng Lin, Xinyu Wei, Ruichuan An et al.

NeurIPS 2025 poster • arXiv:2506.05302 • 29 citations

Perturbation-Restrained Sequential Model Editing

Jun-Yu Ma, Hong Wang, Hao-Xiang Xu et al.

ICLR 2025 poster • arXiv:2405.16821 • 17 citations

PHYBench: Holistic Evaluation of Physical Perception and Reasoning in Large Language Models

Shi Qiu, Shaoyang Guo, Zhuo-Yang Song et al.

NeurIPS 2025 poster • arXiv:2504.16074 • 26 citations

PlanU: Large Language Model Reasoning through Planning under Uncertainty

Ziwei Deng, Mian Deng, Chenjing Liang et al.

NeurIPS 2025 poster • arXiv:2510.18442

Plug, Play, and Generalize: Length Extrapolation with Pointer-Augmented Neural Memory

Svetha Venkatesh, Kien Do, Hung Le et al.

ICLR 2025 poster

Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models

Zhijian Zhuo, Ya Wang, Yutao Zeng et al.

ICLR 2025 poster • arXiv:2411.03884 • 5 citations

PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches

Rana Muhammad Shahroz Khan, Pingzhi Li, Sukwon Yun et al.

ICLR 2025 poster • arXiv:2410.10870 • 3 citations

Preference-driven Knowledge Distillation for Few-shot Node Classification

Xing Wei, Chunchun Chen, Rui Fan et al.

NeurIPS 2025 poster • arXiv:2510.10116

Private Training Large-scale Models with Efficient DP-SGD

Liangyu Wang, Junxiao Wang, Jie Ren et al.

NeurIPS 2025 poster

Probabilistic Reasoning with LLMs for Privacy Risk Estimation

Jonathan Zheng, Alan Ritter, Sauvik Das et al.

NeurIPS 2025 poster

Probabilistic Token Alignment for Large Language Model Fusion

Runjia Zeng, James Liang, Cheng Han et al.

NeurIPS 2025 poster • arXiv:2509.17276 • 2 citations

Procedural Knowledge in Pretraining Drives Reasoning in Large Language Models

Laura Ruis, Maximilian Mozes, Juhan Bae et al.

ICLR 2025 poster • arXiv:2411.12580 • 24 citations

Progress Reward Model for Reinforcement Learning via Large Language Models

Xiuhui Zhang, Ning Gao, Xingyu Jiang et al.

NeurIPS 2025 poster

PseuZO: Pseudo-Zeroth-Order Algorithm for Training Deep Neural Networks

Pengyun Yue, Xuanlin Yang, Mingqing Xiao et al.

NeurIPS 2025 poster

RaSA: Rank-Sharing Low-Rank Adaptation

Zhiwei He, Zhaopeng Tu, Xing Wang et al.

ICLR 2025 poster • arXiv:2503.12576 • 4 citations

Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning

Arian Raje, Baris Askin, Divyansh Jhunjhunwala et al.

NeurIPS 2025 poster • arXiv:2506.05568 • 1 citation

Reasoning Models Better Express Their Confidence

Dongkeun Yoon, Seungone Kim, Sohee Yang et al.

NeurIPS 2025 poster • arXiv:2505.14489 • 32 citations

Reasoning of Large Language Models over Knowledge Graphs with Super-Relations

Song Wang, Junhong Lin, Xiaojie Guo et al.

ICLR 2025 poster • arXiv:2503.22166 • 17 citations

Re-evaluating Open-ended Evaluation of Large Language Models

Si-Qi Liu, Ian Gemp, Luke Marris et al.

ICLR 2025 poster • arXiv:2502.20170 • 5 citations

Refine Knowledge of Large Language Models via Adaptive Contrastive Learning

Yinghui Li, Haojing Huang, Jiayi Kuang et al.

ICLR 2025 poster • arXiv:2502.07184 • 14 citations

Reinforcement Learning with Backtracking Feedback

Bilgehan Sel, Vaishakh Keshava, Phillip Wallis et al.

NeurIPS 2025 poster

Reliable Decision-Making via Calibration-Oriented Retrieval-Augmented Generation

Chaeyun Jang, Deukhwan Cho, Seanie Lee et al.

NeurIPS 2025 poster

ReMA: Learning to Meta-Think for LLMs with Multi-agent Reinforcement Learning

Ziyu Wan, Yunxiang Li, Xiaoyu Wen et al.

NeurIPS 2025 poster • arXiv:2503.09501 • 36 citations

Representation Consistency for Accurate and Coherent LLM Answer Aggregation

Junqi Jiang, Tom Bewley, Salim I. Amoukou et al.

NeurIPS 2025 poster • arXiv:2506.21590 • 2 citations

RESAnything: Attribute Prompting for Arbitrary Referring Segmentation

Ruiqi Wang, Hao Zhang

NeurIPS 2025 poster • arXiv:2505.02867 • 2 citations

ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning

Mingyang Chen, Linzhuang Sun, Tianpeng Li et al.

NeurIPS 2025 poster • arXiv:2503.19470 • 56 citations

Re-Thinking Inverse Graphics With Large Language Models

Haiwen Feng, Michael J Black, Weiyang Liu et al.

ICLR 2025 poster • arXiv:2404.15228 • 15 citations