Poster "large language models" Papers

533 papers found • Page 3 of 11

DSBench: How Far Are Data Science Agents from Becoming Data Science Experts?

Liqiang Jing, Zhehui Huang, Xiaoyang Wang et al.

ICLR 2025 · Poster · arXiv:2409.07703
62 citations

DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for Robust Few-Shot Segmentation

Amin Karimi, Charalambos Poullis

CVPR 2025 · Poster · arXiv:2503.04006
4 citations

DuoGPT: Training-free Dual Sparsity through Activation-aware Pruning in LLMs

Ruokai Yin, Yuhang Li, Donghyun Lee et al.

NeurIPS 2025 · Poster · arXiv:2506.20194
2 citations

Durable Quantization Conditioned Misalignment Attack on Large Language Models

Peiran Dong, Haowei Li, Song Guo

ICLR 2025 · Poster
1 citation

DWIM: Towards Tool-aware Visual Reasoning via Discrepancy-aware Workflow Generation & Instruct-Masking Tuning

Fucai Ke, Vijay Kumar B G, Xingjian Leng et al.

ICCV 2025 · Poster · arXiv:2503.19263
6 citations

Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining

Daouda Sow, Herbert Woisetschläger, Saikiran Bulusu et al.

ICLR 2025 · Poster · arXiv:2502.06733
13 citations

DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation

Jiashuo Sun, Xianrui Zhong, Sizhe Zhou et al.

NeurIPS 2025 · Poster · arXiv:2505.07233
5 citations

Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition

Aliyah Hsu, Georgia Zhou, Yeshwanth Cherapanamjeri et al.

ICLR 2025 · Poster · arXiv:2407.00886
14 citations

Efficient Jailbreak Attack Sequences on Large Language Models via Multi-Armed Bandit-Based Context Switching

Aditya Ramesh, Shivam Bhardwaj, Aditya Saibewar et al.

ICLR 2025 · Poster
3 citations

ELICIT: LLM Augmentation Via External In-context Capability

Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.

ICLR 2025 · Poster · arXiv:2410.09343
6 citations

Embracing Trustworthy Brain-Agent Collaboration as Paradigm Extension for Intelligent Assistive Technologies

Yankai Chen, Xinni Zhang, Yifei Zhang et al.

NeurIPS 2025 · Poster · arXiv:2510.22095
1 citation

Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models

Rui Ye, Jingyi Chai, Xiangrui Liu et al.

ICLR 2025 · Poster · arXiv:2406.10630
18 citations

Enhancing Graph Of Thought: Enhancing Prompts with LLM Rationales and Dynamic Temperature Control

Sunguk Shin, Youngjoon Kim

ICLR 2025 · Poster
3 citations

Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward

Yanming Wan, Jiaxing Wu, Marwa Abdulhai et al.

NeurIPS 2025 · Poster · arXiv:2504.03206
12 citations

Every Rollout Counts: Optimal Resource Allocation for Efficient Test-Time Scaling

Xinglin Wang, Yiwei Li, Shaoxiong Feng et al.

NeurIPS 2025 · Poster · arXiv:2506.15707
5 citations

Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models

Jingcheng Deng, Zihao Wei, Liang Pang et al.

ICLR 2025 · Poster · arXiv:2405.15349
6 citations

Exploring CLIP's Dense Knowledge for Weakly Supervised Semantic Segmentation

Zhiwei Yang, Yucong Meng, Kexue Fu et al.

CVPR 2025 · Poster · arXiv:2503.20826

Exploring the limits of strong membership inference attacks on large language models

Jamie Hayes, Ilia Shumailov, Christopher A. Choquette-Choo et al.

NeurIPS 2025 · Poster · arXiv:2505.18773
10 citations

Federated Residual Low-Rank Adaption of Large Language Models

Yunlu Yan, Chun-Mei Feng, Wangmeng Zuo et al.

ICLR 2025 · Poster
6 citations

Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models

Keisuke Kamahori, Tian Tang, Yile Gu et al.

ICLR 2025 · Poster · arXiv:2402.07033
45 citations

Finding and Reactivating Post-Trained LLMs' Hidden Safety Mechanisms

Mingjie Li, Wai Man Si, Michael Backes et al.

NeurIPS 2025 · Poster
1 citation

Fine-tuning can Help Detect Pretraining Data from Large Language Models

Hengxiang Zhang, Songxin Zhang, Bingyi Jing et al.

ICLR 2025 · Poster · arXiv:2410.10880
4 citations

FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference

Xunhao Lai, Jianqiao Lu, Yao Luo et al.

ICLR 2025 · Poster · arXiv:2502.20766
51 citations

FoGE: Fock Space inspired encoding for graph prompting

Takis Chytas, Rudrasis Chakraborty, Vikas Singh

NeurIPS 2025 · Poster · arXiv:2507.02937

ForecastBench: A Dynamic Benchmark of AI Forecasting Capabilities

Ezra Karger, Houtan Bastani, Chen Yueh-Han et al.

ICLR 2025 · Poster · arXiv:2409.19839
31 citations

From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data

Zheyang Xiong, Vasilis Papageorgiou, Kangwook Lee et al.

ICLR 2025 · Poster · arXiv:2406.19292
19 citations

From Programs to Poses: Factored Real-World Scene Generation via Learned Program Libraries

Joy Hsu, Emily Jin, Jiajun Wu et al.

NeurIPS 2025 · Poster · arXiv:2510.10292
1 citation

Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks

Zi Wang, Divyam Anshumaan, Ashish Hooda et al.

ICLR 2025 · Poster · arXiv:2410.04234
4 citations

General Scene Adaptation for Vision-and-Language Navigation

Haodong Hong, Yanyuan Qiao, Sen Wang et al.

ICLR 2025 · Poster · arXiv:2501.17403
10 citations

Generative Monoculture in Large Language Models

Fan Wu, Emily Black, Varun Chandrasekaran

ICLR 2025 · Poster · arXiv:2407.02209
10 citations

Generator-Mediated Bandits: Thompson Sampling for GenAI-Powered Adaptive Interventions

Marc Brooks, Gabriel Durham, Kihyuk Hong et al.

NeurIPS 2025 · Poster · arXiv:2505.16311

GeoCAD: Local Geometry-Controllable CAD Generation with Large Language Models

Zhanwei Zhang, Kaiyuan Liu, Junjie Liu et al.

NeurIPS 2025 · Poster · arXiv:2506.10337
2 citations

GOFA: A Generative One-For-All Model for Joint Graph Language Modeling

Lecheng Kong, Jiarui Feng, Hao Liu et al.

ICLR 2025 · Poster · arXiv:2407.09709
28 citations

Gradient Multi-Normalization for Efficient LLM Training

Meyer Scetbon, Chao Ma, Wenbo Gong et al.

NeurIPS 2025 · Poster
3 citations

GraphChain: Large Language Models for Large-scale Graph Analysis via Tool Chaining

Chunyu Wei, Wenji Hu, Xingjia Hao et al.

NeurIPS 2025 · Poster · arXiv:2511.00457

GRIFFIN: Effective Token Alignment for Faster Speculative Decoding

Shijing Hu, Jingyang Li, Xingyu Xie et al.

NeurIPS 2025 · Poster · arXiv:2502.11018
3 citations

GRIP: A Graph-Based Reasoning Instruction Producer

Jiankang Wang, Jianjun Xu, Xiaorui Wang et al.

NeurIPS 2025 · Poster · arXiv:2412.08864
2 citations

HaDeMiF: Hallucination Detection and Mitigation in Large Language Models

Xiaoling Zhou, Mingjie Zhang, Zhemg Lee et al.

ICLR 2025 · Poster
9 citations

HALL-E: Hierarchical Neural Codec Language Model for Minute-Long Zero-Shot Text-to-Speech Synthesis

Yuto Nishimura, Takumi Hirose, Masanari Ohi et al.

ICLR 2025 · Poster · arXiv:2410.04380
5 citations

HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models

Seanie Lee, Haebin Seong, Dong Bok Lee et al.

ICLR 2025 · Poster · arXiv:2410.01524
13 citations

HCRMP: An LLM-Hinted Contextual Reinforcement Learning Framework for Autonomous Driving

Zhiwen Chen, Hanming Deng, Zhuoren Li et al.

NeurIPS 2025 · Poster · arXiv:2505.15793
3 citations

Herald: A Natural Language Annotated Lean 4 Dataset

Guoxiong Gao, Yutong Wang, Jiedong Jiang et al.

ICLR 2025 · Poster · arXiv:2410.10878
28 citations

Hierarchical Demonstration Order Optimization for Many-shot In-Context Learning

Yinhan He, Wendy Zheng, Song Wang et al.

NeurIPS 2025 · Poster

HiMoLE: Towards OOD-Robust LoRA via Hierarchical Mixture of Experts

Yinuo Jiang, Yan Xiaodong, Keyan Ding et al.

NeurIPS 2025 · Poster

How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension

Xinnan Dai, Haohao Qu, Yifei Shen et al.

ICLR 2025 · Poster · arXiv:2410.05298
20 citations

Human Simulacra: Benchmarking the Personification of Large Language Models

Qiujie Xie, Qiming Feng, Tianqi Zhang et al.

ICLR 2025 · Poster · arXiv:2402.18180
8 citations

Hypothetical Minds: Scaffolding Theory of Mind for Multi-Agent Tasks with Large Language Models

Logan Cross, Violet Xiang, Agam Bhatia et al.

ICLR 2025 · Poster · arXiv:2407.07086
22 citations

Imagine and Seek: Improving Composed Image Retrieval with an Imagined Proxy

You Li, Fan Ma, Yi Yang

CVPR 2025 · Poster · arXiv:2411.16752
9 citations

Implicit In-context Learning

Zhuowei Li, Zihao Xu, Ligong Han et al.

ICLR 2025 · Poster · arXiv:2405.14660
8 citations

Improved Techniques for Optimization-Based Jailbreaking on Large Language Models

Xiaojun Jia, Tianyu Pang, Chao Du et al.

ICLR 2025 · Poster · arXiv:2405.21018
74 citations