NeurIPS "large language models" Papers

298 papers found • Page 2 of 6

Classical Planning with LLM-Generated Heuristics: Challenging the State of the Art with Python Code

Augusto B. Corrêa, André G. Pereira, Jendrik Seipp

NeurIPS 2025 • poster • arXiv:2503.18809 • 13 citations

CLAWS: Creativity detection for LLM-generated solutions using Attention Window of Sections

Keuntae Kim, Eunhye Jeong, Sehyeon Lee et al.

NeurIPS 2025 • poster

ClinBench: A Standardized Multi-Domain Framework for Evaluating Large Language Models in Clinical Information Extraction

Ismael Villanueva Miranda, Zifan Gu, Donghan Yang et al.

NeurIPS 2025 • poster

Computation and Memory-Efficient Model Compression with Gradient Reweighting

Zhiwei Li, Yuesen Liao, Binrui Wu et al.

NeurIPS 2025 • poster

Concept-Guided Interpretability via Neural Chunking

Shuchen Wu, Stephan Alaniz, Shyamgopal Karthik et al.

NeurIPS 2025 • poster • arXiv:2505.11576

Concept Incongruence: An Exploration of Time and Death in Role Playing

Xiaoyan Bai, Ike Peng, Aditya Singh et al.

NeurIPS 2025 • oral • arXiv:2505.14905 • 1 citation

Conditional Representation Learning for Customized Tasks

Honglin Liu, Chao Sun, Peng Hu et al.

NeurIPS 2025 • spotlight • arXiv:2510.04564

Conflict-Aware Knowledge Editing in the Wild: Semantic-Augmented Graph Representation for Unstructured Text

Zhange Zhang, Zhicheng Geng, Yuqing Ma et al.

NeurIPS 2025 • spotlight

ConfTuner: Training Large Language Models to Express Their Confidence Verbally

Yibo Li, Miao Xiong, Jiaying Wu et al.

NeurIPS 2025 • poster • arXiv:2508.18847 • 10 citations

ConTextTab: A Semantics-Aware Tabular In-Context Learner

Marco Spinaci, Marek Polewczyk, Maximilian Schambach et al.

NeurIPS 2025 • spotlight • arXiv:2506.10707 • 7 citations

CoP: Agentic Red-teaming for Large Language Models using Composition of Principles

Chen Xiong, Pin-Yu Chen, Tsung-Yi Ho

NeurIPS 2025 • poster • arXiv:2506.00781 • 3 citations

Cost-aware LLM-based Online Dataset Annotation

Eray Can Elumar, Cem Tekin, Osman Yagan

NeurIPS 2025 • spotlight • arXiv:2505.15101 • 1 citation

Creativity or Brute Force? Using Brainteasers as a Window into the Problem-Solving Abilities of Large Language Models

Sophia Han, Howard Dai, Stephen Xia et al.

NeurIPS 2025 • poster • arXiv:2505.10844 • 1 citation

DanmakuTPPBench: A Multi-modal Benchmark for Temporal Point Process Modeling and Understanding

Yue Jiang, Jichu Li, Yang Liu et al.

NeurIPS 2025 • oral • arXiv:2505.18411 • 3 citations

DataSIR: A Benchmark Dataset for Sensitive Information Recognition

Fan Mo, Bo Liu, Yuan Fan et al.

NeurIPS 2025 • poster

Deep Value Benchmark: Measuring Whether Models Generalize Deep Values or Shallow Preferences

Joshua Ashkinaze, Hua Shen, Saipranav Avula et al.

NeurIPS 2025 • oral • arXiv:2511.02109

Detecting High-Stakes Interactions with Activation Probes

Alex McKenzie, Urja Pawar, Phil Blandfort et al.

NeurIPS 2025 • poster • arXiv:2506.10805 • 13 citations

Detoxifying Large Language Models via Autoregressive Reward Guided Representation Editing

Yisong Xiao, Aishan Liu, Siyuan Liang et al.

NeurIPS 2025 • poster • arXiv:2510.01243 • 2 citations

DEXTER: Diffusion-Guided EXplanations with TExtual Reasoning for Vision Models

Simone Carnemolla, Matteo Pennisi, Sarinda Samarasinghe et al.

NeurIPS 2025 • spotlight • arXiv:2510.14741

Differentially Private Federated Low Rank Adaptation Beyond Fixed-Matrix

Ming Wen, Jiaqi Zhu, Yuedong Xu et al.

NeurIPS 2025 • poster • arXiv:2507.09990

Direct Numerical Layout Generation for 3D Indoor Scene Synthesis via Spatial Reasoning

Xingjian Ran, Yixuan Li, Linning Xu et al.

NeurIPS 2025 • poster • arXiv:2506.05341 • 5 citations

DISCO: Disentangled Communication Steering for Large Language Models

Max Torop, Aria Masoomi, Masih Eskandar et al.

NeurIPS 2025 • poster • arXiv:2509.16820

Discovering Important Experts for Mixture-of-Experts Models Pruning Through a Theoretical Perspective

Weizhong Huang, Yuxin Zhang, Xiawu Zheng et al.

NeurIPS 2025 • poster

Disentangled Concepts Speak Louder Than Words: Explainable Video Action Recognition

Jongseo Lee, Wooil Lee, Gyeong-Moon Park et al.

NeurIPS 2025 • spotlight • arXiv:2511.03725

Distribution-Aligned Decoding for Efficient LLM Task Adaptation

Senkang Hu, Xudong Han, Jinqi Jiang et al.

NeurIPS 2025 • poster • arXiv:2509.15888 • 3 citations

DNA-DetectLLM: Unveiling AI-Generated Text via a DNA-Inspired Mutation-Repair Paradigm

Xiaowei Zhu, Yubing Ren, Fang Fang et al.

NeurIPS 2025 • spotlight • arXiv:2509.15550

Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?

Yang Yue, Zhiqi Chen, Rui Lu et al.

NeurIPS 2025 • oral • arXiv:2504.13837 • 483 citations

Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness

Rongzhe Wei, Peizhi Niu, Hans Hao-Hsun Hsu et al.

NeurIPS 2025 • poster • arXiv:2506.05735 • 6 citations

Don’t Forget the Enjoin: FocalLoRA for Instruction Hierarchical Alignment in Large Language Models

Zitong Shi, Guancheng Wan, Haixin Wang et al.

NeurIPS 2025 • poster

Do You Really Need Public Data? Surrogate Public Data for Differential Privacy on Tabular Data

Shlomi Hod, Lucas Rosenblatt, Julia Stoyanovich

NeurIPS 2025 • poster • arXiv:2504.14368 • 1 citation

DSAS: A Universal Plug-and-Play Framework for Attention Optimization in Multi-Document Question Answering

Jiakai Li, Rongzheng Wang, Yizhuo Ma et al.

NeurIPS 2025 • poster • arXiv:2510.12251

DuoGPT: Training-free Dual Sparsity through Activation-aware Pruning in LLMs

Ruokai Yin, Yuhang Li, Donghyun Lee et al.

NeurIPS 2025 • poster • arXiv:2506.20194 • 2 citations

DynaAct: Large Language Model Reasoning with Dynamic Action Spaces

Xueliang Zhao, Wei Wu, Jian Guan et al.

NeurIPS 2025 • poster • arXiv:2511.08043

Dynamic Bundling with Large Language Models for Zero-Shot Inference on Text-Attributed Graphs

Yusheng Zhao, Qixin Zhang, Xiao Luo et al.

NeurIPS 2025 • poster • arXiv:2505.17599 • 2 citations

DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation

Jiashuo Sun, Xianrui Zhong, Sizhe Zhou et al.

NeurIPS 2025 • poster • arXiv:2505.07233 • 5 citations

EAGLE-3: Scaling up Inference Acceleration of Large Language Models via Training-Time Test

Yuhui Li, Fangyun Wei, Chao Zhang et al.

NeurIPS 2025 • poster • arXiv:2503.01840 • 102 citations

Embracing Trustworthy Brain-Agent Collaboration as Paradigm Extension for Intelligent Assistive Technologies

Yankai Chen, Xinni Zhang, Yifei Zhang et al.

NeurIPS 2025 • poster • arXiv:2510.22095 • 1 citation

Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward

Yanming Wan, Jiaxing Wu, Marwa Abdulhai et al.

NeurIPS 2025 • poster • arXiv:2504.03206 • 12 citations

Enhancing Safety in Reinforcement Learning with Human Feedback via Rectified Policy Optimization

Xiyue Peng, Hengquan Guo, Jiawei Zhang et al.

NeurIPS 2025 • poster • arXiv:2410.19933 • 5 citations

ErrorTrace: A Black-Box Traceability Mechanism Based on Model Family Error Space

Chuanchao Zang, Xiangtao Meng, Wenyu Chen et al.

NeurIPS 2025 • spotlight

EvaLearn: Quantifying the Learning Capability and Efficiency of LLMs via Sequential Problem Solving

Shihan Dou, Ming Zhang, Chenhao Huang et al.

NeurIPS 2025 • poster • arXiv:2506.02672 • 4 citations

Evaluating Program Semantics Reasoning with Type Inference in System $F$

Yifeng He, Luning Yang, Christopher Gonzalo et al.

NeurIPS 2025 • poster • arXiv:2509.23686 • 1 citation

Every Rollout Counts: Optimal Resource Allocation for Efficient Test-Time Scaling

Xinglin Wang, Yiwei Li, Shaoxiong Feng et al.

NeurIPS 2025 • poster • arXiv:2506.15707 • 5 citations

Exploring the limits of strong membership inference attacks on large language models

Jamie Hayes, Ilia Shumailov, Christopher A. Choquette-Choo et al.

NeurIPS 2025 • poster • arXiv:2505.18773 • 10 citations

Factorio Learning Environment

Jack Hopkins, Mart Bakler, Akbir Khan

NeurIPS 2025 • poster • arXiv:2503.09617 • 2 citations

FALQON: Accelerating LoRA Fine-tuning with Low-Bit Floating-Point Arithmetic

Kanghyun Choi, Hyeyoon Lee, Sunjong Park et al.

NeurIPS 2025 • arXiv:2510.24061

Far from the Shallow: Brain-Predictive Reasoning Embedding through Residual Disentanglement

Linyang He, Tianjun Zhong, Richard Antonello et al.

NeurIPS 2025 • oral • arXiv:2510.22860 • 1 citation

Few-Shot Knowledge Distillation of LLMs With Counterfactual Explanations

Faisal Hamman, Pasan Dissanayake, Yanjun Fu et al.

NeurIPS 2025 • poster • arXiv:2510.21631 • 1 citation

FFN Fusion: Rethinking Sequential Computation in Large Language Models

Akhiad Bercovich, Mohammed Dabbah, Omri Puny et al.

NeurIPS 2025 • spotlight • arXiv:2503.18908 • 2 citations

Finding and Reactivating Post-Trained LLMs' Hidden Safety Mechanisms

Mingjie Li, Wai Man Si, Michael Backes et al.

NeurIPS 2025 • poster • 1 citation