NeurIPS Papers
5,858 papers found • Page 15 of 118
CamEdit: Continuous Camera Parameter Control for Photorealistic Image Editing
Xinran Qin, Zhixin Wang, Fan Li et al.
Cameras as Relative Positional Encoding
Ruilong Li, Brent Yi, Junchen Liu et al.
CAMILA: Context-Aware Masking for Image Editing with Language Alignment
Hyunseung Kim, Chiho Choi, Srikanth Malla et al.
CaMiT: A Time-Aware Car Model Dataset for Classification and Generation
Frédéric Lin, Biruk Abere Ambaw, Adrian Popescu et al.
CAML: Collaborative Auxiliary Modality Learning for Multi-Agent Systems
Rui Liu, Yu Shen, Peng Gao et al.
CAMO: Convergence-Aware Multi-Fidelity Bayesian Optimization
Wei Xing, Zhenjie Lu, Akeel Shah
CamSAM2: Segment Anything Accurately in Camouflaged Videos
Yuli Zhou, Yawei Li, Yuqian Fu et al.
Can Agent Fix Agent Issues?
Alfin Wijaya Rahardja, Junwei Liu, Weitong Chen et al.
Cancer Survival Analysis via Zero-shot Tumor Microenvironment Segmentation on Low-resolution Whole Slide Pathology Images
Jiao Tang, Wei Shao, Daoqiang Zhang
Can Class-Priors Help Single-Positive Multi-Label Learning?
Biao Liu, Ning Xu, Jie Wang et al.
Can Dependencies Induced by LLM-Agent Workflows Be Trusted?
Yu Yao, Yiliao (Lia) Song, Yian Xie et al.
Can Diffusion Models Disentangle? A Theoretical Perspective
Liming Wang, Muhammad Jehanzeb Mirza, Yishu Gong et al.
Can DPO Learn Diverse Human Values? A Theoretical Scaling Law
Shawn Im, Sharon Li
Can Knowledge-Graph-based Retrieval Augmented Generation Really Retrieve What You Need?
Junchi Yu, Yujie Liu, Jindong Gu et al.
Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark
Hanlei Zhang, Zhuohang Li, Hua Xu et al.
Can Large Language Models Master Complex Card Games?
Wei Wang, Fuqing Bie, Junzhe Chen et al.
Can Large Multimodal Models Understand Agricultural Scenes? Benchmarking with AgroMind
Qingmei Li, Yang Zhang, Zurong Mai et al.
Can LLMs Correct Themselves? A Benchmark of Self-Correction in LLMs
Guiyao Tie, Zenghui Yuan, Zeli Zhao et al.
Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation
Qijiong Liu, Jieming Zhu, Lu Fan et al.
Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning
Tianle Zhang, Wanlong Fang, Jonathan Woo et al.
Can MLLMs Absorb Math Reasoning Abilities from LLMs as Free Lunch?
Yijie Hu, Zihao Zhou, Kaizhu Huang et al.
Can Multi-Modal LLMs Provide Live Step-by-Step Task Guidance?
Apratim Bhattacharyya, Bicheng Xu, Sanjay Haresh et al.
Can NeRFs "See" without Cameras?
Chaitanya Amballa, Yu-Lin Wei, Sattwik Basu et al.
Can We Infer Confidential Properties of Training Data from LLMs?
Pengrun Huang, Chhavi Yadav, Kamalika Chaudhuri et al.
CAPability: A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness
Zhihang Liu, Chen-Wei Xie, Bin Wen et al.
Caption This, Reason That: VLMs Caught in the Middle
Zihan Weng, Lucas Gomez, Taylor Webb et al.
Capturing Individual Human Preferences with Reward Features
Andre Barreto, Vincent Dumoulin, Yiran Mao et al.
Capturing Polysemanticity with PRISM: A Multi-Concept Feature Description Framework
Laura Kopf, Nils Feldhus, Kirill Bykov et al.
CarbonGlobe: A Global-Scale, Multi-Decade Dataset and Benchmark for Carbon Forecasting in Forest Ecosystems
Zhihao Wang, Lei Ma, George Hurtt et al.
CARE: Decoding-Time Safety Alignment via Rollback and Introspection Intervention
Xiaomeng Hu, Fei Huang, Chenhan Yuan et al.
Care-PD: A Multi-Site Anonymized Clinical Dataset for Parkinson’s Disease Gait Assessment
Vida Adeli, Ivan Klabučar, Javad Rajabi et al.
CARES: Comprehensive Evaluation of Safety and Adversarial Robustness in Medical LLMs
Sijia Chen, Xiaomin Li, Mengxue Zhang et al.
CAR-Flow: Condition-Aware Reparameterization Aligns Source and Target for Better Flow Matching
Chen Chen, Pengsheng Guo, Liangchen Song et al.
Cascaded Language Models for Cost-Effective Human–AI Decision-Making
Claudio Fanconi, Mihaela van der Schaar
CAS-Spec: Cascade Adaptive Self-Speculative Decoding for On-the-Fly Lossless Inference Acceleration of LLMs
Zhiyuan Ning, Jiawei Shao, Ruge Xu et al.
CAT: Circular-Convolutional Attention for Sub-Quadratic Transformers
Yoshihiro Yamada
CAT: Content-Adaptive Image Tokenization
Junhong Shen, Kushal Tirumala, Michihiro Yasunaga et al.
CATransformers: Carbon Aware Transformers Through Joint Model-Hardware Optimization
Irene Wang, Mostafa Elhoushi, H. Ekin Sumbul et al.
Causal Climate Emulation with Bayesian Filtering
Sebastian H. M. Hickman, Ilija Trajković, Julia Kaltenborn et al.
Causal Differentiating Concepts: Interpreting LM Behavior via Causal Representation Learning
Navita Goyal, Hal Daumé III, Alexandre Drouin et al.
Causal Discovery and Inference through Next-Token Prediction
Eivinas Butkus, Nikolaus Kriegeskorte
Causal Discovery over Clusters of Variables in Markovian Systems
Tara Anand, Adèle Ribeiro, Jin Tian et al.
CausalDynamics: A large‐scale benchmark for structural discovery of dynamical causal models
Benjamin Herdeanu, Juan Nathaniel, Carla Roesch et al.
Causal Explanation-Guided Learning for Organ Allocation
Alessandro Marchese, Jeroen Berrevoets, Sam Verboven
Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers
Andrew Nam, Henry Conklin, Yukang Yang et al.
Causality-Induced Positional Encoding for Transformer-Based Representation Learning of Non-Sequential Features
Kaichen Xu, Yihang Du, Mianpeng Liu et al.
Causality Meets Locality: Provably Generalizable and Scalable Policy Learning for Networked Systems
Hao Liang, Shuqing Shi, Yudi Zhang et al.
Causality Meets the Table: Debiasing LLMs for Faithful TableQA via Front-Door Intervention
Zhen Yang, Ziwei Du, Minghan Zhang et al.
Causal LLM Routing: End-to-End Regret Minimization from Observational Data
Asterios Tsiourvas, Wei Sun, Georgia Perakis
Causally Reliable Concept Bottleneck Models
Giovanni De Felice, Arianna Casanova Flores, Francesco De Santis et al.