Spotlight Papers
Position: LLMs Can’t Plan, But Can Help Planning in LLM-Modulo Frameworks
Subbarao Kambhampati, Karthik Valmeekam, Lin Guan et al.
Position: Mission Critical – Satellite Data is a Distinct Modality in Machine Learning
Esther Rolf, Konstantin Klemmer, Caleb Robinson et al.
Position: Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination
Zhiyao Luo, Yangchen Pan, Peter Watkinson et al.
Position: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning
Micah Goldblum, Marc Finzi, Keefer Rowan et al.
Position: Understanding LLMs Requires More Than Statistical Generalization
Patrik Reizinger, Szilvia Ujváry, Anna Mészáros et al.
Position: What makes an image realistic?
Lucas Theis
Post-hoc bias scoring is optimal for fair classification
Wenlong Chen, Yegor Klochkov, Yang Liu
Practical Performance Guarantees for Pipelined DNN Inference
Aaron Archer, Matthew Fahrbach, Kuikui Liu et al.
Prediction without Preclusion: Recourse Verification with Reachable Sets
Avni Kothari, Bogdan Kulynych, Tsui-Wei Weng et al.
Predictive Linear Online Tracking for Unknown Targets
Anastasios Tsiamis, Aren Karapetyan, Yueshan Li et al.
Predictive, scalable and interpretable knowledge tracing on structured domains
Hanqi Zhou, Robert Bamler, Charley Wu et al.
Pre-Training and Fine-Tuning Generative Flow Networks
Ling Pan, Moksh Jain, Kanika Madan et al.
Pre-training with Random Orthogonal Projection Image Modeling
Maryam Haghighat, Peyman Moghadam, Shaheer Mohamed et al.
Pricing with Contextual Elasticity and Heteroscedastic Valuation
Jianyu Xu, Yu-Xiang Wang
PriorBoost: An Adaptive Algorithm for Learning from Aggregate Responses
Adel Javanmard, Matthew Fahrbach, Vahab Mirrokni
Privacy Amplification for Matrix Mechanisms
Christopher Choquette-Choo, Arun Ganesh, Thomas Steinke et al.
Privileged Sensing Scaffolds Reinforcement Learning
Edward Hu, James Springer, Oleh Rybkin et al.
Procedural Fairness Through Decoupling Objectionable Data Generating Components
Zeyu Tang, Jialu Wang, Yang Liu et al.
Project and Probe: Sample-Efficient Adaptation by Interpolating Orthogonal Features
Annie Chen, Yoonho Lee, Amrith Setlur et al.
Promoting External and Internal Equities Under Ex-Ante/Ex-Post Metrics in Online Resource Allocation
Karthik Abinav Sankararaman, Aravind Srinivasan, Pan Xu
Prompt Gradient Projection for Continual Learning
Jingyang Qiao, Zhizhong Zhang, Xin Tan et al.
Prospective Side Information for Latent MDPs
Jeongyeol Kwon, Yonathan Efroni, Shie Mannor et al.
Prototypical Information Bottlenecking and Disentangling for Multimodal Cancer Survival Prediction
Yilan Zhang, Yingxue Xu, Jianqi Chen et al.
Provable Offline Preference-Based Reinforcement Learning
Wenhao Zhan, Masatoshi Uehara, Nathan Kallus et al.
Provable Reward-Agnostic Preference-Based Reinforcement Learning
Wenhao Zhan, Masatoshi Uehara, Wen Sun et al.
PTaRL: Prototype-based Tabular Representation Learning via Space Calibration
Hangting Ye, Wei Fan, Xiaozhuang Song et al.
Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision
Haoning Wu, Zicheng Zhang, Erli Zhang et al.
QBMK: Quantum-based Matching Kernels for Un-attributed Graphs
Lu Bai, Lixin Cui, Ming Li et al.
Quasi-Monte Carlo Features for Kernel Approximation
Zhen Huang, Jiajin Sun, Yian Huang
Quasi-Monte Carlo for 3D Sliced Wasserstein
Khai Nguyen, Nicola Bariletto, Nhat Ho
Query-Policy Misalignment in Preference-Based Reinforcement Learning
Xiao Hu, Jianxiong Li, Xianyuan Zhan et al.
QuRating: Selecting High-Quality Data for Training Language Models
Alexander Wettig, Aatmik Gupta, Saumya Malik et al.
Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis
Zhenhui Ye, Tianyun Zhong, Yi Ren et al.
Realistic Evaluation of Semi-supervised Learning Algorithms in Open Environments
Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou et al.
Realistic Unsupervised CLIP Fine-tuning with Universal Entropy Optimization
Jian Liang, Lijun Sheng, Zhengbo Wang et al.
R-EDL: Relaxing Nonessential Settings of Evidential Deep Learning
Mengyuan Chen, Junyu Gao, Changsheng Xu
Re-Dock: Towards Flexible and Realistic Molecular Docking with Diffusion Bridge
Yufei Huang, Odin Zhang, Lirong Wu et al.
Refined Coreset Selection: Towards Minimal Coreset Size under Model Performance Constraints
Xiaobo Xia, Jiale Liu, Shaokun Zhang et al.
Regression with Multi-Expert Deferral
Anqi Mao, Mehryar Mohri, Yutao Zhong
Relaxing the Accurate Imputation Assumption in Doubly Robust Learning for Debiased Collaborative Filtering
Haoxuan Li, Chunyuan Zheng, Shuyi Wang et al.
Relay Diffusion: Unifying diffusion process across resolutions for image synthesis
Jiayan Teng, Wendi Zheng, Ming Ding et al.
Replicable Learning of Large-Margin Halfspaces
Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen et al.
Representing Molecules as Random Walks Over Interpretable Grammars
Michael Sun, Minghao Guo, Weize Yuan et al.
Resisting Stochastic Risks in Diffusion Planners with the Trajectory Aggregation Tree
Lang Feng, Pengjie Gu, Bo An et al.
Retrieval-based Disentangled Representation Learning with Natural Language Supervision
Jiawei Zhou, Xiaoguang Li, Lifeng Shang et al.
RetroBridge: Modeling Retrosynthesis with Markov Bridges
Ilia Igashov, Arne Schneuing, Marwin Segler et al.
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles et al.
Revisiting the Power of Prompt for Visual Tuning
Yuzhu Wang, Lechao Cheng, Chaowei Fang et al.
Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning
Fan-Ming Luo, Tian Xu, Xingchen Cao et al.
RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation
Zelei Cheng, Xian Wu, Jiahao Yu et al.