Most Cited NeurIPS "kd trees" Papers
5,858 papers found • Page 6 of 30
BrainOmni: A Brain Foundation Model for Unified EEG and MEG Signals
Qinfan Xiao, Ziyun Cui, Chi Zhang et al.
Understanding Prompt Tuning and In-Context Learning via Meta-Learning
Tim Genewein, Kevin Li, Jordi Grau-Moya et al.
Revisiting Multi-Agent World Modeling from a Diffusion-Inspired Perspective
Yang Zhang, Xinran Li, Jianing Ye et al.
Computational Algebra with Attention: Transformer Oracles for Border Basis Algorithms
Hiroshi Kera, Nico Pelleriti, Yuki Ishihara et al.
Dynamic Risk Assessments for Offensive Cybersecurity Agents
Boyi Wei, Benedikt Stroebl, Jiacen Xu et al.
HollowFlow: Efficient Sample Likelihood Evaluation using Hollow Message Passing
Johann Flemming Gloy, Simon Olsson
Learning to price with resource constraints: from full information to machine-learned prices
Ruicheng Ao, Jiashuo Jiang, David Simchi-Levi
FreqPolicy: Efficient Flow-based Visuomotor Policy via Frequency Consistency
Yifei Su, Ning Liu, Dong Chen et al.
HoliGS: Holistic Gaussian Splatting for Embodied View Synthesis
Xiaoyuan Wang, Yizhou Zhao, Botao Ye et al.
GyroSwin: 5D Surrogates for Gyrokinetic Plasma Turbulence Simulations
Fabian Paischer, Gianluca Galletti, William Hornsby et al.
MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics
Changmin Lee, Jihyun Lee, Tae-Kyun Kim
Born a Transformer -- Always a Transformer? On the Effect of Pretraining on Architectural Abilities
Mayank Jobanputra, Yana Veitsman, Yash Sarrof et al.
Improving Generalization of Neural Combinatorial Optimization for Vehicle Routing Problems via Test-Time Projection Learning
Yuanyao Chen, Rongsheng Chen, Fu Luo et al.
More of the Same: Persistent Representational Harms Under Increased Representation
Jennifer Mickel, Maria De-Arteaga, Liu Leqi et al.
GPO: Learning from Critical Steps to Improve LLM Reasoning
Jiahao Yu, Zelei Cheng, Xian Wu et al.
OmniTalker: One-shot Real-time Text-Driven Talking Audio-Video Generation With Multimodal Style Mimicking
Zhongjian Wang, Peng Zhang, Jinwei Qi et al.
Information-Theoretic Reward Decomposition for Generalizable RLHF
Liyuan Mao, Haoran Xu, Amy Zhang et al.
Free-Lunch Color-Texture Disentanglement for Stylized Image Generation
Jiang Qin, Alexandra Gomez-Villa, Senmao Li et al.
Continuous Concepts Removal in Text-to-image Diffusion Models
Tingxu Han, Weisong Sun, Yanrong Hu et al.
Beyond Modality Collapse: Representation Blending for Multimodal Dataset Distillation
Xin Zhang, Ziruo Zhang, Jiawei Du et al.
Neurosymbolic Diffusion Models
Emile van Krieken, Pasquale Minervini, Edoardo Maria Ponti et al.
Fine-Grained Preference Optimization Improves Spatial Reasoning in VLMs
Yifan Shen, Yuanzhe Liu, Jingyuan Zhu et al.
Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance
Ruihang Chu, Yefei He, Zhekai Chen et al.
A Stable Whitening Optimizer for Efficient Neural Network Training
Kevin Frans, Sergey Levine, Pieter Abbeel
Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation
Jianyuan Guo, Peike Li, Trevor Cohn
Return of ChebNet: Understanding and Improving an Overlooked GNN on Long Range Tasks
Ali Hariri, Alvaro Arroyo, Alessio Gravina et al.
Optimized Minimal 3D Gaussian Splatting
Joo Chan Lee, Jong Hwan Ko, Eunbyung Park
PlayerOne: Egocentric World Simulator
Yuanpeng Tu, Hao Luo, Xi Chen et al.
TC-Light: Temporally Coherent Generative Rendering for Realistic World Transfer
Yang Liu, Chuanchen Luo, Zimo Tang et al.
Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs
Jie Ma, Ning Qu, Zhitao Gao et al.
Entropic Time Schedulers for Generative Diffusion Models
Dejan Stancevic, Florian Handke, Luca Ambrogioni
Next Semantic Scale Prediction via Hierarchical Diffusion Language Models
Cai Zhou, Chenyu Wang, Dinghuai Zhang et al.
Memory Mosaics at scale
Jianyu Zhang, Leon Bottou
GRIFFIN: Effective Token Alignment for Faster Speculative Decoding
Shijing Hu, Jingyang Li, Xingyu Xie et al.
Convergent Functions, Divergent Forms
Hyeonseong Jeon, Ainaz Eftekhar, Aaron Walsman et al.
Dynamic View Synthesis as an Inverse Problem
Hidir Yesiltepe, Pinar Yanardag
Gradient-Variation Online Adaptivity for Accelerated Optimization with Hölder Smoothness
Yuheng Zhao, Yu-Hu Yan, Kfir Y. Levy et al.
Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers
Andrew Nam, Henry Conklin, Yukang Yang et al.
Asymptotic Theory of Geometric and Adaptive $k$-Means Clustering
Adam Quinn Jaffe
Efficient Speech Language Modeling via Energy Distance in Continuous Latent Space
Zhengrui Ma, Yang Feng, Chenze Shao et al.
Distribution-Aligned Decoding for Efficient LLM Task Adaptation
Senkang Hu, Xudong Han, Jinqi Jiang et al.
Gradient Multi-Normalization for Efficient LLM Training
Meyer Scetbon, Chao Ma, Wenbo Gong et al.
Rethinking Circuit Completeness in Language Models: AND, OR, and ADDER Gates
Hang Chen, Jiaying Zhu, Xinyu Yang et al.
Beyond Single-Task: Robust Multi-Task Length Generalization for LLMs
Yi Hu, Shijia Kang, Haotong Yang et al.
Understanding Contrastive Learning via Gaussian Mixture Models
Parikshit Bansal, Ali Kavis, Sujay Sanghavi
PAC Bench: Do Foundation Models Understand Prerequisites for Executing Manipulation Policies?
Atharva Gundawar, Som Sagar, Ransalu Senanayake
Multipole Attention for Efficient Long Context Reasoning
Coleman Hooper, Sebastian Zhao, Luca Manolache et al.
Linear Mixture Distributionally Robust Markov Decision Processes
Zhishuai Liu, Pan Xu
Flatness is Necessary, Neural Collapse is Not: Rethinking Generalization via Grokking
Ting Han, Linara Adilova, Henning Petzka et al.
Attention! Your Vision Language Model Could Be Maliciously Manipulated
Xiaosen Wang, Shaokang Wang, Zhijin Ge et al.
A Statistical Theory of Contrastive Learning via Approximate Sufficient Statistics
Licong Lin, Song Mei
Sample-efficient Learning of Concepts with Theoretical Guarantees: from Data to Concepts without Interventions
Hidde Fokkema, Tim van Erven, Sara Magliacane
AutoDiscovery: Open-ended Scientific Discovery via Bayesian Surprise
Dhruv Agarwal, Bodhisattwa Prasad Majumder, Reece Adamson et al.
CoP: Agentic Red-teaming for Large Language Models using Composition of Principles
Chen Xiong, Pin-Yu Chen, Tsung-Yi Ho
Taming generative video models for zero-shot optical flow extraction
Seungwoo Kim, Khai Loong Aw, Klemen Kotar et al.
Dense Associative Memory with Epanechnikov Energy
Benjamin Hoover, Zhaoyang Shi, Krishnakumar Balasubramanian et al.
BlockScan: Detecting Anomalies in Blockchain Transactions
Jiahao Yu, Xian Wu, Hao Liu et al.
SafeVid: Toward Safety Aligned Video Large Multimodal Models
Yixu Wang, Jiaxin Song, Yifeng Gao et al.
StreamForest: Efficient Online Video Understanding with Persistent Event Memory
Xiangyu Zeng, Kefan Qiu, Qingyu Zhang et al.
AceSearcher: Bootstrapping Reasoning and Search for LLMs via Reinforced Self-Play
Ran Xu, Yuchen Zhuang, Zihan Dong et al.
MMMG: A Massive, Multidisciplinary, Multi-Tier Generation Benchmark for Text-to-Image Reasoning
Yuxuan Luo, Ryan Yuan, Junwen Chen et al.
Parallelizing MCMC Across the Sequence Length
David Zoltowski, Skyler Wu, Xavier Gonzalez et al.
Kernel Density Steering: Inference-Time Scaling via Mode Seeking for Image Restoration
Yuyang Hu, Kangfu Mei, Mojtaba Ardakani et al.
CReFT-CAD: Boosting Orthographic Projection Reasoning for CAD via Reinforcement Fine-Tuning
Ke Niu, Zhuofan Chen, Haiyang Yu et al.
Learning to Focus: Causal Attention Distillation via Gradient-Guided Token Pruning
Yiju Guo, Wenkai Yang, Zexu Sun et al.
CodeCrash: Exposing LLM Fragility to Misleading Natural Language in Code Reasoning
Man Ho Lam, Chaozheng Wang, Jen-Tse Huang et al.
Entropy Rectifying Guidance for Diffusion and Flow Models
Tariq Berrada Ifriqi, Adriana Romero-Soriano, Michal Drozdzal et al.
MEMOIR: Lifelong Model Editing with Minimal Overwrite and Informed Retention for LLMs
Ke Wang, Yiming Qin, Nikolaos Dimitriadis et al.
FlowerTune: A Cross-Domain Benchmark for Federated Fine-Tuning of Large Language Models
Yan Gao, Massimo R. Scamarcia, Javier Fernandez-Marques et al.
On the Robustness of Transformers against Context Hijacking for Linear Classification
Tianle Li, Chenyang Zhang, Xingwu Chen et al.
PoLAR: Polar-Decomposed Low-Rank Adapter Representation
Kai Lion, Liang Zhang, Bingcong Li et al.
On Fairness of Unified Multimodal Large Language Model for Image Generation
Ming Liu, Hao Chen, Jindong Wang et al.
RoPECraft: Training-Free Motion Transfer with Trajectory-Guided RoPE Optimization on Diffusion Transformers
Ahmet Berke Gökmen, Yiğit Ekin, Bahri Batuhan Bilecen et al.
Synthesize Privacy-Preserving High-Resolution Images via Private Textual Intermediaries
Haoxiang Wang, Zinan Lin, Da Yu et al.
CrossAD: Time Series Anomaly Detection with Cross-scale Associations and Cross-window Modeling
Beibu Li, Qichao Shentu, Yang Shu et al.
Sampling from multi-modal distributions with polynomial query complexity in fixed dimension via reverse diffusion
Adrien Vacher, Omar Chehab, Anna Korba
SAD Neural Networks: Divergent Gradient Flows and Asymptotic Optimality via o-minimal Structures
Julian Kranz, Davide Gallon, Steffen Dereich et al.
MOOSE-Chem2: Exploring LLM Limits in Fine-Grained Scientific Hypothesis Discovery via Hierarchical Search
Zonglin Yang, Wanhao Liu, Ben Gao et al.
Constrained Diffusers for Safe Planning and Control
Jichen Zhang, Liqun Zhao, Antonis Papachristodoulou et al.
Fairshare Data Pricing via Data Valuation for Large Language Models
Luyang Zhang, Cathy Jiao, Beibei Li et al.
Systematic Reward Gap Optimization for Mitigating VLM Hallucinations
Lehan He, Zeren Chen, Zhelun Shi et al.
Identifiability of Deep Polynomial Neural Networks
Konstantin Usevich, Ricardo Borsoi, Clara Dérand et al.
Escaping the SpuriVerse: Can Large Vision-Language Models Generalize Beyond Seen Spurious Correlations?
Yiwei Yang, Chung Peng Lee, Shangbin Feng et al.
Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights
Zhiyuan Liang, Dongwen Tang, Yuhao Zhou et al.
UniEdit: A Unified Knowledge Editing Benchmark for Large Language Models
Qizhou Chen, Dakan Wang, Taolin Zhang et al.
Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings
Yehya Farhat, Hamza ElMokhtar Shili, Fangshuo Liao et al.
Anti-Aliased 2D Gaussian Splatting
Mae Younes, Adnane Boukhayma
Probably Approximately Precision and Recall Learning
Lee Cohen, Yishay Mansour, Shay Moran et al.
Can We Infer Confidential Properties of Training Data from LLMs?
Pengrun Huang, Chhavi Yadav, Kamalika Chaudhuri et al.
BRACE: A Benchmark for Robust Audio Caption Quality Evaluation
Tianyu Guo, Hongyu Chen, Hao Liang et al.
Surprise3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes
Jiaxin Huang, Ziwen Li, Hanlue Zhang et al.
Deep learning for continuous-time stochastic control with jumps
Patrick Cheridito, Jean-Loup Dupret, Donatien Hainaut
LabelAny3D: Label Any Object 3D in the Wild
Jin Yao, Radowan Mahmud Redoy, Sebastian Elbaum et al.
In the Eye of MLLM: Benchmarking Egocentric Video Intent Understanding with Gaze-Guided Prompting
Taiying Peng, Jiacheng Hua, Miao Liu et al.
Whose View of Safety? A Deep DIVE Dataset for Pluralistic Alignment of Text-to-Image Models
Charvi Rastogi, Tian Huey Teh, Pushkar Mishra et al.
Silencer: From Discovery to Mitigation of Self-Bias in LLM-as-Benchmark-Generator
Peiwen Yuan, Yiwei Li, Shaoxiong Feng et al.
Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study
Zhengyu Hu, Jianxun Lian, Zheyuan Xiao et al.
Styl3R: Instant 3D Stylized Reconstruction for Arbitrary Scenes and Styles
Peng Wang, Xiang Liu, Peidong Liu
LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs
Ran Li, Hao Wang, Chengzhi Mao
A Unified Framework for the Transportability of Population-Level Causal Measures
Ahmed Boughdiri, Clément Berenfeld, Julie Josse et al.
GeoRanker: Distance-Aware Ranking for Worldwide Image Geolocalization
Pengyue Jia, Seongheon Park, Song Gao et al.
MEgoHand: Multimodal Egocentric Hand-Object Interaction Motion Generation
Bohan Zhou, Yi Zhan, Zhongbin Zhang et al.
Constrained Entropic Unlearning: A Primal-Dual Framework for Large Language Models
Taha Entesari, Arman Hatami, Rinat Khaziev et al.
Scaling Laws for Robust Comparison of Open Foundation Language-Vision Models and Datasets
Marianna Nezhurina, Tomer Porian, Giovanni Puccetti et al.
ODG: Occupancy Prediction Using Dual Gaussians
Yunxiao Shi, Yinhao Zhu, Herbert Cai et al.
Effortless, Simulation-Efficient Bayesian Inference using Tabular Foundation Models
Julius Vetter, Manuel Gloeckler, Daniel Gedon et al.
Rectified Point Flow: Generic Point Cloud Pose Estimation
Tao Sun, Liyuan Zhu, Shengyu Huang et al.
Gene Regulatory Network Inference in the Presence of Selection Bias and Latent Confounders
Gongxu Luo, Haoyue Dai, Longkang Li et al.
GenPO: Generative Diffusion Models Meet On-Policy Reinforcement Learning
Shutong Ding, Ke Hu, Shan Zhong et al.
STEER-ME: Assessing the Microeconomic Reasoning of Large Language Models
Narun Raman, Taylor Lundy, Thiago Amin et al.
SViMo: Synchronized Diffusion for Video and Motion Generation in Hand-object Interaction Scenarios
Lingwei Dang, Ruizhi Shao, Hongwen Zhang et al.
RePO: Understanding Preference Learning Through ReLU-Based Optimization
Junkang Wu, Kexin Huang, Xue Wang et al.
Emergence of Linear Truth Encodings in Language Models
Shauli Ravfogel, Gilad Yehudai, Tal Linzen et al.
E-BATS: Efficient Backpropagation-Free Test-Time Adaptation for Speech Foundation Models
Jiaheng Dong, Hong Jia, Soumyajit Chatterjee et al.
Seeing in the Dark: Benchmarking Egocentric 3D Vision with the Oxford Day-and-Night Dataset
Zirui Wang, Wenjing Bian, Xinghui Li et al.
A Closer Look at Model Collapse: From a Generalization-to-Memorization Perspective
Lianghe Shi, Meng Wu, Huijie Zhang et al.
How Different from the Past? Spatio-Temporal Time Series Forecasting with Self-Supervised Deviation Learning
Haotian Gao, Zheng Dong, Jiawei Yong et al.
Polar Sparsity: High Throughput Batched LLM Inferencing with Scalable Contextual Sparsity
Susav Shrestha, Bradley Settlemyer, Nikoli Dryden et al.
Optimal Neural Compressors for the Rate-Distortion-Perception Tradeoff
Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti
Zero-Shot Trajectory Planning for Signal Temporal Logic Tasks
Ruijia Liu, Ancheng Hou, Xiao Yu et al.
COALA: Numerically Stable and Efficient Framework for Context-Aware Low-Rank Approximation
Uliana Parkina, Maxim Rakhuba
Adaptive Distraction: Probing LLM Contextual Robustness with Automated Tree Search
Yanbo Wang, Zixiang Xu, Yue Huang et al.
CHOICE: Benchmarking the Remote Sensing Capabilities of Large Vision-Language Models
Xiao An, Jiaxing Sun, Zihan Gui et al.
Learning to Better Search with Language Models via Guided Reinforced Self-Training
Seungyong Moon, Bumsoo Park, Hyun Oh Song
RLZero: Direct Policy Inference from Language Without In-Domain Supervision
Harshit Sushil Sikchi, Siddhant Agarwal, Pranaya Jajoo et al.
Generalization Error Analysis for Selective State-Space Models Through the Lens of Attention
Arya Honarpisheh, Mustafa Bozdag, Octavia Camps et al.
Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control
Zijie Xu, Tong Bu, Zecheng Hao et al.
Memory-Enhanced Neural Solvers for Routing Problems
Felix Chalumeau, Refiloe Shabe, Noah De Nicola et al.
Vision Function Layer in Multimodal LLMs
Cheng Shi, Yizhou Yu, Sibei Yang
Angular Steering: Behavior Control via Rotation in Activation Space
Minh Hieu Vu, Tan Nguyen
Avoiding exp(R) scaling in RLHF through Preference-based Exploration
Mingyu Chen, Yiding Chen, Wen Sun et al.
Self-Refining Language Model Anonymizers via Adversarial Distillation
Kyuyoung Kim, Hyunjun Jeon, Jinwoo Shin
Codifying Character Logic in Role-Playing
Letian Peng, Jingbo Shang
Seeing What Matters: Generalizable AI-generated Video Detection with Forensic-Oriented Augmentation
Riccardo Corvi, Davide Cozzolino, Ekta Prashnani et al.
Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning
Yuqi Jia, Minghong Fang, Hongbin Liu et al.
Topology-Aware Conformal Prediction for Stream Networks
Jifan Zhang, Fangxin Wang, Zihe Song et al.
Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models
Zekai Zhao, Qi Liu, Kun Zhou et al.
Towards Better & Faster Autoregressive Image Generation: From the Perspective of Entropy
Xiaoxiao Ma, Feng Zhao, Pengyang Ling et al.
A Implies B: Circuit Analysis in LLMs for Propositional Logical Reasoning
Guan Zhe Hong, Nishanth Dikkala, Enming Luo et al.
Learning to Instruct for Visual Instruction Tuning
Zhihan Zhou, Feng Hong, Jiaan Luo et al.
Universal Sequence Preconditioning
Annie Marsden, Elad Hazan
Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM
Zinuo Li, Xian Zhang, Yongxin Guo et al.
THUNDER: Tile-level Histopathology image UNDERstanding benchmark
Pierre Marza, Leo Fillioux, Sofiène Boutaj et al.
STRAP: Spatio-Temporal Pattern Retrieval for Out-of-Distribution Generalization
Haoyu Zhang, Wentao Zhang, Hao Miao et al.
VQToken: Neural Discrete Token Representation Learning for Extreme Token Reduction in Video Large Language Models
Haichao Zhang, Yun Fu
Optimal Control for Transformer Architectures: Enhancing Generalization, Robustness and Efficiency
Kelvin Kan, Xingjian Li, Benjamin Zhang et al.
Extremely Simple Multimodal Outlier Synthesis for Out-of-Distribution Detection and Segmentation
Moru Liu, Hao Dong, Jessica Kelly et al.
Plug-and-Play Context Feature Reuse for Efficient Masked Generation
Xuejie Liu, Anji Liu, Guy Van den Broeck et al.
PiKE: Adaptive Data Mixing for Large-Scale Multi-Task Learning Under Low Gradient Conflicts
Zeman Li, Yuan Deng, Peilin Zhong et al.
EvoLM: In Search of Lost Language Model Training Dynamics
Zhenting Qi, Fan Nie, Alexandre Alahi et al.
Predictability Enables Parallelization of Nonlinear State Space Models
Xavier Gonzalez, Leo Kozachkov, David Zoltowski et al.
SATURN: SAT-based Reinforcement Learning to Unleash LLMs Reasoning
Huanyu Liu, Jia Li, Hao Zhu et al.
Differentiation Through Black-Box Quadratic Programming Solvers
Connor Magoon, Fengyu Yang, Noam Aigerman et al.
$\boldsymbol{\lambda}$-Orthogonality Regularization for Compatible Representation Learning
Simone Ricci, Niccolò Biondi, Federico Pernici et al.
DERD-Net: Learning Depth from Event-based Ray Densities
Diego de Oliveira Hitzges, Suman Ghosh, Guillermo Gallego
NeurIPT: Foundation Model for Neural Interfaces
Zitao Fang, Chenxuan Li, Hongting Zhou et al.
DanmakuTPPBench: A Multi-modal Benchmark for Temporal Point Process Modeling and Understanding
Yue Jiang, Jichu Li, Yang Liu et al.
Go With the Flow: Fast Diffusion for Gaussian Mixture Models
George Rapakoulias, Ali Reza Pedram, Fengjiao Liu et al.
Diffusion Classifiers Understand Compositionality, but Conditions Apply
Yujin Jeong, Arnas Uselis, Seong Joon Oh et al.
EmoNet-Face: An Expert-Annotated Benchmark for Synthetic Emotion Recognition
Christoph Schuhmann, Robert Kaczmarczyk, Gollam Rabby et al.
TrajAgent: An LLM-Agent Framework for Trajectory Modeling via Large-and-Small Model Collaboration
Yuwei Du, Jie Feng, Jie Zhao et al.
MedSG-Bench: A Benchmark for Medical Image Sequences Grounding
Jingkun Yue, Siqi Zhang, Zinan Jia et al.
IGD: Token Decisiveness Modeling via Information Gain in LLMs for Personalized Recommendation
Zijie Lin, Yang Zhang, Xiaoyan Zhao et al.
MV-CoLight: Efficient Object Compositing with Consistent Lighting and Shadow Generation
Kerui Ren, Jiayang Bai, Linning Xu et al.
SCAN: Self-Denoising Monte Carlo Annotation for Robust Process Reward Learning
Yuyang Ding, Xinyu Shi, Juntao Li et al.
VideoVLA: Video Generators Can Be Generalizable Robot Manipulators
Yichao Shen, Fangyun Wei, Zhiying Du et al.
SceneDecorator: Towards Scene-Oriented Story Generation with Scene Planning and Scene Consistency
Quanjian Song, Donghao Zhou, Jingyu Lin et al.
State Entropy Regularization for Robust Reinforcement Learning
Yonatan Ashlag, Uri Koren, Mirco Mutti et al.
AdaDetectGPT: Adaptive Detection of LLM-Generated Text with Statistical Guarantees
Hongyi Zhou, Jin Zhu, Pingfan Su et al.
LightFair: Towards an Efficient Alternative for Fair T2I Diffusion via Debiasing Pre-trained Text Encoders
Boyu Han, Qianqian Xu, Shilong Bao et al.
Learning Differential Pyramid Representation for Tone Mapping
Qirui Yang, Yinbo Li, Yihao Liu et al.
Outcome-Based Online Reinforcement Learning: Algorithms and Fundamental Limits
Fan Chen, Zeyu Jia, Alexander Rakhlin et al.
$\Psi$-Sampler: Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models
Taehoon Yoon, Yunhong Min, Kyeongmin Yeo et al.
Learning long range dependencies through time reversal symmetry breaking
Guillaume Pourcel, Maxence Ernoult
Tracing the Representation Geometry of Language Models from Pretraining to Post-training
Melody Li, Kumar Krishna Agrawal, Arna Ghosh et al.
DINGO: Constrained Inference for Diffusion LLMs
Tarun Suresh, Debangshu Banerjee, Shubham Ugare et al.
DexFlyWheel: A Scalable and Self-improving Data Generation Framework for Dexterous Manipulation
Kefei Zhu, Fengshuo Bai, YuanHao Xiang et al.
SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs
Jinwoo Park, Seunggeun Cho, Dongsu Han
Who You Are Matters: Bridging Interests and Social Roles via LLM-Enhanced Logic Recommendation
Qing Yu, Xiaobei Wang, Shuchang Liu et al.
Deep RL Needs Deep Behavior Analysis: Exploring Implicit Planning by Model-Free Agents in Open-Ended Environments
Riley Simmons-Edler, Ryan Badman, Felix Berg et al.
CAPability: A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness
Zhihang Liu, Chen-Wei Xie, Bin Wen et al.
REOBench: Benchmarking Robustness of Earth Observation Foundation Models
Xiang Li, Yong Tao, Siyuan Zhang et al.
MLRC-Bench: Can Language Agents Solve Machine Learning Research Challenges?
Yunxiang Zhang, Muhammad Khalifa, Shitanshu Bhushan et al.
Efficient Federated Learning against Byzantine Attacks and Data Heterogeneity via Aggregating Normalized Gradients
Shiyuan Zuo, Xingrun Yan, Rongfei Fan et al.
SMMILE: An expert-driven benchmark for multimodal medical in-context learning
Melanie Rieff, Maya Varma, Ossian Rabow et al.
SD-VLM: Spatial Measuring and Understanding with Depth-Encoded Vision-Language Models
Pingyi Chen, Yujing Lou, Shen Cao et al.
Unveiling the Compositional Ability Gap in Vision-Language Reasoning Model
Tianle Li, Jihai Zhang, Yongming Rao et al.
ReDi: Rectified Discrete Flow
Jaehoon Yoo, Wonjung Kim, Seunghoon Hong
CSI-Bench: A Large-Scale In-the-Wild Dataset for Multi-task WiFi Sensing
Guozhen Zhu, Yuqian Hu, Weihang Gao et al.
TalkCuts: A Large-Scale Dataset for Multi-Shot Human Speech Video Generation
Jiaben Chen, Zixin Wang, Ailing Zeng et al.
Pause Tokens Strictly Increase the Expressivity of Constant-Depth Transformers
Charles London, Varun Kanade
Brain-Like Processing Pathways Form in Models With Heterogeneous Experts
Jack Cook, Danyal Akarca, Rui Costa et al.
FlySearch: Exploring how vision-language models explore
Adam Pardyl, Dominik Matuszek, Mateusz Przebieracz et al.
Unifying Re-Identification, Attribute Inference, and Data Reconstruction Risks in Differential Privacy
Bogdan Kulynych, Juan Gomez, Georgios Kaissis et al.
CAML: Collaborative Auxiliary Modality Learning for Multi-Agent Systems
Rui Liu, Yu Shen, Peng Gao et al.
Open-Insect: Benchmarking Open-Set Recognition of Novel Species in Biodiversity Monitoring
Yuyan Chen, Nico Lang, B. Schmidt et al.
Longer Context, Deeper Thinking: Uncovering the Role of Long-Context Ability in Reasoning
Wang Yang, Zirui Liu, Hongye Jin et al.
Brain network science modelling of sparse neural networks enables Transformers and LLMs to perform as fully connected
Yingtao Zhang, Diego Cerretti, Jialin Zhao et al.
Making Classic GNNs Strong Baselines Across Varying Homophily: A Smoothness–Generalization Perspective
Ming Gu, Zhuonan Zheng, Sheng Zhou et al.
Reinforcement Learning for Out-of-Distribution Reasoning in LLMs: An Empirical Study on Diagnosis-Related Group Coding
Hanyin Wang, Zhenbang Wu, Gururaj Kolar et al.