ICML Papers
5,975 papers found
Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models
Yongxian Wei, Zixuan Hu, Li Shen et al.
Taylor Videos for Action Recognition
Lei Wang, Xiuyuan Yuan, Tom Gedeon et al.
T-Cal: An Optimal Test for the Calibration of Predictive Models
Donghwan Lee, Xinmeng Huang, Hamed Hassani et al.
Tell, Don't Show: Language Guidance Eases Transfer Across Domains in Images and Videos
Tarun Kalluri, Bodhisattwa Prasad Majumder, Manmohan Chandraker
Temporal Logic Specification-Conditioned Decision Transformer for Offline Safe Reinforcement Learning
Zijian Guo, Weichao Zhou, Wenchao Li
Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning
Mingqing Xiao, Yixin Zhu, Di He et al.
TENG: Time-Evolving Natural Gradient for Solving PDEs With Deep Neural Nets Toward Machine Precision
Zhuo Chen, Jacob McCarran, Esteban Vizcaino et al.
TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors
Yichuan Mo, Hui Huang, Mingjie Li et al.
Testing the Feasibility of Linear Programs with Bandit Feedback
Aditya Gangrade, Aditya Gopalan, Venkatesh Saligrama et al.
Test-Time Degradation Adaptation for Open-Set Image Restoration
Yuanbiao Gou, Haiyu Zhao, Boyun Li et al.
Test-Time Model Adaptation with Only Forward Passes
Shuaicheng Niu, Chunyan Miao, Guohao Chen et al.
Test-Time Regret Minimization in Meta Reinforcement Learning
Mirco Mutti, Aviv Tamar
The Balanced-Pairwise-Affinities Feature Transform
Daniel Shalam, Simon Korman
The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents
Yatin Dandi, Emanuele Troiani, Luca Arnaboldi et al.
The Computational Complexity of Finding Second-Order Stationary Points
Andreas Kontogiannis, Vasilis Pollatos, Sotiris Kanellopoulos et al.
The Effect of Weight Precision on the Neuron Count in Deep ReLU Networks
Songhua He, Periklis Papakonstantinou
The Emergence of Reproducibility and Consistency in Diffusion Models
Huijie Zhang, Jinfan Zhou, Yifu Lu et al.
The Entropy Enigma: Success and Failure of Entropy Minimization
Ori Press, Ravid Shwartz-Ziv, Yann LeCun et al.
The Expressive Power of Path-Based Graph Neural Networks
Caterina Graziani, Tamara Drucks, Fabian Jogl et al.
The Fundamental Limits of Least-Privilege Learning
Theresa Stadler, Bogdan Kulynych, Michael Gastpar et al.
The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective
Chi-Heng Lin, Chiraag Kaushik, Eva Dyer et al.
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Cheng Li, Jindong Wang, Yixuan Zhang et al.
The Illusion of State in State-Space Models
William Merrill, Jackson Petty, Ashish Sabharwal
The Linear Representation Hypothesis and the Geometry of Large Language Models
Kiho Park, Yo Joong Choe, Victor Veitch
The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm
Giseung Park, Woohyeon Byeon, Seongmin Kim et al.
The Merit of River Network Topology for Neural Flood Forecasting
Nikolas Kirschstein, Yixuan Sun
The Non-linear $F$-Design and Applications to Interactive Learning
Alekh Agarwal, Jian Qian, Alexander Rakhlin et al.
Theoretical Analysis of Learned Database Operations under Distribution Shift through Distribution Learnability
Sepanta Zeighami, Cyrus Shahabi
Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians
Tom Huix, Anna Korba, Alain Oliviero Durmus et al.
Theoretical insights for diffusion guidance: A case study for Gaussian mixture models
Yuchen Wu, Minshuo Chen, Zihao Li et al.
Theory of Consistency Diffusion Models: Distribution Estimation Meets Fast Sampling
Zehao Dou, Minshuo Chen, Mengdi Wang et al.
The Perception-Robustness Tradeoff in Deterministic Image Restoration
Guy Ohayon, Tomer Michaeli, Michael Elad
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks
Ziquan Liu, Yufei Cui, Yan Yan et al.
The Pitfalls of Next-Token Prediction
Gregor Bachmann, Vaishnavh Nagarajan
The Privacy Power of Correlated Noise in Decentralized Learning
Youssef Allouah, Anastasiia Koloskova, Aymane Firdoussi et al.
The Relative Value of Prediction in Algorithmic Decision Making
Juan Perdomo
Thermometer: Towards Universal Calibration for Large Language Models
Maohao Shen, Subhro Das, Kristjan Greenewald et al.
The Role of Learning Algorithms in Collective Action
Omri Ben-Dov, Jake Fawkes, Samira Samadi et al.
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline
Haonan Wang, Qianli Shen, Yao Tong et al.
The Surprising Effectiveness of Skip-Tuning in Diffusion Sampling
Jiajun Ma, Shuchen Xue, Tianyang Hu et al.
The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning
Nathaniel Li, Alexander Pan, Anjali Gopal et al.
Think Before You Act: Decision Transformers with Working Memory
Jikun Kang, Romain Laroche, Xingdi Yuan et al.
TIC-TAC: A Framework For Improved Covariance Estimation In Deep Heteroscedastic Regression
Megh Shukla, Mathieu Salzmann, Alexandre Alahi
Tight Partial Identification of Causal Effects with Marginal Distribution of Unmeasured Confounders
Zhiheng Zhang
Tilt and Average: Geometric Adjustment of the Last Layer for Recalibration
Gyusang Cho, Chan-Hyun Youn
Tilting the Odds at the Lottery: the Interplay of Overparameterisation and Curricula in Neural Networks
Stefano Mannelli, Yaraslau Ivashynka, Andrew Saxe et al.
Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers
Johann Schmidt, Sebastian Stober
TimeMIL: Advancing Multivariate Time Series Classification via a Time-aware Multiple Instance Learning
Xiwen Chen, Peijie Qiu, Wenhui Zhu et al.
Timer: Generative Pre-trained Transformers Are Large Time Series Models
Yong Liu, Haoran Zhang, Chenyu Li et al.
Time Series Diffusion in the Frequency Domain
Jonathan Crabbé, Nicolas Huynh, Jan Stanczuk et al.