ICML "computational efficiency" Papers
28 papers found
Beyond Implicit Bias: The Insignificance of SGD Noise in Online Learning
Nikhil Vyas, Depen Morwani, Rosie Zhao et al.
Code as Reward: Empowering Reinforcement Learning with VLMs
David Venuto, Mohammad Sami Nur Islam, Martin Klissarov et al.
Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement Learning
Michael Matthews, Michael Beukman, Benjamin Ellis et al.
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Anyi Rao et al.
Deep Fusion: Efficient Network Training via Pre-trained Initializations
Hanna Mazzawi, Xavi Gonzalvo, Michael Wunder et al.
Differentially Private Bias-Term Fine-tuning of Foundation Models
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha et al.
DistiLLM: Towards Streamlined Distillation for Large Language Models
Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.
Do Efficient Transformers Really Save Computation?
Kai Yang, Jan Ackermann, Zhenyu He et al.
Efficient Precision and Recall Metrics for Assessing Generative Models using Hubness-aware Sampling
Yuanbang Liang, Jing Wu, Yu-Kun Lai et al.
Enabling Uncertainty Estimation in Iterative Neural Networks
Nikita Durasov, Doruk Oner, Jonathan Donier et al.
Enhancing Storage and Computational Efficiency in Federated Multimodal Learning for Large-Scale Models
Zixin Zhang, Fan Qi, Changsheng Xu
Enhancing Vision Transformer: Amplifying Non-Linearity in Feedforward Network Module
Yixing Xu, Chao Li, Dong Li et al.
Evaluation of Test-Time Adaptation Under Computational Time Constraints
Motasem Alfarra, Hani Itani, Alejandro Pardo et al.
Fast Decision Boundary based Out-of-Distribution Detector
Litian Liu, Yao Qin
In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering
Sheng Liu, Haotian Ye, Lei Xing et al.
Learning Causal Dynamics Models in Object-Oriented Environments
Zhongwei Yu, Jingqing Ruan, Dengpeng Xing
ODIM: Outlier Detection via Likelihood of Under-Fitted Generative Models
Dongha Kim, Jaesung Hwang, Jongjin Lee et al.
Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty
Kaizhao Liu, Jose Blanchet, Lexing Ying et al.
Partially Stochastic Infinitely Deep Bayesian Neural Networks
Sergio Calvo Ordoñez, Matthieu Meunier, Francesco Piatti et al.
PhAST: Physics-Aware, Scalable, and Task-Specific GNNs for Accelerated Catalyst Design
Alexandre Duval, Victor Schmidt, Santiago Miret et al.
Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency
Sudeep Salgia, Sattar Vakili, Qing Zhao
Saliency strikes back: How filtering out high frequencies improves white-box explanations
Sabine Muzellec, Thomas Fel, Victor Boutin et al.
Scaling Laws for Fine-Grained Mixture of Experts
Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski et al.
See More Details: Efficient Image Super-Resolution by Experts Mining
Eduard Zamfir, Zongwei Wu, Nancy Mehta et al.
Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting
Anthony Chen, Huanrui Yang, Yulu Gan et al.
Thermometer: Towards Universal Calibration for Large Language Models
Maohao Shen, Subhro Das, Kristjan Greenewald et al.
Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning
Dongkwan Kim, Alice Oh
Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention
Zhen Qin, Weigao Sun, Dong Li et al.