2024 "computational efficiency" Papers

50 papers found

Accelerating the Global Aggregation of Local Explanations

Alon Mor, Yonatan Belinkov, Benny Kimelfeld

AAAI 2024 paper · arXiv:2312.07991
6 citations

Agglomerative Token Clustering

Joakim Bruslund Haurum, Sergio Escalera, Graham W. Taylor et al.

ECCV 2024 poster · arXiv:2409.11923
7 citations

An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models

Liang Chen, Haozhe Zhao, Tianyu Liu et al.

ECCV 2024 poster · arXiv:2403.06764
343 citations

Beyond Implicit Bias: The Insignificance of SGD Noise in Online Learning

Nikhil Vyas, Depen Morwani, Rosie Zhao et al.

ICML 2024 spotlight

Bi-ViT: Pushing the Limit of Vision Transformer Quantization

Yanjing Li, Sheng Xu, Mingbao Lin et al.

AAAI 2024 paper · arXiv:2305.12354

Code as Reward: Empowering Reinforcement Learning with VLMs

David Venuto, Mohammad Sami Nur Islam, Martin Klissarov et al.

ICML 2024 spotlight

Context-Aware Iteration Policy Network for Efficient Optical Flow Estimation

Ri Cheng, Ruian He, Xuhao Jiang et al.

AAAI 2024 paper · arXiv:2312.07180
1 citation

Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement Learning

Michael Matthews, Michael Beukman, Benjamin Ellis et al.

ICML 2024 spotlight

CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers

Dachuan Shi, Chaofan Tao, Anyi Rao et al.

ICML 2024 poster

Deep Fusion: Efficient Network Training via Pre-trained Initializations

Hanna Mazzawi, Xavi Gonzalvo, Michael Wunder et al.

ICML 2024 poster

Differentially Private Bias-Term Fine-tuning of Foundation Models

Zhiqi Bu, Yu-Xiang Wang, Sheng Zha et al.

ICML 2024 poster

DistiLLM: Towards Streamlined Distillation for Large Language Models

Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.

ICML 2024 poster

Do Efficient Transformers Really Save Computation?

Kai Yang, Jan Ackermann, Zhenyu He et al.

ICML 2024 poster

Dynamic Data Selection for Efficient SSL via Coarse-to-Fine Refinement

Aditay Tripathi, Pradeep Shenoy, Anirban Chakraborty

ECCV 2024 poster
3 citations

Efficient Precision and Recall Metrics for Assessing Generative Models using Hubness-aware Sampling

Yuanbang Liang, Jing Wu, Yu-Kun Lai et al.

ICML 2024 spotlight

Enabling Uncertainty Estimation in Iterative Neural Networks

Nikita Durasov, Doruk Oner, Jonathan Donier et al.

ICML 2024 poster

Enhancing Storage and Computational Efficiency in Federated Multimodal Learning for Large-Scale Models

Zixin Zhang, Fan Qi, Changsheng Xu

ICML 2024 poster

Enhancing Vision Transformer: Amplifying Non-Linearity in Feedforward Network Module

Yixing Xu, Chao Li, Dong Li et al.

ICML 2024 poster

Evaluation of Test-Time Adaptation Under Computational Time Constraints

Motasem Alfarra, Hani Itani, Alejandro Pardo et al.

ICML 2024 poster

Fast Decision Boundary based Out-of-Distribution Detector

Litian Liu, Yao Qin

ICML 2024 poster

FMBoost: Boosting Latent Diffusion with Flow Matching

Johannes Schusterbauer-Fischer, Ming Gui, Pingchuan Ma et al.

ECCV 2024 poster

Frugal 3D Point Cloud Model Training via Progressive Near Point Filtering and Fused Aggregation

Donghyun Lee, Yejin Lee, Jae W. Lee et al.

ECCV 2024 poster
2 citations

Grid-Attention: Enhancing Computational Efficiency of Large Vision Models without Fine-Tuning

Pengyu Li, Biao Wang, Tianchu Guo et al.

ECCV 2024 poster

Hierarchical Separable Video Transformer for Snapshot Compressive Imaging

Ping Wang, Yulun Zhang, Lishun Wang et al.

ECCV 2024 poster · arXiv:2407.11946
4 citations

In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering

Sheng Liu, Haotian Ye, Lei Xing et al.

ICML 2024 poster

Inducing Point Operator Transformer: A Flexible and Scalable Architecture for Solving PDEs

Seungjun Lee, TaeIL Oh

AAAI 2024 paper · arXiv:2312.10975

Learning Causal Dynamics Models in Object-Oriented Environments

Zhongwei Yu, Jingqing Ruan, Dengpeng Xing

ICML 2024 poster

Learning Temporal Resolution in Spectrogram for Audio Classification

Haohe Liu, Xubo Liu, Qiuqiang Kong et al.

AAAI 2024 paper · arXiv:2210.01719

LION: Implicit Vision Prompt Tuning

Haixin Wang, Jianlong Chang, Yihang Zhai et al.

AAAI 2024 paper · arXiv:2303.09992
35 citations

Object-Centric Diffusion for Efficient Video Editing

Kumara Kahatapitiya, Adil Karjauv, Davide Abati et al.

ECCV 2024 poster · arXiv:2401.05735
22 citations

ODIM: Outlier Detection via Likelihood of Under-Fitted Generative Models

Dongha Kim, Jaesung Hwang, Jongjin Lee et al.

ICML 2024 poster

One-stage Prompt-based Continual Learning

Youngeun Kim, Yuhang Li, Priyadarshini Panda

ECCV 2024 poster · arXiv:2402.16189
17 citations

Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation

Yixiao Wang, Chen Tang, Lingfeng Sun et al.

ECCV 2024 poster · arXiv:2408.00766
16 citations

Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty

Kaizhao Liu, Jose Blanchet, Lexing Ying et al.

ICML 2024 poster

Partially Stochastic Infinitely Deep Bayesian Neural Networks

Sergio Calvo Ordoñez, Matthieu Meunier, Francesco Piatti et al.

ICML 2024 poster

PhAST: Physics-Aware, Scalable, and Task-Specific GNNs for Accelerated Catalyst Design

Alexandre Duval, Victor Schmidt, Santiago Miret et al.

ICML 2024 poster

Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency

Sudeep Salgia, Sattar Vakili, Qing Zhao

ICML 2024 poster

Rethinking Video Deblurring with Wavelet-Aware Dynamic Transformer and Diffusion Model

Chen Rao, Guangyuan Li, Zehua Lan et al.

ECCV 2024 poster · arXiv:2408.13459
9 citations

SAFNet: Selective Alignment Fusion Network for Efficient HDR Imaging

Lingtong Kong, Bo Li, Yike Xiong et al.

ECCV 2024 poster · arXiv:2407.16308
13 citations

Saliency strikes back: How filtering out high frequencies improves white-box explanations

Sabine Muzellec, Thomas FEL, Victor Boutin et al.

ICML 2024 poster

Scaling Laws for Fine-Grained Mixture of Experts

Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski et al.

ICML 2024 poster

See More Details: Efficient Image Super-Resolution by Experts Mining

Eduard Zamfir, Zongwei Wu, Nancy Mehta et al.

ICML 2024 poster

SeTformer Is What You Need for Vision and Language

Pourya Shamsolmoali, Masoumeh Zareapoor, Eric Granger et al.

AAAI 2024 paper · arXiv:2401.03540
7 citations

Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting

Anthony Chen, Huanrui Yang, Yulu Gan et al.

ICML 2024 poster

Thermometer: Towards Universal Calibration for Large Language Models

Maohao Shen, Subhro Das, Kristjan Greenewald et al.

ICML 2024 poster

Transformer-Based Selective Super-resolution for Efficient Image Refinement

Tianyi Zhang, Kishore Kasichainula, Yaoxin Zhuo et al.

AAAI 2024 paper · arXiv:2312.05803
16 citations

Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning

Dongkwan Kim, Alice Oh

ICML 2024 poster

Turbo: Informativity-Driven Acceleration Plug-In for Vision-Language Large Models

Chen Ju, Haicheng Wang, Haozhe Cheng et al.

ECCV 2024 poster · arXiv:2407.11717
12 citations

Understanding and Improving Optimization in Predictive Coding Networks

Nicholas Alonso, Jeffrey Krichmar, Emre Neftci

AAAI 2024 paper · arXiv:2305.13562
10 citations

Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention

Zhen Qin, Weigao Sun, Dong Li et al.

ICML 2024 poster