2024 "computational efficiency" Papers
50 papers found
Accelerating the Global Aggregation of Local Explanations
Alon Mor, Yonatan Belinkov, Benny Kimelfeld
Agglomerative Token Clustering
Joakim Bruslund Haurum, Sergio Escalera, Graham W. Taylor et al.
An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models
Liang Chen, Haozhe Zhao, Tianyu Liu et al.
Beyond Implicit Bias: The Insignificance of SGD Noise in Online Learning
Nikhil Vyas, Depen Morwani, Rosie Zhao et al.
Bi-ViT: Pushing the Limit of Vision Transformer Quantization
Yanjing Li, Sheng Xu, Mingbao Lin et al.
Code as Reward: Empowering Reinforcement Learning with VLMs
David Venuto, Mohammad Sami Nur Islam, Martin Klissarov et al.
Context-Aware Iteration Policy Network for Efficient Optical Flow Estimation
Ri Cheng, Ruian He, Xuhao Jiang et al.
Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement Learning
Michael Matthews, Michael Beukman, Benjamin Ellis et al.
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Anyi Rao et al.
Deep Fusion: Efficient Network Training via Pre-trained Initializations
Hanna Mazzawi, Xavi Gonzalvo, Michael Wunder et al.
Differentially Private Bias-Term Fine-tuning of Foundation Models
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha et al.
DistiLLM: Towards Streamlined Distillation for Large Language Models
Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.
Do Efficient Transformers Really Save Computation?
Kai Yang, Jan Ackermann, Zhenyu He et al.
Dynamic Data Selection for Efficient SSL via Coarse-to-Fine Refinement
Aditay Tripathi, Pradeep Shenoy, Anirban Chakraborty
Efficient Precision and Recall Metrics for Assessing Generative Models using Hubness-aware Sampling
Yuanbang Liang, Jing Wu, Yu-Kun Lai et al.
Enabling Uncertainty Estimation in Iterative Neural Networks
Nikita Durasov, Doruk Oner, Jonathan Donier et al.
Enhancing Storage and Computational Efficiency in Federated Multimodal Learning for Large-Scale Models
Zixin Zhang, Fan Qi, Changsheng Xu
Enhancing Vision Transformer: Amplifying Non-Linearity in Feedforward Network Module
Yixing Xu, Chao Li, Dong Li et al.
Evaluation of Test-Time Adaptation Under Computational Time Constraints
Motasem Alfarra, Hani Itani, Alejandro Pardo et al.
Fast Decision Boundary based Out-of-Distribution Detector
Litian Liu, Yao Qin
FMBoost: Boosting Latent Diffusion with Flow Matching
Johannes Schusterbauer-Fischer, Ming Gui, Pingchuan Ma et al.
Frugal 3D Point Cloud Model Training via Progressive Near Point Filtering and Fused Aggregation
Donghyun Lee, Yejin Lee, Jae W. Lee et al.
Grid-Attention: Enhancing Computational Efficiency of Large Vision Models without Fine-Tuning
Pengyu Li, Biao Wang, Tianchu Guo et al.
Hierarchical Separable Video Transformer for Snapshot Compressive Imaging
Ping Wang, Yulun Zhang, Lishun Wang et al.
In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering
Sheng Liu, Haotian Ye, Lei Xing et al.
Inducing Point Operator Transformer: A Flexible and Scalable Architecture for Solving PDEs
Seungjun Lee, Taeil Oh
Learning Causal Dynamics Models in Object-Oriented Environments
Zhongwei Yu, Jingqing Ruan, Dengpeng Xing
Learning Temporal Resolution in Spectrogram for Audio Classification
Haohe Liu, Xubo Liu, Qiuqiang Kong et al.
LION: Implicit Vision Prompt Tuning
Haixin Wang, Jianlong Chang, Yihang Zhai et al.
Object-Centric Diffusion for Efficient Video Editing
Kumara Kahatapitiya, Adil Karjauv, Davide Abati et al.
ODIM: Outlier Detection via Likelihood of Under-Fitted Generative Models
Dongha Kim, Jaesung Hwang, Jongjin Lee et al.
One-stage Prompt-based Continual Learning
Youngeun Kim, Yuhang Li, Priyadarshini Panda
Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation
Yixiao Wang, Chen Tang, Lingfeng Sun et al.
Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty
Kaizhao Liu, Jose Blanchet, Lexing Ying et al.
Partially Stochastic Infinitely Deep Bayesian Neural Networks
Sergio Calvo Ordoñez, Matthieu Meunier, Francesco Piatti et al.
PhAST: Physics-Aware, Scalable, and Task-Specific GNNs for Accelerated Catalyst Design
Alexandre Duval, Victor Schmidt, Santiago Miret et al.
Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency
Sudeep Salgia, Sattar Vakili, Qing Zhao
Rethinking Video Deblurring with Wavelet-Aware Dynamic Transformer and Diffusion Model
Chen Rao, Guangyuan Li, Zehua Lan et al.
SAFNet: Selective Alignment Fusion Network for Efficient HDR Imaging
Lingtong Kong, Bo Li, Yike Xiong et al.
Saliency strikes back: How filtering out high frequencies improves white-box explanations
Sabine Muzellec, Thomas Fel, Victor Boutin et al.
Scaling Laws for Fine-Grained Mixture of Experts
Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski et al.
See More Details: Efficient Image Super-Resolution by Experts Mining
Eduard Zamfir, Zongwei Wu, Nancy Mehta et al.
SeTformer Is What You Need for Vision and Language
Pourya Shamsolmoali, Masoumeh Zareapoor, Eric Granger et al.
Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting
Anthony Chen, Huanrui Yang, Yulu Gan et al.
Thermometer: Towards Universal Calibration for Large Language Models
Maohao Shen, Subhro Das, Kristjan Greenewald et al.
Transformer-Based Selective Super-resolution for Efficient Image Refinement
Tianyi Zhang, Kishore Kasichainula, Yaoxin Zhuo et al.
Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning
Dongkwan Kim, Alice Oh
Turbo: Informativity-Driven Acceleration Plug-In for Vision-Language Large Models
Chen Ju, Haicheng Wang, Haozhe Cheng et al.
Understanding and Improving Optimization in Predictive Coding Networks
Nicholas Alonso, Jeffrey Krichmar, Emre Neftci
Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention
Zhen Qin, Weigao Sun, Dong Li et al.