2024 Poster Papers on "computational efficiency"
38 papers found
Agglomerative Token Clustering
Joakim Bruslund Haurum, Sergio Escalera, Graham W. Taylor et al.
An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models
Liang Chen, Haozhe Zhao, Tianyu Liu et al.
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Anyi Rao et al.
Deep Fusion: Efficient Network Training via Pre-trained Initializations
Hanna Mazzawi, Xavi Gonzalvo, Michael Wunder et al.
Differentially Private Bias-Term Fine-tuning of Foundation Models
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha et al.
DistiLLM: Towards Streamlined Distillation for Large Language Models
Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.
Do Efficient Transformers Really Save Computation?
Kai Yang, Jan Ackermann, Zhenyu He et al.
Dynamic Data Selection for Efficient SSL via Coarse-to-Fine Refinement
Aditay Tripathi, Pradeep Shenoy, Anirban Chakraborty
Enabling Uncertainty Estimation in Iterative Neural Networks
Nikita Durasov, Doruk Oner, Jonathan Donier et al.
Enhancing Storage and Computational Efficiency in Federated Multimodal Learning for Large-Scale Models
Zixin Zhang, Fan Qi, Changsheng Xu
Enhancing Vision Transformer: Amplifying Non-Linearity in Feedforward Network Module
Yixing Xu, Chao Li, Dong Li et al.
Evaluation of Test-Time Adaptation Under Computational Time Constraints
Motasem Alfarra, Hani Itani, Alejandro Pardo et al.
Fast Decision Boundary based Out-of-Distribution Detector
Litian Liu, Yao Qin
FMBoost: Boosting Latent Diffusion with Flow Matching
Johannes Schusterbauer-Fischer, Ming Gui, Pingchuan Ma et al.
Frugal 3D Point Cloud Model Training via Progressive Near Point Filtering and Fused Aggregation
Donghyun Lee, Yejin Lee, Jae W. Lee et al.
Grid-Attention: Enhancing Computational Efficiency of Large Vision Models without Fine-Tuning
Pengyu Li, Biao Wang, Tianchu Guo et al.
Hierarchical Separable Video Transformer for Snapshot Compressive Imaging
Ping Wang, Yulun Zhang, Lishun Wang et al.
In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering
Sheng Liu, Haotian Ye, Lei Xing et al.
Learning Causal Dynamics Models in Object-Oriented Environments
Zhongwei Yu, Jingqing Ruan, Dengpeng Xing
Object-Centric Diffusion for Efficient Video Editing
Kumara Kahatapitiya, Adil Karjauv, Davide Abati et al.
ODIM: Outlier Detection via Likelihood of Under-Fitted Generative Models
Dongha Kim, Jaesung Hwang, Jongjin Lee et al.
One-stage Prompt-based Continual Learning
Youngeun Kim, Yuhang Li, Priyadarshini Panda
Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation
Yixiao Wang, Chen Tang, Lingfeng Sun et al.
Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty
Kaizhao Liu, Jose Blanchet, Lexing Ying et al.
Partially Stochastic Infinitely Deep Bayesian Neural Networks
Sergio Calvo Ordoñez, Matthieu Meunier, Francesco Piatti et al.
PhAST: Physics-Aware, Scalable, and Task-Specific GNNs for Accelerated Catalyst Design
Alexandre Duval, Victor Schmidt, Santiago Miret et al.
Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency
Sudeep Salgia, Sattar Vakili, Qing Zhao
Rethinking Video Deblurring with Wavelet-Aware Dynamic Transformer and Diffusion Model
Chen Rao, Guangyuan Li, Zehua Lan et al.
SAFNet: Selective Alignment Fusion Network for Efficient HDR Imaging
Lingtong Kong, Bo Li, Yike Xiong et al.
Saliency strikes back: How filtering out high frequencies improves white-box explanations
Sabine Muzellec, Thomas Fel, Victor Boutin et al.
Scaling Laws for Fine-Grained Mixture of Experts
Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski et al.
See More Details: Efficient Image Super-Resolution by Experts Mining
Eduard Zamfir, Zongwei Wu, Nancy Mehta et al.
SMFANet: A Lightweight Self-Modulation Feature Aggregation Network for Efficient Image Super-Resolution
Mingjun Zheng, Long Sun, Jiangxin Dong et al.
Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting
Anthony Chen, Huanrui Yang, Yulu Gan et al.
Thermometer: Towards Universal Calibration for Large Language Models
Maohao Shen, Subhro Das, Kristjan Greenewald et al.
Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning
Dongkwan Kim, Alice Oh
Turbo: Informativity-Driven Acceleration Plug-In for Vision-Language Large Models
Chen Ju, Haicheng Wang, Haozhe Cheng et al.
Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention
Zhen Qin, Weigao Sun, Dong Li et al.