Poster "multi-task learning" Papers
45 papers found
CSI-Bench: A Large-Scale In-the-Wild Dataset for Multi-task WiFi Sensing
Guozhen Zhu, Yuqian Hu, Weihang Gao et al.
Decouple-Then-Merge: Finetune Diffusion Models as Multi-Task Learning
Qianli Ma, Xuefei Ning, Dongrui Liu et al.
Efficient Depth Estimation for Unstable Stereo Camera Systems on AR Glasses
Yongfan Liu, Hyoukjun Kwon
Fast Rate Bounds for Multi-Task and Meta-Learning with Different Sample Sizes
Hossein Zakerinia, Christoph Lampert
FedRAM: Federated Reweighting and Aggregation for Multi-Task Learning
Fan Wu, Xinyu Yan, Jiabei Liu et al.
FREE-Merging: Fourier Transform for Efficient Model Merging
Shenghe Zheng, Hongzhi Wang
GlycanML: A Multi-Task and Multi-Structure Benchmark for Glycan Machine Learning
Minghao Xu, Yunteng Geng, Yihang Zhang et al.
How Far Are We from True Unlearnability?
Kai Ye, Liangcai Su, Chenxiong Qian
LiFT: Learning to Fine-Tune via Bayesian Parameter Efficient Meta Fine-Tuning
Minyoung Kim, Timothy Hospedales
MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm
Ziyan Guo, Zeyu Hu, Na Zhao et al.
Progressive Homeostatic and Plastic Prompt Tuning for Audio-Visual Multi-Task Incremental Learning
Jiong Yin, Liang Li, Jiehua Zhang et al.
Provable Meta-Learning with Low-Rank Adaptations
Jacob Block, Sundararajan Srinivasan, Liam Collins et al.
Resolving Token-Space Gradient Conflicts: Token Space Manipulation for Transformer-Based Multi-Task Learning
Wooseong Jeong, Kuk-Jin Yoon
Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning
Yuxiang Lu, Shengcao Cao, Yu-Xiong Wang
Task Vector Quantization for Memory-Efficient Model Merging
Youngeun Kim, Seunghwan Lee, Aecheon Jung et al.
Towards Minimizing Feature Drift in Model Merging: Layer-wise Task Vector Fusion for Adaptive Knowledge Integration
Wenju Sun, Qingyong Li, Wen Wang et al.
Vulnerability-Aware Spatio-Temporal Learning for Generalizable Deepfake Video Detection
Dat Nguyen, Marcella Astrid, Anis Kacem et al.
Z-Magic: Zero-shot Multiple Attributes Guided Image Creator
Yingying Deng, Xiangyu He, Fan Tang et al.
Adaptive Multi-task Learning for Few-shot Object Detection
Yan Ren, Yanling Li, Wai-Kin Adams Kong
A Hierarchical Adaptive Multi-Task Reinforcement Learning Framework for Multiplier Circuit Design
Zhihai Wang, Jie Wang, Dongsheng Zuo et al.
Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning
Idan Achituve, Idit Diamant, Arnon Netzer et al.
Careful with that Scalpel: Improving Gradient Surgery with an EMA
Yu-Guan Hsieh, James Thornton, Eugene Ndiaye et al.
Collaborative Learning with Different Labeling Functions
Yuyang Deng, Mingda Qiao
Contextualized Policy Recovery: Modeling and Interpreting Medical Decisions with Adaptive Imitation Learning
Jannik Deuschel, Caleb Ellington, Yingtao Luo et al.
DG-PIC: Domain Generalized Point-In-Context Learning for Point Cloud Understanding
Jincen Jiang, Qianyu Zhou, Yuhang Li et al.
DMTG: One-Shot Differentiable Multi-Task Grouping
Yuan Gao, Shuguo Jiang, Moran Li et al.
Exploring Correlations of Self-Supervised Tasks for Graphs
Taoran Fang, Wei Chow, Yifei Sun et al.
Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters
Yuhang Zhou, Zihua Zhao, Siyuan Du et al.
Fair Resource Allocation in Multi-Task Learning
Hao Ban, Kaiyi Ji
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
Jiabin Lin, Shana Moothedath, Namrata Vaswani
Guarantees for Nonlinear Representation Learning: Non-identical Covariates, Dependent Data, Fewer Samples
Thomas T. Zhang, Bruce Lee, Ingvar Ziemann et al.
Learning with Adaptive Resource Allocation
Jing Wang, Miao Yu, Peng Zhao et al.
Localizing Task Information for Improved Model Merging and Compression
Ke Wang, Nikolaos Dimitriadis, Guillermo Ortiz-Jimenez et al.
Merging Multi-Task Models via Weight-Ensembling Mixture of Experts
Anke Tang, Li Shen, Yong Luo et al.
Multi-Task Domain Adaptation for Language Grounding with 3D Objects
Penglei Sun, Yaoxian Song, Xinglin Pan et al.
MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts
Jianan Zhou, Zhiguang Cao, Yaoxin Wu et al.
Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi et al.
Quality-Diversity with Limited Resources
Ren-Jian Wang, Ke Xue, Cong Guan et al.
Representation Surgery for Multi-Task Model Merging
Enneng Yang, Li Shen, Zhenyi Wang et al.
Robust Multi-Task Learning with Excess Risks
Yifei He, Shiji Zhou, Guojun Zhang et al.
Sparse-to-dense Multimodal Image Registration via Multi-Task Learning
Kaining Zhang, Jiayi Ma
Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts
Byeongjun Park, Hyojun Go, Jin-Young Kim et al.
Thermometer: Towards Universal Calibration for Large Language Models
Maohao Shen, Subhro Das, Kristjan Greenewald et al.
Towards Modular LLMs by Building and Reusing a Library of LoRAs
Oleksiy Ostapenko, Zhan Su, Edoardo Ponti et al.
VersatileGaussian: Real-time Neural Rendering for Versatile Tasks using Gaussian Splatting
Renjie Li, Zhiwen Fan, Bohua Wang et al.