Poster "multi-task learning" Papers
78 papers found • Page 2 of 2
Collaborative Learning with Different Labeling Functions
Yuyang Deng, Mingda Qiao
Contextualized Policy Recovery: Modeling and Interpreting Medical Decisions with Adaptive Imitation Learning
Jannik Deuschel, Caleb Ellington, Yingtao Luo et al.
DG-PIC: Domain Generalized Point-In-Context Learning for Point Cloud Understanding
Jincen Jiang, Qianyu Zhou, Yuhang Li et al.
DMTG: One-Shot Differentiable Multi-Task Grouping
Yuan Gao, Shuguo Jiang, Moran Li et al.
DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks
Jiaxin Zhang, Dezhi Peng, Chongyu Liu et al.
Domain Generalization of 3D Object Detection by Density-Resampling
Shuangzhi Li, Lei Ma, Xingyu Li
Efficient Multitask Dense Predictor via Binarization
Yuzhang Shang, Dan Xu, Gaowen Liu et al.
Exploring Correlations of Self-Supervised Tasks for Graphs
Taoran Fang, Wei Chow, Yifei Sun et al.
Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters
Yuhang Zhou, Zihua Zhao, Siyuan Du et al.
Fair Resource Allocation in Multi-Task Learning
Hao Ban, Kaiyi Ji
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
Jiabin Lin, Shana Moothedath, Namrata Vaswani
Guarantees for Nonlinear Representation Learning: Non-identical Covariates, Dependent Data, Fewer Samples
Thomas T. Zhang, Bruce Lee, Ingvar Ziemann et al.
Learning with Adaptive Resource Allocation
Jing Wang, Miao Yu, Peng Zhao et al.
Localizing Task Information for Improved Model Merging and Compression
Ke Wang, Nikolaos Dimitriadis, Guillermo Ortiz-Jimenez et al.
Merging Multi-Task Models via Weight-Ensembling Mixture of Experts
Anke Tang, Li Shen, Yong Luo et al.
Multi-Space Alignments Towards Universal LiDAR Segmentation
Youquan Liu, Lingdong Kong, Xiaoyang Wu et al.
Multi-Task Domain Adaptation for Language Grounding with 3D Objects
Penglei Sun, Yaoxian Song, Xinglin Pan et al.
MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts
Jianan Zhou, Zhiguang Cao, Yaoxin Wu et al.
Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi et al.
Quality-Diversity with Limited Resources
Ren-Jian Wang, Ke Xue, Cong Guan et al.
Representation Surgery for Multi-Task Model Merging
Enneng Yang, Li Shen, Zhenyi Wang et al.
Robust Multi-Task Learning with Excess Risks
Yifei He, Shiji Zhou, Guojun Zhang et al.
Sparse-to-dense Multimodal Image Registration via Multi-Task Learning
Kaining Zhang, Jiayi Ma
Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts
Byeongjun Park, Hyojun Go, Jin-Young Kim et al.
Thermometer: Towards Universal Calibration for Large Language Models
Maohao Shen, Subhro Das, Kristjan Greenewald et al.
Towards Modular LLMs by Building and Reusing a Library of LoRAs
Oleksiy Ostapenko, Zhan Su, Edoardo Ponti et al.
Training-Free Pretrained Model Merging
Zhengqi Xu, Ke Yuan, Huiqiong Wang et al.
VersatileGaussian: Real-time Neural Rendering for Versatile Tasks using Gaussian Splatting
Renjie Li, Zhiwen Fan, Bohua Wang et al.