Curriculum Learning
Training on samples presented in a meaningful order, typically from easy to hard
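As a rough illustration of the theme above, the sketch below shows a generic easy-to-hard curriculum: samples are sorted by a cheap difficulty proxy and a pacing function gradually unlocks harder examples during training. The toy dataset, difficulty score, and pacing schedule are illustrative assumptions, not drawn from any paper listed here.

```python
# Minimal curriculum-learning sketch: order samples easy -> hard and let a
# pacing function grow the visible pool each epoch. All choices below
# (dataset, difficulty proxy, schedule) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 1-D linear regression where samples far from the origin are noisier.
x = rng.uniform(-1.0, 1.0, size=1000)
y = 3.0 * x + rng.normal(scale=np.abs(x))

# 1) Score difficulty with a cheap proxy (residual under a rough least-squares fit).
w_proxy = np.sum(x * y) / np.sum(x * x)
difficulty = np.abs(y - w_proxy * x)
order = np.argsort(difficulty)               # indices sorted easy -> hard

# 2) Pacing function: fraction of the sorted data unlocked at each epoch.
def pacing(epoch, total_epochs, start=0.2):
    return min(1.0, start + (1.0 - start) * epoch / (total_epochs - 1))

# 3) Plain SGD on squared loss, sampling only from the currently unlocked pool.
w, lr, epochs = 0.0, 0.05, 10
for epoch in range(epochs):
    pool = order[: int(pacing(epoch, epochs) * len(order))]
    for i in rng.permutation(pool):
        grad = 2.0 * (w * x[i] - y[i]) * x[i]
        w -= lr * grad
    print(f"epoch {epoch}: pool size = {len(pool)}, w = {w:.3f}")
```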
Top Papers
Demystifying CLIP Data
Hu Xu, Saining Xie, Xiaoqing Tan et al.
In-Context Pretraining: Language Modeling Beyond Document Boundaries
Weijia Shi, Sewon Min, Maria Lomeli et al.
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
Jiawang Bai, Kuofeng Gao, Shaobo Min et al.
Position: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning
Micah Goldblum, Marc Finzi, Keefer Rowan et al.
Causal Order: The Key to Leveraging Imperfect Experts in Causal Inference
Aniket Vashishtha, Abbavaram Gowtham Reddy, Abhinav Kumar et al.
Organize the Web: Constructing Domains Enhances Pre-Training Data Curation
Alexander Wettig, Kyle Lo, Sewon Min et al.
Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion
Linlan Huang, Xusheng Cao, Haori Lu et al.
Does CLIP’s generalization performance mainly stem from high train-test similarity?
Prasanna Mayilvahanan, Thaddäus Wiedemer, Evgenia Rusak et al.
Prompting Language-Informed Distribution for Compositional Zero-Shot Learning
Wentao Bao, Lichang Chen, Heng Huang et al.
Blind Image Quality Assessment Based on Geometric Order Learning
Nyeong-Ho Shin, Seon-Ho Lee, Chang-Su Kim
AMU-Tuning: Effective Logit Bias for CLIP-based Few-shot Learning
Yuwei Tang, ZhenYi Lin, Qilong Wang et al.
Cascade Prompt Learning for Visual-Language Model Adaptation
Ge Wu, Xin Zhang, Zheng Li et al.
OphCLIP: Hierarchical Retrieval-Augmented Learning for Ophthalmic Surgical Video-Language Pretraining
Ming Hu, Kun Yuan, Yaling Shen et al.
Non-exemplar Online Class-Incremental Continual Learning via Dual-Prototype Self-Augment and Refinement
Fushuo Huo, Wenchao Xu, Jingcai Guo et al.
BooW-VTON: Boosting In-the-Wild Virtual Try-On via Mask-Free Pseudo Data Training
Xuanpu Zhang, Dan Song, Pengxin Zhan et al.
Summarizing Stream Data for Memory-Constrained Online Continual Learning
Jianyang Gu, Kai Wang, Wei Jiang et al.
Long-Tailed Anomaly Detection with Learnable Class Names
Chih-Hui Ho, Kuan-Chuan Peng, Nuno Vasconcelos
Class Incremental Learning via Likelihood Ratio Based Task Prediction
Haowei Lin, Yijia Shao, Weinan Qian et al.
Code-Style In-Context Learning for Knowledge-Based Question Answering
Zhijie Nie, Richong Zhang, Zhongyuan Wang et al.
Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning
Simran Kaur, Simon Park, Anirudh Goyal et al.
OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free Class-Incremental Learning
Wei-Cheng Huang, Chun-Fu Chen, Hsiang Hsu
Revisiting Adversarial Training Under Long-Tailed Distributions
Xinli Yue, Ningping Mou, Qian Wang et al.
One-stage Prompt-based Continual Learning
Youngeun Kim, Yuhang Li, Priyadarshini Panda
MOS: Model Surgery for Pre-Trained Model-Based Class-Incremental Learning
Hai-Long Sun, Da-Wei Zhou, Hanbin Zhao et al.
Structured Packing in LLM Training Improves Long Context Utilization
Konrad Staniszewski, Szymon Tworkowski, Sebastian Jaszczur et al.
Mixture of Noise for Pre-Trained Model-Based Class-Incremental Learning
Kai Jiang, Zhengyan Shi, Dell Zhang et al.
Learning MDL Logic Programs from Noisy Data
Céline Hocquette, Andreas Niskanen, Matti Järvisalo et al.
Adaptive teachers for amortized samplers
Minsu Kim, Sanghyeok Choi, Taeyoung Yun et al.
BioCLIP 2: Emergent Properties from Scaling Hierarchical Contrastive Learning
Jianyang Gu, Sam Stevens, Elizabeth Campolongo et al.
BEHAVIOR Vision Suite: Customizable Dataset Generation via Simulation
Yunhao Ge, Yihe Tang, Jiashu Xu et al.
TimeDP: Learning to Generate Multi-Domain Time Series with Domain Prompts
Yu-Hao Huang, Chang Xu, Yueying Wu et al.
KITS: Inductive Spatio-Temporal Kriging with Increment Training Strategy
Qianxiong Xu, Cheng Long, Ziyue Li et al.
Coreset Selection via Reducible Loss in Continual Learning
Ruilin Tong, Yuhang Liu, Javen Qinfeng Shi et al.
Optimal Sample Complexity of Contrastive Learning
Noga Alon, Dmitrii Avdiukhin, Dor Elboim et al.
Adapter Merging with Centroid Prototype Mapping for Scalable Class-Incremental Learning
Takuma Fukuda, Hiroshi Kera, Kazuhiko Kawamoto
DELTA: Pre-Train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment
Haitao Li, Qingyao Ai, Xinyan Han et al.
CamoTeacher: Dual-Rotation Consistency Learning for Semi-Supervised Camouflaged Object Detection
Xunfa Lai, Zhiyu Yang, Jie Hu et al.
RANKCLIP: Ranking-Consistent Language-Image Pretraining
Yiming Zhang, Zhuokai Zhao, Zhaorun Chen et al.
Budgeted Online Continual Learning by Adaptive Layer Freezing and Frequency-based Sampling
Minhyuk Seo, Hyunseo Koh, Jonghyun Choi
Effective Training Data Synthesis for Improving MLLM Chart Understanding
Yuwei Yang, Zeyu Zhang, Yunzhong Hou et al.
Neural structure learning with stochastic differential equations
Benjie Wang, Joel Jennings, Wenbo Gong
ProMotion: Prototypes As Motion Learners
Yawen Lu, Dongfang Liu, Qifan Wang et al.
Anytime Continual Learning for Open Vocabulary Classification
Zhen Zhu, Yiming Gong, Derek Hoiem
Made to Order: Discovering monotonic temporal changes via self-supervised video ordering
Charig Yang, Weidi Xie, Andrew Zisserman
Near, far: Patch-ordering enhances vision foundation models' scene understanding
Valentinos Pariza, Mohammadreza Salehi, Gertjan J Burghouts et al.
SfM-Free 3D Gaussian Splatting via Hierarchical Training
Bo Ji, Angela Yao
CAPrompt: Cyclic Prompt Aggregation for Pre-Trained Model Based Class Incremental Learning
Qiwei Li, Jiahuan Zhou
Causally Aligned Curriculum Learning
Mingxuan Li, Junzhe Zhang, Elias Bareinboim
COAP: Memory-Efficient Training with Correlation-Aware Gradient Projection
Jinqi Xiao, Shen Sang, Tiancheng Zhi et al.
DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level Concept Prototype
Qiang Wang, Yuhang He, Songlin Dong et al.
Learning-Augmented Search Data Structures
Chunkai Fu, Brandon G. Nguyen, Jung Seo et al.
Adaptive Non-Uniform Timestep Sampling for Accelerating Diffusion Model Training
Myunsoo Kim, Donghyeon Ki, Seong-Woong Shim et al.
Learning Physics Informed Neural ODEs with Partial Measurements
Paul Ghanem, Ahmet Demirkaya, Tales Imbiriba et al.
SCOMatch: Alleviating Overtrusting in Open-set Semi-supervised Learning
Zerun Wang, Liuyu Xiang, Lang Huang et al.
Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning
Xiaochuan Li, Zichun Yu, Chenyan Xiong
LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale
Joya Chen, Yiqi Lin, Ziyun Zeng et al.
Distilled Prompt Learning for Incomplete Multimodal Survival Prediction
Yingxue Xu, Fengtao Zhou, Chenyu Zhao et al.
Predicting the Susceptibility of Examples to Catastrophic Forgetting
Guy Hacohen, Tinne Tuytelaars
Pick-or-Mix: Dynamic Channel Sampling for ConvNets
Ashish Kumar, Daneul Kim, Jaesik Park et al.
Architecture-Aware Learning Curve Extrapolation via Graph Ordinary Differential Equation
Yanna Ding, Zijie Huang, Xiao Shou et al.
Multi-Accurate CATE is Robust to Unknown Covariate Shifts
Angela Zhou, Christoph Kern, Michael Kim
SCOD: From Heuristics to Theory
Vojtech Franc, Jakub Paplham, Daniel Prusa
Sufficient Invariant Learning for Distribution Shift
Taero Kim, Subeen Park, Sungjun Lim et al.
Convergence and Implicit Bias of Gradient Descent on Continual Linear Classification
Hyunji Jung, Hanseul Cho, Chulhee Yun
Compositional Caching for Training-free Open-vocabulary Attribute Detection
Marco Garosi, Alessandro Conti, Gaowen Liu et al.
Stochastic Online Conformal Prediction with Semi-Bandit Feedback
Haosen Ge, Hamsa Bastani, Osbert Bastani
PAC Learning with Improvements
Idan Attias, Avrim Blum, Keziah Naggita et al.
Learning to Count without Annotations
Lukas Knobel, Tengda Han, Yuki Asano
See Further When Clear: Curriculum Consistency Model
Yunpeng Liu, Boxiao Liu, Yi Zhang et al.
T-CIL: Temperature Scaling using Adversarial Perturbation for Calibration in Class-Incremental Learning
Seong-Hyeon Hwang, Minsu Kim, Steven Euijong Whang
SkyLadder: Better and Faster Pretraining via Context Window Scheduling
Tongyao Zhu, Qian Liu, Haonan Wang et al.
How to Trade Off the Quantity and Capacity of Teacher Ensemble: Learning Categorical Distribution to Stochastically Employ a Teacher for Distillation
Zixiang Ding, Guoqing Jiang, Shuai Zhang et al.
Preconditioners for the Stochastic Training of Neural Fields
Shin-Fang Chng, Hemanth Saratchandran, Simon Lucey
Sequence Complementor: Complementing Transformers for Time Series Forecasting with Learnable Sequences
Xiwen Chen, Peijie Qiu, Wenhui Zhu et al.
Exploring Temporal Event Cues for Dense Video Captioning in Cyclic Co-Learning
Zhuyang Xie, Yan Yang, Yankai Yu et al.
POA: Pre-training Once for Models of All Sizes
Yingying Zhang, Xin Guo, Jiangwei Lao et al.
Few-shot Personalized Scanpath Prediction
Ruoyu Xue, Jingyi Xu, Sounak Mondal et al.
Deriving Causal Order from Single-Variable Interventions: Guarantees & Algorithm
Mathieu Chevalley, Patrick Schwab, Arash Mehrjou
AdaSTaR: Adaptive Data Sampling for Training Self-Taught Reasoners
Reiss Koh, Wonbeen Oh, Jaein Jang et al.
Orthogonal Survival Learners for Estimating Heterogeneous Treatment Effects from Time-to-Event Data
Dennis Frauen, Maresa Schröder, Konstantin Hess et al.
The adaptive complexity of parallelized log-concave sampling
Huanjian Zhou, Baoxiang Wang, Masashi Sugiyama
Online Learning of Pure States is as Hard as Mixed States
Maxime Meyer, Soumik Adhikary, Naixu Guo et al.
Curriculum Abductive Learning
Wen-Chao Hu, Qi-Jie Li, Lin-Han Jia et al.
Get a Head Start: On-Demand Pedagogical Policy Selection in Intelligent Tutoring
Ge Gao, Xi Yang, Min Chi
Learning to Insert for Constructive Neural Vehicle Routing Solver
Fu Luo, Xi Lin, Mengyuan Zhong et al.
Characterization and Learning of Causal Graphs from Hard Interventions
Zihan Zhou, Muhammad Qasim Elahi, Murat Kocaoglu
Exploring Learning Complexity for Efficient Downstream Dataset Pruning
Wenyu Jiang, Zhenlong Liu, Zejian Xie et al.
CLDyB: Towards Dynamic Benchmarking for Continual Learning with Pre-trained Models
Shengzhuang Chen, Yikai Liao, Xiaoxiao Sun et al.
Filter Like You Test: Data-Driven Data Filtering for CLIP Pretraining
Mikey Shechter, Yair Carmon
Let Samples Speak: Mitigating Spurious Correlation by Exploiting the Clusterness of Samples
Weiwei Li, Junzhuo Liu, Yuanyuan Ren et al.
Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees
Gautam Chandrasekaran, Adam Klivans, Lin Lin Lee et al.
Pathology-knowledge Enhanced Multi-instance Prompt Learning for Few-shot Whole Slide Image Classification
Linhao Qu, Dingkang Yang, Dan Huang et al.
InsViE-1M: Effective Instruction-based Video Editing with Elaborate Dataset Construction
Yuhui Wu, Liyi Chen, Ruibin Li et al.
OrderChain: Towards General Instruct-Tuning for Stimulating the Ordinal Understanding Ability of MLLM
Jinhong Wang, Shuo Tong, Jintai Chen et al.
DCLP: Neural Architecture Predictor with Curriculum Contrastive Learning
Shenghe Zheng, Hongzhi Wang, Tianyu Mu
Unlearning the Noisy Correspondence Makes CLIP More Robust
Haochen Han, Alex Jinpeng Wang, Peijun Ye et al.
Cycle-Consistent Learning for Joint Layout-to-Image Generation and Object Detection
Xinhao Cai, Qiuxia Lai, Gensheng Pei et al.
PLAN: Proactive Low-Rank Allocation for Continual Learning
Xiequn Wang, Zhan Zhuang, Yu Zhang
Towards Comprehensive Lecture Slides Understanding: Large-scale Dataset and Effective Method
Enming Zhang, Yuzhe Li, Yuliang Liu et al.
When Does Curriculum Learning Help? A Theoretical Perspective
Raman Arora, Yunjuan Wang, Kaibo Zhang