Jianmin Wang

28 Papers · 87 Total Citations

Papers (28)

Sundial: A Family of Highly Capable Time Series Foundation Models
ICML 2025 · arXiv · 55 citations

Transolver++: An Accurate Neural Solver for PDEs on Million-Scale Geometries
ICML 2025 · arXiv · 32 citations

CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding
ICML 2024 · 0 citations

TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling
ICML 2024 · 0 citations

Timer: Generative Pre-trained Transformers Are Large Time Series Models
ICML 2024 · 0 citations

HarmonyDream: Task Harmonization Inside World Models
ICML 2024 · arXiv · 0 citations

Transolver: A Fast Transformer Solver for PDEs on General Geometries
ICML 2024 · arXiv · 0 citations

HelmFluid: Learning Helmholtz Dynamics for Interpretable Fluid Prediction
ICML 2024 · arXiv · 0 citations

Progressive Adversarial Networks for Fine-Grained Domain Adaptation
CVPR 2020 · 0 citations

Regressive Domain Adaptation for Unsupervised Keypoint Detection
CVPR 2021 · arXiv · 0 citations

MotionRNN: A Flexible Model for Video Prediction With Spacetime-Varying Motions
CVPR 2021 · arXiv · 0 citations

Transferable Query Selection for Active Domain Adaptation
CVPR 2021 · 0 citations

Open Domain Generalization with Domain-Augmented Meta-Learning
CVPR 2021 · arXiv · 0 citations

MetaSets: Meta-Learning on Point Sets for Generalizable Representations
CVPR 2021 · arXiv · 0 citations

Learning to Detect Open Classes for Universal Domain Adaptation
ECCV 2020 · 0 citations

Minimum Class Confusion for Versatile Domain Adaptation
ECCV 2020 · 0 citations

Stochastic Normalization
NeurIPS 2020 · 0 citations

Co-Tuning for Transfer Learning
NeurIPS 2020 · 0 citations

Transferable Calibration with Lower Bias and Variance in Domain Adaptation
NeurIPS 2020 · arXiv · 0 citations

Learning to Adapt to Evolving Domains
NeurIPS 2020 · 0 citations

Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting
NeurIPS 2021 · arXiv · 0 citations

Cycle Self-Training for Domain Adaptation
NeurIPS 2021 · arXiv · 0 citations

Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting
NeurIPS 2022 · arXiv · 0 citations

Supported Policy Optimization for Offline Reinforcement Learning
NeurIPS 2022 · arXiv · 0 citations

Debiased Self-Training for Semi-Supervised Learning
NeurIPS 2022 · arXiv · 0 citations

Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models
NeurIPS 2022 · arXiv · 0 citations

Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors
NeurIPS 2023 · arXiv · 0 citations

SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling
NeurIPS 2023 · arXiv · 0 citations