Gen Li
23 papers, 86 total citations

Papers (23)
Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment. ICCV 2025, arXiv. 55 citations
EgoGen: An Egocentric Synthetic Data Generator. CVPR 2024, arXiv. 24 citations
EgoM2P: Egocentric Multimodal Multitask Pretraining. ICCV 2025, arXiv. 4 citations
VolumetricSMPL: A Neural Volumetric Body Model for Efficient Interactions, Contacts, and Collisions. ICCV 2025, arXiv. 3 citations
Removing Interference and Recovering Content Imaginatively for Visible Watermark Removal. AAAI 2024, arXiv. 0 citations
One-Shot Open Affordance Learning with Foundation Models. CVPR 2024, arXiv. 0 citations
Advancing Dynamic Sparse Training by Exploring Optimization Opportunities. ICML 2024. 0 citations
Accelerating Convergence of Score-Based Diffusion Models, Provably. ICML 2024, arXiv. 0 citations
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity. ICML 2024, arXiv. 0 citations
Adaptive Prototype Learning and Allocation for Few-Shot Segmentation. CVPR 2021. 0 citations
Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting. CVPR 2023, arXiv. 0 citations
LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding. CVPR 2023, arXiv. 0 citations
OSRT: Omnidirectional Image Super-Resolution With Distortion-Aware Transformer. CVPR 2023, arXiv. 0 citations
VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder. ECCV 2022. 0 citations
Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction. NeurIPS 2020, arXiv. 0 citations
Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model. NeurIPS 2020, arXiv. 0 citations
Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting. NeurIPS 2021, arXiv. 0 citations
Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning. NeurIPS 2021, arXiv. 0 citations
Minimax-Optimal Multi-Agent RL in Markov Games With a Generative Model. NeurIPS 2022, arXiv. 0 citations
Reward-agnostic Fine-tuning: Provable Statistical Benefits of Hybrid Reinforcement Learning. NeurIPS 2023, arXiv. 0 citations
Dynamic Sparsity Is Channel-Level Sparsity Learner. NeurIPS 2023, arXiv. 0 citations
The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model. NeurIPS 2023, arXiv. 0 citations
Regret-Optimal Model-Free Reinforcement Learning for Discounted MDPs with Short Burn-In Time. NeurIPS 2023, arXiv. 0 citations