Xu Yang

20 papers · 174 total citations

Papers (20)

Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient
AAAI 2025 · 49 citations

Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On
CVPR 2024 · 37 citations

How to Configure Good In-Context Sequence for Visual Question Answering
CVPR 2024 · 36 citations

KRIS-Bench: Benchmarking Next-Level Intelligent Image Editing Models
NeurIPS 2025 · 23 citations

MemoNav: Working Memory Model for Visual Navigation
CVPR 2024 · 10 citations

Mimic In-Context Learning for Multimodal Tasks
CVPR 2025 · 8 citations

Unveiling the Unknown: Unleashing the Power of Unknown to Known in Open-Set Source-Free Domain Adaptation
CVPR 2024 · 6 citations

Building Variable-Sized Models via Learngene Pool
AAAI 2024 · 5 citations

Inheriting Generalized Learngene for Efficient Knowledge Transfer across Multiple Tasks
AAAI 2025 · 0 citations

Transformer as Linear Expansion of Learngene
AAAI 2024 · 0 citations

Video Repurposing from User Generated Content: A Large-scale Dataset and Benchmark
AAAI 2025 · 0 citations

A Versatile Framework for Continual Test-Time Domain Adaptation: Balancing Discriminability and Generalizability
CVPR 2024 · 0 citations

Democratizing High-Fidelity Co-Speech Gesture Video Generation
ICCV 2025 · 0 citations

Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens
CVPR 2025 · 0 citations

Redefining <Creative> in Dictionary: Towards an Enhanced Semantic Understanding of Creative Generation
CVPR 2025 · 0 citations

Number it: Temporal Grounding Videos like Flipping Manga
CVPR 2025 · 0 citations

Long-Tail Class Incremental Learning via Independent Sub-prototype Construction
CVPR 2024 · 0 citations

VinT-6D: A Large-Scale Object-in-hand Dataset from Vision, Touch and Proprioception
ICML 2024 · 0 citations

Vision Transformers as Probabilistic Expansion from Learngene
ICML 2024 · 0 citations

One Meta-tuned Transformer is What You Need for Few-shot Learning
ICML 2024 · 0 citations