Zicheng Liu

21 Papers · 283 Total Citations

Papers (21)

MogaNet: Multi-order Gated Aggregation Network

ICLR 2024, 125 citations

MM-Narrator: Narrating Long-form Videos with Multimodal In-Context Learning

CVPR 2024, 49 citations

SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer

CVPR 2025, 32 citations

SemiReward: A General Reward Model for Semi-supervised Learning

ICLR 2024, 18 citations

PSC-CPI: Multi-Scale Protein Sequence-Structure Contrasting for Efficient and Generalizable Compound-Protein Interaction Prediction

AAAI 2024, 18 citations

CBGBench: Fill in the Blank of Protein-Molecule Complex Binding Graph

ICLR 2025, 16 citations

Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization

ICLR 2025, 14 citations

MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization

CVPR 2025, 6 citations

DaCapo: Score Distillation as Stacked Bridge for Fast and High-quality 3D Editing

CVPR 2025, 4 citations

Exploring Invariance in Images through One-way Wave Equations

ICML 2025, 1 citation

StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis

ICML 2024, 0 citations

MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities

ICML 2024, 0 citations

B-VLLM: A Vision Large Language Model with Balanced Spatio-Temporal Tokens

ICCV 2025, 0 citations

MyGO: Virtual Reality Locomotion Prediction using Multitask Learning

ISMAR 2025, 0 citations

Training Diffusion Models Towards Diverse Image Generation with Reinforcement Learning

CVPR 2024, 0 citations

DisCo: Disentangled Control for Realistic Human Dance Generation

CVPR 2024, 0 citations

Segment and Caption Anything

CVPR 2024, 0 citations

Completing Visual Objects via Bridging Generation and Segmentation

ICML 2024, 0 citations

VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling

ICML 2024, 0 citations

PPFLOW: Target-Aware Peptide Design with Torsional Flow Matching

ICML 2024, 0 citations

Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences

ICML 2024, 0 citations