Xingjun Ma

26 Papers · 77 Total Citations · 1 Affiliation

Affiliations: Fudan University

Papers (26)

Adversarial Prompt Tuning for Vision-Language Models

ECCV 2024 · 33 citations

BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks

ICLR 2025 · 16 citations

LDReg: Local Dimensionality Regularized Self-Supervised Learning

ICLR 2024 · 9 citations

Anyattack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models

CVPR 2025 · 9 citations

Free-Form Motion Control: Controlling the 6D Poses of Camera and Objects in Video Generation

ICCV 2025 · 4 citations

AIM: Additional Image Guided Generation of Transferable Adversarial Attacks

AAAI 2025 · 3 citations

HoneypotNet: Backdoor Attacks Against Model Extraction

AAAI 2025 · 3 citations

Unlearnable Clusters: Towards Label-Agnostic Unlearnable Examples

CVPR 2023 · 0 citations

Symmetric Cross Entropy for Robust Learning With Noisy Labels

ICCV 2019 · 0 citations

Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better

ICCV 2021 · 0 citations

Short-Term and Long-Term Context Aggregation Network for Video Inpainting

ECCV 2020 · 0 citations

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

ECCV 2020 · 0 citations

TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models

CVPR 2025 · 0 citations

Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks

CVPR 2025 · 0 citations

StolenLoRA: Exploring LoRA Extraction Attacks via Synthetic Data

ICCV 2025 · 0 citations

IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves

ICCV 2025 · 0 citations

Iterative Learning With Open-Set Noisy Labels

CVPR 2018 · 0 citations

Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles

CVPR 2020 · 0 citations

Clean-Label Backdoor Attacks on Video Recognition Models

CVPR 2020 · 0 citations

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

NeurIPS 2021 · 0 citations

Anti-Backdoor Learning: Training Clean Models on Poisoned Data

NeurIPS 2021 · 0 citations

Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

NeurIPS 2021 · 0 citations

$\alpha$-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression

NeurIPS 2021 · 0 citations

CalFAT: Calibrated Federated Adversarial Training with Label Skewness

NeurIPS 2022 · 0 citations

Dimensionality-Driven Learning with Noisy Labels

ICML 2018 · 0 citations

On the Convergence and Robustness of Adversarial Training

ICML 2019 · 0 citations