Tianlong Chen
56 papers · 92 total citations

Papers (56)

DLF: Disentangled-Language-Focused Multimodal Sentiment Analysis
AAAI 2025 · 43 citations

Facial Affective Behavior Analysis with Instruction Tuning
ECCV 2024 · 23 citations

Graph Sparsification via Mixture of Graphs
ICLR 2025 · 17 citations

PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches
ICLR 2025 · 3 citations

Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints
ICCV 2025 · 3 citations

Mapping from Meaning: Addressing the Miscalibration of Prompt-Sensitive Language Models
AAAI 2025 · 2 citations

BrainMAP: Learning Multiple Activation Pathways in Brain Networks
AAAI 2025 · 1 citation

MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts
ICML 2024 · 0 citations

Evolution-Inspired Loss Functions for Protein Representation Learning
ICML 2024 · 0 citations

Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once
ICML 2024 · 0 citations

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
ICML 2024 · 0 citations

Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness
ICML 2024 · 0 citations

Position: TrustLLM: Trustworthiness in Large Language Models
ICML 2024 · 0 citations

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
CVPR 2020 (arXiv) · 0 citations

L2-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
CVPR 2020 · 0 citations

Troubleshooting Blind Image Quality Models in the Wild
CVPR 2021 (arXiv) · 0 citations

Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
CVPR 2022 (arXiv) · 0 citations

The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy
CVPR 2022 (arXiv) · 0 citations

CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings
CVPR 2022 · 0 citations

Aug-NeRF: Training Stronger Neural Radiance Fields With Triple-Level Physically-Grounded Augmentations
CVPR 2022 · 0 citations

ABD-Net: Attentive but Diverse Person Re-Identification
ICCV 2019 · 0 citations

Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts
ICCV 2023 (arXiv) · 0 citations

Robust Mixture-of-Expert Training for Convolutional Neural Networks
ICCV 2023 (arXiv) · 0 citations

AdaMV-MoE: Adaptive Multi-Task Vision Mixture-of-Experts
ICCV 2023 · 0 citations

HALO: Hardware-Aware Learning to Optimize
ECCV 2020 · 0 citations

Point Cloud Domain Adaptation via Masked Local 3D Structure Prediction
ECCV 2022 · 0 citations

DNA: Improving Few-Shot Transfer Learning with Low-Rank Decomposition and Alignment
ECCV 2022 · 0 citations

Scalable Learning to Optimize: A Learned Optimizer Can Train Big Models
ECCV 2022 · 0 citations

The Lottery Tickets Hypothesis for Supervised and Self-Supervised Pre-Training in Computer Vision Models
CVPR 2021 (arXiv) · 0 citations

Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective
AAAI 2025 · 0 citations

Sparse Transfer Learning Accelerates and Enhances Certified Robustness: A Comprehensive Study
AAAI 2025 · 0 citations

Tuning-Free Accountable Intervention for LLM Deployment – A Metacognitive Approach
AAAI 2025 · 0 citations

TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models
CVPR 2024 · 0 citations

Molecular Data Programming: Towards Molecule Pseudo-labeling with Systematic Weak Supervision
CVPR 2024 · 0 citations

Modalities Contribute Unequally: Enhancing Medical Multi-modal Learning through Adaptive Modality Token Re-balancing
ICML 2025 · 0 citations

Learning to Optimize in Swarms
NeurIPS 2019 · 0 citations

Graph Contrastive Learning with Augmentations
NeurIPS 2020 · 0 citations

Training Stronger Baselines for Learning to Optimize
NeurIPS 2020 · 0 citations

Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free
NeurIPS 2020 · 0 citations

The Lottery Ticket Hypothesis for Pre-trained BERT Networks
NeurIPS 2020 · 0 citations

Robust Pre-Training by Adversarial Contrastive Learning
NeurIPS 2020 · 0 citations

You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership
NeurIPS 2021 · 0 citations

Improving Contrastive Learning on Imbalanced Data via Open-World Sampling
NeurIPS 2021 · 0 citations

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
NeurIPS 2021 · 0 citations

Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
NeurIPS 2021 · 0 citations

Chasing Sparsity in Vision Transformers: An End-to-End Exploration
NeurIPS 2021 · 0 citations

Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective
NeurIPS 2021 · 0 citations

Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative
NeurIPS 2022 · 0 citations

Sparse Winning Tickets are Data-Efficient Image Recognizers
NeurIPS 2022 · 0 citations

A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking
NeurIPS 2022 · 0 citations

Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again
NeurIPS 2022 · 0 citations

Advancing Model Pruning via Bi-level Optimization
NeurIPS 2022 · 0 citations

M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
NeurIPS 2022 · 0 citations

Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets
NeurIPS 2022 · 0 citations

H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
NeurIPS 2023 · 0 citations

The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
NeurIPS 2023 · 0 citations