Ce Zhang
Papers: 35
Total Citations: 1,582
Affiliations (1): UNC Chapel Hill
Papers (35)
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
NeurIPS 2017 (arXiv) · 1,364 citations
AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?
ICLR 2024 · 81 citations
Cyclades: Conflict-free Asynchronous Machine Learning
NeurIPS 2016 (arXiv) · 64 citations
Concept-Guided Prompt Learning for Generalization in Vision-Language Models
AAAI 2024 (arXiv) · 33 citations
HiKER-SGG: Hierarchical Knowledge Enhanced Robust Scene Graph Generation
CVPR 2024 · 24 citations
BASKET: A Large-Scale Video Dataset for Fine-Grained Skill Estimation
CVPR 2025 · 9 citations
Speculative Prefill: Turbocharging TTFT with Lightweight and Training-Free Token Importance Estimation
ICML 2025 · 7 citations
Taming the Wild: A Unified Analysis of Hogwild-Style Algorithms
NeurIPS 2015 · 0 citations
Self-Correctable and Adaptable Inference for Generalizable Human Pose Estimation
CVPR 2023 (arXiv) · 0 citations
ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models
ICCV 2025 · 0 citations
Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation
CVPR 2023 (arXiv) · 0 citations
Which Model To Transfer? Finding the Needle in the Growing Haystack
CVPR 2022 (arXiv) · 0 citations
Scalability vs. Utility: Do We Have To Sacrifice One for the Other in Data Importance Quantification?
CVPR 2021 (arXiv) · 0 citations
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
NeurIPS 2015 · 0 citations
Mechanistic Design and Scaling of Hybrid Architectures
ICML 2024 · 0 citations
Skill-it! A data-driven skills framework for understanding and training language models
NeurIPS 2023 · 0 citations
ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning
ICML 2017 · 0 citations
Asynchronous Decentralized Parallel Stochastic Gradient Descent
ICML 2018 · 0 citations
$D^2$: Decentralized Training over Decentralized Data
ICML 2018 · 0 citations
DL2: Training and Querying Neural Networks with Logic
ICML 2019 · 0 citations
Distributed Learning over Unreliable Networks
ICML 2019 · 0 citations
Communication Compression for Decentralized Training
NeurIPS 2018 · 0 citations
On Convergence of Nearest Neighbor Classifiers over Feature Transformations
NeurIPS 2020 · 0 citations
Learning to Mutate with Hypergradient Guided Population
NeurIPS 2020 · 0 citations
Spectral Temporal Graph Neural Network for Multivariate Time-series Forecasting
NeurIPS 2020 · 0 citations
TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness
NeurIPS 2021 · 0 citations
VF-PS: How to Select Important Participants in Vertical Federated Learning, Efficiently and Securely?
NeurIPS 2022 · 0 citations
Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees
NeurIPS 2022 · 0 citations
Decentralized Training of Foundation Models in Heterogeneous Environments
NeurIPS 2022 · 0 citations
Certifying Some Distributional Fairness with Subpopulation Decomposition
NeurIPS 2022 · 0 citations
Improving Certified Robustness via Statistical Learning with Logical Reasoning
NeurIPS 2022 · 0 citations
DataPerf: Benchmarks for Data-Centric AI Development
NeurIPS 2023 · 0 citations
Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions
NeurIPS 2023 · 0 citations
Goal-Conditioned Predictive Coding for Offline Reinforcement Learning
NeurIPS 2023 · 0 citations
WordScape: a Pipeline to extract multilingual, visually rich Documents with Layout Annotations from Web Crawl Data
NeurIPS 2023 · 0 citations