"pre-trained language models" Papers

11 papers found

Certifying Language Model Robustness with Fuzzed Randomized Smoothing: An Efficient Defense Against Backdoor Attacks

Bowei He, Lihao Yin, Huiling Zhen et al.

ICLR 2025 · poster · arXiv:2502.06892
3 citations

Exploring the limits of strong membership inference attacks on large language models

Jamie Hayes, I. Shumailov, Christopher A. Choquette-Choo et al.

NeurIPS 2025 · poster · arXiv:2505.18773
10 citations

SMI-Editor: Edit-based SMILES Language Model with Fragment-level Supervision

Kangjie Zheng, Siyue Liang, Junwei Yang et al.

ICLR 2025 · poster · arXiv:2412.05569
5 citations

Defense against Backdoor Attack on Pre-trained Language Models via Head Pruning and Attention Normalization

Xingyi Zhao, Depeng Xu, Shuhan Yuan

ICML 2024 · poster

LangCell: Language-Cell Pre-training for Cell Identity Understanding

Suyuan Zhao, Jiahuan Zhang, Yushuai Wu et al.

ICML 2024 · poster

Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor

Han Liu, Siyang Zhao, Xiaotong Zhang et al.

AAAI 2024 · paper · arXiv:2405.03565
6 citations

Progressive Distillation Based on Masked Generation Feature Method for Knowledge Graph Completion

Cunhang Fan, Yujie Chen, Jun Xue et al.

AAAI 2024 · paper · arXiv:2401.12997
5 citations

Question Calibration and Multi-Hop Modeling for Temporal Question Answering

Chao Xue, Di Liang, Pengfei Wang et al.

AAAI 2024 · paper · arXiv:2402.13188
21 citations

Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction

Da Luo, Yanglei Gan, Rui Hou et al.

AAAI 2024 · paper · arXiv:2312.12021
11 citations

Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation

Xinyi Wang, Alfonso Amayuelas, Kexun Zhang et al.

ICML 2024 · poster

Wikiformer: Pre-training with Structured Information of Wikipedia for Ad-Hoc Retrieval

Weihang Su, Qingyao Ai, Xiangsheng Li et al.

AAAI 2024 · paper · arXiv:2312.10661
22 citations