ICML 2024 "language models" Papers
11 papers found
Applying language models to algebraic topology: generating simplicial cycles using multi-labeling in Wu's formula
Kirill Brilliantov, Fedor Pavutnitskiy, Dmitrii A. Pasechniuk et al.
ICML 2024 · poster · arXiv:2306.16951
Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption
Itamar Zimerman, Moran Baruch, Nir Drucker et al.
ICML 2024 · poster · arXiv:2311.08610
Emergent Representations of Program Semantics in Language Models Trained on Programs
Charles Jin, Martin Rinard
ICML 2024 · poster · arXiv:2305.11169
Instruction Tuning for Secure Code Generation
Jingxuan He, Mark Vero, Gabriela Krasnopolska et al.
ICML 2024 · poster · arXiv:2402.09497
Language Models as Semantic Indexers
Bowen Jin, Hansi Zeng, Guoyin Wang et al.
ICML 2024 · poster · arXiv:2310.07815
Model-Based Minimum Bayes Risk Decoding for Text Generation
Yuu Jinnai, Tetsuro Morimura, Ukyo Honda et al.
ICML 2024 · poster · arXiv:2311.05263
OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization
Xiang Meng, Shibal Ibrahim, Kayhan Behdin et al.
ICML 2024 · poster · arXiv:2403.12983
Position: Do pretrained Transformers Learn In-Context by Gradient Descent?
Lingfeng Shen, Aayush Mishra, Daniel Khashabi
ICML 2024 · poster
Revisiting Character-level Adversarial Attacks for Language Models
Elias Abad Rocamora, Yongtao Wu, Fanghui Liu et al.
ICML 2024 · poster · arXiv:2405.04346
Simple linear attention language models balance the recall-throughput tradeoff
Simran Arora, Sabri Eyuboglu, Michael Zhang et al.
ICML 2024 · spotlight · arXiv:2402.18668
StableSSM: Alleviating the Curse of Memory in State-space Models through Stable Reparameterization
Shida Wang, Qianxiao Li
ICML 2024 · poster · arXiv:2311.14495