Poster Papers


Texture-GS: Disentangle the Geometry and Texture for 3D Gaussian Splatting Editing

Tian-Xing Xu, Wenbo Hu, Yu-Kun Lai et al.

ECCV 2024 poster · arXiv:2403.10050
40 citations

Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On

Xu Yang, Changxing Ding, Zhibin Hong et al.

CVPR 2024 poster · arXiv:2404.01089
37 citations

TexVocab: Texture Vocabulary-conditioned Human Avatars

Yuxiao Liu, Zhe Li, Yebin Liu et al.

CVPR 2024 poster · arXiv:2404.00524
4 citations

TF-FAS: Twofold-Element Fine-Grained Semantic Guidance for Generalizable Face Anti-Spoofing

Xudong Wang, Ke-Yue Zhang, Taiping Yao et al.

ECCV 2024 poster
11 citations

The Alignment Problem from a Deep Learning Perspective

Richard Ngo, Lawrence Chan, Sören Mindermann

ICLR 2024 poster · arXiv:2209.00626

The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World

Weiyun Wang, Min Shi, Qingyun Li et al.

ICLR 2024 poster · arXiv:2308.01907
118 citations

The All-Seeing Project V2: Towards General Relation Comprehension of the Open World

Weiyun Wang, Yiming Ren, Haowen Luo et al.

ECCV 2024 poster · arXiv:2402.19474
86 citations

The Audio-Visual Conversational Graph: From an Egocentric-Exocentric Perspective

Wenqi Jia, Miao Liu, Hao Jiang et al.

CVPR 2024 poster · arXiv:2312.12870
15 citations

The Balanced-Pairwise-Affinities Feature Transform

Daniel Shalam, Simon Korman

ICML 2024 poster · arXiv:2407.01467

The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents

Yatin Dandi, Emanuele Troiani, Luca Arnaboldi et al.

ICML 2024 poster · arXiv:2402.03220

The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing

Shen Nie, Hanzhong Guo, Cheng Lu et al.

ICLR 2024 poster · arXiv:2311.01410
59 citations

The Computational Complexity of Finding Second-Order Stationary Points

Andreas Kontogiannis, Vasilis Pollatos, Sotiris Kanellopoulos et al.

ICML 2024 poster

The Cost of Scaling Down Large Language Models: Reducing Model Size Affects Memory before In-context Learning

Tian Jin, Nolan Clement, Xin Dong et al.

ICLR 2024 poster

The Curse of Diversity in Ensemble-Based Exploration

Zhixuan Lin, Pierluca D'Oro, Evgenii Nikishin et al.

ICLR 2024 poster · arXiv:2405.04342
6 citations

The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing

Denis Bobkov, Vadim Titov, Aibek Alanov et al.

CVPR 2024 poster · arXiv:2406.10601

The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Language Models

Yan Liu, Yu Liu, Xiaokang Chen et al.

ICLR 2024 poster

The Devil is in the Object Boundary: Towards Annotation-free Instance Segmentation using Foundation Models

Cheng Shi, Sibei Yang

ICLR 2024 poster · arXiv:2404.11957

The Devil is in the Statistics: Mitigating and Exploiting Statistics Difference for Generalizable Semi-supervised Medical Image Segmentation

Muyang Qiu, Jian Zhang, Lei Qi et al.

ECCV 2024 poster · arXiv:2407.11356
7 citations

The Effectiveness of Random Forgetting for Robust Generalization

Vijaya Raghavan T Ramkumar, Bahram Zonooz, Elahe Arani

ICLR 2024 poster · arXiv:2402.11733

The Effect of Intrinsic Dataset Properties on Generalization: Unraveling Learning Differences Between Natural and Medical Images

Nicholas Konz, Maciej Mazurowski

ICLR 2024 poster · arXiv:2401.08865
14 citations

The Effect of Weight Precision on the Neuron Count in Deep ReLU Networks

Songhua He, Periklis Papakonstantinou

ICML 2024 poster

The Emergence of Reproducibility and Consistency in Diffusion Models

Huijie Zhang, Jinfan Zhou, Yifu Lu et al.

ICML 2024 poster

The Entropy Enigma: Success and Failure of Entropy Minimization

Ori Press, Ravid Shwartz-Ziv, Yann LeCun et al.

ICML 2024 poster · arXiv:2405.05012

The Expressive Power of Low-Rank Adaptation

Yuchen Zeng, Kangwook Lee

ICLR 2024 poster · arXiv:2310.17513

The Expressive Power of Path-Based Graph Neural Networks

Caterina Graziani, Tamara Drucks, Fabian Jogl et al.

ICML 2024 poster

The Expressive Power of Transformers with Chain of Thought

William Merrill, Ashish Sabharwal

ICLR 2024 poster · arXiv:2310.07923

The Fabrication of Reality and Fantasy: Scene Generation with LLM-Assisted Prompt Interpretation

Yi Yao, Chan-Feng Hsu, Jhe-Hao Lin et al.

ECCV 2024 poster · arXiv:2407.12579

The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models?

Qinyu Zhao, Ming Xu, Kartik Gupta et al.

ECCV 2024 poster · arXiv:2403.09037
15 citations

The Fundamental Limits of Least-Privilege Learning

Theresa Stadler, Bogdan Kulynych, Michael Gastpar et al.

ICML 2024 poster · arXiv:2402.12235

The Gaussian Discriminant Variational Autoencoder (GdVAE): A Self-Explainable Model with Counterfactual Explanations

Anselm Haselhoff, Kevin Trelenberg, Fabian Küppers et al.

ECCV 2024 poster · arXiv:2409.12952
5 citations

The Generative AI Paradox: “What It Can Create, It May Not Understand”

Peter West, Ximing Lu, Nouha Dziri et al.

ICLR 2024 poster

The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective

Chi-Heng Lin, Chiraag Kaushik, Eva Dyer et al.

ICML 2024 poster · arXiv:2210.05021

The Good, The Bad, and Why: Unveiling Emotions in Generative AI

Cheng Li, Jindong Wang, Yixuan Zhang et al.

ICML 2024 poster · arXiv:2312.11111

The Hard Positive Truth about Vision-Language Compositionality

Amita Kamath, Cheng-Yu Hsieh, Kai-Wei Chang et al.

ECCV 2024 poster · arXiv:2409.17958
15 citations

The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry

Michael Zhang, Kush Bhatia, Hermann Kumbong et al.

ICLR 2024 poster · arXiv:2402.04347
84 citations

The Hidden Language of Diffusion Models

Hila Chefer, Oran Lang, Mor Geva et al.

ICLR 2024 poster · arXiv:2306.00966
33 citations

The Human-AI Substitution game: active learning from a strategic labeler

Tom Yan, Chicheng Zhang

ICLR 2024 poster

The Illusion of State in State-Space Models

William Merrill, Jackson Petty, Ashish Sabharwal

ICML 2024 poster · arXiv:2404.08819

The importance of feature preprocessing for differentially private linear optimization

Ziteng Sun, Ananda Theertha Suresh, Aditya Krishna Menon

ICLR 2024 poster · arXiv:2307.11106

The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting — An Analytical Model

Daniel Goldfarb, Itay Evron, Nir Weinberger et al.

ICLR 2024 poster · arXiv:2401.12617

The Linear Representation Hypothesis and the Geometry of Large Language Models

Kiho Park, Yo Joong Choe, Victor Veitch

ICML 2024 poster · arXiv:2311.03658

The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing

Blaise Delattre, Alexandre Araujo, Quentin Barthélemy et al.

ICLR 2024 poster · arXiv:2309.16883
6 citations

The LLM Surgeon

Tycho van der Ouderaa, Markus Nagel, Mart van Baalen et al.

ICLR 2024 poster · arXiv:2312.17244

The Lottery Ticket Hypothesis in Denoising: Towards Semantic-Driven Initialization

Jiafeng Mao, Xueting Wang, Kiyoharu Aizawa

ECCV 2024 poster · arXiv:2312.08872
11 citations

The Manga Whisperer: Automatically Generating Transcriptions for Comics

Ragav Sachdeva, Andrew Zisserman

CVPR 2024 poster · arXiv:2401.10224

The Marginal Value of Momentum for Small Learning Rate SGD

Runzhe Wang, Sadhika Malladi, Tianhao Wang et al.

ICLR 2024 poster · arXiv:2307.15196

The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm

Giseung Park, Woohyeon Byeon, Seongmin Kim et al.

ICML 2024 poster · arXiv:2406.07826

The mechanistic basis of data dependence and abrupt learning in an in-context classification task

Gautam Reddy Nallamala

ICLR 2024 poster

The Merit of River Network Topology for Neural Flood Forecasting

Nikolas Kirschstein, Yixuan Sun

ICML 2024 poster · arXiv:2405.19836

The Mirrored Influence Hypothesis: Efficient Data Influence Estimation by Harnessing Forward Passes

Myeongseob Ko, Feiyang Kang, Weiyan Shi et al.

CVPR 2024 poster · arXiv:2402.08922