Poster Papers
24,624 papers found • Page 393 of 493
Improving Point-based Crowd Counting and Localization Based on Auxiliary Point Guidance
I-Hsiang Chen, Wei-Ting Chen, Yu-Wei Liu et al.
Improving protein optimization with smoothed fitness landscapes
Andrew Kirjner, Jason Yim, Raman Samusevich et al.
Improving Prototypical Visual Explanations with Reward Reweighing, Reselection, and Retraining
Aaron Li, Robin Netzorg, Zhihan Cheng et al.
Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures
Sayanton Vhaduri Dibbo, Adam Breuer, Juston Moore et al.
Improving Robustness to Multiple Spurious Correlations by Multi-Objective Optimization
Nayeong Kim, Juwon Kang, Sungsoo Ahn et al.
Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games
Songtao Feng, Ming Yin, Yu-Xiang Wang et al.
Improving SAM Requires Rethinking its Optimization Formulation
Wanyun Xie, Fabian Latorre, Kimon Antonakopoulos et al.
Improving Semantic Correspondence with Viewpoint-Guided Spherical Maps
Octave Mariotti, Oisin Mac Aodha, Hakan Bilen
Improving Sharpness-Aware Minimization by Lookahead
Runsheng Yu, Youzhi Zhang, James Kwok
Improving Single Domain-Generalized Object Detection: A Focus on Diversification and Alignment
Muhammad Sohail Danish, Muhammad Haris Khan, Muhammad Akhtar Munir et al.
Improving Spectral Snapshot Reconstruction with Spectral-Spatial Rectification
Jiancheng Zhang, Haijin Zeng, Yongyong Chen et al.
Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance
Kelvin C.K. Chan, Yang Zhao, Xuhui Jia et al.
Improving Text-guided Object Inpainting with Semantic Pre-inpainting
Yifu Chen, Jingwen Chen, Yingwei Pan et al.
Improving the Convergence of Dynamic NeRFs via Optimal Transport
Sameera Ramasinghe, Violetta Shevchenko, Gil Avraham et al.
Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation
Haojie Zhang, Yongyi Su, Xun Xu et al.
Improving Token-Based World Models with Parallel Observation Prediction
Lior Cohen, Kaixin Wang, Bingyi Kang et al.
Improving Training Efficiency of Diffusion Models via Multi-Stage Framework and Tailored Multi-Decoder Architecture
Huijie Zhang, Yifu Lu, Ismail Alkhouri et al.
Improving Transferable Targeted Adversarial Attacks with Model Self-Enhancement
Han Wu, Guanyan Ou, Weibin Wu et al.
Improving Transformers with Dynamically Composable Multi-Head Attention
Da Xiao, Qingye Meng, Shengping Li et al.
Improving Unsupervised Domain Adaptation: A Pseudo-Candidate Set Approach
Aveen Dayal, Rishabh Lalla, Linga Reddy Cenkeramaddi et al.
Improving Unsupervised Hierarchical Representation with Reinforcement Learning
Ruyi An, Yewen Li, Xu He et al.
Improving Video Segmentation via Dynamic Anchor Queries
Yikang Zhou, Tao Zhang, Xiangtai Li et al.
Improving Virtual Try-On with Garment-focused Diffusion Models
Siqi Wan, Yehao Li, Jingwen Chen et al.
Improving Vision and Language Concepts Understanding with Multimodal Counterfactual Samples
Chengen Lai, Shengli Song, Sitong Yan et al.
Improving Visual Recognition with Hyperbolical Visual Hierarchy Mapping
Hyeongjun Kwon, Jinhyun Jang, Jin Kim et al.
Improving Zero-Shot Generalization for CLIP with Variational Adapter
Ziqian Lu, Fengli Shen, Mushui Liu et al.
Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation
Marco Mistretta, Alberto Baldrati, Marco Bertini et al.
IMPUS: Image Morphing with Perceptually-Uniform Sampling Using Diffusion Models
Zhaoyuan Yang, Zhengyang Yu, Zhiwei Xu et al.
IM-Unpack: Training and Inference with Arbitrarily Low Precision Integers
Zhanpeng Zeng, Karthikeyan Sankaralingam, Vikas Singh
In2SET: Intra-Inter Similarity Exploiting Transformer for Dual-Camera Compressive Hyperspectral Imaging
Xin Wang, Lizhi Wang, Xiangtian Ma et al.
Incentive-Aware Federated Learning with Training-Time Model Rewards
Zhaoxuan Wu, Mohammad Mohammadi Amiri, Ramesh Raskar et al.
Incentivized Learning in Principal-Agent Bandit Games
Antoine Scheid, Daniil Tiapkin, Etienne Boursier et al.
Incentivized Truthful Communication for Federated Bandits
Zhepei Wei, Chuanhao Li, Tianze Ren et al.
InceptionNeXt: When Inception Meets ConvNeXt
Weihao Yu, Pan Zhou, Shuicheng Yan et al.
In-context Autoencoder for Context Compression in a Large Language Model
Tao Ge, Hu Jing, Lei Wang et al.
In-context Convergence of Transformers
Yu Huang, Yuan Cheng, Yingbin Liang
In-Context Decision Transformer: Reinforcement Learning via Hierarchical Chain-of-Thought
Sili Huang, Jifeng Hu, Hechang Chen et al.
In-context Exploration-Exploitation for Reinforcement Learning
Zhenwen Dai, Federico Tomasi, Sina Ghiassian
In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization
Herilalaina Rakotoarison, Steven Adriaensen, Neeratyoy Mallik et al.
In-Context Language Learning: Architectures and Algorithms
Ekin Akyürek, Bailin Wang, Yoon Kim et al.
In-Context Learning Agents Are Asymmetric Belief Updaters
Johannes A. Schubert, Akshay Kumar Jagadish, Marcel Binz et al.
In-Context Learning Learns Label Relationships but Is Not Conventional Learning
Jannik Kossen, Yarin Gal, Tom Rainforth
In-context Learning on Function Classes Unveiled for Transformers
Zhijie Wang, Bo Jiang, Shuai Li
In-Context Learning through the Bayesian Prism
Madhur Panwar, Kabir Ahuja, Navin Goyal
In-Context Principle Learning from Mistakes
Tianjun Zhang, Aman Madaan, Luyu Gao et al.
In-Context Reinforcement Learning for Variable Action Spaces
Viacheslav Sinii, Alexander Nikulin, Vladislav Kurenkov et al.
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Shiqi Chen, Miao Xiong, Junteng Liu et al.
In-Context Unlearning: Language Models as Few-Shot Unlearners
Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju
In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering
Sheng Liu, Haotian Ye, Lei Xing et al.
Incorporating Geo-Diverse Knowledge into Prompting for Increased Geographical Robustness in Object Recognition
Kyle Buettner, Sina Malakouti, Xiang Li et al.