2024 "instance segmentation" Papers
17 papers found
A Simple Background Augmentation Method for Object Detection with Diffusion Model
Yuhang Li, Xin Dong, Chen Chen et al.
Cached Transformers: Improving Transformers with Differentiable Memory Cache
Zhaoyang Zhang, Wenqi Shao, Yixiao Ge et al.
DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs
Donghyun Kim, Byeongho Heo, Dongyoon Han
DetKDS: Knowledge Distillation Search for Object Detectors
Lujun Li, Yufan Bao, Peijie Dong et al.
FipTR: A Simple yet Effective Transformer Framework for Future Instance Prediction in Autonomous Driving
Xingtai Gui, Tengteng Huang, Haonan Shao et al.
Four Ways to Improve Verbo-visual Fusion for Dense 3D Visual Grounding
Ozan Unal, Christos Sakaridis, Suman Saha et al.
Generative Active Learning for Long-tailed Instance Segmentation
Muzhi Zhu, Chengxiang Fan, Hao Chen et al.
InsMapper: Exploring Inner-instance Information for Vectorized HD Mapping
Zhenhua Xu, Kwan-Yee K. Wong, Hengshuang Zhao
MMVR: Millimeter-wave Multi-View Radar Dataset and Benchmark for Indoor Perception
Mohammad Mahbubur Rahman, Ryoma Yataka, Sorachi Kato et al.
OmniNOCS: A unified NOCS dataset and model for 3D lifting of 2D objects
Akshay Krishnan, Abhijit Kundu, Kevis Maninis et al.
One Step Learning, One Step Review
Xiaolong Huang, Qiankun Li, Xueran Li et al.
Quality Assured: Rethinking Annotation Strategies in Imaging AI
Tim Rädsch, Annika Reinke, Vivienn Weru et al.
Removing Rows and Columns of Tokens in Vision Transformer enables Faster Dense Prediction without Retraining
Diwei Su, Cheng Fei, Jianxu Luo
SegGen: Supercharging Segmentation Models with Text2Mask and Mask2Img Synthesis
Hanrong Ye, Jason Wen Yong Kuen, Qing Liu et al.
Segment, Lift and Fit: Automatic 3D Shape Labeling from 2D Prompts
Jianhao Li, Tianyu Sun, Zhongdao Wang et al.
Semantic-Aware Autoregressive Image Modeling for Visual Representation Learning
Kaiyou Song, Shan Zhang, Tong Wang
Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once
Zhangheng Li, Shiwei Liu, Tianlong Chen et al.