Zhaochong An
PhD student at University of Copenhagen🇩🇰
Member of @belongielab.org, ELLIS @ellis.eu, and Pioneer Centre for AI🤖
Computer Vision | Multimodality
MSc CS at ETH Zurich
🔗: zhaochongan.github.io/
- I will present our #ICLR2025 Spotlight paper MM-FSS this week in Singapore! Curious how MULTIMODALITY can enhance FEW-SHOT 3D SEGMENTATION WITHOUT any additional cost? Come chat with us at the poster session — always happy to connect! 🤝 🗓️ Fri 25 Apr, 3:00–5:30 pm 📍 Hall 3 + Hall 2B #504
- Thrilled to announce "Multimodality Helps Few-shot 3D Point Cloud Semantic Segmentation" has been accepted as a Spotlight (top 5%) at #ICLR2025! Our model MM-FSS leverages 3D, 2D, and text modalities for robust few-shot 3D segmentation — all without extra labeling cost. 🤩 arxiv.org/pdf/2410.22489 More details 👇
- 1/ Our previous work, COSeg, showed that explicitly modeling support–query relationships via correlation optimization achieves SOTA 3D few-shot segmentation. With MM-FSS, we take this even further! Ref: COSeg arxiv.org/pdf/2410.22489