- We've all heard about CLIP, but have you heard about GCL? Generalized Contrastive Learning (GCL) is a generalization of CLIP: it accommodates any number of text and image fields when representing documents, and it also encodes relevance (or rank) to provide better first-stage retrieval. 🧵 (1/5)
- GCL extends the benefits of CLIP's multimodal contrastive learning but adds the flexibility to handle many aspects of real-world data, such as continuous relevance scores and varying data sources (a minimal sketch of the weighted-loss idea is below the thread). Metrics are in the article or in the rest of this thread! GCL Article: www.marqo.ai/blog/general... 🧵 (2/5)
- GCL achieves a 94.5% increase in NDCG@10 and a 504% increase in ERR@10 for in-domain evaluations, and 26.3 - 48.8% (NDCG@10) and 44.3 - 108.0% (ERR@10) increases for cold-start evaluations, all measured relative to the CLIP baseline. 🧵 (3/5)
- Compared to a keyword-search-only BM25 baseline, GCL shows improvements of 300 - 750% across NDCG@10 and ERR@10 for in-domain and cold-start evaluations respectively. GCL paper: arxiv.org/pdf/2404.08535 🧵 (4/5)
- With this, we built Marqtune, an embedding model training platform built on our GCL training framework. This means better, more relevant search results and recommendations. 🧵 (5/5) Learn more about Marqtune: www.marqo.ai/blog/introdu...
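For anyone curious about the mechanics, here is a minimal sketch (in PyTorch) of the two ideas from the thread: combining multiple text/image fields into a single document embedding with per-field weights, and weighting an in-batch, CLIP-style contrastive loss by a continuous relevance score. This is an illustrative assumption of how such a loss can look, not Marqo's or the paper's actual implementation; names like `document_embedding`, `gcl_style_loss`, `field_weights`, and `relevance` are hypothetical.

```python
# Illustrative GCL-style sketch (not Marqo's actual code).
import torch
import torch.nn.functional as F


def document_embedding(field_embs: torch.Tensor, field_weights: torch.Tensor) -> torch.Tensor:
    """Combine several field embeddings (e.g. title, image, description) into one
    document vector via a weighted sum, then L2-normalize.

    field_embs:    (batch, num_fields, dim) -- already-encoded fields
    field_weights: (num_fields,)            -- relative importance of each field
    """
    doc = (field_embs * field_weights.view(1, -1, 1)).sum(dim=1)
    return F.normalize(doc, dim=-1)


def gcl_style_loss(query_embs: torch.Tensor,
                   doc_embs: torch.Tensor,
                   relevance: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """In-batch contrastive loss where each query-document pair carries a
    continuous relevance weight instead of a hard 0/1 label.

    query_embs: (batch, dim), L2-normalized
    doc_embs:   (batch, dim), L2-normalized
    relevance:  (batch,) in [0, 1] -- e.g. derived from ranks or click data
    """
    logits = query_embs @ doc_embs.t() / temperature           # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric InfoNCE terms, as in CLIP...
    loss_q = F.cross_entropy(logits, targets, reduction="none")
    loss_d = F.cross_entropy(logits.t(), targets, reduction="none")
    # ...but each pair's contribution is scaled by its relevance weight,
    # so highly relevant pairs pull harder than weakly relevant ones.
    per_pair = 0.5 * (loss_q + loss_d) * relevance
    return per_pair.mean()
```

The design point this sketch tries to capture is the one in the thread: instead of treating every positive pair as equally positive (as plain CLIP does), the loss gets to see how relevant each document actually is, and multiple fields per document are folded into one embedding rather than assuming exactly one image and one caption.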