Pedro Cuenca
ML Engineer at Hugging Face
- Google's FunctionGemma is out 🥳 smol 🤏 270M (not B!) parameter model. Why is this interesting? 🔨 Designed for tool calling. 📲 Perfect for on-device use. 👌 Dramatically improves performance on your domain with fine-tuning.
- Model: huggingface.co/google/func... · MLX quants by @Prince_Canuma: huggingface.co/collections... · Amazing game by @xenovacom: huggingface.co/spaces/webm...
- Get inspired, follow the fine-tuning guide, and build! x.com/ben_burtens...
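To make the "designed for tool calling" point concrete, here is a minimal sketch of the loop such a model slots into: the model only emits a structured call, and the app parses it and dispatches to real functions. The JSON call format and the `get_weather` tool below are hypothetical stand-ins, not FunctionGemma's actual schema — see the model card and fine-tuning guide linked above for the real format.

```python
import json

def get_weather(city: str) -> str:
    """Example tool the model can invoke (stubbed for illustration)."""
    return f"Sunny in {city}"

# Registry mapping tool names to callables.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call like {"name": ..., "args": {...}} and run it."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["args"])

# Stand-in for a real model response:
fake_model_output = '{"name": "get_weather", "args": {"city": "Paris"}}'
print(dispatch(fake_model_output))  # Sunny in Paris
```

The model never executes anything itself; fine-tuning on your own tools mainly teaches it to emit the right `name` and `args` reliably.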
- Reposted by Pedro Cuenca: JetBrains has been quietly building something special for the open-source LLM community. More details will be posted soon on Hugging Face. Stay tuned! 🧑‍💻
- Reposted by Pedro Cuenca: Announcing Global-MMLU - an improved, open MMLU dataset with evaluation coverage across 42 languages. The result of months of work with the goal of advancing multilingual LLM evaluation. Built together with the community and amazing collaborators at Cohere4AI, MILA, MIT, and many more.
- Reposted by Pedro Cuenca: The amazing new Qwen2.5-Coder 32B model can now write SQL for any @hf.co dataset ✨
- Reposted by Pedro Cuenca: So many open-source and open releases last week! Here's a recap, find the text-readable version here huggingface.co/posts/merve/...
- Congrats!
- Reposted by Pedro Cuenca: This is insane! Structured generation in the browser with the new @hf.co SmolLM2-1.7B model • Tiny 1.7B LLM running at 88 tokens / second ⚡ • Powered by MLC/WebLLM on WebGPU 🔥 • JSON structured generation entirely in the browser 🤏
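The core trick behind JSON structured generation is grammar-constrained decoding: at every step the sampler may only pick tokens that keep the partial output a valid prefix of the grammar, so the final string is guaranteed to parse. Here is a toy, self-contained sketch of that idea — the "model" is just random sampling over the allowed characters, and the grammar (a fixed `{"answer": <integer>}` template) is my own simplification, not what MLC/WebLLM actually implements.

```python
import json
import random

PREFIX = '{"answer": '   # forced structural tokens
SUFFIX = '}'
DIGITS = "0123456789"

def allowed_next(partial: str) -> str:
    """Characters allowed after `partial` under the toy grammar
    {"answer": <1-4 digit integer, no leading zero>}."""
    if len(partial) < len(PREFIX):
        return PREFIX[len(partial)]   # structure is forced, one char at a time
    body = partial[len(PREFIX):]
    if body.endswith(SUFFIX):
        return ""                     # generation finished
    if len(body) == 0:
        return "123456789"            # JSON numbers can't start with 0
    if len(body) < 4:
        return DIGITS + SUFFIX        # continue the number or close
    return SUFFIX                     # force close after 4 digits

def generate(seed: int = 0) -> str:
    """'Sample' a completion; a real system would mask an LLM's logits
    with exactly the same allowed-token function."""
    rng = random.Random(seed)
    out = ""
    while True:
        choices = allowed_next(out)
        if not choices:
            return out
        out += rng.choice(choices)

sample = generate()
parsed = json.loads(sample)           # always valid JSON, by construction
print(sample, parsed["answer"])
```

The guarantee comes entirely from `allowed_next`: no matter how the model samples, invalid continuations are masked out, which is why the browser demo can promise well-formed JSON.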
- Reposted by Pedro Cuenca: I'm disheartened by how toxic and violent some responses were here. There was a mistake, a quick follow-up to mitigate, and an apology. I worked with Daniel for years, and he is one of the people most concerned with the ethical implications of AI. Some replies are Reddit-toxic level. We need empathy.
- Reposted by Pedro Cuenca: We’re looking for an intern to join our SmolLM team! If you’re excited about training LLMs and building high-quality datasets, we’d love to hear from you. 🤗 US: apply.workable.com/huggingface/... EMEA: apply.workable.com/huggingface/...
- SmolVLM was just released 🚀 It's a great, small, and fully open VLM that I'm really excited about for fine-tuning and on-device use cases 💻 It also comes with 0-day MLX support via mlx-vlm, here it is running at > 80 tok/s on my M1 Max 🤯
- More info: ⛰️ Andi's post bsky.app/profile/andi... 📖 Blog post huggingface.co/blog/smolvlm 💃🏻 Models huggingface.co/collections/... 🎮 HF Demo huggingface.co/spaces/Huggi... 🔨 mlx-vlm PR [WIP] github.com/Blaizzy/mlx-...