Erik
SWE @hf.co
- We have Nvidia B200s ready to go for you in Hugging Face Inference Endpoints 🔥 I tried them out myself and the performance is amazing. On top of that, we just got a fresh batch of H100s as well. At $4.50/hour, it's a clear winner in price/perf compared to the A100.
- We just refreshed 🍋 our analytics in @hf.co endpoints. More info below!
- Morning workout at the @hf.co Paris office is imo one of the best perks.
- Gemma 3 is live 🔥 You can deploy it from Endpoints directly with optimally selected hardware and configuration. Give it a try 👇
- Apparently, mom is a better engineer than I am.
- Today, as part of a course, I implemented a program that takes a bit stream like 10001001110111101000100111111011 and decodes the Intel 8088 assembly from it: mov si, bx / mov bx, di. It only works on the MOV instruction, register to register. Code: github.com/ErikKaum/bit...
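A minimal sketch of how that decoding might work (this is my reconstruction, not the linked repo's code): in the 8086/8088 encoding, a register-to-register MOV is two bytes — opcode 100010dw followed by a mod-reg-r/m byte, where d picks the direction, w picks 8- vs 16-bit registers, and mod=11 means register mode.

```python
# Decode register-to-register MOV instructions (opcode 100010dw)
# from a string of bits, per the Intel 8086/8088 encoding.

REGS_W1 = ["ax", "cx", "dx", "bx", "sp", "bp", "si", "di"]  # w=1 (16-bit)
REGS_W0 = ["al", "cl", "dl", "bl", "ah", "ch", "dh", "bh"]  # w=0 (8-bit)

def decode_movs(bits: str) -> list[str]:
    out = []
    for i in range(0, len(bits) - 15, 16):
        b1 = int(bits[i:i + 8], 2)       # opcode byte
        b2 = int(bits[i + 8:i + 16], 2)  # mod-reg-r/m byte
        assert b1 >> 2 == 0b100010, "only MOV handled"
        d = (b1 >> 1) & 1                # 0: reg is source, 1: reg is dest
        w = b1 & 1                       # word (1) or byte (0) registers
        mod = b2 >> 6
        reg = (b2 >> 3) & 0b111
        rm = b2 & 0b111
        assert mod == 0b11, "only register-to-register handled"
        table = REGS_W1 if w else REGS_W0
        if d == 0:
            dst, src = table[rm], table[reg]
        else:
            dst, src = table[reg], table[rm]
        out.append(f"mov {dst}, {src}")
    return out

print(decode_movs("10001001110111101000100111111011"))
# → ['mov si, bx', 'mov bx, di']
```

Feeding it the bit stream from the post recovers exactly the two instructions shown.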
- Ambition is a paradox. You should always aim higher, but that easily becomes a state where you're never satisfied. Just reached 10k MRR. Now there's the next goal of 20k. Sharif has a good talk on this: emotional runway. How do you deal with this paradox? video: www.youtube.com/watch?v=zUnQ...
- Qui Gon Jinn sharing some insightful prompting wisdom 👌🏼
- it's this time of the year 😍
- Reposted by Erik: Let's go! We are releasing SmolVLM, a smol 2B VLM built for on-device inference that outperforms all models at similar GPU RAM usage and token throughput. SmolVLM can be fine-tuned in a Google Colab and run on a laptop! Or process millions of documents with a consumer GPU!