Celina
@hcelina_coding @ Hugging Face | maintainer of 🤗/huggingface_hub
If only we already had a few cool AI companies in Paris
SmolLM2 135M in 8-bit runs at almost 180 toks/sec with MLX Swift running fully on my iPhone 15 Pro. H/t the team at @huggingface for the small + high-quality models.
How do you use PEFT LoRA adapters in llama.cpp, you may ask? Introducing GGUF-my-LoRA, a brand-new Space that helps you do just that!
Introducing SmolLM2: the new best-in-class open 1B-parameter language model. We trained smol models on up to 11T tokens of meticulously curated datasets. Fully open source under Apache 2.0, and we will release all the datasets and training scripts!
✨ @huggingface Hub Python library now comes with easy inference for vision language models! $ pip install huggingface_hub 🤗
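A minimal sketch of what that VLM inference looks like: the `InferenceClient` in `huggingface_hub` accepts OpenAI-style chat messages that mix text with an image URL. The model id and image URL below are placeholders of my own, not from the post; the actual network call is left commented out.

```python
# Build an OpenAI-style multimodal chat payload for huggingface_hub's
# InferenceClient. The helper name below is illustrative, not a library API.
def build_vlm_messages(image_url: str, question: str) -> list:
    """One user turn containing an image URL plus a text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        }
    ]

messages = build_vlm_messages("https://example.com/cat.png", "What is in this image?")

# The actual call (needs network access and, typically, an HF token), e.g.:
# from huggingface_hub import InferenceClient
# client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
# out = client.chat_completion(messages, max_tokens=100)
# print(out.choices[0].message.content)
```

The same payload shape works for any chat-capable vision-language model served through the client.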
Spaces of the week! - we've got Text to Image, Text to Video, Multilingual LLMs and much more! 🔥
🔐 Want safer models? Look no further! We've partnered with @ProtectAICorp and integrated their Guardian scanner into the Hub, enhancing model security for the community 😏 You should see scan results on your repository's page 🔥
We want to extend the data-driven filtering approach we used to create the *FineWeb* and *FineWeb-edu* large-scale pretraining datasets to 1,000+ languages. The first step, which proved surprisingly difficult, was to find reliable, high-early-signal evaluations in many languages…
How can you deploy and scale open-source AI securely on your infrastructure? Introducing HUGS—an optimized, zero-configuration inference service by @huggingface that simplifies and accelerates the development of AI applications with open models for companies. 🤗 What is HUGS? 💡…
🫶
The next release of @huggingface TGI will include @neuralmagic's fused Marlin MoE kernels for AWQ checkpoints (in addition to the existing GPTQ support) 🔥. If you use quantized MoE models you'll get a large boost in throughput (on GPUs with compute capability >= 8) by updating to the latest TGI.
PyTorch users, use the `huggingface_hub.PyTorchModelHubMixin` as much as you can. It provides illegal power to your modules. They still won't reach the global minimum (that's a curse no one can fix), but they can be shared easily on the Hugging Face Hub.
You can now quantize LLMs for MLX directly in the @huggingface Hub! Thanks to @reach_vb and @pcuenq for setting up the space:
> Coolest release today: The Open LLM Leaderboard, aka the best benchmark suite for comparing open LLMs, just released a visualizer to compare any two models together! 🔎 Example below: how does the new nvidia Llama-3.1-Nemotron-70B that we've heard so much about compare with…
❤️ You can now run models on 🤗 Hugging Face with Ollama. Let's go open-source and Ollama! 🚀🚀🚀
Fuck it! You can now run *any* GGUF on the Hugging Face Hub directly with @ollama 🔥 This has been a constant ask from the community, starting today you can point to any of the 45,000 GGUF repos on the Hub* *Without any changes whatsoever! ⚡ All you need to do is: ollama run…