
Celina

@hcelina_

coding @ Hugging Face | maintainer of 🤗/huggingface_hub

Celina Reposted

If only we already had a few cool AI companies in Paris


Celina Reposted

SmolLM2 135M in 8-bit runs at almost 180 toks/sec with MLX Swift running fully on my iPhone 15 Pro. H/t the team at @huggingface for the small + high-quality models.


Celina Reposted

How to use PEFT LoRA adapters in llama.cpp, you may ask? Introducing GGUF-my-LoRA, a brand-new space that helps you do just that!
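For context on what the conversion enables: once an adapter is in GGUF form, llama.cpp can apply it to a base model at load time. A minimal sketch using the llama-cpp-python bindings; both file paths are hypothetical placeholders, and `lora_path` is the bindings' adapter argument:

```python
# Sketch: loading a GGUF base model plus a GGUF-converted LoRA adapter
# via the llama-cpp-python bindings. The paths are placeholders for
# files produced by llama.cpp tooling / the GGUF-my-LoRA space.

def load_with_lora(model_path: str, lora_path: str):
    # Deferred import so the sketch can be loaded without
    # llama-cpp-python installed.
    from llama_cpp import Llama

    return Llama(
        model_path=model_path,  # base model in GGUF format
        lora_path=lora_path,    # adapter converted to GGUF
        n_ctx=2048,
    )

# Usage (placeholder paths):
# llm = load_with_lora("base-model.gguf", "adapter.gguf")
# print(llm("Hello, ", max_tokens=16)["choices"][0]["text"])
```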


Celina Reposted

Introducing SmolLM2: the new best fully open 1B-parameter language model. We trained the smol models on up to 11T tokens of meticulously curated data. Fully open-source under Apache 2.0, and we will release all the datasets and training scripts!
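The released checkpoints load with the standard transformers API. A minimal sketch; the checkpoint name below is the published SmolLM2 instruct repo, and the weights download on first use:

```python
# Sketch: generating a short completion from a SmolLM2 checkpoint with
# transformers. The model downloads on the first call.

def generate_with_smollm2(
    prompt: str,
    checkpoint: str = "HuggingFaceTB/SmolLM2-1.7B-Instruct",
) -> str:
    # Deferred imports so the sketch loads without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```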


Celina Reposted

The @huggingface Hub Python library now comes with easy inference for vision-language models! $ pip install huggingface_hub 🤗
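A sketch of what that looks like with `InferenceClient`. The call goes to the remote Inference API, so it needs network access (and typically an HF token); `image_to_text` is the captioning helper, and the model argument is optional, with a per-task default:

```python
# Sketch: vision-language inference through huggingface_hub's
# InferenceClient. Requires network access; leaving `model` as None
# lets the client pick a default model for the task.

def caption_image(image_path: str, model=None):
    # Deferred import so the sketch loads without huggingface_hub installed.
    from huggingface_hub import InferenceClient

    client = InferenceClient(model=model)
    # Accepts a local path, URL, or raw bytes; returns the generated caption.
    return client.image_to_text(image_path)
```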


Celina Reposted

Spaces of the week! We've got Text-to-Image, Text-to-Video, multilingual LLMs, and much more! 🔥


Celina Reposted

🔐 Want safer models? Look no further! We've partnered with @ProtectAICorp and integrated their Guardian scanner into the Hub, enhancing model security for the community 😏 You should see scan results on your repository's page 🔥


Celina Reposted

We want to extend the data-driven filtering approach we used to create the *FineWeb* and *FineWeb-edu* large-scale pretraining datasets to 1,000+ languages. The first step, which proved surprisingly difficult, was finding reliable, high-early-signal evaluations in many languages…


Celina Reposted

How can you deploy and scale open-source AI securely on your own infrastructure? Introducing HUGS: an optimized, zero-configuration inference service by @huggingface that simplifies and accelerates the development of AI applications with open models for companies. 🤗 What is HUGS? 💡…


🫶

You get a good sense of a company's culture when you collaborate with them. Hugging Face's is fantastic.



Celina Reposted

The next release of @huggingface TGI will include @neuralmagic's fused Marlin MoE kernels for AWQ checkpoints (in addition to the existing GPTQ support) 🔥. If you use quantized MoE models, you'll get a large boost in throughput (on GPUs with compute capability >= 8.0) by updating to the latest TGI.


Celina Reposted

PyTorch users, use `huggingface_hub.PyTorchModelHubMixin` as much as you can. It gives your modules almost illegal power. They still will not reach the global minimum (that is a curse no one can fix), but they can be shared easily via the Hugging Face Hub.


Celina Reposted

You can now quantize LLMs for MLX directly in the @huggingface Hub! Thanks to @reach_vb and @pcuenq for setting up the space:


Celina Reposted

> Coolest release today: The Open LLM Leaderboard, aka the best benchmark suite for comparing open LLMs, just released a visualizer to compare any two models together! 🔎 Example below: how does the new nvidia Llama-3.1-Nemotron-70B that we've heard so much about compare with…


Celina Reposted

❤️ You can now run models on 🤗 Hugging Face with Ollama. Let's go open-source and Ollama! 🚀🚀🚀


Fuck it! You can now run *any* GGUF on the Hugging Face Hub directly with @ollama 🔥 This has been a constant ask from the community; starting today, you can point to any of the 45,000 GGUF repos on the Hub* *without any changes whatsoever! ⚡ All you need to do is: ollama run…
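The same hf.co-style model reference also works from the official `ollama` Python client, assuming a local Ollama server is running; the repo string below is a placeholder for any GGUF repo on the Hub:

```python
# Sketch: chatting with a Hub-hosted GGUF through the ollama Python
# client. Needs the `ollama` package and a running Ollama server; the
# repo argument is a placeholder like "<user>/<repo>".

def chat_with_hub_gguf(repo: str, prompt: str) -> str:
    # Deferred import so the sketch loads without the ollama package.
    import ollama

    response = ollama.chat(
        model=f"hf.co/{repo}",  # Ollama pulls the GGUF straight from the Hub
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]
```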


