Prince Canuma

@Prince_Canuma

Apple MLX King 🤴🏽• ML Research Engineer @arcee_ai👨🏾‍💻 • MLOps • LLMs • RAG • Speaker • Writer • Ex-@neptune_ai • https://t.co/iZnxoefJBU


Pinned

New video is OUT 🎉🚀 Get started with Gemma 2 locally on your Mac using MLX. In this video, we'll explore how to convert and run Google's Gemma 2 language model locally on your Mac using the MLX framework. You'll learn:
- What Google Gemma 2 is and its variants
- How to convert a…

Tweet Image 1

Molmo won today. I will try again tomorrow 😁

Molmo port to MLX update 🔥🚀 Language Model is finally speaking English :) Next: Vision Model

Tweet Image 1


Quality of code 📉

Coding with AI in 2024.



The new Devin 👀 It’s a nice demo, and I think the team worked incredibly hard. Kudos! However, I don’t think it’s a product for ML professionals yet. ML is experimental in nature, and the discoveries that matter don’t always come from just taking data and training. You could…

NEO’s score qualifies it as a Kaggle Grandmaster, effectively bringing world-class ML expertise to your fingertips. NEO is getting ready for early beta users. Join our waitlist here: heyneo.so/waitlist




😎

My prayers have been answered 🙏 I'm calling it now, Apple Silicon is going to be the best ecosystem for the future of AI and semi-autonomous agents



The best there is 🔥 I love the daily mail


🚀😎

🚀 Qwen2-VL-2B-Q4 + M1 Pro 🚀
1-2 weeks old setup:
Prompt: 9.441 t/s
Generation: 31.157 t/s
Updated mlx to 0.20.0:
Prompt: 14.268 t/s
Generation: 36.463 t/s
Switched mlx-vlm to main branch:
Prompt: 13.025 t/s
Generation: 73.516 t/s
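For anyone curious, the relative gains implied by these tokens/sec figures can be worked out directly. A quick sketch (the numbers are the ones quoted in the tweet; the helper name is mine):

```python
# Percent speedups implied by the Qwen2-VL-2B-Q4 + M1 Pro numbers above.

def pct_faster(new_tps: float, old_tps: float) -> float:
    """Relative speedup of new_tps over old_tps, in percent."""
    return (new_tps / old_tps - 1) * 100

baseline = {"prompt": 9.441, "generation": 31.157}      # 1-2 weeks old setup
mlx_0_20 = {"prompt": 14.268, "generation": 36.463}     # after updating mlx to 0.20.0
mlx_vlm_main = {"prompt": 13.025, "generation": 73.516} # mlx-vlm main branch

for label, run in [("mlx 0.20.0", mlx_0_20), ("mlx-vlm main", mlx_vlm_main)]:
    for stage in ("prompt", "generation"):
        print(f"{label} {stage}: +{pct_faster(run[stage], baseline[stage]):.0f}%")
```

The headline number is generation on the mlx-vlm main branch: roughly a 2.4x throughput jump over the older setup.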



All thanks to @awnihannun ❤️

🚀 MLX-VLM upcoming release performance gains for Qwen2-VL:
M3 Max (96GB):
• 8-bit: ⚡️ 20% faster
• 4-bit: ⚡️ 33% faster
M2 Ultra (192GB):
• 4-bit: 🤯 64% faster inference
Qwen2-VL is about to get seriously fast.

Tweet Image 1
Tweet Image 2


“Don’t try to find the best design in software architecture; instead, strive for the least worst combination of trade-offs.”

Tweet Image 1

Cool! Who wants to run this model on MLX VLM?

Vision-language models (VLMs) are revolutionizing how we use Earth observation (EO) data, but none could reason over time—a critical need for applications like disaster relief—until now. Introducing TEOChat 🌍🤖, the first VLM for temporal EO data! arxiv.org/abs/2410.06234 1/8

Tweet Image 1


Congratulations guys 🔥🚀 This is super awesome news!

We raised $8M and are thrilled to have @SalesforceVC @generalcatalyst @julien_c @amasad @pirroh and other industry leader angels join us as investors. We are hiring across all positions! Our thoughts and job application links here: argmaxinc.com/blog/seed

Tweet Image 1


🚀

Wow, MLX is fast. I’ve been using Whisper locally, but using CPU 🫣 Switching to MLX, and it’s a night and day difference. BoltAI is going to get a lot faster.



Welcome back, your Highness 👸🏽

👑Qwen2.5-Coder-32B-Instruct is currently ranked #1 on both Hugging Face Models @huggingface and Spaces @Gradio trending!

Tweet Image 1


632 tokens/sec for 4bit quant 🔥🚀

Tweet Image 1

Impressive: 468 tokens/sec running Florence-2 on Apple MLX at FP32 precision! 🚀 No quantization needed.
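A quick back-of-the-envelope on the two figures in this thread, 632 t/s at 4-bit vs 468 t/s at FP32 (variable names are mine):

```python
# Florence-2 on MLX: throughput at 4-bit quant vs full FP32 precision,
# using the tokens/sec numbers quoted above.
fp32_tps = 468
q4_tps = 632

q4_speedup_pct = (q4_tps / fp32_tps - 1) * 100
print(f"4-bit quant is ~{q4_speedup_pct:.0f}% faster than FP32")  # ~35%
```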



Florence-2 is next level good! 🔥 On-device OCR will never be the same. Btw, I will be speaking about MLX at the Data Science Summit in a couple of weeks.
Date: 22 November 2024
Location: PGE Narodowy Stadium, Warsaw, Poland 🇵🇱
20% discount code: DSS24SP20
See you there!

Tweet Image 1


GPT-4o level pair programmer that runs locally on your Mac 🚀🔥 Time to ship 🚢

