JosepOriol Ayats

@joayats

Coding brains to free ours


JosepOriol Ayats Reposted

hey everyone, i am sharing my repo for ViT where you can find a proper path to learn vision transformers and their uses in video processing. i have also implemented ViT from scratch, so if you wanna know how it works, i got you covered. more coming soon! github.com/0xD4rky/Vision…
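A from-scratch ViT starts by splitting the image into fixed-size patches and projecting each to an embedding vector. A minimal sketch of that first step (illustrative shapes, not taken from the linked repo):

```python
import numpy as np

def patchify(img, patch=16):
    """Split an (H, W, C) image into flattened patches of shape (num_patches, patch*patch*C)."""
    H, W, C = img.shape
    img = img.reshape(H // patch, patch, W // patch, patch, C)
    img = img.transpose(0, 2, 1, 3, 4)           # (H/p, W/p, p, p, C)
    return img.reshape(-1, patch * patch * C)     # one row per patch

rng = np.random.default_rng(0)
img = rng.standard_normal((224, 224, 3))
patches = patchify(img)                           # (196, 768): 14x14 patches
W_embed = rng.standard_normal((16 * 16 * 3, 768)) * 0.02  # learned in the real model
tokens = patches @ W_embed                        # token sequence fed to the transformer
print(tokens.shape)  # (196, 768)
```

These tokens (plus a class token and position embeddings) are what the transformer encoder actually sees.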


JosepOriol Ayats Reposted

🚀 The AGENDA for the Festa de l'Open Source on 19/10/2024 at Casa de Cultura (Girona) is ready, with speakers, talks and workshops you honestly can't miss 👀🔥 Reserve your ticket now here 👉 festa-opensource.geeks.cat #OpenSource #Girona #FestaOS4 #Tech

Tweet Image 1

JosepOriol Ayats Reposted

Smart!

Tweet Image 1

JosepOriol Ayats Reposted

Hey, we're putting on the Girona open source festival again. If anyone is up for giving a talk or workshop, submit a proposal through this form! We look forward to seeing you 😁

👋 Got a proposal for a talk or workshop related to Free Software? Share it and take part in the next Festa de l'Open Source on 19/10/2024 in Girona! Go for it 💪 fill in the form ➡️ forms.gle/FNdvngy7rHahKx… #OpenSourceGirona #FestaOS24



JosepOriol Ayats Reposted

This is how I start my local Linux servers



JosepOriol Ayats Reposted

Phi goes MoE! @Microsoft just released Phi-3.5-MoE a 42B parameter MoE built upon datasets used for Phi-3. Phi-3.5 MoE outperforms bigger models in reasoning capability and is only behind GPT-4o-mini. 👀 TL;DR 🧮 42B parameters with 6.6B activated during generation 👨‍🏫  16…

Tweet Image 1
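The "42B parameters with 6.6B activated" line is the defining trick of a mixture-of-experts model: a gate routes each token through only a few of the 16 experts, so most weights sit idle per token. A toy top-k routing sketch (illustrative sizes, not Phi-3.5's actual config):

```python
import numpy as np

def moe_forward(x, experts, gate_W, top_k=2):
    """Route each token to its top_k experts, weighted by softmax over gate scores."""
    logits = x @ gate_W                              # (tokens, num_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # chosen expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        w = np.exp(chosen - chosen.max())
        w /= w.sum()                                 # softmax over selected experts only
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ experts[e])   # only top_k experts do any work
    return out

rng = np.random.default_rng(0)
num_experts, dim = 16, 8
experts = rng.standard_normal((num_experts, dim, dim))
gate_W = rng.standard_normal((dim, num_experts))
x = rng.standard_normal((4, dim))
y = moe_forward(x, experts, gate_W)  # 2 of 16 experts run per token
print(y.shape)  # (4, 8)
```

With top-2 of 16 experts, only a fraction of the expert weights are touched per token, which is how a 42B model generates at the cost of a much smaller dense one.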

JosepOriol Ayats Reposted

Link to blog post: hamel.dev/blog/posts/cou… In the post, we tell you how to get the most out of the course, what to expect, and how to navigate the materials. We are still adding a few lessons, but 95% of them are on the site. This is a unique course with 30+ legendary…


JosepOriol Ayats Reposted

🚨 Introducing "ColPali: Efficient Document Retrieval with Vision Language Models" ! We use Vision LLMs + late interaction to improve document retrieval (RAG, search engines, etc.), solely using the image representation of document pages ! arxiv.org/abs/2407.01449 🧵(1/N)
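Late interaction here means the query and each document page keep per-token/per-patch embeddings, and scoring matches every query token to its best page patch and sums (the ColBERT-style MaxSim). A minimal sketch with made-up random embeddings:

```python
import numpy as np

def maxsim(query_emb, page_emb):
    """ColBERT-style late interaction: for each query token embedding, take the
    max similarity over all page patch embeddings, then sum over query tokens."""
    sims = query_emb @ page_emb.T        # (query_tokens, page_patches)
    return sims.max(axis=1).sum()

rng = np.random.default_rng(0)
q = rng.standard_normal((5, 128))                            # 5 query token embeddings
pages = [rng.standard_normal((32, 128)) for _ in range(3)]   # 3 candidate page embeddings
scores = [maxsim(q, p) for p in pages]
best = int(np.argmax(scores))            # retrieve the highest-scoring page
```

Because page embeddings are precomputed offline, retrieval at query time is just this cheap matrix product per candidate.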


JosepOriol Ayats Reposted

Whoa, I just realized that raising a kid is basically 18 years of prompt engineering 🤯


JosepOriol Ayats Reposted

It's genuinely hard to believe a 70B model is up there with the 1.8T GPT-4. I guess training data really is everything

Welp folks, we have gpt-4 at home

Tweet Image 1


JosepOriol Ayats Reposted

great quote from karpathy: most great organizations require leader(s) with a disproportionate amount of power. when this is absent you end up with countless hierarchies of ineffective committees, e.g. many google products lack a Directly Responsible Individual with actual power


JosepOriol Ayats Reposted

(1/n) With our wonderful student researcher @OscarLi101, we’re thrilled to release our OmniPred paper, showing a language model (only 200M params + trained from scratch) can be used as a universal regressor to predict experimental outcomes! Link: arxiv.org/abs/2402.14547 We…

Tweet Image 1

JosepOriol Ayats Reposted

Sora is Not Released. All this buzz to try to get people excited about something they can't use yet. While I am interested to try out Sora, I'm not gonna fall for the obvious marketing play and get excited about something that's still not usable. And I also know it will be…


JosepOriol Ayats Reposted

540x Faster than GPT-4 100x Longer Sequences than GPT-4 And, better performance on Long Sequence tasks than GPT-4. Multi-Modal Mamba is going to change the LLM game on a scale you couldn't possibly imagine, in a flash of lightning. [Get Access Now] github.com/kyegomez/Multi…

Tweet Image 1

Huuugee!!!!

🧵 (1/n) 👉 Introducing QuIP#, a new SOTA LLM quantization method that uses incoherence processing from QuIP & lattices to achieve 2 bit LLMs with near-fp16 performance! Now you can run LLaMA 2 70B on a 24G GPU w/out offloading! 💻 cornell-relaxml.github.io/quip-sharp/

Tweet Image 1
Tweet Image 2
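QuIP# itself relies on incoherence processing and lattice codebooks, but the basic memory arithmetic of 2-bit weights can be shown with plain uniform quantization (a deliberate simplification, not the QuIP# algorithm):

```python
import numpy as np

def quantize_2bit(w):
    """Uniformly quantize a weight tensor to 4 levels (2 bits) with a per-tensor scale."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 3                                 # 4 levels -> 3 intervals
    codes = np.round((w - lo) / scale).astype(np.uint8)   # values in {0, 1, 2, 3}
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
codes, lo, scale = quantize_2bit(w)
w_hat = dequantize(codes, lo, scale)
err = np.abs(w - w_hat).max()            # rounding error is at most half a step
assert err <= scale / 2 + 1e-6
```

At 2 bits per weight instead of 16, a 70B-parameter model drops from ~140 GB to ~17.5 GB of weights, which is why it fits on a 24G GPU; QuIP#'s lattice tricks are what keep accuracy near fp16 at that budget.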


JosepOriol Ayats Reposted

This chart shows a very common pattern for how to improve performance using different prompt engineering methods. When I saw it the first time, I wondered how generalizable this stuff is. It probably is as shown in this blog post that Microsoft published. You can keep track of…

Tweet Image 1

JosepOriol Ayats Reposted

Sorry I know it's a bit confusing: to download phi-2 go to Azure AI Studio, find the phi-2 page and click on the "artifacts" tab. See picture.

Tweet Image 1

No, they fully released it. But they hide it very well for some reason. Go to the artifacts tab.



JosepOriol Ayats Reposted

Product development

Tweet Image 1

JosepOriol Ayats Reposted

There's too much happening right now, so here's just a bunch of links GPT-4 + Medprompt -> SOTA MMLU microsoft.com/en-us/research… Mixtral 8x7B @ MLX nice and clean github.com/ml-explore/mlx… Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models…

