Mikel Artetxe
@artetxem | Co-founder @RekaAILabs and Honorary Researcher @IxaGroup (University of the Basque Country) | Past: Research Scientist @AIatMeta (FAIR)
A picture is worth a thousand words… Excited to release Reka Core! Try it for free at chat.reka.ai, and learn more about it in our tech report publications.reka.ai/reka-core-tech…
Meet Reka Core, our best and most capable multimodal language model yet. 🔮 It’s been a busy few months training this model and we are glad to finally ship it! 💪 Core has a lot of capabilities, and one of them is understanding video --- let’s see what Core thinks of the 3 body…
I wish there was a more direct question about keeping ARR vs. getting rid of it vs. making it optional, but it's still great that we have a chance to give our opinion. Please don't forget to vote!
What should the ACL peer review process be like in the future? Please cast your views in this survey: aclweb.org/portal/content… by 4th Nov 2024 #NLProc @ReviewAcl
Dani Yogatama, our CEO, had the opportunity to share about the work we do at Reka, our multimodal AI models, his perspectives of the future of AI, as well as how we use @AMD Instinct MI300X and ROCm software at AMD's Advancing AI event.
⚡️ Reka Flash ⚡️, our 21B multimodal assistant, is much better now 🔥🔥🔥 Try it for free at chat.reka.ai. No need to sign up anymore!
We have been busy the past few months and have some exciting updates!📢 We have a new version of Reka Flash⚡️, our powerful 21B model that supports interleaved multimodal inputs (text📄, image🖼️, video📽️, audio🎧). This update brings significant capability improvements on…
I'm happy to share that the BertaQA paper has been accepted at NeurIPS 🎉 Congratulations to all coauthors @gazkune @Aitor57 @oierldl @artetxem @IxaGroup @Hitz_zentroa See you in Vancouver! #NeurIPS2024
How Much Do Language Models Know About Local Culture? We introduce BertaQA, a multiple-choice trivia dataset parallel in English and Basque. It consists of a local subset about the Basque culture, and a global subset with questions of broader interest. arxiv.org/abs/2406.07302
With LLMs crushing Mathematical Olympiads and competitive programming, you'd expect them to do well on Linguistic Olympiads too, right? Aren't they *language* models after all? Well, it turns out they are pretty bad at it! Check out our new paper and benchmark 👇
🚨NEW BENCHMARK🚨 Are LLMs good at linguistic reasoning if we minimize the chance of prior language memorization? We introduce Linguini🍝, a benchmark for linguistic reasoning in which SOTA models perform below 25%. w/ @b_alastruey, @artetxem, @costajussamarta et al. 🧵(1/n)
Imagine Coca-Cola starts sponsoring NumPy, and they change their license so every project using it has to be named "Coca-Cola {whatever}". Is this the future of open source? 😅
Meta lawyers and strategy team are serious about rebranding "open source AI" to "Llama-N AI"
Really glad that Latxa received the ACL 2024 Best Resource Paper Award! 🎉 Congratulations to all the coauthors from @Hitz_zentroa! #ACL2024NLP
🏆 ACL Best Resource Paper Award: - Latxa: An Open Language Model and Evaluation Suite for Basque by Etxaniz et al. - Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research by Soldaini et al. #NLProc #ACL2024NLP
Best resource paper for #Latxa!!! #ACL2024NLP
Latxa got the Best Resource Paper Award at #ACL2024NLP 🎉🎉🎉 Congrats to the entire team, and in particular the lead authors @juletxara, @osainz59 and @apinhole!
📢 Excited to present Latxa, a family of Basque LLMs based on Llama 2 🦙⏩🐑 Latxa outperforms all previous open models in Basque by a large margin, and it even outperforms GPT-4 Turbo on certain benchmarks 🚀 Some findings that could be useful beyond Basque 🧵👇