Elon Musk is all in. (0:00) Elon Musk Is All in on Donald Trump (6:35) Providing Starlink to Victims of Hurricane Helene (9:22) If Trump Loses, This Is the Last Election (21:49) The Epstein and Diddy Client List (33:38) Vaccines (35:49) The Movement to Decriminalize Crime…
Emerging AI Agent Architectures: Researchers from IBM and Microsoft present this concise summary of emerging AI agent architectures. It focuses the discussion on capabilities like reasoning, planning, and tool calling, which are all needed to build complex AI-powered agentic…
Now available on Mapillary: Neural Radiance Fields (NeRFs)! 🎊 NeRFs allow for the transformation of a collection of 2D images into detailed, immersive 3D reconstructions. Read our blog post to learn more and see how you can get started with NeRFs: blog.mapillary.com/update/2024/03…
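For context, the core of a NeRF is a learned density/color field queried along camera rays and composited with the standard volume rendering integral; the formula below is the generic NeRF rendering equation from the original paper, not anything specific to Mapillary's pipeline.

```latex
% Expected color of a camera ray r(t) = o + t d between near/far bounds t_n, t_f,
% given the learned density sigma and view-dependent color c:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```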
😮 Multi-agent LLM Coding Framework - Future of Code Development 🤖 Developed an application leveraging multi-agent LLM technology to automate code generation and testing using LangGraph and @LangChainAI @hwchase17 @gdb @GregKamradt @OpenAI
🚨 BREAKING: Nvidia just released Chat with RTX, an AI chatbot that runs locally on your PC. It can summarize or search documents across your PC's files and even YouTube videos and playlists. The chatbot runs locally, meaning results are fast and you can use it without the…
This is incredible. Fractal patterns were discovered when plotting a grid search over neural network hyperparameters. Blue colors show settings that work, while red colors show those that don't. It suggests a fundamental link between mathematical fractals and machine learning…
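For intuition, here is a minimal, hypothetical sketch of that kind of experiment: sweep two hyperparameters (learning rate and init scale) for a tiny network and mark each grid point by whether training diverges. The actual fractal plots came from a far denser sweep; everything below (network size, data, thresholds) is made up for illustration.

```python
# Hypothetical sketch (not the original experiment): grid-search two
# hyperparameters for a tiny network and record convergence vs. divergence,
# which is the kind of "blue vs. red" map the fractal plot was made from.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
y = rng.normal(size=(64, 1))

def diverges(lr, init_scale, steps=200):
    """Train a 1-hidden-layer net with full-batch GD; return True if the loss blows up."""
    W1 = init_scale * rng.normal(size=(8, 16))
    W2 = init_scale * rng.normal(size=(16, 1))
    for _ in range(steps):
        h = np.tanh(X @ W1)
        pred = h @ W2
        err = pred - y
        loss = np.mean(err ** 2)
        if not np.isfinite(loss) or loss > 1e6:
            return True
        # Backprop through the two layers.
        g_pred = 2 * err / len(X)
        g_W2 = h.T @ g_pred
        g_h = g_pred @ W2.T * (1 - h ** 2)
        g_W1 = X.T @ g_h
        W1 -= lr * g_W1
        W2 -= lr * g_W2
    return False

lrs = np.logspace(-3, 1, 50)
scales = np.logspace(-2, 1, 50)
grid = np.array([[diverges(lr, s) for lr in lrs] for s in scales])
# grid is the convergence map: True where training diverges, False where it works.
```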
The first human received an implant from @Neuralink yesterday and is recovering well. Initial results show promising neuron spike detection.
Microsoft presents SliceGPT: Compress Large Language Models by Deleting Rows and Columns. Paper page: huggingface.co/papers/2401.15… The paper shows that SliceGPT can remove up to 25% of the model parameters (including embeddings) for LLAMA2-70B, OPT 66B and Phi-2 models while maintaining 99%,…
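As a rough illustration only (not the paper's exact algorithm), "slicing" can be pictured as rotating a layer's input space into a PCA-style basis and then deleting the lowest-variance rows and columns. The sketch below uses made-up sizes and random calibration data purely to show the idea.

```python
# Hypothetical illustration of the spirit of slicing: rotate a linear layer's
# input space, then delete the least-important rows/columns.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, keep = 512, 512, 384           # keep 75% of the width
W = rng.normal(size=(d_in, d_out))           # original weight matrix
X = rng.normal(size=(1024, d_in))            # assumed calibration activations

# Rotate inputs into their principal-component basis.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Q = Vt.T                                     # orthogonal d_in x d_in basis

W_rot = Q.T @ W                              # equivalent layer in the rotated basis
W_sliced = W_rot[:keep, :]                   # delete the low-variance rows
X_sliced = (X @ Q)[:, :keep]                 # inputs projected onto the kept dims

# The sliced layer approximates the original output with 25% fewer rows.
err = np.linalg.norm(X @ W - X_sliced @ W_sliced) / np.linalg.norm(X @ W)
print(f"relative reconstruction error: {err:.3f}")
```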
JPMorgan announces DocLLM: A layout-aware generative language model for multimodal document understanding. Paper page: huggingface.co/papers/2401.00… Enterprise documents such as forms, invoices, receipts, reports, contracts, and other similar records often carry rich semantics at the…
Chatbot Arena is awesome. Bring your hardest prompts. Rank models. Arena calculates ELO. Personally I find it quite educational too, because you get a sense of the "personalities" of many different models over time. RIP servers sorry :)
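For reference, here is a minimal sketch of an Elo-style update after a single head-to-head vote; the K-factor and the 400-point scale are the textbook defaults, not necessarily Arena's exact settings.

```python
# Minimal Elo-style rating update for one pairwise comparison.
def elo_update(rating_a, rating_b, a_wins, k=32):
    """Return updated (rating_a, rating_b) after one vote."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1 - score_a) - (1 - expected_a))
    return rating_a, rating_b

# Example: model A (1200) beats model B (1250); A gains ~18 points, B loses ~18.
print(elo_update(1200, 1250, a_wins=True))
```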
Arena live update: 1000+ new votes have just rolled in for Mixtral-8x7b! Excitingly, Mixtral-8x7b is overtaking Tulu-2-70B as the top open model and achieving ~50% winrate against gpt-3.5-turbo. Let's cast more votes and challenge it with the toughest prompt at…
There's too much happening right now, so here's just a bunch of links GPT-4 + Medprompt -> SOTA MMLU microsoft.com/en-us/research… Mixtral 8x7B @ MLX nice and clean github.com/ml-explore/mlx… Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models…
New YouTube video: 1hr general-audience introduction to Large Language Models youtube.com/watch?v=zjkBMF… Based on a 30min talk I gave recently; it tries to be a non-technical intro and covers mental models for LLM inference, training, finetuning, the emerging LLM OS and LLM Security.
Here we go. AGI is upon us, if only we could agree what AGI actually is, lol arxiv.org/pdf/2311.02462…
New paper: managing-ai-risks.com Companies are planning to train models with 100x more computation than today’s state of the art, within 18 months. No one knows how powerful they will be. And there’s essentially no regulation on what they’ll be able to do with these models.
The Dawn of LMMs: Analysis of GPT-4V to deepen the understanding of large multimodal models (LMMs). It focuses on probing GPT-4V across various application scenarios. 150 pages of examples ranging from code capabilities with vision to retrieval-augmented LMMs. "The findings…
With many 🧩 dropping recently, a more complete picture is emerging of LLMs not as a chatbot, but as the kernel process of a new Operating System. E.g. today it orchestrates:
- Input & Output across modalities (text, audio, vision)
- Code interpreter, ability to write & run…
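A hypothetical sketch of that "LLM as kernel" picture: the model acts as a scheduler and the peripherals (code interpreter, browser, etc.) are the syscalls it can invoke. The `llm` callable and the tool names below are illustrative assumptions, not a real API.

```python
# Toy "LLM OS" dispatch loop: the model decides, the kernel executes tools.
def run_python(code: str) -> str:
    return "<stdout of the code interpreter>"   # stand-in for a real sandbox

def web_search(query: str) -> str:
    return "<top search results>"               # stand-in for a real browsing tool

TOOLS = {"run_python": run_python, "web_search": web_search}

def kernel_loop(llm, user_message: str, max_steps: int = 8) -> str:
    """Feed context to the model; execute any tool call it emits; repeat."""
    context = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = llm(context)                    # assumed to return a dict with
        if "tool" in reply:                     # either a tool request...
            result = TOOLS[reply["tool"]](reply["arguments"])
            context.append({"role": "tool", "content": result})
        else:                                   # ...or a final user-facing answer
            return reply["content"]
    return "step budget exhausted"
```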