Vineet Tiruvadi, MD PhD
@vineettiruvadi
AI for Control Theory + Reverse Engineering ∩ Neuro//Affect, Medicine, Society
Real "AI" is closer to solving PDEs than it is to ChatBots.
Very important work! And we all know that the best science has the coolest figures :)
1/ new paper in PNAS (link at end) led by @irishryoon -- @chadgiusti, greg henselman-petrusek, yiyi yu, spencer lavere smith & i tackle a key challenge in TDA: how do you match topological features across populations or datasets? 🧠🔗
5/ why does this matter? applied to neural data, we can now track shared topological features across subjects performing the same task. this reveals consistent neural coding mechanisms, even in high-dimensional and noisy data.
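Not the paper's matching method (see the link in the thread), but a minimal sketch of the standard TDA pipeline it builds on, assuming the ripser and persim Python packages: compute H1 persistence diagrams for two point clouds sampled from the same shape, then compare them with the bottleneck distance.

import numpy as np
from ripser import ripser
from persim import bottleneck

rng = np.random.default_rng(0)

def noisy_circle(n=200, noise=0.05):
    # Sample points on the unit circle with Gaussian jitter.
    theta = rng.uniform(0, 2 * np.pi, n)
    pts = np.column_stack([np.cos(theta), np.sin(theta)])
    return pts + noise * rng.standard_normal(pts.shape)

# Two "subjects": independent samples of the same underlying topology.
dgm_a = ripser(noisy_circle())['dgms'][1]  # H1 diagram, subject A
dgm_b = ripser(noisy_circle())['dgms'][1]  # H1 diagram, subject B

# A small bottleneck distance suggests the dominant loop is shared.
print("bottleneck(H1_a, H1_b) =", bottleneck(dgm_a, dgm_b))

The bottleneck distance only says the diagrams are close, not *which* feature in one corresponds to which in the other; producing that explicit correspondence is the harder matching problem the paper addresses.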
AI solves this
New paper published today in @SciReports, led by @vittoria_vd, with @EvelinaLeivada, Fritz Günther, @GaryMarcus We assess the claim that LLMs possess human-like compositional understanding and reasoning. We comprehensively show that they do not. nature.com/articles/s4159…
INDUSTRY REACTS TO LATEST ALLEGATIONS
I'm a bit confused by the "scaling is over" thing. Nobody really believed we'd get the full-fledged reasoning and creative AI god-machine by training a ginormous 2019 GPT on 100x more data, right?
A neurocognitive theory of flexible emotion control: The role of the lateral frontal pole in emotion regulation nyaspubs.onlinelibrary.wiley.com/doi/10.1111/ny…
We often think of an "equilibrium" as something standing still, like a scale in perfect balance. But many equilibria are dynamic, like a flowing river which is never changing—yet never standing still. These dynamic equilibria are nicely described by so-called "detailed balance".
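Concretely: for a Markov chain with stationary distribution π, detailed balance says the equilibrium probability flux between every pair of states cancels pairwise (notation mine):

\pi_i \, P_{i \to j} \;=\; \pi_j \, P_{j \to i} \qquad \text{for all states } i, j

Transitions never stop, yet the distribution over states is frozen: the river keeps flowing while its shape stays put.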
*The geometry of data* Fascinating blog post by @tarantulae on how we can use the Stein score (of diffusion renown) to build a metric tensor describing the geometry of our data (e.g., computing geodesics between points). blog.christianperone.com/2024/11/the-ge…
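As a rough sketch of the idea, with a metric form I made up for illustration (not the post's exact construction): for a standard Gaussian N(0, I) the Stein score is s(x) = -x, and we can build a position-dependent metric from it and measure path lengths under it.

import numpy as np

def score(x):
    # Stein score of a standard Gaussian N(0, I): grad log p(x) = -x.
    return -x

def metric(x, lam=1.0):
    # Illustrative score-derived metric: g(x) = I + lam * s(x) s(x)^T.
    s = score(x)
    return np.eye(len(x)) + lam * np.outer(s, s)

def path_length(a, b, n=100, lam=1.0):
    # Riemannian length of the straight segment a -> b under g,
    # approximated by summing sqrt(dx^T g(mid) dx) over small steps.
    ts = np.linspace(0.0, 1.0, n + 1)
    pts = a[None, :] + ts[:, None] * (b - a)[None, :]
    total = 0.0
    for p, q in zip(pts[:-1], pts[1:]):
        mid, dx = (p + q) / 2, q - p
        total += np.sqrt(dx @ metric(mid, lam) @ dx)
    return total

a, b = np.array([-2.0, 0.0]), np.array([2.0, 0.0])
print("metric length:   ", path_length(a, b))
print("euclidean length:", np.linalg.norm(b - a))

Under this g, segments through high-score regions cost more than their Euclidean length, which is the basic mechanism behind score-based geodesics.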
Book #OTD "Worlds Out of Nothing: A Course in the History of Geometry in the 19th Century" by Jeremy Gray. The title echoes János Bolyai, creator of hyperbolic geometry, who wrote to his father in 1823 that he had "... created a new, another world out of nothing..."
I will humbly note that I argued back in April that the AI promises were — in the truest sense of the term — smoke and mirrors bloodinthemachine.com/p/ai-really-is…
Multiple sources are now reporting that LLMs aren't scaling as hoped: larger datasets and more compute aren't improving AI systems as fast as expected. The companies, naturally, are pressing on, calling for billions more in investment. One way to read this: the hope is to make AI too big to fail.
log in ℝ and log in ℂ
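The difference in a few runnable lines (Python standard library only): the real log is undefined off (0, ∞), while the complex log is ln|z| + i·arg(z) on the principal branch (−π, π].

import math, cmath

# Real log: domain is (0, inf); log(-1) raises.
try:
    math.log(-1.0)
except ValueError as e:
    print("math.log(-1):", e)

# Complex log: log z = ln|z| + i*arg(z), principal branch (-pi, pi].
print("cmath.log(-1):", cmath.log(-1))   # ~3.14159j, i.e. i*pi
print("cmath.log(1j):", cmath.log(1j))   # ~1.5708j, i.e. i*pi/2

The price of extending log to ℂ is the branch cut along the negative real axis, where arg(z) jumps by 2π.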
Do I finish one of the 6 books I've been working on for 5 years, or start a new one?
What would you say is the max curvature of large-scale white matter tracts in the brain?
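The number itself would have to come from tractography data, but here is a sketch of how one could measure it; streamline_curvature is a hypothetical helper, not from any tractography library. It applies the standard formula κ = |r′ × r″| / |r′|³ to a streamline given as an (N, 3) array of points in mm, so κ comes out in 1/mm.

import numpy as np

def streamline_curvature(pts):
    # Discrete curvature along a polyline: kappa = |r' x r''| / |r'|^3.
    # The formula is parameterization-invariant, so differentiating
    # with respect to vertex index (np.gradient) is fine.
    d1 = np.gradient(pts, axis=0)
    d2 = np.gradient(d1, axis=0)
    num = np.linalg.norm(np.cross(d1, d2), axis=1)
    den = np.linalg.norm(d1, axis=1) ** 3
    return num / np.maximum(den, 1e-12)   # guard against zero speed

# Sanity check on an arc of radius 10 mm: curvature should be ~0.1 /mm.
t = np.linspace(0, np.pi, 200)
arc = np.column_stack([10 * np.cos(t), 10 * np.sin(t), np.zeros_like(t)])
print(streamline_curvature(arc)[5:-5].max())

Running this over a whole tractogram and taking a high percentile (rather than the max, which is noise-sensitive) would give a defensible answer.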
It's good Ilya now realizes that merely training on input-output pairs doesn't recover the process that generated the data. There is a very common misunderstanding in deep learning (DL) that training on enough data eventually recovers the algorithm that generated it.
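A toy numpy illustration of the general point (nothing LLM-specific, and the model and degree choices here are arbitrary): a fit can drive training error to near zero on the range where the data lives without recovering the rule that generated it, and extrapolation exposes the gap.

import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 30)
y_train = np.sin(3 * x_train)            # the "generating process"

# Fit a degree-9 polynomial: near-zero error where the data lives...
coef = np.polyfit(x_train, y_train, deg=9)
print("train err:", np.abs(np.polyval(coef, x_train) - y_train).max())

# ...but the polynomial is not sin(3x), and out-of-range inputs show it.
x_far = np.array([2.0, 3.0])
print("poly:", np.polyval(coef, x_far))  # typically diverges wildly
print("true:", np.sin(3 * x_far))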
Ilya Sutskever, perhaps the most influential proponent of the AI "scaling hypothesis," just told Reuters that scaling has plateaued. This is a big deal! This comes on the heels of a big report that OpenAI's in-development Orion model had disappointing results. 🧵