gamma caster
@Gamma_Caster
I mostly dance around content one could find in Jim Holt books. Mostly.
Well put @fchollet. I was trying to formalize "novelty" using Kolmogorov Complexity (KC) as the critical metric. Since a model can indeed "extrapolate," my idea was that if a solution requires a novel "reasoning" sequence, a KC higher than what is in the training-data distribution is necessary.
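A minimal sketch of the idea above (my illustration, not from the thread): Kolmogorov complexity is uncomputable, but compressed length under a standard compressor is a common upper-bound proxy. The function name `kc_proxy` and the example strings are hypothetical.

```python
import random
import zlib

def kc_proxy(text: str) -> int:
    """Upper-bound proxy for Kolmogorov complexity: the length of the
    zlib-compressed byte string (true KC is uncomputable)."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

random.seed(0)
# A highly regular string has a short description, so it compresses well.
repetitive = "ab" * 100
# A (pseudo)random string has little exploitable structure.
random_str = "".join(random.choice("abcdefgh") for _ in range(200))
```

Under this proxy, "novelty" relative to a corpus could be estimated by how poorly a compressor trained on that corpus handles the new sequence; the snippet only shows the absolute-complexity direction of the idea.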
People seem to be falling for two rather thoughtless extremes: 1. "LLMs are AGI, they work like the human brain, they can reason, etc." 2. "LLMs are dumb and useless." Reality is that LLMs are not AGI -- they're a big curve fit to a very large dataset. They work via…
All the reasons are too meta to be interesting
The biggest trick our brains play is to create the illusion of understanding what’s going on, both around us and in the world
How is 100% of this event also not available virtually? On the website it says "some virtual elements". If virtual attendees were given full access, it would obviously dramatically increase attendance, engagement, awareness, potential virality of content, etc. #ICML2023 #ICML
Going to #ICML2023? We’ll be sharing our latest advances in AI, covering themes such as: 🌐 AI in the (simulated) world 💡 The future of reinforcement learning ⭕ Challenges at the frontier of AI Find out more now: dpmd.ai/44Td7I9
BBC interview with Elon highlights that journalism is in crisis. The incentives are broken. A great journalist should: - seek deep understanding - have insatiable curiosity and empathy - think independently, fearlessly - put integrity above all else I have hope for journalism.
Wondering whether #ChatGPT (3.5 or 4) is conscious, is analogous to looking into the mirror and wondering whether the reflection is another person because the resemblance is uncanny.
ConvNets are a decent model of how the ventral pathway of the human visual cortex works. But LLMs don't seem to be a good model of how humans process language: there is longer-term prediction taking place in the brain. Awesome work by the Brain-AI group at FAIR-Paris.
New in Nature Human Behavior, Meta AI researchers show how current language models differ from the human brain & highlight the role of long-range & hierarchical predictions. We hope these findings will help inform the next generation of AI ➡️ go.nature.com/3SKb3gX
Once CLIP models are trained for two languages on the same set of images (if the images differ, an intermediate mapping step would be needed), it would be cool to test how those two models could be combined for translation.
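A toy sketch of how two such models might be combined (my illustration; the dict "encoders" are hypothetical stand-ins for real CLIP text encoders that were aligned to the same image-embedding space): translate by nearest-neighbour retrieval in the shared space.

```python
import numpy as np

# Hypothetical stand-ins for two CLIP text encoders trained against the
# SAME image embeddings (the tweet's premise). Real encoders are neural
# networks; dicts keep the sketch self-contained and runnable.
ENCODER_EN = {"dog": np.array([0.9, 0.1, 0.0]),
              "cat": np.array([0.1, 0.9, 0.0]),
              "car": np.array([0.0, 0.1, 0.9])}
ENCODER_FR = {"chien": np.array([0.88, 0.12, 0.0]),
              "chat": np.array([0.12, 0.88, 0.05]),
              "voiture": np.array([0.05, 0.10, 0.92])}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def translate(word: str, src=ENCODER_EN, tgt=ENCODER_FR) -> str:
    """Nearest neighbour in the shared image space = candidate translation."""
    v = src[word]
    return max(tgt, key=lambda w: cosine(v, tgt[w]))
```

If the two models were trained on different image sets, the "intermediate mapping" the tweet mentions would be an extra learned transform between the two embedding spaces before the retrieval step.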
Why is OpenAI's new compiler, Triton, so exciting? And what distinguishes it from other efforts to provide a Python DSL for programming Nvidia GPUs, like Numba? To answer that, we need to look at the operation behind all of deep learning - matrix multiplication. (1/7)
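Since the thread centers on matrix multiplication, here is a minimal NumPy sketch (mine, not the thread author's) of the tiled/blocked access pattern that GPU matmul kernels use and that DSLs like Triton make easy to express: each tile is small enough to live in fast on-chip memory.

```python
import numpy as np

def tiled_matmul(A: np.ndarray, B: np.ndarray, tile: int = 32) -> np.ndarray:
    """Blocked matrix multiply. Iterating over (tile x tile) blocks mirrors
    how a GPU kernel loads sub-tiles into shared memory/registers before
    accumulating, rather than streaming whole rows from slow memory."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=np.result_type(A, B))
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for k in range(0, K, tile):
                # NumPy slicing clips at array bounds, so ragged edge
                # tiles are handled automatically.
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C
```

The performance work in a real kernel is choosing tile sizes and memory layouts for the hardware; this sketch only shows the loop structure those choices attach to.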
Is mathematics invented or discovered? Read what Kurt Gödel had to say. Here's the text of his 1951 Gibbs Lecture (along with an introductory note) in which Gödel argues for Platonism in the philosophy of mathematics, as in the passage excerpted below. partiallyexaminedlife.com/wp-content/upl…
This quote explains emergence: "The simulation is such that [one] generally perceives the sum of many billions of elementary processes simultaneously, so that the leveling law of large numbers completely obscures the real nature of the individual properties" - John von Neumann
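A minimal numerical illustration of the "leveling law of large numbers" the quote describes (my sketch, not von Neumann's): each elementary process is maximally variable, yet their aggregate is almost perfectly flat.

```python
import numpy as np

rng = np.random.default_rng(0)
# Each "elementary process": a single +/-1 coin flip, as variable as possible.
flips = rng.choice([-1.0, 1.0], size=1_000_000)
# What the observer perceives: the sum over many such processes at once.
aggregate_mean = flips.mean()
# The individual character (unit-scale fluctuation) is invisible in the mean.
individual_spread = flips.std()
```

The mean sits within a fraction of a percent of zero while every individual process swings by a full unit, which is the "obscuring" the quote points at.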
Is it possible to be an unbiased Gödel with no prior assumptions? With no premises? No. To be Gödel means to assume certain axioms. What is the most condensed set of such axioms? These should be built with LISP, then connected to a NAS agent for various tasks. #GAI
How do we generate "world-models", like those mentioned by @ylecun on @lexfridman and stack them onto DNNs for practical Machine Learning applications that can be deployed?
The previous question about #GANs can be generalized as such: can any GAN transcend its training data in any way? Can it, due to injected noise, occasionally generate an output more "real" than any training input? In the special case of NNs, would that mean a better-performing NN?
What if, with sufficient compute and sufficient training data, one trains the very best GAN architectures that do exist to generate GAN architectures that don't exist? Can the limit transcend the performance of the training-data architectures on the same generative tasks? #GANs #AI
GAI exists on a spectrum, but fully generalized or truly generalized AI cannot be any superposition of only DNNs. Some other logic (perhaps low-order predicate logic), either wrapped around or in between, is necessary. Even that may not work, due to Gödel's Incompleteness Theorem. #Godel
In AI, the concept of "a difference in quantity creates a difference in quality" can sneak up on us & the transitional boundary is even less clear when compared to other domains. If you stack fundamentally varied algorithms, this effect is amplified to an indeterminable extent.
"When information is lost in black holes, the second law of thermodynamics is not violated - it is transcended" - Sir Roger Penrose paraphrasing John Archibald Wheeler