
gamma caster

@Gamma_Caster

I mostly dance around content one could find in Jim Holt books. Mostly.

Achmad Hisyam
@hisjam

Well put @fchollet. I was trying to formalize "novelty" using Kolmogorov Complexity (KC) as the critical metric. Since it can indeed "extrapolate," my idea was that if a solution requires a novel "reasoning" sequence, a KC higher than anything in the training-data distribution is necessary.

People seem to be falling for two rather thoughtless extremes: 1. "LLMs are AGI, they work like the human brain, they can reason, etc." 2. "LLMs are dumb and useless." Reality is that LLMs are not AGI -- they're a big curve fit to a very large dataset. They work via…
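The KC idea above can be made concrete with a standard trick: true Kolmogorov complexity is uncomputable, but compressed length gives a computable upper bound. A minimal sketch (using zlib purely as a stand-in proxy; the strings are illustrative, not from the thread):

```python
import zlib

def kc_proxy(s: str) -> int:
    # Compressed length as a crude, computable upper bound on
    # Kolmogorov complexity: more regular -> shorter description.
    return len(zlib.compress(s.encode("utf-8"), level=9))

regular = "ab" * 200  # 400 chars, highly repetitive
# Same length, far less regular (period-90 character cycle):
varied = "".join(chr(33 + (i * 17) % 90) for i in range(400))

assert kc_proxy(regular) < kc_proxy(varied)
```

Under this proxy, a "novel" output would be one whose compressed description is longer than that of anything in the training distribution.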



gamma caster Reposted

All the reasons are too meta to be interesting


gamma caster Reposted

The biggest trick our brains play is to create the illusion of understanding what’s going on, both around us and in the world


How is 100% of this event also not available virtually? On the website it says "some virtual elements". If virtual attendees were given full access, it would obviously dramatically increase attendance, engagement, awareness, potential virality of content, etc. #ICML2023 #ICML

Going to #ICML2023? We’ll be sharing our latest advances in AI, covering themes such as: 🌐 AI in the (simulated) world 💡 The future of reinforcement learning ⭕ Challenges at the frontier of AI Find out more now: dpmd.ai/44Td7I9



gamma caster Reposted

BBC interview with Elon highlights that journalism is in crisis. The incentives are broken. A great journalist should: - seek deep understanding - have insatiable curiosity and empathy - think independently, fearlessly - put integrity above all else I have hope for journalism.


Wondering whether #ChatGPT (3.5 or 4) is conscious is analogous to looking into the mirror and wondering whether the reflection is another person because the resemblance is uncanny.


gamma caster Reposted

ConvNets are a decent model of how the ventral pathway of the human visual cortex works. But LLMs don't seem to be a good model of how humans process language. There is longer-term prediction taking place in the brain. Awesome work by the Brain-AI group at FAIR-Paris.

New in Nature Human Behavior, Meta AI researchers show how current language models differ from the human brain & highlight the role of long-range & hierarchical predictions. We hope these findings will help inform the next generation of AI ➡️ go.nature.com/3SKb3gX



Once CLIP models are built for two languages using the same set of images (if the image sets differ, an intermediate mapping step will be needed), it would be cool to test how those two models could be combined for translation.
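One way the idea above could work: since both text encoders are aligned to the same image-embedding space, translation reduces to nearest-neighbour retrieval across the two text encoders. A toy sketch (the vocabularies and embeddings here are hypothetical stand-ins, not real CLIP outputs):

```python
import numpy as np

# Pretend both CLIP text encoders were trained against the SAME image set,
# so their text embeddings land near shared image-space anchors.
en_vocab = ["dog", "cat", "car"]
fr_vocab = ["chien", "chat", "voiture"]

rng = np.random.default_rng(0)
anchors = rng.normal(size=(3, 16))                 # shared image-space anchors
en_emb = anchors + 0.01 * rng.normal(size=(3, 16)) # "English encoder" outputs
fr_emb = anchors + 0.01 * rng.normal(size=(3, 16)) # "French encoder" outputs

def translate(word: str) -> str:
    # Translation as cosine nearest-neighbour in the shared space.
    q = en_emb[en_vocab.index(word)]
    sims = fr_emb @ q / (np.linalg.norm(fr_emb, axis=1) * np.linalg.norm(q))
    return fr_vocab[int(np.argmax(sims))]

assert translate("dog") == "chien"
```

If the two models were trained on different image sets, the `anchors` would differ, which is where the intermediate mapping step mentioned above would come in.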


gamma caster Reposted

Why is OpenAI's new compiler, Triton, so exciting? And what distinguishes it from other efforts to provide a Python DSL for programming Nvidia GPUs, like Numba? To answer that, we need to look at the operation behind all of deep learning - matrix multiplication. (1/7)

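The thread's point about matrix multiplication can be illustrated with the blocked (tiled) structure that GPU matmul kernels exploit; this is a plain-Python sketch of the access pattern, not actual Triton code:

```python
import numpy as np

def tiled_matmul(A: np.ndarray, B: np.ndarray, tile: int = 32) -> np.ndarray:
    # Blocked matrix multiply: kernels like Triton's compute C in tiles so
    # each operand block stays in fast on-chip memory while it is reused.
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):          # one "program" per output tile
        for j in range(0, N, tile):
            for k in range(0, K, tile):  # accumulate along the K dimension
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
                )
    return C

A = np.arange(12.0).reshape(3, 4)
B = np.arange(8.0).reshape(4, 2)
assert np.allclose(tiled_matmul(A, B, tile=2), A @ B)
```

Compilers like Triton generate this tiling (plus memory coalescing and scheduling) automatically, which is what distinguishes them from writing the loops by hand.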

gamma caster Reposted

Is mathematics invented or discovered? Read what Kurt Gödel had to say. Here's the text of his 1951 Gibbs Lecture (along with an introductory note) in which Gödel argues for Platonism in the philosophy of mathematics, as in the passage excerpted below. partiallyexaminedlife.com/wp-content/upl…


This quote explains emergence: "The simulation is such that [one] generally perceives the sum of many billions of elementary processes simultaneously, so that the leveling law of large numbers completely obscures the real nature of the individual properties" - John von Neumann


Is it possible to be an unbiased Gödel with no prior assumptions? With no premises? No. To be Gödel means to assume certain axioms. What is the most condensed set of such axioms? These should be built with LISP, then connected to a NAS agent for various tasks. #GAI


How do we generate "world-models", like those mentioned by @ylecun on @lexfridman and stack them onto DNNs for practical Machine Learning applications that can be deployed?


The previous question about #GANs can be generalized as such: Can any GAN transcend its training data in any way? Could it, due to injected noise, occasionally generate an output more "real" than any training input? In the special case of NNs, would that mean a better-performing NN?


What if, with sufficient compute and sufficient training data, one trains the very best GAN architectures that do exist to generate GAN architectures that don't exist? Could the limit transcend the performance of the training-data architectures on the same generative tasks? #GANs #AI


GAI exists on a spectrum, but fully generalized or truly generalized AI cannot be any superposition of only DNNs. Some other logic (perhaps low-order predicate logic), either wrapped around or in between, is necessary. Even that may not work, due to #Gödel's Incompleteness Theorem.


In AI, the concept of "a difference in quantity creates a difference in quality" can sneak up on us, and the transitional boundary is even less clear than in other domains. If you stack fundamentally varied algorithms, this effect is amplified to an indeterminable extent.


"When information is lost in black holes, the second law of thermodynamics is not violated - it is transcended" - Sir Roger Penrose paraphrasing John Archibald Wheeler

