
François Fleuret

@francoisfleuret

Research Scientist @meta (FAIR), Prof. @Unige_en, co-founder @nc_shape. I like reality.

Similar Users

Lilian Weng
@lilianweng

Jürgen Schmidhuber
@SchmidhuberAI

Sebastien Bubeck
@SebastienBubeck

Lucas Beyer (bl16)
@giffmana

Percy Liang
@percyliang

Sasha Rush
@srush_nlp

Yannic Kilcher 🇸🇨
@ykilcher

Max Welling
@wellingmax

Kevin Patrick Murphy
@sirbayes

Tim Dettmers
@Tim_Dettmers

Julien Chaumond
@julien_c

Eric Jang
@ericjang11

Zachary Lipton
@zacharylipton

Petar Veličković
@PetarV_93

Alfredo Canziani
@alfcnz

Pinned

My deep learning course @unige_en is available online: 1000+ slides, ~20h of screencasts, full of examples in @PyTorch. fleuret.org/dlc/ And my "Little Book of Deep Learning" is available as a phone-formatted PDF (400k downloads!): fleuret.org/lbdl/


The trillions for AI come from


When non-idiots say that "scaling" is enough to make the AGI-kraken, they mean scaling


I stand by this, but the data problem remains. Without any kind of "non-verbal embodiment" to get a sense of what reality is through "direct observation", such a model would have to be spoon-fed tons and tons of (synthetic?) boring text about spatiality, temporality, and causality.

If I remember correctly, a couple of months after GPT-4 came out, your hot take was that scaling a GPT plus some kind of smart scratchpad / internal monologue technique might be all we need. I think that was probably spot on, because today everybody is rushing toward the latter (o1, etc.).



People are like "@sama believes this", while @openai is pouring resources into developing Sora and reasoning models.

I'm a bit confused by the "scaling is over" thing. Nobody really believed we'd get the full-fledged reasoning and creative AI god machine by training a ginormous 2019 GPT on 100x more data, right?



GPT is the A-bomb announcing--and possibly functionally at the core of--the H-bomb. "What a positive and cheerful analogy François !"

Wasn't that the central kayfabe of the entire circus?



François Fleuret Reposted

Gaetz is a distraction. Tulsi is their biggest priority. That's why the two were announced together.

Of the two nominees announced today, Tulsi is the big threat to America, not Gaetz. Wish more people understood that.





François Fleuret Reposted

@francoisfleuret Seen on the François Chollet AMA news.ycombinator.com/item?id=421308… ☺️


"It's actually a plateau"

there is no wall



We are PSEUDO random generators, with the same seed. And this also explains why LLMs are so good: any fancy question you make up is already in the data set a hundred times over.




It's hard to grasp how our complicated cognitive process ends up generating a very predictable outcome. Those jokes are to this waitress what hiding spots are to a professional burglar: they know exactly what the [low entropy] outcome distribution is.

There was a defining moment for me once when I was twenty-something. I made a joke with a waitress (I am a funny man), nothing rude or disrespectful, just a light joke that I was happy with. And for some reason I immediately asked her "is it a standard dumb patron's joke?" 1/2



The notion of "power differential" and the principle that you should never "punch down" are IMO excellent and clarify how some behaviors are the sign of a terrible character.


Current LLMs are Neanderthals.

