Similar User
@xmgnr
@lBattleRhino
@hedgedhog7
@RyanWatkins_
@gametheorizing
@PopcornKirby
@SplitCapital
@0xngmi
@z0age
@52kskew
@Derivatives_Ape
@noon_cares
@Fiskantes
@mattigags
@slurpxbt
"indie hacking" is literally a slop factory has a single person who calls what they are doing "indie hacking" produced something that isn't slop?
There is this cringe phenomenon right now on twitter where people state things that they don't believe in, in the hope of Elon seeing it and engaging. The most obvious example is when they talk about grok together with SOTA models. Sometimes they even pretend that they use grok
Great midwit signal when someone dunks by being proud of knowing the difference between mean and median, not realizing that average isn’t explicitly defined to refer to one over the other
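A minimal Python sketch of that mean/median gap, on a made-up skewed sample (the numbers are hypothetical):

import statistics

# hypothetical skewed sample: one large value pulls the mean well above the median
incomes = [30_000, 32_000, 35_000, 38_000, 40_000, 250_000]

print(statistics.mean(incomes))    # 70833.33... (arithmetic mean)
print(statistics.median(incomes))  # 36500.0 (middle value)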
Funny how people try to come up with a bunch of theories for why ChatGPT is more popular than Claude, when it's basically just first mover advantage. No, it's not because ChatGPT has a black & white UI while Claude doesn't lol
Is there any merit at all to the theory that OpenAI and Anthropic downgrade models exposed via the API before they touch the web-app models under high load? I had a bad experience with Claude via their API months ago, but happy to be wrong here
surprisingly a good chunk of ai users don't really use apis
especially apparent with Claude (lower limits)
"im hitting my message limit" "im paying $20/mo this is unacceptable" "id pay for more usage"
bring up the api and they either don't know how or say it's too expensive
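A minimal sketch of what going through the API looks like, assuming the official anthropic Python SDK and an API key in the environment; the model name and prompt here are placeholders:

import anthropic

# assumes ANTHROPIC_API_KEY is set in the environment
client = anthropic.Anthropic()

# placeholder model name; billed per input/output token rather than a flat $20/mo
message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=512,
    messages=[{"role": "user", "content": "summarize this thread for me"}],
)
print(message.content[0].text)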
DeepMind is clearly the front runner in AI for science. Unlike in LLMs, it seems like no other lab is even close
Did not know that the USCIS is based
USCIS, regarding my EB-1 US visa application, referred to Y Combinator as “a technology bootcamp” with “no evidence of outstanding achievements”
This is actually based
love when gpt utilizes web search to do some sloppy RAG and regurgitate google slop instead of doing what it was made to do
Why are people like “oh it doesn’t matter that scaling laws for pretraining are over we’ll just scale post-training and test-time compute instead” like it’s a given that this will scale as well as pretraining did
Few understand what high quality content you can find on Facebook these days. 250k likes.
training on a petabyte scale dataset when you don't even own the data source is harder than i thought
o1 feels less personal than the classical LLMs, especially compared to claude
when i ask o1 something, i feel like it thinks that it's some kind of test and that it is being evaluated. it feels that it is under scrutiny and must perform! makes the conversation feel less natural