Similar Users
@xmgnr
@lBattleRhino
@hedgedhog7
@RyanWatkins_
@gametheorizing
@PopcornKirby
@SplitCapital
@0xngmi
@z0age
@52kskew
@Derivatives_Ape
@noon_cares
@Fiskantes
@mattigags
@slurpxbt
"indie hacking" is literally a slop factory has a single person who calls what they are doing "indie hacking" produced something that isn't slop?
There is this cringe phenomenon right now on twitter where people state things that they don’t believe in, in the hope of Elon seeing it and engaging. The most obvious example is when they talk about grok together with SOTA models. Sometimes they even pretend that they use grok.
Great midwit signal when someone dunks by being proud of knowing the difference between mean and median, not realizing that average isn’t explicitly defined to refer to one over the other
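A quick sketch of the underlying point: the mean and the median are both "averages," and they only meaningfully disagree on skewed data. The numbers below are made up for illustration.

```python
# Illustrative only: mean vs. median on a skewed (hypothetical) dataset.
from statistics import mean, median

incomes = [30_000, 32_000, 35_000, 38_000, 1_000_000]  # one outlier skews the distribution

print(mean(incomes))    # 227000 -- dragged up by the outlier
print(median(incomes))  # 35000  -- middle value, unaffected by the outlier
```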
Funny how people try to come up with a bunch of theories about why ChatGPT is more popular than Claude, when it's basically just first mover advantage. No, it's not because ChatGPT has a black & white UI while Claude doesn't lol
Is there any merit at all to the theory that openai and anthropic downgrade models exposed via the API before they touch the models exposed via the web app under high load? I had a bad experience with Claude via their API months ago, but happy to be wrong here
surprisingly a good chunk of ai users don't really use apis. especially apparent with Claude (lower limits): "im hitting my message limit", "im paying $20/mo this is unacceptable", "id pay for more usage". bring up the api and they either don't know how or say it's too expensive
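For the "don't know how" crowd, a minimal sketch of going through the API instead of the web app, assuming the official anthropic Python SDK, an ANTHROPIC_API_KEY in the environment, and a placeholder model name (swap in whatever model you actually use).

```python
# Minimal sketch: one request to the Anthropic Messages API via the official SDK.
# pip install anthropic; ANTHROPIC_API_KEY must be set in the environment.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: substitute your preferred model
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this thread for me."}],
)
print(message.content[0].text)  # pay-per-token, no flat $20/mo message cap
```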
DeepMind is clearly the front runner in AI for science. Unlike in LLMs, it seems like no other lab is even close
Did not know that the USCIS is based
USCIS, regarding my EB-1 US visa application, referred to Y Combinator as “a technology bootcamp” with “no evidence of outstanding achievements”
This is actually based
love when gpt utilizes web search to do some sloppy RAG and regurgitate google slop instead of doing what it was made to do
Why are people like “oh it doesn’t matter that scaling laws for pretraining are over we’ll just scale post-training and test-time compute instead” like it’s a given that this will scale as well as pretraining did
Few understand what high quality content you can find on Facebook these days. 250k likes.
training on a petabyte scale dataset when you don't even own the data source is harder than i thought
o1 feels less personal than the classical LLMs, especially compared to claude. when i ask o1 something, i feel like it thinks that it is some kind of test and that it is being evaluated. it feels that it is under scrutiny and must perform! makes the conversation feel less natural