Christine Herlihy
@crherlihy · PhD candidate @umdcs 👩🏻‍💻 | ML/AI, RL | sequential decision-making | algorithmic fairness | knowledge representation & reasoning | healthcare ✨ she/her
📌 @Pinterest's internship app cycle is underway! 🙌 Our Inclusive AI team is looking for CS MS/PhD students interested in ✨rec sys x alg. fairness✨e.g., generative model evaluation, alignment, multi-obj. optimization, adversarial robustness, multimodal learning/reasoning.
✨Excited to be @ #UAI2024! Here to share work w/ @ProfJenNeville, Tobias Schnabel, & Adith Swaminathan: 📄arxiv.org/abs/2406.01633 We study query underspecification in chat-based rec sys & propose interventions that increase E[util] by encouraging clarification when warranted.
pov: you're a final-year PhD student being haunted by the Asana progress tracker you made in first year 😅👻
In the spirit of quantified suffering for a good cause😂, I'm training for a half-marathon coming up…surprisingly soon 😨. I'm🏃🏻♀️to raise💸for St. Jude Children's Research Hospital, which supports childhood cancer patients🧸Plz consider donating 🙌: rb.gy/3xmi5o
If you’re ever wondering: should I buy my dog a flower bouquet toy?, the answer is yes 💐✨proof below 💕🥺 (Note that she has since deconstructed it, so now when she plays fetch it’s one flower at a time 😂🌹)
#5: we use the natural training cutoff to show evidence of contamination in LLMs on longitudinal benchmarks for codegen 📰 arxiv.org/abs/2310.10628 🏟️ @ICBINBWorkshop (contributed talk 🎉) with @manleyhroberts, Himanshu, @crherlihy, @crwhite_ml
> natural experiment 🤝 LLM evaluation ftw? 📄arxiv.org/pdf/2310.10628… joint work w/ a great team ~ Manley Roberts, Himanshu Thakur, @crwhite_ml, and @SpamuelDooley 🙌
🎉 our new paper uses a different methodology to study data contamination in LLMs through the lens of time! ⏳we see drastic performance differences before and after training cutoffs in GPT-3.5/4 on benchmarks which evolve over time. 📕arxiv.org/pdf/2310.10628