Robin Jia
@robinomial
Assistant Professor @CSatUSC | Previously Visiting Researcher @facebookai | Stanford CS PhD @StanfordNLP
Any questions on generalisation that you feel need discussing? Ask them at one of our exciting invited talks by @najoungkim, @kylelostat and @sameer_, or share them so we can ask them at the @GenBench panel!
I’ll be on the GenBench panel this afternoon at 4pm! Please send in your questions!
Not EMNLP'd out yet? Join the @GenBench workshop on generalisation in NLP today! 🤩 genbench.org/workshop/ Location: Brickell
Come say hi! #EMNLP2024 this week, featuring research by @CSatUSC researchers @swabhz @robinomial @_jessethomason_ @xiangrenNLP @jaspreetranjit_ and more!✨ @USCViterbi @USCAdvComputing
My "looking for a postdoc" stickers and I are in Miami for #EMNLP2024! 🤩 Do you have/know of a postdoc for summer/autumn '25 related to interpretability, figlang and/or memorisation (vs generalisation)? Reach out! Looking forward to #GenBench2024 on Saturday and the many many...
How can LLMs 🤖 find all 𝙚𝙨𝙨𝙚𝙣𝙩𝙞𝙖𝙡 facts to write about a topic? 🤔 In our work, we leverage the planning capabilities of LLMs to guide retrieval of fine-grained facts for improved grounding of responses. See our #EMNLP2024 paper: aclanthology.org/2024.emnlp-mai… [1/N]
Drop by our poster tomorrow!! @emnlpmeeting #EMNLP2024 Nov 12 (Tue) at 11:00-12:30 Session: 02 Sub-session: Generation Looking forward to chatting with everyone!
I will be presenting our LLM Interp. paper at #EMNLP2024 in Miami! 🗞️When Parts are Greater Than Sums: Individual LLM Components Can Outperform Full Models 🗓️Nov 12 Tue 14:00-15:30 (Jasmine) terarachang.github.io/projects/llm-d…
In Miami for #EMNLP2024! Come check out our findings poster, Weak-to-Strong Reasoning, on Wednesday at 10:30am. Super excited for my first in-person conference. Looking forward to connecting and chatting about reasoning, hallucination, self-correction, and all things LLMs! 🌴🌴
Super excited about our new workshop on LLM memorization at ACL '25! Stay tuned :)
Our new workshop on Large Language Model Memorization will debut at ACL 2025 🎉 See you in Vienna!!
Thrilled about the fact that L2M2 got accepted as a #ACL2025 workshop!! Stay tuned for more info & I hope to see you in Vienna 🥳
🎉 Happy to announce that the L2M2 workshop has been accepted at @aclmeeting! #NLProc #ACL2025 More details will follow soon. Stay tuned and spread the word! 📣
Join us at USC's Thomas Lord Department of Computer Science! We’re hiring associate and full professors in all areas of computer science. Apply now: cs.usc.edu/about/open-fac… Please share with your communities! @USCViterbi @USCAdvComputing
For this week’s NLP Seminar, we are thrilled to host @jieyuzhao11 to talk about Building Accountable NLP Models for Social Good! When: 10/24 Thurs 11am PT Non-Stanford affiliates registration form (closed at 9am PT on the talk day): forms.gle/JbuJ1DiUuQp1sX…
Thanks for sharing our latest work on token-level reward models (TLDR) for multimodal models. Paper is out here: arxiv.org/abs/2410.04734
The Token-Level Detective Reward (TLDR) model, instead of giving one score for the whole text, provides fine-grained feedback at the level of each token for Vision-Language Models (VLMs). This development by @AIatMeta and @USC enhances error diagnosis and self-correction. Let's see how it works: