Greg Durrett

@gregd_nlp

CS professor at UT Austin. Large language models and NLP. he/him

Similar Users

Yejin Choi @YejinChoinka · Jacob Andreas @jacobandreas · UW NLP @uwnlp · EdinburghNLP @EdinburghNLP · Hanna Hajishirzi @HannaHajishirzi · Luke Zettlemoyer @LukeZettlemoyer · Mohit Bansal @mohitban47 · Sewon Min @sewon__min · Mohit Iyyer @MohitIyyer · Maarten Sap @MaartenSap · Kai-Wei Chang @kaiwei_chang · Sebastian Riedel @riedelcastro · Sean (Xiang) Ren @xiangrenNLP · Wei Xu @cocoweixu · Suchin Gururangan @ssgrn

Pinned

📣 Today we launched an overhauled NLP course to 600 students in the online MS programs at UT Austin. 98 YouTube videos 🎥 + readings 📖 open to all! cs.utexas.edu/~gdurrett/cour… w/ 5 hours of new 🎥 on LLMs, RLHF, chain-of-thought, etc! Meme trailer 🎬 youtu.be/DcB6ZPReeuU 🧵


Greg Durrett Reposted

Very happy about the news that our paper "Which questions should I answer? Salience Prediction of Inquisitive Questions" received an Outstanding Paper Award at EMNLP 2024. Congratulations to Yating, Ritika, and the whole team. #EMNLP2024

Two awards for UT Austin papers! Salience prediction of inquisitive questions, by @YatingWu96, @ritikarmangla, @AlexGDimakis, me, and @jessyjli; and learning AANNs and insights about grammatical generalization in pre-training, by @kanishkamisra & @kmahowald. Congrats to all the awardees!

Announcing the 20 **Outstanding Papers** for #EMNLP2024



Greg Durrett Reposted

New short course: Safe and Reliable AI via Guardrails! Learn to create production-ready, reliable LLM applications with guardrails in this new course, built in collaboration with @guardrails_ai and taught by its CEO and co-founder, @ShreyaR. I see many companies worry about the…


Greg Durrett Reposted

Excited to share ✨ Contextualized Evaluations ✨! Benchmarks like Chatbot Arena contain underspecified queries, which can lead to arbitrary eval judgments. What happens if we provide evaluators with context (e.g., who's the user, what's their intent) when judging LM outputs? 🧵↓

Greg Durrett Reposted

Excited for #EMNLP2024! Check out work from my students and collaborators that will be presented: jessyli.com/emnlp2024

Greg Durrett Reposted

My lab at Duke has multiple Ph.D. openings! Our mission is to augment human decision-making by advancing the reasoning, comprehension, and autonomy of modern AI systems. I am attending #emnlp2024 — happy to chat about Ph.D. applications, LLM agents, evaluation, etc.!


Greg Durrett Reposted

On my way to #EMNLP2024! Excited to present: 1/ Summary of a Haystack (w/ @alexfabbri4 & @jasonwu0731) 2/ Mini-Check (led by @LiyanTang4 & w/ @gregd_nlp) 3/ Prompt Leakage (led by @divyansha2212) Let's chat about reading/writing, HCI, factuality, summarization!


Greg Durrett Reposted

How do language models organize concepts and their properties? Do they use taxonomies to infer new properties, or infer based on concept similarities? Apparently, both! 🌟 New paper with my fantastic collaborators @amuuueller and @kanishkamisra!

Greg Durrett Reposted

How much is a noisy image worth? 👀 We show that as long as a small set of high-quality images is available, noisy samples become extremely valuable, almost as valuable as clean ones. Buckle up for a thread about dataset design and the value of data 💰

Greg Durrett Reposted

Happy Election Day, Longhorns! At 8:30 a.m., the Union is reporting a wait time of over 51 minutes, and @TheLBJSchool is reporting a wait time of 0-20 minutes. Polling locations are open from 7 a.m. to 7 p.m. today. Voters must have a valid ID and be registered. If you are in…


Greg Durrett Reposted

Why and when do preference annotators disagree? And how do reward models + LLM-as-Judge evaluators handle disagreements? We explore both these questions in a ✨new preprint✨ from my @allen_ai internship! [1/6]

Greg Durrett Reposted

Introducing RARe: Retrieval Augmented Retrieval with In-Context Examples! 1/ Can retrieval models be trained to use in-context examples like LLMs? 🤔 Our preprint answers yes, showing up to +2.72% nDCG on open-domain retrieval benchmarks! 🧵 w/ @yoonsang_ @sujaysanghavi @eunsolc

Greg Durrett Reposted

Today is the last day of early voting! Eight locations are open tonight until 10 p.m. Visit VoteTravis.gov for all essential information to vote! #VoteEarly #VoteEasy

Greg Durrett Reposted

Our department @UT_Linguistics is hiring 2 new faculty in computational linguistics! NLP at UT is an absolutely lovely family so join us 🥰 apply.interfolio.com/158280

Like long-context LLMs & synthetic data? Lucy's work extends LLM context lengths on synthetic data and connects (1) improvements on long-context tasks; (2) emergence of retrieval heads in the LLM. We're excited about how mech interp insights can help make better training data!

1/ When does synthetic data help with long-context extension, and why? 🤖 While more realistic data usually helps, symbolic data can be surprisingly effective. 🔍 Effective synthetic data induces similar retrieval heads, but often only subsets of those learned on real data!


