Zhehao Zhang (Seek PhD position 25 Fall)
@Zhehao_Zhang123Graduate student at @Dartmouthcs ; Visiting Research Intern @SALT_NLP; Prev. Research Intern @adobe @MSFTResearch; NLP&ML #NLProc
Nice survey paper presents a unified taxonomy bridging personalized text generation and downstream applications 🎯 Current research on LLM personalization is fragmented into two disconnected areas: direct personalized text generation and downstream task personalization. This…
Personally I think planning is the biggest bottleneck for language agents. So I'm super excited to introduce model-based planning, a new planning paradigm for LLM-based language agents––better than ReAct while safer and faster than tree search. ReAct-style planning is easy to…
❓Wondering how to scale inference-time compute with advanced planning for language agents? 🙋♂️Short answer: Using your LLM as a world model 💡More detailed answer: Using GPT-4o to predict the outcome of actions on a website can deliver strong performance with improved safety and…
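The idea in the two posts above (model-based planning: simulate candidate actions in an LLM world model before committing to any of them) can be sketched roughly as follows. This is a toy illustration under stated assumptions, not the paper's actual implementation: the function names (`predict_outcome`, `score_state`, `plan_one_step`) are hypothetical, and the real system would back the first two with LLM calls (e.g. GPT-4o) rather than the string stubs used here.

```python
# Toy sketch of model-based planning for a web agent:
# before executing an action, ask a world model to predict the outcome,
# score the predicted states against the goal, and commit only to the
# best-scoring action. Simulation never touches the real website, which
# is the safety argument versus act-then-observe loops like ReAct.
# predict_outcome / score_state are stand-ins for LLM calls.

def predict_outcome(state: str, action: str) -> str:
    """Stand-in for an LLM world-model call that predicts the next
    page state from the current state and a candidate action."""
    return f"{state} -> {action}"

def score_state(predicted_state: str, goal: str) -> float:
    """Stand-in for an LLM value estimate: higher when the predicted
    state looks closer to the goal."""
    return float(goal in predicted_state)

def plan_one_step(state: str, candidate_actions: list[str], goal: str) -> str:
    """Simulate every candidate action in the world model and return
    the one whose predicted outcome scores highest."""
    return max(
        candidate_actions,
        key=lambda a: score_state(predict_outcome(state, a), goal),
    )

best = plan_one_step(
    state="search results page",
    candidate_actions=["click ad", "click checkout", "go back"],
    goal="checkout",
)
print(best)
```

Unlike tree search, this only rolls each candidate forward one simulated step before choosing, which is the speed argument in the post; unlike ReAct, no action is executed just to see what happens.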
Great article about our newest @stanfordnlp faculty member @Diyi_Yang in @Stanford Report: “I am passionate about developing a future where humans and AIs can collaborate to achieve greater collective intelligence in a variety of contexts, education, healthcare, & the workplace”
Thanks so much, Leshem, for sharing our work! Let's dive into the great potential of personalized LLMs!
LLM personalization is difficult 🥵 The whole point is that it's a Large LM, so we can't train one for each of us. So what did people do? Criteria, data, prompts, techniques... A super comprehensive survey on current personalization: alphaxiv.org/abs/2411.00027
Personalization of LLMs: A Survey Presents a comprehensive framework for understanding personalized LLMs. Introduces taxonomies for different aspects of personalization and unifies existing research across personalized text generation and downstream applications.
Like Sketching? 🤩 Using #Sketch2Code, VLMs can aid the conversion of rudimentary sketches into webpage prototypes!
Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping @RyanLi0802 @Diyi_Yang Paper: arxiv.org/abs/2410.16232 Project Page: salt-nlp.github.io/Sketch2Code-Pr… Demo: sketch2code-demo.streamlit.app
1. Compilation of advice (by @shaily99): github.com/shaily99/advice 2. Application fee waivers (by @KaiserWhoLearns): github.com/KaiserWhoLearn… 3. SoP samples (collected by @zhaofeng_wu @alexisjross @shannonzshen): cs-sop.org [2/3]
DARG has been accepted at NeurIPS 2024!! Thanks so much to the best mentor @Diyi_Yang and for the support from @jiaao_chen. See you guys in Vancouver!
🎉 New Paper Alert! 🎉 Are you tired of seeing ever-increasing results on common benchmarks and questioning if LLMs truly have such abilities? We've got something exciting for you! 📢 Introducing DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graphs.
Language models today are (1) widely used in personalized contexts and (2) to build systems that interface with tools. Do they respect privacy when helping with daily tasks like emailing? Introducing PrivacyLens to evaluate if LMs know privacy norms in action at inference time!