
May Fung

@May_F1_

CSE AP @HKUST | Affiliated Appointment @MIT 🦫 (upcoming)

Similar Users

Jie Huang (@jefffhj)
Ning Ding (@stingning)
Ningyu Zhang@ZJU (@zxlzr)
Heng Ji (@hengjinlp)
Xuandong Zhao (@xuandongzhao)
Weijia Shi (@WeijiaShi2)
Shangbin Feng (@shangbinfeng)
Manling Li (@ManlingLi_)
Shizhe Diao@EMNLP2024 (@shizhediao)
Ziniu Hu (@acbuller)
Tao Yu (@taoyds)
Yu Zhang @ EMNLP 2024 (@yuz9yuz)
Freda Shi (@fredahshi)
Li Dong (@donglixp)
Hanjie Chen (@hanjie_chen)

Pinned

How can we better unlock LLM reasoning ability? How does code training steer LLMs to produce structured intermediate steps & self-improve? New Year, New Paper 🚀 Check out our systematic survey: How Code Empowers LLMs to Serve as Intelligent Agents arxiv.org/abs/2401.00812

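For readers unfamiliar with the pattern the survey examines, here is a minimal sketch of code as intermediate reasoning (program-aided prompting): the model writes a small program as its reasoning trace, and executing it yields a checkable answer. `generate` is a hypothetical stand-in for any LLM text-completion call, not something from the paper.

```python
# Minimal sketch of program-aided prompting, assuming a `generate(prompt) -> str`
# callable that returns Python source code from an LLM.

def solve_with_code(question: str, generate) -> str:
    prompt = (
        "Write only a Python function solve() that returns the final answer.\n"
        f"Question: {question}\n"
    )
    program = generate(prompt)        # the program itself is the structured intermediate step
    namespace: dict = {}
    exec(program, namespace)          # execute the model's reasoning (sandbox this in practice)
    return str(namespace["solve"]())  # the executed result is the answer
```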

May Fung Reposted

UIUC-NLP @ EMNLP2024 - it was especially wonderful to see all alums!


Arrived in Miami for #EMNLP2024! 🌴🤩 Excited to present our recent work, 🎆MACAROON🎆 (self-iMaginAtion for ContrAstive pReference OptimizatiON), for enhancing LVLM knowledge boundary awareness and personalization while mitigating hallucination. Data & Code:…


Is your Vision-Language Model really helpful at all times? Can we instruct it to interact with users during conversations to avoid hallucinations or biased responses? 🍰Take some bites of PIE and MACAROON! We present a benchmark to evaluate LVLMs’ proactive engagement…

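As a rough illustration of the contrastive-preference idea behind MACAROON (the field names and example below are mine, not the released data schema): for an under-specified visual question, a response that proactively asks for clarification is preferred over a confident guess, and such pairs can feed a DPO-style preference-optimization objective.

```python
# Illustrative preference pair for proactive-engagement training; values are invented.
preference_pair = {
    "image": "imgs/blurry_street.jpg",
    "question": "What brand is the car on the left?",
    "chosen":   "The image is too blurry to read the badge. Could you share a closer view?",
    "rejected": "It is a red Toyota Corolla.",  # confident guess despite missing evidence
}
```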


May Fung Reposted

Working Talk: Speculations on Test-Time Scaling
Slides: github.com/srush/awesome-…
(A lot of smart folks are thinking about this topic, so I would love any feedback.)
Public Lecture (tomorrow): simons.berkeley.edu/events/specula…


May Fung Reposted

Join our workshop on AI for research! With invited speakers @hengjinlp @_DougDowney @jeffclune @marinkazitnik @WeiWang1973 @aviadlevis

🚀 [CFP] Join Us at AAAI 2025 for the Second AI4Research Workshop! 🧪 Are you exploring cutting-edge research in an AI-assisted scientific research lifecycle? Do you want to uncover potential "hidden gems" and "sleeping beauties" in scientific literature? Our full-day workshop…



May Fung Reposted

Our 75-page survey paper on Tool Learning with Foundation Models, led by Dr. Yujia Qin @TsingYoga, has been accepted by ACM Computing Surveys: arxiv.org/pdf/2304.08354


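As a generic illustration of the tool-learning loop such surveys study (not code from the paper): the model either emits a tool call, which the system executes and feeds back as an observation, or a final answer. The `CALL` protocol, the `TOOLS` registry, and the `generate` callable are all assumptions made for this sketch.

```python
# Sketch of a tool-use loop, assuming generate(transcript) -> str returns either
# 'CALL {"tool": ..., "input": ...}' or a plain-text final answer.
import json

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool; a real system would sandbox this
}

def answer_with_tools(question: str, generate, max_steps: int = 3) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = generate(transcript)              # model decides: call a tool or answer
        if step.startswith("CALL "):
            call = json.loads(step[len("CALL "):])
            observation = TOOLS[call["tool"]](call["input"])
            transcript += f"{step}\nObservation: {observation}\n"
        else:
            return step                          # plain text is treated as the final answer
    return step
```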

May Fung Reposted

🚀 [CFP] Join Us at AAAI 2025 for the Second AI4Research Workshop! 🧪 Dive into the breakthroughs in the AI-assisted research lifecycle. 🗓️Submission Deadline: Nov 24 @RealAAAI #callforpapers #aaai2025 #ai4research Submit your work now! twtr.to/_sqcK


May Fung Reposted

🚀 Introducing Personalized Visual Instruction Tuning (PVIT)! Can your MLLM recognize you? We propose a novel formulation and a data construction framework to create MLLMs that conduct personalized dialogues. 📄 Paper: arxiv.org/pdf/2410.07113 💻 Code: github.com/sterzhang/PVIT

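A hedged sketch of the kind of training sample a personalized visual instruction tuning pipeline might construct: a reference photo plus a name (the personal concept) paired with a scene image and a dialogue that refers to the person by name. Field names and the `<ref>`/`<scene>` tags are illustrative, not the paper's actual schema.

```python
# Illustrative personalized instruction-tuning sample; all fields are assumptions.
from dataclasses import dataclass

@dataclass
class PersonalizedSample:
    concept_name: str     # e.g. "May"
    reference_image: str  # path to a crop of the person
    scene_image: str      # image the dialogue is about
    conversation: list    # alternating user/assistant turns that mention concept_name

sample = PersonalizedSample(
    concept_name="May",
    reference_image="refs/may.jpg",
    scene_image="scenes/party.jpg",
    conversation=[
        {"role": "user", "content": "<ref> This is May. <scene> What is May doing here?"},
        {"role": "assistant", "content": "May is standing by the table, cutting the cake."},
    ],
)
```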

May Fung Reposted

🧐Can we create a navigational agent that can handle thousands of new objects across a wide range of scenes? 🚀 We introduce DivScene bench and NatVLM. DivScene contains houses of 81 types and thousands of target objects. NatVLM is an end-to-end agent based on a Large Vision…


May Fung Reposted

Follow us @hkustNLP 😁


May Fung Reposted

🚀 Launching the Second AI4Research Workshop at AAAI 2025 @RealAAAI! Dive into interdisciplinary collaboration for breakthroughs across the AI-assisted research lifecycle. Submit your research by Nov. 22! twtr.to/jA1sL


May Fung Reposted

Fall is here - Crisp air🌫️, falling leaves🍂, and our new preprint🚨 are all coming together! LLMs can be your best companions that truly know what you want - We train LLMs to ‘interact to align’, essentially cultivating the meta-skill of LLMs to implicitly infer the unspoken…


We're open to industry sponsorships as well (shoot us an email)! Let's grow the AI4Research research community together~ 🌟





May Fung Reposted

🎮 Check out the live demo for LM-Steer “Word Embeddings Are Steers for LMs” (Outstanding Paper, ACL 2024) at huggingface.co/spaces/Glacioh…, which can:
1. 🕹️ Steer model generation
2. 🔬 Discover word embedding dimensions
3. 📊 Profile sentences & identify keywords
#ACL2024 #LLMs #Science4LM


🎖 Excited that "LM-Steer: Word Embeddings Are Steers for Language Models" is another of my first-authored Outstanding Papers at #ACL2024 (besides LM-Infinite). We revealed the steering role of word embeddings for continuous, compositional, efficient, interpretable & transferable control!

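A toy sketch of the steering idea described above, assuming the usual formulation of computing logits under linearly perturbed output word embeddings; tensor names and the exact parameterization are illustrative rather than the released LM-Steer code.

```python
# Toy illustration: steer generation by nudging the output embedding matrix along
# learned directions before computing logits.
import torch

def steered_logits(h: torch.Tensor, W: torch.Tensor, P: torch.Tensor, eps: float) -> torch.Tensor:
    """h: [batch, d] hidden states; W: [vocab, d] output word embeddings; P: [d, d] steering matrix."""
    W_steered = W + eps * (W @ P)  # perturb every word embedding; eps sets steering strength/sign
    return h @ W_steered.T         # [batch, vocab] logits under the steered embeddings
```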

