Joanna (@materzynska)

Ph.D. student at MIT

Joined December 2016
Similar Users

Visual Geometry Group (VGG) - @Oxford_VGG
Elliott / Shangzhe Wu - @elliottszwu
Yonglong Tian - @YonglongT
Jiajun Wu - @jiajunwu_cs
Oxford Torr Vision Group - @OxfordTVG
Yuki - @y_m_asano
Weidi Xie - @WeidiXie
Rohit Girdhar - @_rohitgirdhar_
Despoina Paschalidou - @paschalidoud_1
Wei-Chiu Ma - @weichiuma
Hang Zhao - @zhaohang0124
Tomas Jakab - @JakabTomas
Max Bain - @maxhbain
Vicky Kalogeiton - @VickyKalogeiton
Michael Niemeyer - @Mi_Niemeyer

Joanna Reposted

Thrilled to join @DartmouthCS as Assistant Professor in Jan 2025! I’m seeking 1-2 PhD students to join in Fall 2025. Application is by December 15th; please feel free to reach out with any questions. More details here: dartgo.org/ns-phd-apps-20…


Joanna Reposted

RT to PhD applicants... If you are a biologist who wants to extract knowledge from RNA/Protein LMs or AlphaFold, and you are considering a PhD studying mechanistic interpretability of these AI models, then apply to baulab.info. Applications: khoury.northeastern.edu/apply/phd-appl…

An earnest question for protein folding folks. Is alphafold an impenetrable black box, or do scientists think it reveals new science in its predictions? Have biologists distilled new general principles of protein chemistry from the ML model? Or do we just use it as a black box?



Come hang out!

Come and hear from domain experts on video generation from Berkeley, Google DeepMind, OpenAI, RunwayML, Nvidia, and the Weizmann Institute of Science at our workshop on Saturday! See below and our website (sites.google.com/corp/view/cvgi…) for the finalised schedule 🚀🚀 @icmlconf




Joanna Reposted

Please retweet! @ndif_team needs your help to make large-scale AI research accessible. If you join the pilot for ndif.us, we will help you run your #llama3 405b experiments on it. Apply for #NDIF pilot program access by July 30: ndif.us/405b.htm


Excited to be back in the city of my alma mater attending @icmlconf! Join us this Saturday for our "Text, Camera, Action! Frontiers in Controllable Video Generation" workshop. Let's chat about diffusion models, interpretability, and AI safety! DM me if you want to connect!


Joanna Reposted

✨New Preprint ✨ How are shifting norms on the web impacting AI? We find: 📉 A rapid decline in the consenting data commons (the web) ⚖️ Differing access to data by company, due to crawling restrictions (e.g.🔻26% OpenAI, 🔻13% Anthropic) ⛔️ Robots.txt preference protocols…


Check out my Jonathon’s (❤️) talk tomorrow at the XRNeRF workshop

If you’re at @CVPR #CVPR tomorrow (Tuesday) I will be giving a talk on Dynamic 3D Gaussians + SplaTAM (see 👇) as part of the XRNeRF workshop (sites.google.com/view/xrnerf/). Talk is at 9.30am in room Summit 332. The other talks at the workshop also seem super cool! Def check it out



Joanna Reposted

Clarification: we accept papers from 4-8 pages!


Joanna Reposted

We have decided to extend our deadline to 𝟒𝐭𝐡 𝐨𝐟 𝐉𝐮𝐧𝐞! If you have a piece of work on controllable video generation that you want ✨the world ✨ to see, turn it into a 4-pager and submit to our workshop 👩‍💻🧑‍💻 would love to see you at ICML2024! @icmlconf


Consider submitting to our workshop on controllable video generation!

We are pleased to announce the first *controllable video generation* workshop at @icmlconf 2024! 📽️📽️📽️ We welcome submissions that explore video generation via different modes of control (e.g. text, pose, action). Deadline: 31st May AOE Website: sites.google.com/corp/view/cvgi…




Working on controllable video generation? Consider submitting to our workshop at @icmlconf in Vienna!



Mindblown

Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. openai.com/sora Prompt: “Beautiful, snowy…





Come talk to @TamarRottShaham @cogconfluence and me about benchmarks for interpretability at poster 1620 🎉🎊

Happening in 30 minutes!



Joanna Reposted


Excited to present our work about Automated Interpretability Agents and the FIND benchmark arxiv.org/abs/2309.03886 tomorrow at #NeurIPS23, 10:45 poster #1620, with @cogconfluence


