
Juil Koo

@63_days

PhD student @ KAIST CS | Research Intern @Adobe | SDE, 3D Geometry, Vision-Language

Similar Users

Minghua Liu (@MinghuaLiu_) · Kunho Kim (@kunho_kim_) · Minhyuk Sung (@MinhyukSung) · Seungwoo Yoo (@USeungwoo0115) · Phillip (Yuseung) Lee (@yuseungleee) · Jiahui (@JiahuiLei1998) · Jihyun Lee (@jyun_leee) · Mikaela Angelina Uy (@mikacuy) · Zhenyu Jiang (@SteveTod1998) · Kenny Jones (@RKennyJones)

Juil Koo Reposted

🌟 Introducing GrounDiT, accepted to #NeurIPS2024! "GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation" We offer **precise** spatial control for DiT-based T2I generation. 📌 Paper: arxiv.org/abs/2410.20474 📌 Project Page: groundit-diffusion.github.io [1/n]

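The mechanism the GrounDiT title names can be pictured with a toy sketch: a per-object branch is denoised with its own prompt, and its noisy latent patch is transplanted into the main latent at the grounded bounding box. Everything below (function name, the nearest-neighbour resize, the 2D single-channel latent) is my own illustration, not the paper's code:

```python
import numpy as np

def transplant_patch(main_latent, object_latent, box):
    """Copy the noisy latent of a per-object denoising branch into the
    bounding-box region of the main latent (toy 2D analogue; channels
    and timestep bookkeeping omitted)."""
    y0, x0, y1, x1 = box
    h, w = y1 - y0, x1 - x0
    oh, ow = object_latent.shape
    # Nearest-neighbour resize of the object latent to the box size.
    ys = np.arange(h) * oh // h
    xs = np.arange(w) * ow // w
    patch = object_latent[np.ix_(ys, xs)]
    out = main_latent.copy()
    out[y0:y1, x0:x1] = patch  # transplant into the grounded region
    return out

main = np.zeros((16, 16))   # stand-in for the main noisy latent
obj = np.ones((8, 8))       # stand-in for the object-branch latent
result = transplant_patch(main, obj, (4, 4, 12, 12))
```

In the actual method this exchange happens repeatedly during sampling, so the main and object branches stay consistent rather than being pasted together once.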

Juil Koo Reposted

🔍 Our KAIST Visual AI group is seeking undergraduate interns to join us this winter. Topics include generative models for visual data, diffusion models/flow-based models, LLMs/VLMs, 3D/geometry, neural rendering, AI for science, and more. 🌐 Web: visualai.kaist.ac.kr/internship/


Juil Koo Reposted

🎉 Excited to share that our work "SyncTweedies: A General Generative Framework Based on Synchronized Diffusions" has been accepted to NeurIPS 2024. Paper: arxiv.org/abs/2403.14370 Project page: synctweedies.github.io Code: github.com/KAIST-Visual-A… [1/8] #neurips2024 #NeurIPS


Juil Koo Reposted

🌟 Excited to present our paper "ReGround: Improving Textual and Spatial Grounding at No Cost" at #ECCV2024! 🗓️ Oct 3, Thu. 10:30 AM - 12:30 PM ⛳ Poster #104 ✔️ Website: re-ground.github.io ✔️ Slides: shorturl.at/toeB4 (from U&ME Workshop) Details in thread 🧵(1/N)


Juil Koo Reposted

🎉I'm thrilled that my first-authored paper, “PartSTAD: 2D-to-3D Part Segmentation Task Adaptation”, will be presented at #ECCV2024! If you are interested, come visit Poster #63 at the morning session tomorrow 10/1 (Tue) 10:30am-12:30pm! Project Page: partstad.github.io


Juil Koo Reposted

Andy Warhol. Michelangelo. Rembrandt. They all had assistants. Let AI be your #Blender technician so that you can do more as the artist. Peek into this future on Tuesday at our BlenderAlchemy poster if you’re at #ECCV2024! youtube.com/watch?v=Uof4Ok… Work done @StanfordAILab


Juil Koo Reposted

🚀 Our SIGGRAPH 2024 course on "Diffusion Models for Visual Computing" introduces diffusion models from the basics to applications. Check out the website now. geometry.cs.ucl.ac.uk/courses/diffus… w/ Niloy Mitra, @guerrera_desesp, @OPatashnik, @DanielCohenOr1, Paul Guerrero, and @paulchhuang


Combining multiple denoising processes unlocks new possibilities for 2D diffusion models in mesh texturing, 360° panorama generation, and more. Then, what's the best way to "synchronize" these different processes? Check out our paper for details! The code is now available too!

🚀 Code for SyncTweedies is out! Code: github.com/KAIST-Visual-A… SyncTweedies generates diverse visual content, including ambiguous images, panorama images, 3D mesh textures, and 3DGS textures. Joint work with @63_days @KyeongminYeo @MinhyukSung .
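The synchronization idea can be sketched as averaging the denoised (Tweedie) estimates of several diffusion processes on a shared canonical canvas, then reading the consensus back into each view. A minimal 1D stand-in, where plain index maps replace the paper's actual projections (e.g. UV-unwrapping for mesh texturing); names are mine:

```python
import numpy as np

def synchronize(x0_preds, index_maps, canvas_size):
    """One synchronization step: average each process's denoised x0
    estimate on a shared canonical canvas, then map the consensus back
    to every instance view. index_maps[k][i] is the canonical pixel
    covered by pixel i of view k."""
    total = np.zeros(canvas_size)
    count = np.zeros(canvas_size)
    for x0, idx in zip(x0_preds, index_maps):
        np.add.at(total, idx, x0)   # accumulate overlapping estimates
        np.add.at(count, idx, 1)
    canvas = total / np.maximum(count, 1)
    return [canvas[idx] for idx in index_maps]

# Two 1D "views" overlapping at canonical pixel 2.
x0_views = [np.array([1., 1., 1.]), np.array([3., 3., 3.])]
maps = [np.array([0, 1, 2]), np.array([2, 3, 4])]
synced = synchronize(x0_views, maps, canvas_size=5)
```

In the overlap the two views disagree (1 vs. 3), and synchronization replaces both with their average, which is what keeps the per-view denoising processes from drifting apart.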



Juil Koo Reposted

🚀 Excited to introduce our latest research paper: “DreamCatalyst: Fast and High-Quality 3D Editing via Controlling Editability and Identity Preservation”! 🌟 ⚡ Fast Mode: Within 25 minutes 💎 High-Quality Mode: Within 70 min dream-catalyst.github.io arxiv.org/abs/2407.11394


Juil Koo Reposted

🥳ReGround is accepted to #ECCV2024! 📌 re-ground.github.io Crucial text conditions are often dropped in layout-to-image generation. 🔑 We show a simple rewiring of attention modules in GLIGEN leads to improved prompt adherence! Joint work w/ @MinhyukSung


Improving layout-to-image #diffusion models with **no additional cost!** ReGround: Improving Textual and Spatial Grounding at No Cost 📌 re-ground.github.io 🖌️ We show that a simple **rewiring** of #attention modules can resolve the description omission issues in GLIGEN.



Juil Koo Reposted

I presented at the 2nd Workshop on Compositional 3D Vision (C3DV) at CVPR 2024, where I introduced our recent work on 3D object compositionality. Check out the slides at the link below. Slides: 1drv.ms/b/s!AoICZLYjIF…


#CVPR2024 "Posterior Distillation Sampling (PDS)" Looking for an alternative to SDS for editing? 🖼️ Come to Poster 358 "tomorrow" morning (Thu, 10:30am~noon)! 👉Project page: …erior-distillation-sampling.github.io


Juil Koo Reposted

Posterior Distillation Sampling (PDS) takes a step further from Score Distillation Sampling (SDS), enabling "editing" of NeRFs, Gaussian splats, SVGs, and more. Join our poster at #CVPR2024 on Thursday morning. 📌 Poster #358 📅 Thu 10:30 a.m. - noon 🌐 …erior-distillation-sampling.github.io

What would be the best optimization method for editing NeRFs, 3D Gaussian Splats and SVGs using 2D diffusion models? 🤔 We present Posterior Distillation Sampling (PDS) at #CVPR2024, a novel optimization method designed for diverse visual content editing. 1/N
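A schematic contrast between SDS and a PDS-style objective, heavily simplified: SDS pushes the render so the model's noise prediction matches the injected noise, while PDS instead matches quantities computed for the edited target against the same quantities computed for the source, so shared terms cancel and the source's identity is preserved. The paper's full formula works with the stochastic latents of the diffusion posterior and includes terms omitted here; function names are illustrative:

```python
import numpy as np

def sds_grad(eps_pred, eps, w_t=1.0):
    """Score Distillation Sampling (simplified): gradient direction is
    the gap between the model's noise prediction and the injected noise."""
    return w_t * (eps_pred - eps)

def pds_style_grad(resid_target, resid_source, w_t=1.0):
    """PDS-style objective (schematic): compare the *same* residual
    computed for the edited target and for the source content, so the
    gradient only reflects what the edit changes."""
    return w_t * (resid_target - resid_source)

g_sds = sds_grad(np.array([1.0]), np.array([0.0]))
g_pds = pds_style_grad(np.array([1.0]), np.array([1.0]))
```

When target and source residuals agree, the PDS-style gradient vanishes, which is the intuition behind identity preservation during editing.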



Juil Koo Reposted

(1/N) CFG requires high guidance (>5) to "work", but comes with several issues 🤦‍♂️: reduced diversity, saturation, poor invertibility. Is this inevitable? 🤔 Presenting CFG++,🚀 a simple fix enabling small guidance: better sample quality + invertibility, smooth trajectory 🤟

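For context, the vanilla classifier-free guidance combine that the tweet critiques looks like the sketch below; at high scale w it extrapolates far past the conditional prediction, which is where the saturation and diversity issues come from. CFG++'s actual fix, which changes how the sample is renoised so that a small scale suffices, is in the paper, not in this sketch:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, w):
    """Standard classifier-free guidance: extrapolate from the
    unconditional noise prediction toward the conditional one by
    guidance scale w. w > 1 amplifies the conditional direction."""
    return eps_uncond + w * (eps_cond - eps_uncond)

guided = cfg_combine(np.array([0.0]), np.array([1.0]), w=7.5)
```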
