
Changyeon Kim

@cykim1006

PhD Student at KAIST w/ @jinwoos0417 and @kimin_le2

Similar Users

Dongkeun Yoon (@dongkeun_yoon)
Jinheon Baek (@jinheonbaek)
Younggyo Seo (@younggyoseo)
Sangmin Bae (@raymin0223)
Junsu Kim (@JunsuKim97)
Sungnyun Kim (@kim_sungnyun)
Hyeonmin Yun (@hyeonmin_Lona)
Jaehong Yoon (@jaeh0ng_yoon)
Seohong Park (@seohong_park)
Seonghyeon Ye (@SeonghyeonYe)
Seokeon Choi (@SeokeonC)
Hoyeon Chang (@hoyeon_chang)
Sangwoo Mo (@sangwoomo)
Chanwoo Park (@chanwoopark20)
Sihyun Yu (@sihyun_yu)

Pinned

🙌 At #NeurIPS to present my paper on Tue (12/12) 10:45AM (#1415): an imitation learning (IL) framework for generalization with VLM rewards. I am working on reward learning / RLHF / generalization. DM me if you want to chat 👀 Also looking for an internship/visiting position; recommendations welcome 🙏🏻


Excited to share Adaptive Return-conditioned Policy (ARP): a return-conditioned policy utilizing adaptive multimodal reward from pre-trained CLIP encoders! ARP can mitigate goal misgeneralization and execute unseen text instructions! sites.google.com/view/2023arp 🧵👇 1/N
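
For a rough sense of how a CLIP-based multimodal reward can be computed, here is a minimal sketch: it scores a frame against the text instruction by cosine similarity of CLIP embeddings. The checkpoint name is an assumption, and ARP's adaptive reward adds further machinery described in the paper.

```python
# Minimal sketch of a CLIP-based multimodal reward (not ARP's exact formulation).
# Assumes the HuggingFace "openai/clip-vit-base-patch32" checkpoint as a stand-in encoder.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_reward(frame: Image.Image, instruction: str) -> float:
    """Reward = cosine similarity between the current frame and the text instruction."""
    inputs = processor(text=[instruction], images=frame, return_tensors="pt", padding=True)
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item()
```

A return-conditioned policy would then be trained on returns accumulated from such per-step rewards and conditioned on a target return at deployment.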



Changyeon Kim Reposted

Curious whether video generation models (like #SORA) qualify as world models? We conduct a systematic study to answer this question by investigating whether a video gen model is able to learn physical laws. There are three key messages to take home: 1⃣The model generalises…


Changyeon Kim Reposted

Excited to release RT-Affordance! We propose conditioning policies on visual affordance plans as an intermediate representation that allows us to learn new tasks without collecting any new robot trajectories. Website and paper: snasiriany.me/rt-affordance Here’s a short 🧵


Changyeon Kim Reposted

With the recent progress in large-scale multi-task robot training, how can we advance the real-world deployment of multi-task robot fleets? Introducing Sirius-Fleet✨, a multi-task interactive robot fleet learning framework with 𝗩𝗶𝘀𝘂𝗮𝗹 𝗪𝗼𝗿𝗹𝗱 𝗠𝗼𝗱𝗲𝗹𝘀! 🌍 #CoRL2024


Changyeon Kim Reposted

How can we scale up humanoid data acquisition with minimal human effort? Introducing DexMimicGen, a large-scale automated data generation system that synthesizes trajectories from a few human demonstrations for humanoid robots with dexterous hands. (1/n)


Changyeon Kim Reposted

D4RL is a great benchmark, but is saturated. Introducing OGBench, a new benchmark for offline goal-conditioned RL and offline RL! Tasks include HumanoidMaze, Puzzle, Drawing, and more 🙂 Project page: seohong.me/projects/ogben… GitHub: github.com/seohongpark/og… 🧵↓


Changyeon Kim Reposted

Mobile AI assistants (like Apple Intelligence) offer useful features using personal information. But how can we ensure they’re safe to use? Introducing MobileSafetyBench—a benchmark to assess the safety of mobile AI assistants. PDF & Code: mobilesafetybench.github.io 1/N 🧵


Changyeon Kim Reposted

🚀 First step to unlocking Generalist Robots! Introducing 🤖LAPA🤖, a new SOTA open-sourced 7B VLA pretrained without using action labels. 💪SOTA VLA trained with Open X (outperforming OpenVLA on cross and multi embodiment) 😯LAPA enables learning from human videos, unlocking…


Changyeon Kim Reposted

🤖 Want your robot to grab you a drink from the kitchen downstairs? 🚀 Introducing BUMBLE: a framework to solve building-wide mobile manipulation tasks by harnessing the power of Vision-Language Models (VLMs). 👇 (1/5) 🌐 robin-lab.cs.utexas.edu/BUMBLE


Amazing work from my colleague!

Introducing REPA! We show that learning high-quality representations in diffusion transformers is crucial for boosting generation performance. With REPA, we speed up SiT training by 17.5x (without CFG) and achieve state-of-the-art FID = 1.42 using CFG with the guidance interval.…
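
As I read it, REPA adds a regularization term that aligns intermediate diffusion-transformer tokens with features from a frozen pretrained encoder (e.g., DINOv2). The sketch below illustrates that alignment loss only; the projection sizes, loss weight, and variable names are assumptions, not the official code.

```python
# Rough sketch of a REPA-style alignment term (my reading of the idea, not the released code).
# `hidden` are patch tokens from an intermediate DiT/SiT block; `target` are patch features
# from a frozen pretrained encoder, already spatially aligned with the tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentHead(nn.Module):
    """Small MLP projecting diffusion-transformer tokens into the target feature space."""
    def __init__(self, dit_dim: int, target_dim: int, hidden_dim: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(dit_dim, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, target_dim),
        )

    def forward(self, hidden: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # hidden: (B, N, dit_dim), target: (B, N, target_dim) from the frozen encoder.
        pred = F.normalize(self.proj(hidden), dim=-1)
        target = F.normalize(target.detach(), dim=-1)
        # Negative cosine similarity averaged over patches; added to the diffusion loss
        # with some weighting coefficient (the weight is an assumption).
        return -(pred * target).sum(dim=-1).mean()

# total_loss = diffusion_loss + lam * align_head(hidden, target)   # lam is a placeholder
```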



Changyeon Kim Reposted

Is "offline RL" in offline-to-online RL really necessary? Surprisingly, we find that replacing offline RL with *unsupervised* offline RL often leads to better online fine-tuning performance -- even for the *same* task! Paper: arxiv.org/abs/2408.14785 🧵↓


Changyeon Kim Reposted

Presenting our #ICML2024 Position paper: “Automatic Environment Shaping is the Next Frontier in RL”. We argue that more reliable and principled techniques for RL environment shaping will pave the path towards generalist robots 🧵 [1/n] @gabe_mrgl Oral talk happening today…


Changyeon Kim Reposted

Excited to present RSP: Representation learning with Stochastic frame Prediction, a new method that learns image representations from videos by training stochastic frame prediction model 🖼️ #ICML2024 Paper: arxiv.org/abs/2406.07398
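
The core idea, predicting a future frame through a stochastic latent, can be illustrated with a toy conditional-VAE-style model. This is only a schematic of the general technique; RSP's actual architecture and objective differ, and every layer size here is a placeholder.

```python
# Toy conditional-VAE-style sketch of stochastic frame prediction (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticFramePredictor(nn.Module):
    def __init__(self, feat_dim: int = 256, z_dim: int = 32):
        super().__init__()
        # Expects 64x64 RGB frames; all sizes are placeholders.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, feat_dim), nn.ReLU())
        self.prior = nn.Linear(feat_dim, 2 * z_dim)              # p(z | x_t)
        self.posterior = nn.Linear(2 * feat_dim, 2 * z_dim)      # q(z | x_t, x_{t+k})
        self.decoder = nn.Linear(feat_dim + z_dim, 64 * 64 * 3)  # predicts x_{t+k}

    def forward(self, x_t, x_future):
        h_t, h_f = self.encoder(x_t), self.encoder(x_future)
        pri_mu, pri_logvar = self.prior(h_t).chunk(2, dim=-1)
        post_mu, post_logvar = self.posterior(torch.cat([h_t, h_f], dim=-1)).chunk(2, dim=-1)
        z = post_mu + torch.randn_like(post_mu) * (0.5 * post_logvar).exp()
        recon = self.decoder(torch.cat([h_t, z], dim=-1))
        recon_loss = F.mse_loss(recon, x_future.flatten(1))
        # KL(q || p) between the two diagonal Gaussians.
        kl = 0.5 * (pri_logvar - post_logvar
                    + (post_logvar.exp() + (post_mu - pri_mu) ** 2) / pri_logvar.exp()
                    - 1).sum(-1).mean()
        return recon_loss + kl   # after training, self.encoder serves as the representation
```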


Changyeon Kim Reposted

Introducing CQN: Coarse-to-fine Q-Network, a value-based RL algorithm for continuous control🦾Initialized with 20~50 demonstrations, it learns to solve real-world robotic tasks within 10 mins of training, without any pre-training and shaped rewards! (1/4) younggyo.me/cqn
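
The coarse-to-fine idea can be sketched as repeatedly picking the highest-value bin per action dimension and zooming into it. Below is an illustrative version of that action-selection loop; the critic interface `q_net(obs, level, centers)`, the bin and level counts, and the action bounds are all assumptions rather than the paper's API.

```python
# Sketch of coarse-to-fine greedy action selection, as I understand the CQN idea.
import torch

def coarse_to_fine_action(q_net, obs, action_dim, levels=3, bins=5, low=-1.0, high=1.0):
    """Refine a continuous action by repeatedly picking the best bin and zooming in."""
    lo = torch.full((action_dim,), low)
    hi = torch.full((action_dim,), high)
    for level in range(levels):
        # Candidate bin centers per action dimension at this level: (bins, action_dim).
        centers = lo + (hi - lo) * (torch.arange(bins).float().unsqueeze(1) + 0.5) / bins
        # Hypothetical critic: per-dimension Q-values of shape (bins, action_dim).
        q_values = q_net(obs, level, centers)
        best = q_values.argmax(dim=0)            # best bin per dimension
        width = (hi - lo) / bins
        lo = lo + best.float() * width           # zoom into the chosen bin
        hi = lo + width
    return (lo + hi) / 2                         # final refined action
```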


Changyeon Kim Reposted

5 GitHub repositories that will give you superpowers as an AI/ML Engineer:


Changyeon Kim Reposted

A ton of AI and Robotics developments this week. Big announcements from UCSD, Kyutai, Figure, Meta, Salesforce, Amazon, Adept, Runway, Sanctuary AI, Apple, OpenAI, and Clone Robotics. Here's everything that happened and how to make sense out of it:


Changyeon Kim Reposted

Introducing Open-𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧🤖: We need an intuitive and remote teleoperation interface to collect more robot data. 𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧 lets you immersively operate a robot even if you are 3000 miles away, like in the movie 𝘈𝘷𝘢𝘵𝘢𝘳. Open-sourced!


Changyeon Kim Reposted

✨ Introducing 𝐎𝐩𝐞𝐧𝐕𝐋𝐀 — an open-source vision-language-action model for robotics! 👐 - SOTA generalist policy - 7B params - outperforms Octo, RT-2-X on zero-shot evals 🦾 - trained on 970k episodes from OpenX dataset 🤖 - fully open: model/code/data all online 🤗 🧵👇


Changyeon Kim Reposted

🤔 How can we detect texts generated from recent powerful LLMs such as GPT4 and Llama3? 🕵 Use your reward model! Arxiv: arxiv.org/abs/2405.17382 Project page: hyunseoklee-ai.github.io/reward_llm_det… (1/N)
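
The underlying intuition is that RLHF-style reward models tend to score LLM-generated text higher than human-written text, so the reward score can be thresholded as a detection signal. A minimal sketch of that idea follows; the checkpoint name is a placeholder, the threshold is uncalibrated, and the paper's actual detector is more involved.

```python
# Sketch: score a passage with an RLHF-style reward model and flag high-reward text
# as likely LLM-generated. MODEL_NAME is a hypothetical checkpoint, not a real one.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/your-reward-model"   # placeholder reward-model checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
reward_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)

@torch.no_grad()
def looks_llm_generated(text: str, threshold: float = 0.0) -> bool:
    """Flag text whose reward exceeds a threshold calibrated on held-out human writing."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    reward = reward_model(**inputs).logits.squeeze().item()
    return reward > threshold
```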


Excited to release B-MoCA, a realistic benchmark for mobile device agents with diverse configurations. My collaborator Juyong will present this work at the ICLR GenAI4DM workshop (Sat, 5/11). Please stay tuned!

🤩Realistic benchmark --> practical AI agents Excited to share "B-MoCA", our new benchmark for evaluating mobile device control agents across diverse device configurations📱 Arxiv: arxiv.org/abs/2404.16660 Website (with code): b-moca.github.io 🧵1/N


