Arsalan Mousavian

@a__mousavian

Robotics Researcher; Robotics Research Manager at @NVIDIAAI. Making robots do useful things with AI. Opinions are my own.


This was my favorite talk at CoRL 2024. Very insightful...

Here's a link to the recording for anyone who's interested! youtube.com/live/ELUMFpJCU…



Our internship positions are open: nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAEx… Consider applying if you are doing research on foundation models for robotic manipulation, such as pick-and-place, bimanual/dexterous manipulation, tactile sensing, and assembly. Feel free to reach out as well after you apply.


Arsalan Mousavian Reposted

Want to generate large-scale robot demonstrations automatically? We have released the full MimicGen code. Excited to see what the community will do with this powerful data generation tool! Code: github.com/NVlabs/mimicgen Docs: mimicgen.github.io/docs/introduct…

Tired of collecting demonstrations all day to train your robot? Introducing MimicGen, an autonomous data generation system for robotics. Using just 200 human demos we generated a large multi-task dataset of 50K demos! #CoRL2023 #NVIDIAResearch 👇 mimicgen.github.io 🧵 1/
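The core idea behind this kind of data generation is to re-target object-centric segments of a human demo to new object poses, so a handful of demos can be replayed across thousands of scene variations. A minimal numpy sketch of that re-targeting transform, assuming world-frame 4×4 homogeneous pose matrices; this is an illustrative simplification, not the actual MimicGen implementation:

```python
import numpy as np

def transform_segment(ee_poses, T_obj_src, T_obj_tgt):
    """Re-target an end-effector trajectory segment to a new object pose.

    ee_poses:  (N, 4, 4) end-effector poses from a source demo (world frame).
    T_obj_src: (4, 4) pose of the manipulated object in the source demo.
    T_obj_tgt: (4, 4) pose of the same object in the new scene.
    Returns (N, 4, 4) poses with the same pose *relative to the object*.
    """
    # Express each end-effector pose in the object's frame...
    T_rel = np.linalg.inv(T_obj_src) @ ee_poses
    # ...then re-anchor it at the object's new pose.
    return T_obj_tgt @ T_rel
```

If the object in the new scene is simply translated, every waypoint in the segment shifts by the same translation while keeping its orientation relative to the object.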



Going from a language description of objects or placement locations to points on the image. Even GPT-4V struggles with this task, yet it's crucial for robots. Co-trained with synthetic data and VQA data. Generalizes quite well to out-of-distribution domains. Led by @TonyWentaoYuan

Humans use pointing to communicate plans intuitively. Compared to language, pointing gives more precise guidance to robot behaviors. Can we teach a robot how to point like humans? Introducing RoboPoint 🤖👉, an open-source VLM instruction-tuned to point. robo-point.github.io
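A VLM that points has to emit coordinates as text, so the robot-side code mostly needs to parse and rescale them. A hedged sketch of that post-processing step, assuming a hypothetical output format of normalized `(x, y)` tuples (the real model's format may differ):

```python
import re

def parse_points(vlm_text, img_w, img_h):
    """Extract normalized (x, y) tuples like '(0.32, 0.71)' from model text
    and scale them to pixel coordinates on an img_w x img_h image."""
    matches = re.findall(r"\(([0-9]*\.?[0-9]+),\s*([0-9]*\.?[0-9]+)\)", vlm_text)
    return [(float(x) * img_w, float(y) * img_h) for x, y in matches]
```

For example, `parse_points("place at (0.5, 0.5)", 640, 480)` yields a single pixel target at the image center.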



I will be at #CVPR2024 this week. Looking forward to catching up with everyone, especially those doing research in robotic manipulation and embodied AI.


Quite impressive!

Can't trip Atlas up! Our humanoid robot gets ready for real work combining strength, perception, and mobility.



We need to see more work like this that focuses on manipulation itself, rather than focusing on language and solving simplified manipulation problems.

Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 -- Learning! With 50 demos, our robot can autonomously complete complex mobile manipulation tasks: - cook and serve shrimp🦐 - call and take an elevator🛗 - store a 3 lbs pot in a two-door cabinet Open-sourced! Co-led @tonyzzhao, @chelseabfinn



Our latest work on grasping and placing objects with and without language conditioning led by @TonyWentaoYuan

Is it possible to have a single model for primitive actions that works on real robots 🤖 and unseen objects 🛩️? Introducing M2T2: Multi-task Masked Transformer at #CoRL23. M2T2 achieves 0-shot sim2real transfer in rearranging unseen objects in clutter. m2-t2.github.io 🧵👇



We have research internship positions on our team for the Summer of 2024. If you are passionate about (and experienced in) using learning, LLMs/VLMs, and 3D reasoning to take robot manipulation capabilities to the next level, apply to our team: nvidia.wd5.myworkdayjobs.com/NVIDIAExternal…


Arsalan Mousavian Reposted

@DisneyResearch introduces their new robot at #IROS2023! Trained in simulation with #reinforcementlearning! @ieeeiros


Arsalan Mousavian Reposted

Ever wondered how to train a correspondence model for robotic tasks? Delighted to present Doduo, our correspondence model learned from in-the-wild videos. Doduo can establish generalizable fine-grained correspondence and enable a variety of robotic tasks. ut-austin-rpl.github.io/Doduo/


Arsalan Mousavian Reposted

We are excited to share our recent work, IndustReal arxiv.org/abs/2305.17110 (@bingjietang07, Michael Lin et al.), for solving contact-rich assembly tasks in simulation and transferring them to the real world. sites.google.com/nvidia.com/ind… youtube.com/watch?v=wzcpkD…


Arsalan Mousavian Reposted

At @NVIDIAAI research, we have been working on general-purpose robotic rearrangement 🤖 Today, we are announcing CabiNet, our recent work on scaling object rearrangement in clutter with synthetic data: cabinet-object-rearrangement.github.io w/ @a__mousavian, @clembow, @fishbotics

Excited to share our ICRA’23 @ieee_ras_icra work by @Adithya_Murali_ We scale up neural collision detection for object rearrangement with procedurally generated synthetic data. Project: nvda.ws/3USKFlW Video: nvda.ws/3H0sdSI 🧵👇
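For context, the classical alternative such learned collision models aim to scale beyond is geometric checking against sphere decompositions of objects. A minimal sketch of that baseline (illustrative only, not from the paper):

```python
import numpy as np

def spheres_collide(centers_a, radii_a, centers_b, radii_b):
    """Collision check for two objects, each approximated by a set of spheres.

    centers_*: (N, 3) sphere centers; radii_*: (N,) sphere radii.
    Two objects collide if any pair of spheres is closer than the sum
    of their radii.
    """
    # Pairwise center distances between the two sphere sets.
    d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=-1)
    return bool(np.any(d < radii_a[:, None] + radii_b[None, :]))
```

A learned collision model amortizes many such pairwise queries into one network forward pass, which is what makes large-scale rearrangement planning tractable.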




Small-government advocate unless recession hits.
Open-market advocate unless competition takes off.
Pro-science unless it starts to work really well.
Never imagined these reactions toward ChatGPT and LLMs.


Impressive how far AVs have come along... Would love to know whether the progress is because of a) learned planning/prediction, b) better perception, c) all of the above, or something else. I have not been closely following the details of the latest papers in the AV space.

I can’t count the times I’ve been told that AVs will *never* be able to handle certain situations. How about taking a passenger through North Beach on St. Patrick’s day? Absolutely wild. Thanks to this driverless @Cruise AV (named Twin Peaks) for the clip.



Happy #Nowruz (the Persian new year, which starts on the first day of spring)! May the new year bring you happiness, health, and success.


With all these layoffs happening, maybe it's time for @LinkedIn to add a "marked safe from layoff" button. Sad times...


Arsalan Mousavian Reposted

I will be virtually presenting our CoRL work, come say hi and chat about robots 🤖! Online poster session 1 on PheedLoop (paper 223): today at 6:35pm PST (2:35pm NZ time).

Automatically generate 4000 novel objects and their grasps from only 20 objects? Come and check out our #CoRL22 paper ISAGrasp! Paper: arxiv.org/pdf/2210.13638… Website: sites.google.com/view/implicita…



Arsalan Mousavian Reposted

What is happening in Iran is heartbreaking and difficult to watch. With fellow roboticists I condemn any form of violence against innocent children, women and the people in Iran.


