@a1rb4ck Profile picture

Pierre Nagorny

@a1rb4ck

Machine Learning @artanim_mocap. #VR, real-time #animation, markerless #mocap. @[email protected] @a1rb4ck.bsky.social

Similar Users

Alex Wang @therealalexwang
shengze wang @mct1224
dontae @d0nsuemorr
أس• أمـ •بـي @saraboshehry_
Hongyu Miao @HongyuMiao
Nicolas Evrard @N_Evrard

Pierre Nagorny Reposted

Please find more details through the following links:
- Project website: sites.google.com/view/robot-key…
- Supplementary video: youtube.com/watch?v=YpOABp…
- Accepted paper: studios.disneyresearch.com/2024/11/04/rob…


Pierre Nagorny Reposted

Whole body retargeting is dead simple with 𝗺𝗶𝗻𝗸 and runs much faster than real time. Here is an example of an AMASS sequence retargeted to @UnitreeRobotics's H1 humanoid. Kudos to @arthurallshire for helping with this 🙂
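A rough sketch of what such a retargeting loop can look like with mink's task-based API (Configuration, FrameTask, solve_ik, as in the library's README). The H1 model path, the body names, the load_amass_keypoints helper, and the exact SE3 target constructor below are assumptions for illustration, not guaranteed parts of mink; check the library's examples for the precise setup.

```python
# Sketch of whole-body retargeting with mink. Model path, body names, and
# load_amass_keypoints() are hypothetical placeholders; the mink calls follow
# the task-based pattern from its README, but verify names against the docs.
import mujoco
import numpy as np
import mink

model = mujoco.MjModel.from_xml_path("unitree_h1/scene.xml")  # placeholder MJCF
configuration = mink.Configuration(model)

# One frame task per keypoint the humanoid should track (assumed body names).
KEYPOINTS = ["pelvis", "left_ankle_link", "right_ankle_link",
             "left_hand_link", "right_hand_link"]
tasks = [
    mink.FrameTask(frame_name=name, frame_type="body",
                   position_cost=1.0, orientation_cost=0.1)
    for name in KEYPOINTS
]

dt = 1.0 / 30.0  # AMASS clips are commonly resampled to 30 fps
for frame in load_amass_keypoints("sequence.npz"):  # hypothetical loader
    # frame: dict mapping keypoint name -> 3D world position for this timestep.
    for task, name in zip(tasks, KEYPOINTS):
        task.set_target(mink.SE3.from_translation(frame[name]))
    # Differential IK: solve for a joint-velocity step and integrate it.
    velocity = mink.solve_ik(configuration, tasks, dt, solver="quadprog")
    configuration.integrate_inplace(velocity, dt)
    # The configuration now holds the retargeted pose for this frame.
```

Each step is a small quadratic program, which is why a loop like this can comfortably outrun the 30 fps playback rate, consistent with the "faster than real time" claim above.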


Pierre Nagorny Reposted

We just released two small models with 3B and 8B parameters. Ministral 3B is exceptionally strong, outperforming Llama 3 8B and our previous Mistral 7B on instruction-following benchmarks. mistral.ai/news/ministrau…

Tweet Image 1
Tweet Image 2

Pierre Nagorny Reposted

Mechazilla has caught the Super Heavy booster!


Pierre Nagorny Reposted

Animation quality is a difficult subject that I think we've not quite got to grips with yet as a community. My latest blog post shares some of my own thoughts on the subject. My hope is it can kickstart some discussions... or at least new dance moves! theorangeduck.com/page/animation…


Pierre Nagorny Reposted

I am excited to share our recent work with @WladekPalucki, @vivek_myers, @Taddziarm, @tomArczewski, @LukeKucinski, and @ben_eysenbach! "Accelerating Goal-Conditioned Reinforcement Learning Algorithms and Research." Webpage: michalbortkiewicz.github.io/JaxGCRL/


Pierre Nagorny Reposted

(1) We needed to develop a biomechanical model that supported "recording" from all 50 muscles of the limb. We settled on MuJoCo (before DeepMind bought it!) and Paul sprang into action collecting high-res CT and MRI scans. Then, we built an NN approach to optimize insertion points…

Tweet Image 1

Pierre Nagorny Reposted

🎉 Diffusion-style annealing + sampling-based MPC can surpass RL, and seamlessly adapt to task parameters, all 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴-𝗳𝗿𝗲𝗲! We open sourced DIAL-MPC, the first training-free method for whole-body torque control using full-order dynamics 🧵 lecar-lab.github.io/dial-mpc/
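For readers unfamiliar with the ingredients, here is a generic sketch of sampling-based MPC in which the sampling noise is annealed across refinement iterations (MPPI-style). It is not DIAL-MPC's actual algorithm, just the flavor the tweet names; rollout_cost stands in for a hypothetical full-order dynamics rollout in your simulator.

```python
# Generic annealed sampling-based MPC (MPPI-style refinement), illustrative only.
import numpy as np

def annealed_sampling_mpc(rollout_cost, u_init, n_samples=64, n_anneal=4,
                          sigma0=0.5, temperature=1.0, rng=None):
    """Refine an action plan u_init of shape (horizon, action_dim).
    rollout_cost(U) -> scalar cost of executing action sequence U
    (hypothetical hook, e.g. a MuJoCo rollout with full-order dynamics)."""
    rng = rng or np.random.default_rng()
    u_mean = u_init.copy()
    for k in range(n_anneal):
        sigma = sigma0 * 0.5 ** k                          # anneal exploration noise
        noise = rng.normal(scale=sigma, size=(n_samples,) + u_mean.shape)
        candidates = u_mean[None] + noise
        costs = np.array([rollout_cost(U) for U in candidates])
        # Exponentially weight low-cost samples and average them (MPPI update).
        weights = np.exp(-(costs - costs.min()) / temperature)
        weights /= weights.sum()
        u_mean = np.einsum("s,sha->ha", weights, candidates)
    return u_mean  # execute u_mean[0], then re-plan at the next control step
```

Because nothing here is learned offline, changing the task only means changing rollout_cost, which is the sense in which such planners adapt to task parameters without training.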


Pierre Nagorny Reposted

📢 Check out 𝐒𝐚𝐩𝐢𝐞𝐧𝐬 (#ECCV2024, Oral)! The largest human-centric foundational models, natively supporting 1K high-res inputs and achieving SOTA on 2D keypoints, segmentation, depth, and normals! Code and models (each task + pretraining) are ready to use TODAY! github.com/facebookresear… (1/6)


Pierre Nagorny Reposted

Excited to share our latest work! 🤩 Masked Mimic 🥷: Unified Physics-Based Character Control Through Masked Motion Inpainting. Project page: research.nvidia.com/labs/par/maske… With: Yunrong (Kelly) Guo, @ofirnabati, @GalChechik and @xbpeng4. @SIGGRAPHAsia (ACM TOG). 1/ Read…

Tweet Image 1

Pierre Nagorny Reposted

Mistral just dropped a new vision multimodal model called Pixtral 12B! Also downloaded the params JSON: GeLU & 2D RoPE are used for the vision adapter. The vocab size also got larger (131072), and Mistral's latest tokenizer PR shows 3 extra new tokens (the image, the start & end).

Tweet Image 1

magnet:?xt=urn:btih:7278e625de2b1da598b23954c13933047126238a&dn=pixtral-12b-240910&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%https://t.co/2UepcMHjvL%3A1337%2Fannounce&tr=http%3A%2F%https://t.co/NsTRgy7h8S%3A80%2Fannounce
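The 2D RoPE mentioned above extends rotary position embeddings to image patches: half of each head's channels are rotated by the patch's row index and the other half by its column index. Below is a generic numpy sketch of that idea; the exact channel interleaving and frequency schedule in Pixtral's vision encoder may differ.

```python
# Generic 2D rotary position embedding (RoPE) sketch for vision patches.
# Illustrative only; not necessarily Pixtral's exact layout.
import numpy as np

def rope_1d(x, positions, base=10000.0):
    """Apply standard 1D RoPE to x of shape (n_tokens, dim) at integer positions."""
    dim = x.shape[-1]
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))   # (dim/2,)
    angles = positions[:, None] * inv_freq[None, :]           # (n_tokens, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]                       # even/odd channel pairs
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_2d(x, rows, cols):
    """x: (n_patches, head_dim). Rotate one half by row index, the other by column."""
    half = x.shape[-1] // 2
    return np.concatenate(
        [rope_1d(x[:, :half], rows), rope_1d(x[:, half:], cols)], axis=-1
    )

# Example: a 4x4 grid of patches with a 64-dim attention head.
grid = 4
rows, cols = np.divmod(np.arange(grid * grid), grid)
q = np.random.randn(grid * grid, 64)
q_rot = rope_2d(q, rows, cols)
```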



Pierre Nagorny Reposted

Why Would They Try and Do a Grey Rabbit?: Excerpt from Richard Williams' 'Adventures in Animation' | Animation Magazine animationmagazine.net/2024/08/why-wo…

Tweet Image 1

Pierre Nagorny Reposted

Control physics-based characters at interactive rates to climb over tables and more. No learning! Key ideas: (a) partwise MPC: switch between full-body and factored body-part MPC as convenient; (b) strategy guidance via sparse contact keyframes. SCA 2024 cs.ubc.ca/~van/papers/20…
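The "factored body-part MPC" in (a) can be pictured as block-coordinate optimization over the action space: re-plan one body part at a time against full-body rollouts while the other parts keep their current plan. The sketch below is only a generic illustration of that factoring, not the paper's formulation; the DOF layout and the optimize_block hook are hypothetical, and the annealed sampler sketched earlier could serve as the per-block optimizer.

```python
# Generic illustration of factored body-part MPC as block-coordinate descent.
# PARTS and optimize_block() are hypothetical; rollouts still use full-body dynamics.
import numpy as np

PARTS = {"legs": slice(0, 12), "arms": slice(12, 20), "torso": slice(20, 26)}

def partwise_mpc(rollout_cost, u_plan, optimize_block, n_sweeps=2):
    """u_plan: (horizon, n_act) current plan. optimize_block(cost_fn, u_block)
    returns an improved u_block, e.g. via a sampling planner."""
    for _ in range(n_sweeps):
        for idx in PARTS.values():
            def block_cost(u_block, idx=idx):
                candidate = u_plan.copy()
                candidate[:, idx] = u_block      # only this part's actions vary
                return rollout_cost(candidate)   # cost evaluated on the whole body
            u_plan[:, idx] = optimize_block(block_cost, u_plan[:, idx].copy())
    return u_plan
```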


Pierre Nagorny Reposted

Excited to share a new humanoid robot platform we’ve been working on. Berkeley Humanoid is a reliable and low-cost mid-scale research platform for learning-based control. We demonstrate the robot walking on various terrains and dynamic hopping with a simple RL controller.


Pierre Nagorny Reposted

Excited to finally open-source 𝐦𝐢𝐧𝐤, a library for differential inverse kinematics in Python based on the MuJoCo physics engine. github.com/kevinzakka/mink
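For reference, a minimal single-step version of the API used in the retargeting sketch earlier in the feed, following the pattern in mink's README. The model path, frame name, solver backend, and the SE3 target constructor are placeholders or from-memory assumptions; confirm them against the repository's examples.

```python
# Minimal differential-IK step with mink (placeholder model path and frame name).
import mujoco
import numpy as np
import mink

model = mujoco.MjModel.from_xml_path("robot.xml")          # placeholder model
configuration = mink.Configuration(model)

task = mink.FrameTask(frame_name="end_effector", frame_type="site",
                      position_cost=1.0, orientation_cost=1.0)
task.set_target(mink.SE3.from_translation(np.array([0.4, 0.0, 0.3])))

dt = 0.02
velocity = mink.solve_ik(configuration, [task], dt, solver="quadprog")
configuration.integrate_inplace(velocity, dt)              # one IK step
```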

Tweet Image 1

Pierre Nagorny Reposted

Ready for the Summer Olympics 2024? 🏀⚽️🏓🥊🎾🤺🏌🏻🏃 Want to control your humanoid to play your favorite sport? Look no further than SMPLOlympics: Sports Environments for Physically Simulated Humanoids! 🌐: smplolympics.github.io/SMPLOlympics 🧑🏻‍💻: github.com/SMPLOlympics/S… 📜:…


Pierre Nagorny Reposted

Introducing Omnigrasp: Grasping Diverse Objects with Simulated Humanoids. With Omnigrasp, we show that we can control a humanoid equipped with dexterous hands to grasp diverse objects (>1200) and follow diverse trajectories, with one policy! 🌐: zhengyiluo.com/Omnigrasp/ 📜:…


Pierre Nagorny Reposted

Flexible Motion In-betweening with Diffusion Models: We introduce a simple unified model for motion in-betweening that supports sparse keyframes, partial keyframes, and text conditioning. Great work led by @setarehcohan. To be presented at SIGGRAPH 2024. setarehc.github.io/CondMDI/
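Concretely, "sparse keyframes" and "partial keyframes" can both be expressed as an observation mask over (frames, joints) plus the observed values, which is roughly the conditioning signal such a model consumes alongside the text prompt. The sketch below is a generic illustration with made-up dimensions and joint indices, not CondMDI's actual interface.

```python
# Generic sketch of keyframe conditioning for motion in-betweening:
# a boolean (frames, joints) mask plus the observed values (zeros elsewhere).
# Dimensions and joint indices are illustrative placeholders.
import numpy as np

n_frames, n_joints, feat = 196, 22, 6              # assumed clip length / features
motion = np.zeros((n_frames, n_joints, feat))      # ground-truth motion (placeholder)

mask = np.zeros((n_frames, n_joints), dtype=bool)
mask[0, :] = mask[-1, :] = True                    # sparse keyframes: full first/last poses
mask[98, [0, 20, 21]] = True                       # partial keyframe: only a few joints

observed = np.where(mask[..., None], motion, 0.0)  # observed entries, zero elsewhere
cond = np.concatenate([observed, mask[..., None].astype(np.float32)], axis=-1)
# `cond` plus a text embedding would then condition the denoiser at every diffusion step.
```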


Pierre Nagorny Reposted

Introducing HumanPlus (shadowing part). Humanoids are born for using human data. We build a real-time shadowing system using a single RGB camera and a whole-body policy for cloning human motion. Examples:
- boxing 🥊
- playing the piano 🎹 / ping pong
- tossing
- typing
Open-sourced!

