
Niall Horn

@Niallhorn

AuDHD dude with a passion for Computer Graphics, Machine Learning and VR/AR. Ex-VFX industry (Industrial Light & Magic, Scanline VFX, Framestore)

Joined November 2010
Similar Users

Eric Haines (@pointinpolygon)
Jacys, Lin (@Jacys)
Jiayin Cao (@Jiayin_Cao)
Peter Shirley 🔮🛡 (@Peter_shirley)
Jacco Bikker (@j_bikker)
Alexandre Lamure (@AlexandreLamure)
Sergen Eren (@sergenern)
dydx (@ouyangyaobin)
Maximilian Tarpini (@max_tarpini)
Ruben Mayor (@rubenmayorfx)
Attila Áfra (@attila_afra)
Tiago Magalhães (@Antypurus)
Rob Pieké (@robpieke)
bucktoothbilly (@dh04071)
Anoop Ravi Thomas (@createthematrix)

Pinned

I'm thrilled to announce that I've finally completed my MSc dissertation project, accelerating fluid simulations with deep learning. This has been a LOT of work, I'm super proud to have gotten to the finish line 😀 #deeplearning #MachineLearning #fluiddynamics


This year was meant to be the start of my graphics R&D career, but it sadly turned out to be a horrible year of various events that shook my world view. Hopefully in 2025 I can fight back and work on cool R&D once more!


Niall Horn Reposted

"Moving Frostbite to PBR" is 10 years old already but still an amazing physically based rendering and lighting resource, so much knowledge packed in there. seblagarde.wordpress.com/2015/07/14/sig…


Niall Horn Reposted

Following over 1.5 years of hard work (w/ @njroussel & Rami Tabbara), we just released a brand-new version of Dr.Jit (v1.0), my lab's differentiable rendering compiler, along with an updated Mitsuba (v3.6). The list of changes is insanely long—here is what we're most excited about🧵


Niall Horn Reposted

Just a heads up. This version of the movie is available for everyone to watch on Disney+ in the extras tab. Great for people interested in VFX who want to study what @quister and team at WetaFX pulled off.

From Dave Lee

Niall Horn Reposted

Thrilled to welcome @_tim_brooks to @GoogleDeepMind So excited to be working together to make the long-standing dream of a world simulator a reality!!

I will be joining @GoogleDeepMind to work on video generation and world simulators! Can't wait to collaborate with such a talented team. I had an amazing two years at OpenAI making Sora. Thank you to all the passionate and kind people I worked with. Excited for the next chapter!



Niall Horn Reposted

Pure Auto-Regressive LLMs are a dead end on the way towards human-level AI. But they are still very useful in the short term. Mark is also talking about the post-LLM future of AI systems.


OMG, it's finally happening: Meta's Passthrough API :) #MetaConnect


I have huge respect for DeepMind, but since when has a game engine been overfit to ONE game? The work is awesome, but calling it a game engine seems odd.


I've never anticipated a film more than Ready Player Two. The franchise is a huge inspiration to me! I was thinking of possible directors (now we know Spielberg is producing): Alex Garland, Jordan Vogt-Roberts, Gareth Edwards [..]?


Was supposed to go and see @Busted in Leeds yesterday but my screwed up brain decided no. Add that to all the other events I've missed :/ Really want to see @JamesBourne play 'Year 3000' live at some point in my life... #autismishard


Niall Horn Reposted

Who at #CVPR2024 wants to see Relightable Gaussian Codec Avatars live in VR? Come join the Codec Avatar workshop on June 18th (Tue)!! We'll be offering a VR demo during the break! There's an exciting announcement (and also my talk) there as well!

Join us this Tuesday at CVPR for a full-day workshop about all things telepresence! We have amazing speakers @angelaqdai, @Michael_J_Black, @subail and Prof. Christian Theobalt; @psyth91, Shih-En Wei and myself will be sharing our latest work too! 👉 codec-avatars.github.io/cvpr24/



Niall Horn Reposted

How exactly does @WetaFXofficial take on set performance capture & translate it into living, breathing apes? Find out in this in-depth @beforesmag VFX look behind the scenes of @wesball's #KingdomOfThePlanetOfTheApes. Plus a 🧵 of highlights. beforesandafters.com/2024/05/12/how…


Niall Horn Reposted

Introducing Veo: our most capable generative video model. 🎥 It can create high-quality, 1080p clips that can go beyond 60 seconds. From photorealism to surrealism and animation, it can tackle a range of cinematic styles. 🧵 #GoogleIO


I found this Cornell Box easter egg in the new @EmbarkStudios Finals update! Super cool :)


Niall Horn Reposted

Introducing DiffusionLight---a simple yet effective technique to estimate lighting from any in-the-wild input image. How? ... by inpainting a chrome ball into the image with diffusion models! (1/3) paper: arxiv.org/abs/2312.09168 diffusionlight.github.io huggingface.co/DiffusionLight…
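The core trick DiffusionLight relies on is the classic mirror-ball light probe: once a chrome ball exists in the image (inpainted or real), each of its pixels reflects one world direction, which lets you unwrap the ball into an environment map. As a rough, hedged sketch of that geometric step only (not the DiffusionLight code; it assumes an orthographic camera looking down -Z and a perfectly mirrored sphere):

```python
import numpy as np

def chrome_ball_to_directions(size):
    """For a square crop of a chrome ball, return the unit world
    direction each pixel reflects, plus a mask of pixels on the ball.
    Classic mirror-ball light-probe construction: orthographic view
    along -Z, reflection r = d - 2(d.n)n with view direction d."""
    # Pixel grid in [-1, 1] x [-1, 1]; pixels outside the disc are masked.
    ys, xs = np.meshgrid(np.linspace(-1, 1, size),
                         np.linspace(-1, 1, size), indexing="ij")
    r2 = xs**2 + ys**2
    mask = r2 <= 1.0
    # Sphere surface normal at each pixel (unit length inside the disc).
    z = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))
    normals = np.stack([xs, -ys, z], axis=-1)  # flip y: image rows go down
    # Reflect the view direction about the per-pixel normal.
    d = np.array([0.0, 0.0, -1.0])
    dn = normals @ d                     # per-pixel dot product d . n
    refl = d - 2.0 * dn[..., None] * normals
    return refl, mask
```

Looking up the ball's pixel colors along these directions gives the HDR environment map used for relighting; the inpainting step is what makes this work on arbitrary in-the-wild photos.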

