Manuel Madeira

@manuelmlmadeira

PhD student @epfl | trying to boost science via machine learning


Manuel Madeira Reposted

Python Optimal Transport (POT) 0.9.5 released: new solvers for Gaussian Mixture Model OT, unbalanced OT, semi-relaxed (F)GW barycenters, unbalanced FGW and COOT, partial GW. more details in 🧵1/7 github.com/PythonOT/POT/r…
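For readers unfamiliar with the library, the entropy-regularized optimal transport that POT exposes through solvers like `ot.sinkhorn` can be sketched in plain NumPy. This is an illustrative re-implementation for intuition, not POT's own code and not one of the new 0.9.5 solvers:

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iters=200):
    """Entropy-regularized OT plan between histograms a and b
    under cost matrix M, via Sinkhorn's fixed-point iterations."""
    K = np.exp(-M / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)         # match column marginals
        u = a / (K @ v)           # match row marginals
    return u[:, None] * K * v[None, :]

# Two uniform 2-point histograms; transporting mass across costs 1.
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = sinkhorn(a, b, M)             # plan concentrates on the diagonal
```

In practice you would call POT directly (e.g. `ot.sinkhorn(a, b, M, reg)`), which handles numerical stabilization and many more solver variants.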


Manuel Madeira Reposted

Fine-tuning pre-trained models leads to catastrophic forgetting: gains on one task cause losses on others. These issues worsen in multi-task merging scenarios. Enter LiNeS 📈, a method that solves them with ease. 🔥 🌐: lines-merging.github.io 📜: arxiv.org/abs/2410.17146 🧵 1/11


Manuel Madeira Reposted

Looking forward to your contributions!

📢 Deadline Extended! Submit your manuscript for the IEEE TSIPN Special Issue on Learning on Graphs for Biology & Medicine by Nov 1, 2024. Don’t miss your chance to contribute to cutting-edge research! hubs.la/Q02NH26N0 🧬📊 #IEEE #GraphLearning #Biomedicine #Research



Manuel Madeira Reposted

It was fun to present our work at @GRaM_org_ yesterday :) Thanks to everyone who stopped by to discuss, and thanks to the organizers for making such an inspiring workshop happen!


Congratulations to @olgazaghen for having her Master's thesis accepted at @GRaM_workshop! 🎉 📜 Sheaf Diffusion Goes Nonlinear: Enhancing GNNs with Adaptive Sheaf Laplacians. 📎 openreview.net/pdf?id=MGQtGV5… With: Olga @steveazzolin @lev_telyatnikov @andrea_whatever @pl219_Cambridge



Manuel Madeira Reposted

See you at @arlet_workshop @icmlconf Poster Session 1 between 1:30 - 2:30 pm (Schubert 1 - 3)!


Have you ever been left puzzled by your PPO agent collapsing out of nowhere? 📈🤯📉 We’ve all been there... We can help you with a hint: monitor your representations!💡 🚀 We show that PPO suffers from degrading representations and that this breaks its trust region 💔
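One common diagnostic for monitoring representation quality in deep RL is the effective rank of a batch of features: as representations degrade, the feature matrix collapses toward fewer effective dimensions. This is a generic sketch of that diagnostic, not necessarily the exact metric used in the thread's paper:

```python
import numpy as np

def effective_rank(features):
    """Effective rank of a feature matrix: the exponential of the
    entropy of its normalized singular value distribution."""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / s.sum()               # normalize singular values
    p = p[p > 0]                  # drop zeros before taking logs
    return float(np.exp(-(p * np.log(p)).sum()))

full_rank = effective_rank(np.eye(4))        # 4 equal directions -> ~4
collapsed = effective_rank(np.ones((4, 4)))  # single direction   -> ~1
```

Logging such a quantity on the agent's hidden activations over training is one cheap way to catch representation collapse before the return curve falls off a cliff.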



Manuel Madeira Reposted

Good morning ICML🇦🇹 Presenting today "Localizing Task Information for Improved Model Merging and Compression" with @wangkeml @gortizji @francoisfleuret @pafrossard Happy to see you at poster #2002 from 11:30 to 13:00 if you are interested in model merging & multi-task learning!

Wouldn't it be great if we could merge the knowledge of 20 specialist models into a single one without losing performance? 💪🏻 Introducing our new ICML paper "Localizing Task Information for Improved Model Merging and Compression". 🎉 📜: arxiv.org/pdf/2405.07813 🧵1/9



Manuel Madeira Reposted

I'm also at ICML -- excited to present our paper on training + LR schedules as a spotlight (!) at the workshop on the next gen of seq. models as well as ES-FOMO on Fri🤙 Reach out to discuss methods for training open models, scaling, efficiency, or the future of architectures :)


Why exactly do we still train LLMs with the cosine schedule? 🤔 Maybe we don't actually have to -- and that would come with a lot of benefits :) 🧵 Our paper on LR schedules, compute-optimality, and more affordable scaling laws
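For context, the cosine schedule the thread questions decays the learning rate from lr_max to lr_min following half a cosine period over the training run. A minimal sketch of the standard formulation (illustrative, not code from the paper):

```python
import math

def cosine_lr(step, total_steps, lr_max, lr_min=0.0):
    """Standard cosine decay: lr_max at step 0, lr_min at total_steps."""
    t = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

start = cosine_lr(0, 100, 1.0)    # lr_max at the start
mid   = cosine_lr(50, 100, 1.0)   # halfway between lr_max and lr_min
end   = cosine_lr(100, 100, 1.0)  # lr_min at the end
```

One practical drawback motivating alternatives: the decay is tied to `total_steps`, so the full training budget must be fixed in advance, which is exactly what makes cosine awkward for continued training and scaling-law sweeps.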


