Similar Users

Pierre-Luc Bacon (@pierrelux)
Andrew McNutt (@_mcnutt_; @mcnuttandrew@hci.social)
Adam Coscia (@AdamCoscia)
Kiran Vodrahalli (@kiranvodrahalli; kiranvodrahalli@mathstodon.xyz)
Apoorv Vyas (@apoorv2904)
Wesam 🇵🇸 (@Manassra)
Harvard Club UK (@HarvardClubUK)
Giuseppe Macri' (@_giuseppemacri)
RobinGainer (@RobinGainer)
Isaac Cho (@_FlyHigh_high)
Arpit Narechania 🇮🇳 🏏♟️ (@arpitnarechania)
Chris Chua (@chrisirhc)

Lezhi Li Reposted

Representation matters. Representation matters. Representation matters, even for generative models. We might've been training our diffusion models the wrong way this whole time. Meet REPA: Training Diffusion Transformers is easier than you think! sihyun.me/REPA/ (🧵1/n)

Tweet Image 1
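
For context, the core trick in REPA (as I read the paper) is an auxiliary loss that aligns intermediate diffusion-transformer features with features from a frozen self-supervised encoder such as DINOv2, on top of the usual denoising loss. A minimal sketch of that alignment term; the shapes, MLP size, and loss weight are illustrative assumptions, not the paper's exact values.

# REPA-style auxiliary loss: align DiT patch features with frozen encoder features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepresentationAligner(nn.Module):
    def __init__(self, dit_dim: int, enc_dim: int, hidden: int = 2048):
        super().__init__()
        # Small MLP that projects DiT hidden states into the encoder's feature space.
        self.proj = nn.Sequential(
            nn.Linear(dit_dim, hidden), nn.SiLU(), nn.Linear(hidden, enc_dim)
        )

    def forward(self, dit_hidden, enc_feats):
        # dit_hidden: (B, N, dit_dim) from an intermediate DiT block
        # enc_feats:  (B, N, enc_dim) patch features from the frozen encoder (e.g. DINOv2)
        pred = F.normalize(self.proj(dit_hidden), dim=-1)
        target = F.normalize(enc_feats, dim=-1)
        # Negative mean patch-wise cosine similarity, added to the denoising loss.
        return -(pred * target).sum(dim=-1).mean()

# Dummy usage; in practice enc_feats comes from the encoder run on the clean image.
aligner = RepresentationAligner(dit_dim=1152, enc_dim=768)
loss_align = aligner(torch.randn(4, 256, 1152), torch.randn(4, 256, 768))
# total_loss = loss_denoise + 0.5 * loss_align  # the 0.5 weight is a placeholder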

Lezhi Li Reposted

PSA: I'm open to guest posts on @interconnectsai covering areas I'm not an expert in, video gen, image gen, architectures, etc. Will be a high bar though.


Lezhi Li Reposted

La Baie Area by @trbdrk


Lezhi Li Reposted

"AI tools have no creative control; they're like slot machines." Yeah buddy, sure. This is ReshotAI. In the coming months, we will see many more tools like this. The future of AI is bright ✌️


Lezhi Li Reposted

Quick tests of CLIP directions with flux-schnell. The latent space is still entangled and jumpy, but I'm finding higher-level sliders like 'complexity' and 'playfulness' fun and useful for navigating. 🧭🎚️
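
Roughly, a slider like this can be built from a CLIP text direction: embed two opposing prompts, take the difference, and nudge the conditioning embedding along it before sampling. A sketch of that idea; the CLIP checkpoint and the choice to shift the pooled embedding of a diffusers Flux pipeline are my assumptions, not necessarily the author's exact setup.

import torch
from transformers import CLIPTextModelWithProjection, CLIPTokenizer

model_id = "openai/clip-vit-large-patch14"
tok = CLIPTokenizer.from_pretrained(model_id)
clip = CLIPTextModelWithProjection.from_pretrained(model_id)

@torch.no_grad()
def embed(text):
    inputs = tok(text, padding="max_length", truncation=True, return_tensors="pt")
    return clip(**inputs).text_embeds  # (1, proj_dim) pooled text embedding

# "Complexity" slider: direction from a minimal description to an intricate one.
direction = embed("an intricate, highly detailed scene") - embed("a simple, minimal scene")
direction = direction / direction.norm()

base = embed("a city street at dusk")
slider = 3.0  # positive -> more complex, negative -> simpler
steered = base + slider * direction
# `steered` would replace the pooled text embedding fed to the image model
# (e.g. pooled_prompt_embeds in a diffusers Flux pipeline).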


Lezhi Li Reposted

People will be like, “generative AI has no practical use case,” but I did just use it to replace every app icon on my home screen with images of Kermit, soooo

Tweet Image 1
Tweet Image 2

Lezhi Li Reposted

I'm genuinely impressed by Kolors IP Adapter! 🎨 Just put out a demo so you can play with image variations and reference 🖼️ ▶️ huggingface.co/spaces/multimo…

Tweet Image 1
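
The Space wraps Kolors' IP-Adapter; as a rough illustration of the general IP-Adapter flow (a reference image steering generation alongside the text prompt), here is the stock diffusers recipe for SDXL. The model IDs and weight names are the standard public ones, not the Kolors checkpoints, and the file paths are placeholders.

import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference image steers the output

ref = load_image("reference.png")  # placeholder path to the reference image
out = pipe(prompt="a watercolor landscape", ip_adapter_image=ref,
           num_inference_steps=30).images[0]
out.save("variation.png")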

Lezhi Li Reposted

In the past few weeks, I've been doing a deep dive into using physical interfaces to feed and interact with a real-time img2img diffusion pipeline built on Stream Diffusion and SDXL Turbo. What really captivated me is using my hands, objects, art supplies,…
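
A bare-bones version of the per-frame loop described, as a sketch rather than the author's actual setup: webcam frames pushed through SDXL Turbo img2img via diffusers. StreamDiffusion adds batching and pipelining on top to reach real-time rates; the camera index, prompt, and resolution below are placeholders.

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

cap = cv2.VideoCapture(0)  # webcam pointed at hands, objects, art supplies...
prompt = "ink and watercolor illustration"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).resize((512, 512))
    # strength * num_inference_steps must be >= 1 for few-step turbo models
    out = pipe(prompt, image=img, strength=0.5, guidance_scale=0.0,
               num_inference_steps=2).images[0]
    cv2.imshow("img2img", cv2.cvtColor(np.array(out), cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()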


Lezhi Li Reposted

New way to navigate latent space. It preserves the underlying image structure and feels a bit like a powerful style transfer that can be applied to anything. The trick is to...


Lezhi Li Reposted

Just released a paper with Will Berman on multimodal inputs for image generation. Main idea: describing things just in text is often hard. Can you train a model that uses interleaved text/image prompts for image generation? The answer is yes. 🧵

Tweet Image 1

Lezhi Li Reposted

Image generation AI is a cognitive prosthetic for aphantasiacs. Opposing it is ableist.


Lezhi Li Reposted

The new 'Style References' is mind-blowing! I tried to transfer the styles of some movies. 🧵 Here is how and the results:

Tweet Image 1

Lezhi Li Reposted

If you have questions about why Meta open-sources its AI, here's a clear answer in Meta's earnings call today from @finkd

Tweet Image 1

Lezhi Li Reposted

Google just announced a new image generator! ImageFX (It's also available in Bard) 🧵 Comparisons and more in the thread:


In a world where there is never a shortage of new things to learn, finding an effective learning path is essential. I've not found a better diffusion-models tutorial than arxiv.org/abs/2208.11970. It explains things better than the whole quarter-long Stanford course I took.
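
For reference, the punchline the tutorial builds up to is the simplified noise-prediction objective, in standard DDPM notation (x_t is the noised sample, epsilon_theta the network's noise estimate):

x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)

\mathcal{L}_{\text{simple}} = \mathbb{E}_{t,\, x_0,\, \epsilon}\left[ \left\lVert \epsilon - \epsilon_\theta(x_t, t) \right\rVert^2 \right]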


What a fruitful #kdd week! Learned so much and, of course, witnessed how LLMs are the hottest way to do recommendation systems / causal inference / outlier detection… Plus, sharing the best experiment result I saw:

Tweet Image 1

I will be attending the #kdd2023 conference next week. Come say hi if you’re around! You are also welcome to come visit the Apple booth and learn about our latest research publications and career opportunities in AI and ML. See you there!


My latest side project on the topic of small LLMs! Thanks to my amazing collaborators for making this happen. You're welcome to leave a comment on Kaggle if you find it helpful: kaggle.com/code/mistyligh…

Too expensive to train #LLMs like ChatGPT? Check out our recent survey on small LMs! Outline: • What are "small" LMs? • How to make them small? • Comparison between recent #opensource small LMs • Applications in the real world Survey: tinyurl.com/mini-giants

Tweet Image 1
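
One of the standard routes to "small" that a survey like this covers is knowledge distillation: train a compact student to match a large teacher's output distribution. A minimal sketch of the usual soft-label loss; the temperature, mixing weight, and tensor shapes are illustrative choices, not values taken from the survey.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    t = temperature
    # Soft targets: KL between temperature-scaled teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    # Hard targets: ordinary cross-entropy against the ground-truth tokens.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Dummy usage: 8 token positions, vocabulary of 100.
loss = distillation_loss(torch.randn(8, 100), torch.randn(8, 100),
                         torch.randint(0, 100, (8,)))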


Daily dose of life lessons from stats-teaching YouTubers: * [Markov chain]: "the future is not independent of the past, but the future is conditionally independent of the past given the present". * [Ergodic theorem]: "anything that can happen, will happen".
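
The Markov-chain quote is easy to sanity-check numerically: in a simulated two-state chain, the distribution of the next state given the present barely changes when you also condition on the past. A toy check; the transition matrix and sample size are arbitrary choices.

import random
from collections import Counter

P = {0: [0.9, 0.1], 1: [0.4, 0.6]}  # P[state] = [P(next=0), P(next=1)]

random.seed(0)
chain, s = [0], 0
for _ in range(200_000):
    s = random.choices([0, 1], weights=P[s])[0]
    chain.append(s)

# Condition on present = 1, then split by what the past state was.
given_present = Counter()
given_present_and_past = {0: Counter(), 1: Counter()}
for past, present, future in zip(chain, chain[1:], chain[2:]):
    if present == 1:
        given_present[future] += 1
        given_present_and_past[past][future] += 1

def frac_one(c):  # fraction of futures equal to 1
    return c[1] / (c[0] + c[1])

print("P(future=1 | present=1)         ~", round(frac_one(given_present), 3))
print("P(future=1 | present=1, past=0) ~", round(frac_one(given_present_and_past[0]), 3))
print("P(future=1 | present=1, past=1) ~", round(frac_one(given_present_and_past[1]), 3))
# All three should land near 0.6: once the present is known, the past adds nothing.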


Lezhi Li Reposted

After 2 years, Practical Deep Learning for Coders v5 is finally ready! 🎊 This is a from-scratch rewrite of our most popular course. It has a focus on interactive explorations, & covers @PyTorch, @huggingface, DeBERTa, ConvNeXt, @Gradio & other goodies 🧵 course.fast.ai

