Xiang Pan

@XiangPan8

NLP

Joined October 2019
Similar Users:

Tiantian Feng (@tiantiaf)
Zhe Zeng (@zhezeng0908)
Tian Li (@litian0331)
Chacha Chen (@chachaachen)
Lin Gui (@ybnbxb)
Yu Gui (@YuChicago1234)
Ying Jin (@YingJin531)
Kexun Zhang✈️NeurIPS 2024 (@kexun_zhang)
Florida, yes (@funandgames333)
Zihao Wang (@wzihao12)
Limeng Cui (@lmcui)
Pengyu Cheng (@cheng_pengyu)
Cong (Clarence) Jiang (@statsCong)
Xuxing Chen (@XuxingChen3)
Yibo Jiang (@yibophd)

I have a deep learning joke, but CUDA out of memory.


I am wondering how many tokens we need for Meow-Language :)

Tweet Image 1

In the linear world, optimization still sucks...


Xiang Pan Reposted

Can a neural network learn to walk as a physical object in a physics simulation? Here I train walking neural nets with an evolutionary algorithm. The input nodes/feet are activated by sine waves at learned phases, and connections between two neurons extend based on their difference.
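
A minimal sketch of the phase-learning part of that loop, assuming a simple truncation-selection evolutionary strategy. The physics simulation is replaced by a made-up fitness function that rewards evenly spread foot phases, and names like N_FEET and MUTATION_STD are illustrative assumptions, not the author's actual setup:

```python
# Sketch: evolve the phases of sine-wave-driven feet with a simple
# evolutionary algorithm. `fitness` is a hypothetical stand-in for
# "distance walked in the physics simulator".
import random
import math

N_FEET = 4          # number of sine-driven input nodes/feet (assumed)
POP_SIZE = 32
GENERATIONS = 100
MUTATION_STD = 0.2

def fitness(phases):
    # Toy proxy for gait quality: reward phases spread around the circle.
    score = 0.0
    for i in range(N_FEET):
        for j in range(i + 1, N_FEET):
            d = abs(math.fmod(phases[i] - phases[j], 2 * math.pi))
            score += min(d, 2 * math.pi - d)   # circular distance
    return score

def mutate(phases):
    return [p + random.gauss(0, MUTATION_STD) for p in phases]

# start from a population of random phase vectors
population = [[random.uniform(0, 2 * math.pi) for _ in range(N_FEET)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 4]        # keep the top quarter
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=fitness)
print("learned phases:", [round(p, 2) for p in best])
# at runtime, foot i would be driven by sin(omega * t + best[i])
```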


Xiang Pan Reposted

This is a baby GPT with two tokens (0/1) and a context length of 3, viewed as a finite-state Markov chain. It was trained on the sequence "111101111011110" for 50 iterations. The parameters and architecture of the Transformer modify the probabilities on the arrows. E.g. we…

Tweet Image 1
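
The tweet's point is that training pushes the model's transition arrows toward the statistics of the training string. As a rough, model-free illustration (a sketch, not the actual Transformer), this computes the empirical next-token probabilities over every 3-token context of that same sequence:

```python
# View a binary-vocab, context-length-3 model as a Markov chain over
# 3-token states, and estimate transitions from the training string.
from collections import Counter, defaultdict

seq = "111101111011110"
counts = defaultdict(Counter)

# count next-token occurrences for every 3-token context in the sequence
for i in range(len(seq) - 3):
    context, nxt = seq[i : i + 3], seq[i + 3]
    counts[context][nxt] += 1

for context in sorted(counts):
    total = sum(counts[context].values())
    probs = {t: c / total for t, c in counts[context].items()}
    print(context, "->", probs)
# Contexts never seen in training (e.g. "000") keep whatever probabilities
# the model assigns; training sharpens the seen arrows toward these counts.
```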

youtube.com/watch?v=GSV5UD… Top comment: Lesson: don't argue when you become parents, or your kids will install Arch.


Just wondering why all the Linux mail clients are so old-fashioned. They are powerful and plugin-rich, but ugly.


Xiang Pan Reposted

Want a remote Data Science / ML job? Here are 5 jobs that are remote and pay in USD. -- Thread --


Xiang Pan Reposted

📢 A 🧵 on the future of NLP model inputs. What are the options and where are we going? 🔭
1. Task-specific finetuning (FT)
2. Zero-shot prompting
3. Few-shot prompting
4. Chain of thought (CoT)
5. Parameter-efficient finetuning (PEFT)
6. Dialog [1/]

Tweet Image 1
Tweet Image 2
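
As a hedged sketch of what options 2-4 from the thread look like at the input level, here are toy prompt templates for zero-shot, few-shot, and chain-of-thought prompting. The task, the examples, and the formatting are invented for illustration; real templates vary by model:

```python
# Build the three prompt styles for a made-up sentiment task.
task = "Is the sentiment positive or negative?"
query = "The battery dies in an hour."

# zero-shot: the bare task and input, no examples
zero_shot = f"{task}\nInput: {query}\nAnswer:"

# few-shot: a handful of solved examples before the query
few_shot = (
    f"{task}\n"
    "Input: I love this phone. Answer: positive\n"
    "Input: The screen cracked on day one. Answer: negative\n"
    f"Input: {query} Answer:"
)

# chain of thought: examples include intermediate reasoning
chain_of_thought = (
    f"{task} Think step by step before answering.\n"
    "Input: I love this phone.\n"
    "Reasoning: 'love' expresses strong approval, so the sentiment is good.\n"
    "Answer: positive\n"
    f"Input: {query}\nReasoning:"
)

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```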

Xiang Pan Reposted

The term 'spurious correlations' is often used informally in NLP to denote any undesirable feature-label correlations. But are all spurious features alike? This #EMNLP2022 paper tries to address that question through a causal lens - arxiv.org/abs/2210.14011 (w/ Xiang & @hhexiy)

