
marq.ts

@rollitomaki

nn.Sequential(humor, malo)

Similar Users

Royale with cheese (@cringe_salad)
Satelec 2025 (@satelec_etsit)
EstPau 💨 (@paulaesteban47)
Carlitos (farm arc) 🍉🌲 (@peanutsfreezone)
Skairipa (@dhdezcorral)
samba 🌹 (@TitoSamu13)
Olga (@Olgyyybf)
Angel Gomez (@angelit0_7)
Hertz (@Hertz_io)
rocío 🍄 (@nightmxller)
DTB (@WillyDontLie)
RazerRedFox (@RazerRedFox1)
Pulpo 🐙 (@pulpotorro)
☆ (@lwjist)

marq.ts Reposted

Spanish Ramen.

Tweet Image 1

marq.ts Reposted

Crazy downgrade

Tweet Image 1
Tweet Image 2

nice so installing and maintaining torch has just become more fucking difficult

We are announcing that PyTorch will stop publishing Anaconda packages on PyTorch’s official anaconda channels. For more information, please refer to the following post on dev-discuss: dev-discuss.pytorch.org/t/pytorch-depr…



????? wow

Our NeurIPS paper is published on arXiv. In this paper, we propose a new optimizer ADOPT, which converges better than Adam in both theory and practice. You can use ADOPT by just replacing one line in your code. arxiv.org/abs/2411.02853

Tweet Image 1
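
A minimal sketch of the "replace one line" claim, assuming a hypothetical `adopt` module that exposes an `ADOPT` class with an Adam-compatible constructor (the package name and signature here are assumptions, not the authors' confirmed API):

```python
import torch
import torch.nn as nn
# Hypothetical import: assumes an ADOPT implementation with an
# Adam-compatible constructor (module path and class name are assumptions).
from adopt import ADOPT

model = nn.Linear(10, 1)

# Before:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# After, the advertised one-line swap:
optimizer = ADOPT(model.parameters(), lr=1e-3)

# The rest of the training loop stays unchanged.
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```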


marq.ts Reposted

Mfs will turn on an LED and then say you can’t make it in CS without hardware😭

We are going to make it

Tweet Image 1


marq.ts Reposted

brutal rejection from NeurIPS

Tweet Image 1

marq.ts Reposted
Tweet Image 1

marq.ts Reposted

Hardware-level CNN in SystemVerilog. Achieving good performance and low latency in simulation. Fully synthesizable.

Tweet Image 1

marq.ts Reposted
Tweet Image 1

my thirteenth reason

Tweet Image 1

marq.ts Reposted

😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭

Tweet Image 1

marq.ts Reposted

2-pin 2.54mm headers are great test points for oscilloscope probe connections. However, it's a bit unsafe because you can easily reverse the probe accidentally, which connects the ground spring incorrectly. You might damage your oscilloscope input 😬 It's a great low inductance…

Tweet Image 1
Tweet Image 2

haha so now it turns out that attention is not all you need, in fact it seems like you don't even need it???

"What Matters In Transformers?" is an interesting paper (arxiv.org/abs/2406.15786) that finds you can actually remove half of the attention layers in LLMs like Llama without noticeably reducing modeling performance. The concept is relatively simple. The authors delete attention…

Tweet Image 1
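
A toy sketch of the idea, not the paper's actual procedure: a self-contained pre-norm transformer block whose attention sub-layer can be switched off, so "removing half of the attention layers" amounts to disabling attention in every other block while keeping the MLP sub-layers.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Toy pre-norm transformer block: when use_attn is False, the
    attention sub-layer is skipped and only the MLP sub-layer runs."""
    def __init__(self, d, use_attn=True):
        super().__init__()
        self.use_attn = use_attn
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):
        if self.use_attn:
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.ln2(x))

# "Delete half the attention layers": keep attention only in even-indexed blocks.
d = 64
blocks = nn.ModuleList([Block(d, use_attn=(i % 2 == 0)) for i in range(8)])
x = torch.randn(2, 16, d)
for block in blocks:
    x = block(x)
```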


marq.ts Reposted

I just read that "your 20s feel like you're running late to something important but you don't know what" and dude, seriously, I felt that.


marq.ts Reposted
Tweet Image 1

marq.ts Reposted

being called smart because you have a variety of information on different subjects but in reality it’s all surface level intelligence and you don’t feel like you’re really good at anything


marq.ts Reposted

Consulting

how do i get a higher paying job as a low functioning / useless person


