
Sinatras

@myainotez

BS CS & EE, AI/ML Engineer in Automotive

Pinned

We got a new smolLM x Entropix update! You can now display your own token statistics in 3D space and inspect how your parameters fit into the model's entropy characteristics. It comes with two experimental configs and a data export module that lets you use external tools.
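For anyone wondering what those token statistics are, here is a minimal sketch of how per-token entropy and varentropy can be computed from a causal LM's logits and dumped to JSON for an external 3D plot. The model id, output path, and record format below are illustrative assumptions, not the actual export module shipped with the update.

```python
# Minimal sketch: per-token entropy / varentropy from a causal LM, exported as JSON.
# "HuggingFaceTB/SmolLM-135M" and "token_stats.json" are placeholders, not the
# Entropix export module's real interface.
import json
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM-135M"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "Entropy tells you how unsure the model is about the next token."
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits[0]              # (seq_len, vocab)

log_probs = F.log_softmax(logits, dim=-1)
probs = log_probs.exp()
entropy = -(probs * log_probs).sum(-1)         # H(p) at each position
# varentropy: variance of -log p under p, the spread measure Entropix also tracks
varentropy = (probs * (log_probs + entropy.unsqueeze(-1)) ** 2).sum(-1)

records = [
    {"token": tok.decode(int(t)), "entropy": e.item(), "varentropy": v.item()}
    for t, e, v in zip(ids[0], entropy, varentropy)
]
with open("token_stats.json", "w") as f:
    json.dump(records, f, indent=2)            # feed this to your 3D plotting tool of choice
```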


Sinatras Reposted

Releasing two trillion tokens in the open. huggingface.co/blog/Pclanglai…

Tweet Image 1

We need to be faster or else philology will be automated before math

So long as we're automating math, let's automate philology as well.

Tweet Image 1


Sinatras Reposted

The qwen 2.5 models seem to have lower overall entropy vs llama models, which is one reason i gravitate to llamas. qwen 2.5 coders have the lowest average entropy of any model family i've tested. that said ... qwen 2.5 coder 32B + entropix is looking like an absolute beast and…

Tweet Image 1
Tweet Image 2
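A rough sketch of how you could reproduce this kind of comparison yourself: average next-token entropy over a small prompt set. The model ids and prompts are placeholders, and this is not the exact methodology behind the numbers in the screenshots.

```python
# Rough sketch: compare average next-token entropy across models on a few prompts.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

prompts = [
    "def quicksort(arr):",
    "The capital of France is",
    "Explain entropy in one sentence:",
]

def mean_entropy(model_id: str) -> float:
    """Average per-position entropy of the model's next-token distribution."""
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    model.eval()
    ents = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        with torch.no_grad():
            log_probs = F.log_softmax(model(ids).logits[0], dim=-1)
        ents.append((-(log_probs.exp() * log_probs).sum(-1)).mean().item())
    return sum(ents) / len(ents)

# Placeholder model ids; swap in whatever families you want to compare.
for mid in ["Qwen/Qwen2.5-Coder-7B-Instruct", "meta-llama/Llama-3.1-8B-Instruct"]:
    print(mid, round(mean_entropy(mid), 3))
```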

Really excited for the public release of this!

Today we are launching the Forge Reasoning API Beta, an advancement in inference time scaling that can be applied to any model or a set of models, for a select group of people in our community. nousresearch.com/introducing-th… The Forge Reasoning engine is capable of dramatically…

Tweet Image 1
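The announcement itself is behind the link, so purely as an illustration of what "inference-time scaling applied to any model" can mean, here is a generic self-consistency sketch: sample several candidate answers and take a majority vote. This is not Forge's actual engine; `sample_answer` is a stand-in for whatever model API you use.

```python
# Generic inference-time scaling illustration (self-consistency / majority vote).
# NOT Forge's actual method: sample_answer is a stand-in for any model call.
import random
from collections import Counter

def sample_answer(question: str, temperature: float = 0.8) -> str:
    """Stand-in for a call to an LLM; replace with your model of choice."""
    # Toy distribution: the 'model' is right most of the time but not always.
    return random.choices(["42", "41", "43"], weights=[0.6, 0.2, 0.2])[0]

def self_consistency(question: str, n_samples: int = 16) -> str:
    """Spend more compute at inference time: sample n answers, return the mode."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    print(self_consistency("What is 6 * 7?"))   # majority vote is usually "42"
```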


User: I want you to act as a software quality assurance tester.
LLM: Sure
User: I want you to help me arrange a couple of flowers for a bouquet.
LLM: I'm dying!! HELP!!

Tweet Image 1

It seems I'm entitled to a win in both worlds

more generally, i am feeling good about a bright future for cryptocurrency!



Planning one more push on pre-training before it reaches its limits

About the limits of LLMs: Ilya gave Reuters an interview. Reasoning is the future. Reuters published an important article today that picks up the discussion started by The Information yesterday. The following aspects are essential: - Pre-training is reaching its limits. "Ilya…

Tweet Image 1


Opus was probably having fun enjoying some quality time with its other LLM friends and got distracted by that

LLMs play Connect 4!



Sinatras Reposted

fun game for learning lean/math proofs: adam.math.hhu.de/#/

Tweet Image 1
Tweet Image 2
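The link points to a browser-based Lean proof game; for a taste of the kind of proofs it walks you through, here is a small Lean 4 example. The names follow Lean's built-in Nat; the game uses its own toy naturals, so treat this as a flavour sample rather than a level solution.

```lean
-- A small Lean 4 proof in the spirit of the game's early levels.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```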

Sinatras Reposted

it's like the same model

Tweet Image 1

Many such cases

all token entropy wants to be low.

Tweet Image 1


Sinatras Reposted

Entropy is the ultimate boss battle


Great references actually, I'm impressed

Tweet Image 1

Ask ChatGPT “based on what you know about me, draw a picture of what you think my current life looks like” and paste your responses below. Thanks again @mreflow & @danshipper



Sinatras Reposted

this is peak blogging... nothing can come close to this... every part and process explained in detail... you can move and control things and see the processes from different views... ciechanow.ski/archives/

Tweet Image 1
Tweet Image 2
Tweet Image 3
Tweet Image 4

Sinatras Reposted

dear president @realDonaldTrump, because of a few sf elites, we are forced to use chinese models like qwen-2.5 locally; however, we really want to use american-made models such as openai-o1 or claude-sonnet-3.5. can you please issue an EO and open-source these models?

Tweet Image 1

Sinatras Reposted

maximum a posteriori (map) estimation is a key concept in bayesian statistics, used to estimate the most probable value of an unknown parameter given observed data and prior knowledge. it combines the principles of maximum likelihood estimation with bayesian inference, providing…

Tweet Image 1
Tweet Image 2
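As a concrete example of the idea in the quoted thread: for a Bernoulli likelihood with a Beta(α, β) prior, the MAP estimate has a closed form and pulls the MLE toward the prior. The numbers below are made up purely for illustration.

```python
# MAP vs MLE for a coin-flip (Bernoulli) parameter with a Beta(alpha, beta) prior.
# With k heads out of n flips:
#   MLE: theta = k / n
#   MAP: theta = (k + alpha - 1) / (n + alpha + beta - 2)   (for alpha, beta > 1)
def bernoulli_mle(k: int, n: int) -> float:
    return k / n

def bernoulli_map(k: int, n: int, alpha: float, beta: float) -> float:
    return (k + alpha - 1) / (n + alpha + beta - 2)

# Illustrative numbers: 7 heads in 10 flips, with a prior that favours a fair coin.
k, n = 7, 10
print(bernoulli_mle(k, n))              # 0.7   -- data only
print(bernoulli_map(k, n, 5.0, 5.0))    # 0.611 -- pulled toward 0.5 by the prior
```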

Frog made Entropix sentient, we are doomed; she almost got mad at me for asking a simple question with a "step-by-step" instruction.

Tweet Image 1

unburdening language, one step at a time.



Sinatras Reposted

unburdening language, one step at a time.

it's become too powerful

Tweet Image 1

