
Lapsa-Malawski

@munhitsu

Tweets on Technology and Art. Views my own @[email protected]

Joined September 2007

I loved gevent. It brought all the benefits of an event loop that I needed while leaving me with a straightforward API on monkey-patched threads. I never could understand why it was treated as the ugly duckling.

“I'm now convinced that async/await is, in fact, a bad abstraction for most languages, and we should be aiming for something better instead and that I believe to be thread.” lucumr.pocoo.org/2024/11/18/thr…
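A minimal sketch of the contrast both tweets are drawing, using only the standard library (the function names are mine, and the blocking call is a placeholder): the async/await version forces every caller up the stack to become `async`, while the threaded version keeps ordinary, synchronous call sites — which is essentially what gevent's monkey-patched threads give you.

```python
import asyncio
import threading

def fetch_sync(n):
    # Placeholder for blocking I/O (e.g. a socket read).
    return n * n

def run_with_threads(items):
    # Plain threads: the worker calls fetch_sync like any other function.
    results = {}
    def worker(n):
        results[n] = fetch_sync(n)
    threads = [threading.Thread(target=worker, args=(n,)) for n in items]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

async def fetch_async(n):
    # The async variant "colors" this function: every caller must now
    # be async too, all the way up the call stack.
    await asyncio.sleep(0)
    return n * n

async def run_with_asyncio(items):
    results = await asyncio.gather(*(fetch_async(n) for n in items))
    return dict(zip(items, results))

if __name__ == "__main__":
    print(run_with_threads([1, 2, 3]))
    print(asyncio.run(run_with_asyncio([1, 2, 3])))
```

Both produce the same result; the difference is purely in how far the concurrency model leaks into the API, which is the linked post's argument for threads.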



Nice, I might eventually be able to use the letter “m” in passwords for some ancient services. But then again, if they are already ancient, will their CISO actually care about the new NIST guidance? mastodon.social/@LukaszOlejnik


Lapsa-Malawski Reposted

Explains why I found myself forced not just to block Musk but also to mute the terms “Elon”, “Musk”, and “Elonmusk” to get a Twitter experience where every second tweet on my timeline wasn’t his. A case study in how you degrade a social network long-term.

The most pathetic billionaire that ever lived.



Swift.org - Announcing Swift Homomorphic Encryption swift.org/blog/announcin…


Lapsa-Malawski Reposted

As Apple Intelligence is rolling out to our beta users today, we are proud to present a technical report on our Foundation Language Models that power these features on devices and cloud: machinelearning.apple.com/research/apple…. 🧵


Lapsa-Malawski Reposted

Convolutional Neural Networks in action


Lapsa-Malawski Reposted

Yann LeCun says he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence. It could take up to 10 years to achieve, he tells the @FT in an interview on.ft.com/3KbShLF


I'm playing with G-Eval to test LLM outputs using an LLM. It roughly works until it doesn't. How am I supposed to reason about a test result like: "the actual output's prompt is in Polish which mismatches the language-prompt specified as Polish, aligning correctly"? #llm #gpt #deepeval
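The failure mode above is inherent to LLM-as-judge metrics like G-Eval: a model produces both a score and a free-form rationale, and the two can contradict each other. A minimal sketch of the pattern — `judge` here is a deterministic, hypothetical stand-in for the model call, and the names are mine, not deepeval's actual API:

```python
# LLM-as-judge sketch. A real G-Eval-style metric sends the criteria and
# the output to an LLM, which returns a score plus a natural-language
# rationale; the rationale can contradict the score, as the tweet shows.

def judge(criteria: str, output: str) -> dict:
    # Hypothetical stand-in: real implementations call an LLM here.
    passed = "Polish" in output
    return {
        "score": 1.0 if passed else 0.0,
        "reason": f"Checked criteria {criteria!r} against the output.",
    }

def evaluate(criteria: str, output: str, threshold: float = 0.5) -> bool:
    verdict = judge(criteria, output)
    # Surface the rationale alongside the score: trusting the number
    # alone hides exactly the inconsistencies described above.
    print(verdict["reason"])
    return verdict["score"] >= threshold

evaluate("answer must be in Polish", "Odpowiedź po polsku (Polish).")
```

In practice the only mitigation is to log and read the judge's rationale, not just assert on the score.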


honestly, Word, we have enough CPU to keep the table of contents updating automatically


Lapsa-Malawski Reposted

For anyone interested, I've just written up my 'AI form extractor' experiment from a few weeks ago as a blog post timpaul.co.uk/posts/using-ai…


I’ve just been told by the staff at Pret A Manger that there is no water in espresso 🙈


Lapsa-Malawski Reposted

How to be as "smart" as Auto-Regressive LLMs:
- memorize lots of problem statements together with recipes on how to solve them.
- to solve a new problem, retrieve the recipe whose problem statement superficially matches the new problem.
- apply the recipe blindly and declare…

There’s an art to distilling these to the absolute minimal necessary text. The human brain can’t comprehend how stupid these things are without practice.

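The "retrieve a superficially matching recipe" loop from the quote above can be made concrete in a few lines. This is a toy illustration, not anyone's actual system: matching is plain token overlap — deliberately superficial — and the retrieved recipe is applied with no check that it fits the new problem.

```python
# Toy version of LeCun's recipe: memorize (problem statement -> recipe)
# pairs, then answer new problems by surface-level similarity alone.

RECIPES = {
    "sort a list of numbers": "use merge sort",
    "find the shortest path in a graph": "use Dijkstra's algorithm",
}

def token_overlap(a: str, b: str) -> int:
    # "Superficial match": count shared words, ignoring meaning entirely.
    return len(set(a.split()) & set(b.split()))

def solve(problem: str) -> str:
    # Retrieve whichever memorized statement shares the most tokens,
    # then apply its recipe blindly.
    best = max(RECIPES, key=lambda stmt: token_overlap(stmt, problem))
    return RECIPES[best]

# "sort a list of complaints" matches on "sort a list of", so the
# sorting recipe is returned even though the problem is not numeric.
print(solve("sort a list of complaints"))
```

The point of the exercise is that the retrieval step never checks whether the shared tokens carry the load-bearing part of the problem.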


Lapsa-Malawski Reposted

At this point I feel like we understand pretty well what's going on with LLMs:
- Outputs are roughly equivalent to kernel smoothing over positional embeddings (arxiv.org/pdf/1908.11775…)
- The learned computation model is *probably* bounded by RASP-L (arxiv.org/pdf/2310.16028…)
- …
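The kernel-smoothing claim in the first bullet can be sketched with plain Python (names and dimensions are mine): the output for a query is a softmax-weighted average of stored values, with weights given by the query's similarity to stored keys — the same shape as Nadaraya-Watson kernel regression, and the sense in which attention outputs resemble smoothing over embeddings.

```python
import math

def kernel_smooth(query, keys, values, temperature=1.0):
    # Similarity of the query to each key (dot product).
    sims = [sum(q * k for q, k in zip(query, key)) / temperature
            for key in keys]
    # Softmax the similarities into kernel weights (max-shifted for
    # numerical stability).
    m = max(sims)
    weights = [math.exp(s - m) for s in sims]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Output = kernel-weighted average of the stored values.
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values))
            for d in range(dim)]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0], [20.0]]
# A query aligned with the first key, at low temperature, recovers
# (almost exactly) the first stored value.
print(kernel_smooth([1.0, 0.0], keys, values, temperature=0.1))
```

Lowering the temperature sharpens the kernel toward nearest-neighbour lookup; raising it flattens the output toward the mean of the stored values.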

