JackieChang Reposted

How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge🦜? In our new preprint, we look at the pretraining data and find evidence against this: Procedural knowledge in pretraining drives LLM reasoning ⚙️🔢 🧵⬇️

JackieChang Reposted

Model Garden by Google Cloud is badass, just be careful with your API key security lmao. You can register for free and get $300 worth of credits. That's a lot to play with, and they have all the models shown in the picture below. I recommend getting the free trial, no payment.

JackieChang Reposted

🚀 Big news! We’re thrilled to announce the launch of Llama 3.2 Vision Models & Llama Stack on Together AI. 🎉 Free access to Llama 3.2 Vision Model for developers to build and innovate with open source AI. api.together.ai/playground/cha… ➡️ Learn more in the blog…

JackieChang Reposted

How to write a Research Proposal (1/4)

JackieChang Reposted

What is a 𝗩𝗲𝗰𝘁𝗼𝗿 𝗗𝗮𝘁𝗮𝗯𝗮𝘀𝗲? With the rise of Foundational Models, Vector Databases skyrocketed in popularity. The truth is that a Vector Database is also useful outside of a Large Language Model context. When it comes to Machine Learning, we often deal with Vector…
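
The idea above can be sketched as a minimal in-memory vector store: embed items as vectors, then answer queries by cosine similarity. `TinyVectorStore` and the toy vectors are illustrative, not any real database's API; production vector databases add approximate-nearest-neighbor indexes (HNSW, IVF) so search scales past brute force.

```python
import numpy as np

class TinyVectorStore:
    """Brute-force cosine-similarity search over unit-normalized vectors."""

    def __init__(self):
        self.items = []  # (id, unit-normalized vector) pairs

    def add(self, item_id, vec):
        v = np.asarray(vec, dtype=float)
        self.items.append((item_id, v / np.linalg.norm(v)))

    def search(self, query, k=1):
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        # Dot product of unit vectors == cosine similarity.
        scored = [(item_id, float(q @ v)) for item_id, v in self.items]
        return sorted(scored, key=lambda s: -s[1])[:k]

store = TinyVectorStore()
store.add("cat", [1.0, 0.2, 0.0])
store.add("car", [0.0, 1.0, 0.9])
hits = store.search([0.9, 0.1, 0.0], k=1)  # nearest item by cosine similarity
```

The same store works whether the vectors come from an LLM embedding model or any other feature extractor, which is the point the tweet makes about usefulness outside the LLM context.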


JackieChang Reposted

Writing a scientific article: A step-by-step guide for beginners (1/7)

JackieChang Reposted

I built an immersive English shadowing app that delivers a brand-new shadowing experience through AI voice detection: once you finish repeating a sentence, the system automatically plays the next one, so you can shadow in one uninterrupted, immersive flow. Over the past 20 hours I've been polishing details and fixing bugs, and the site is finally live! Please comment and repost; your support keeps me iterating! @blackanger

JackieChang Reposted

Run evals—directly from the OpenAI dashboard. Use your test data to compare model performance, iterate on prompts, and improve outputs. platform.openai.com/docs/guides/ev… Here's a quick walkthrough:
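
Conceptually, an eval run scores each model's outputs against expected answers on shared test data, then compares the aggregate scores. A generic sketch of that loop, with dictionaries standing in for real model calls (in practice these would be API requests, and the dashboard flow differs); prompts and answers are made up:

```python
def exact_match_eval(model, test_data):
    """Fraction of prompts where the model's output exactly matches the answer."""
    correct = sum(1 for prompt, expected in test_data if model(prompt) == expected)
    return correct / len(test_data)

test_data = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]
model_a = {"2+2": "4", "capital of France": "Paris", "3*3": "6"}.get  # one miss
model_b = {"2+2": "4", "capital of France": "Paris", "3*3": "9"}.get

score_a = exact_match_eval(model_a, test_data)  # 2/3
score_b = exact_match_eval(model_b, test_data)  # 1.0
```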


JackieChang Reposted

Can LLMs reason effectively without prompting? Great paper by @GoogleDeepMind By considering multiple paths during decoding, LLMs show improved reasoning without special prompts. It reveals LLMs' natural reasoning capabilities. LLMs can reason better by exploring multiple…
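
The decoding idea can be sketched as follows, with a hand-built toy token table standing in for a real LM. Note the simplification: the paper scores confidence via the probability margin on the answer tokens, while this sketch uses the weakest greedy step; `toy_next_probs` and its values are entirely made up.

```python
def toy_next_probs(prefix):
    """Stand-in for an LM's next-token distribution at each prefix."""
    table = {
        (): {"6": 0.5, "3": 0.4},                # greedy start jumps to a guess
        ("6",): {"<eos>": 0.6},                  # low-confidence quick answer
        ("3",): {"+": 0.9},                      # lower-ranked start begins a
        ("3", "+"): {"2": 0.9},                  # step-by-step path
        ("3", "+", "2"): {"=": 0.9},
        ("3", "+", "2", "="): {"5": 0.98},
        ("3", "+", "2", "=", "5"): {"<eos>": 0.9},
    }
    return table[tuple(prefix)]

def decode_path(first_token, max_len=6):
    """Continue greedily from a chosen first token; track per-step confidence."""
    path, confidences = [first_token], []
    while path[-1] != "<eos>" and len(path) < max_len:
        tok, p = max(toy_next_probs(path).items(), key=lambda kv: kv[1])
        path.append(tok)
        confidences.append(p)
    return path, min(confidences)  # confidence of the weakest step

# Branch on the top-2 first tokens, keep the most confident completed path.
first_tokens = sorted(toy_next_probs([]).items(), key=lambda kv: -kv[1])[:2]
paths = [decode_path(tok) for tok, _ in first_tokens]
best_path, best_conf = max(paths, key=lambda pc: pc[1])
```

Here the greedy path commits to a quick answer with low confidence, while the second-ranked first token opens a step-by-step path the model completes with high confidence, which is the effect the tweet describes.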

JackieChang Reposted

Found a powerful, offline-capable open-source AI desktop app on GitHub: ScreenPipe. It monitors your computer 24/7, collecting data via screen recording, OCR, audio capture, and transcription, and saves everything to a local database. GitHub: github.com/mediar-ai/scre… You can then use LLMs to chat about, summarize, and review what you've done on your machine. Pretty impressive!


JackieChang Reposted

I recently switched my desktop proxy tool to Mihomo Party, highly recommended. 1. Open source and free, supports Windows, macOS, and Linux, and the download page offers a Chinese-language build, very beginner-friendly 2. Built on Clash, so it seamlessly supports Clash config files 3. Clean, easy-to-use interface with a very detailed onboarding guide 4. Built-in Sub-Store Official download: mihomo.party

JackieChang Reposted

LLaVA-o1 is the first visual language model capable of spontaneous, systematic reasoning, similar to GPT-o1! 🤯 The 11B model outperforms Gemini-1.5-Pro, GPT-4o-mini, and Llama-3.2-90B-Vision-Instruct on six multimodal benchmarks.

JackieChang Reposted

Transformer by hand ✍️ in Excel ~ I just released my first-ever "Full-Stack" implementation of the Transformer model. 👇Download xlsx to give it a try!
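
The arithmetic such a spreadsheet walks through cell by cell is ordinary single-head scaled dot-product attention. A tiny version with made-up shapes and identity weights, chosen purely to keep the numbers legible:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled dot-product scores
    return softmax(scores) @ V               # attention-weighted sum of values

X = np.array([[1.0, 0.0],   # token 1
              [0.0, 1.0],   # token 2
              [1.0, 1.0]])  # token 3
W = np.eye(2)               # identity projections for readability
out = self_attention(X, W, W, W)
```

Each output row is a convex combination of the value rows, which is exactly what the softmax rows (each summing to 1) guarantee.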


JackieChang Reposted

"AI Travel Agent"

JackieChang Reposted

Ultravox v0.4.1: an open-source multimodal real-time speech model whose speech understanding approaches GPT-4o. It understands text and human speech directly, with no separate ASR stage; currently it outputs text only. Time to first response is 150 ms, generating at roughly 60 tokens/second. Built on Llama3.1-8B and Whisper. GitHub: github.com/fixie-ai/ultra… @Gradio huggingface.co/spaces/freddya… #AI实时语音


JackieChang Reposted

Audio LMs scene is heating up! 🔥 @FixieAI Ultravox 0.4.1 - 8B model approaching GPT4o level, pick any LLM, train an adapter with Whisper as Audio Encoder, profit 💥 Bonus: MIT licensed checkpoints > Pre-trained on Llama3.1-8b/ 70b backbone as well as the encoder part of…
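
A shape-level sketch of that adapter recipe: a frozen Whisper-like encoder produces audio frames, and a small trained projection maps them into the LLM's embedding dimension so audio "tokens" can sit next to text embeddings. All dimensions and values below are illustrative, not Ultravox's actual configuration.

```python
import numpy as np

d_audio, d_llm, n_frames = 384, 4096, 50  # illustrative sizes only
rng = np.random.default_rng(0)

audio_frames = rng.normal(size=(n_frames, d_audio))   # frozen encoder output
W_adapter = 0.02 * rng.normal(size=(d_audio, d_llm))  # the only trained weights
audio_embeddings = audio_frames @ W_adapter           # feeds the LLM input layer
```

The appeal of the recipe is that only `W_adapter` needs training; the audio encoder and the LLM backbone stay frozen.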

