
Ziqi Zhang

@ZiqiCharles

CS Postdoc at @UofIllinois. Graduated from @PKU1898. Software engineering, AI, and computer security.

Joined January 2016
Similar Users
Ting Su (@su_tingsu)

CC (@cc52354447)

Yiling Lou (@yiling__LOU)

Yun Lin (@llmhyy)

Zhou Yang (@Zhou_Yang_X)

Chunyang Chen (@chun_yang_chen)

Wei Yang (@davidyoung8906)

Pei Liu (@peiliu17160192)

Abhik Roychoudhury (@AbhikRoychoudh1)

Shilin HE (@ShilinHe)

Ding Wang (@DingPKU)

Tejas Bhakta (@tejasybhakta)

Xusheng Xiao (@xs_xiao)

Chengpeng Wang (@Chasen86341870)

Danning Xie | OpenToWork (@danning_x)

Ziqi Zhang Reposted

Intern position at @brave: brave.com/careers/ My team is looking for strong students interested in private, secure, and trustworthy ML. Feel free to email me with the subject line "Brave Internship 2025" and highlight your 3 most significant publications on these topics.


Ziqi Zhang Reposted

How can we leverage white-box information (i.e., source code) for fuzzing compilers? Check out our work “WhiteFox 🦊: White-Box Compiler Fuzzing Empowered by Large Language Models” at OOPSLA 2024! w/ @yinlin_deng, @lry89757, Jiayi Yao, @JiaweiLiu_, @Reyhaneh, and @LingmingZhang (1/N)


I'm on the way to USENIX Security'24. We will present a paper on privacy-preserving app authentication. I'm also happy to discuss TEE-based AI security and other AI-related security topics. If you're interested, please get in touch with me! #usenix #USESEC2024


😺Agentless! Do we need complicated agents to solve real-world SE tasks? No!

Introducing OpenAutoCoder-Agentless😺: a simple agentless solution that solves 27.3% of GitHub issues on SWE-bench Lite at ~$0.34 each, outperforming all open-source AI SW agents! It's fully open-source, try it out: 🧑‍💻github.com/OpenAutoCoder/… 📝huggingface.co/papers/2407.01…



An elegant idea for MoE!

🚀 Introducing “XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts” at #ACL2024! XFT is a novel training scheme for instruction tuning that achieves upcycled-MoE performance with only dense-model compute at inference: arxiv.org/abs/2404.15247


