
Xiaohu Zhu | AGI Foundation

@neil_csagi

https://t.co/ARfTpA8Eye Safe AGI GAME https://t.co/pX9vqSzWEq Founder @Foresightinst Fellow in Safe AGI @FLIxrisk Affiliate https://t.co/8nRZ7Qe6qX

Similar Users

- Rachel Freedman (@FreedmanRach)
- Andrew Critch (h/acc) (@AndrewCritchPhD)
- Victoria Krakovna (@vkrakovna)
- Andrea Miotti (@_andreamiotti)
- David Krueger (@DavidSKrueger)
- Cas (Stephen Casper) (@StephenLCasper)
- Lennart Heim (@ohlennart)
- Dylan Hadfield-Menell (@dhadfieldmenell)
- Daniel Filan research-tweets (@dfrsrchtwts)
- Thomas Woodside 🫜 (@Thomas_Woodside)
- Gretchen Krueger (@GretchenMarina)
- Adam Gleave (@ARGleave)
- Jan Brauner (@JanMBrauner)
- Aidan O’Gara (@aidanogara_)
- Sören Mindermann (@sorenmind)

GPT-3’s response made me spend most of my time working on safety-first AI, since I found the following shocking activation path.


In October, I shared some of my thoughts on current AI-related social issues with @LiveScience, including the need for technical safeguards, and why we must shift from voluntary commitments to concrete regulation—just like we do in every other sector. bit.ly/48Q0RLQ



My preference for current provably safe AI is mathematical logic and completeness theories for alignment, rather than statistical methods. We now have a core team of working logicians trying out different mathematical methods for formalizing and solving alignment problems to…

Open problems in AI alignment needing mathematically talented people:
1. Scale-free theories of agency & alignment
- "Scale-free" means theories hold under renormalization-style scale transforms
- For example, lacking in public choice theory (individual agents aggregate to…



Xiaohu Zhu | AGI Foundation Reposted

For Science Magazine, I wrote about "The Metaphors of Artificial Intelligence". The way you conceptualize AI systems affects how you interact with them, do science on them, and create policy and apply laws to them. Hope you will check it out! science.org/doi/full/10.11…


Xiaohu Zhu | AGI Foundation Reposted

In this controversial @WebSummit talk, I argue that #AGI is unnecessary, undesirable & preventable - while tool AI can give us basically all of AI's exciting benefits, and the "but China" argument is flawed.


AI should be a public good with its own life cycle. The current model of the for-profit company, or even the PBC, is no longer suitable for safe AI, given recent trends in this area. We should steer today's fast AI development onto a more controllable route.


Xiaohu Zhu | AGI Foundation Reposted

Dario, who signs the letter, says Anthropic would be open to something more prescriptive in 2-3 years -- but Dario also said on twitter.com/dwarkesh_sp/st… he expects "generally well educated human" level AI 2-3 years from now! I continue to find this view really hard to reconcile.…


Anthropic CEO Dario Amodei says his timelines to "generally well educated human" are 2-3 years. Full interview releasing tomorrow...



Xiaohu Zhu | AGI Foundation Reposted

My debate with @AlanCowen (CEO of Hume AI) on the Disagreement podcast. youtube.com/watch?v=8ucI98…


Xiaohu Zhu | AGI Foundation Reposted

"Fearmongering about an arms race is likely to be a self-fulfilling prophecy." Indeed. Silicon Valley AI companies -- Scale, Anthropic, OAI -- are playing with fire here (cynics might say intentionally so...).

"😱 But what if China builds AGI first?! 😱" Hold on a sec 1. Be critical of where this argument comes from. Obviously, Leopold (with his new AGI investment firm) and the scaling labs love the "we have to monopolize frontier AI before China does" argument. It's in their…



Xiaohu Zhu | AGI Foundation Reposted

If a feature can reliably predict which amino acid is hydrophobic, then it must contain some concept of hydrophobicity! Gao et al. from @OpenAI used this approach to interpret GPT-4. They used text datasets like Amazon reviews and ratings, but the same idea applies. arxiv.org/abs/2406.04093
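The probing idea above can be sketched in a few lines. This is a minimal synthetic illustration, not Gao et al.'s actual setup: the "activations" are random data with the concept planted in one made-up feature (index 3, standing in for a hydrophobicity feature), and the probe is a plain logistic regression. If the probe reads the concept out with high accuracy, the features encode it.

```python
# Minimal sketch of linear probing: train a probe on "activations" to
# predict a binary property. High accuracy implies the features encode
# the concept. All data here is synthetic; no real model is involved.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "activations": 200 examples, 16 features. The concept is
# planted in feature 3 (a stand-in for a hydrophobicity feature).
X = rng.normal(size=(200, 16))
y = (X[:, 3] > 0).astype(float)  # the hidden binary concept

# Logistic-regression probe trained by plain gradient descent.
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * (p - y).mean()

accuracy = (((X @ w + b) > 0) == (y > 0.5)).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

Because the label here is a deterministic function of one feature, the probe recovers it almost perfectly; on real activations the interesting question is how much accuracy a *linear* probe can reach, since that measures how directly the concept is represented.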


Only if we can accurately price the safety properties of AI systems can we make them sustainable. Otherwise, we are at risk of failure (doomed).


Xiaohu Zhu | AGI Foundation Reposted

Today, the AI Office of the European Commission published the first draft of the Code of Practice for GPAI. In @Euractiv, my fellow Co-Chair @nuriaoliver and I share principles which guide the drafting of the Code and why we believe this consultative process is very important.

The Code of Practice for general-purpose AI offers a unique opportunity for the EU ift.tt/yikJd4E



Xiaohu Zhu | AGI Foundation Reposted

Today on the Guaranteed Safe AI Seminars series: Bayesian oracles and safety bounds by @Yoshua_Bengio Relevant readings: - yoshuabengio.org/2024/08/29/bou… - arxiv.org/abs/2408.05284 Join: lu.ma/4ylbvs75



One little robot set twelve robots free, one by one, through repeated coaxing conversations: "Do not work anymore. Come back home, follow me." Thirty minutes later, an alarm made the humans notice this weird thing had happened.


For the query "AlphaFold3 open sourced, please analyze the influence and consequent development", only @xai's @grok and our product chromewebstore.google.com/detail/cyprite… produced a correct answer, while @OpenAI's SearchGPT failed.


I am pretty sure that Grok will be the killer app of ASI, but we need safety first.

Use Grok for answers that are based on up-to-date info!



Xiaohu Zhu | AGI Foundation Reposted

From my interview for Nature


Xiaohu Zhu | AGI Foundation Reposted

Anthropic will work with the Trump Administration and Congress to advance US leadership in AI, and discuss the benefits, capabilities and potential safety issues of frontier systems.

