@kasl_ai Profile picture

Krueger AI Safety Lab

@kasl_ai

We are a research group at the University of Cambridge led by @DavidSKrueger, focused on avoiding catastrophic risks from AI

Krueger AI Safety Lab Reposted

"hot take" (((shouldn't in fact be a hot take, but in the context of current AI policy discussions anything other than "do some evals" is a hot take, sadly....)))

A lot of safety-critical industries manage risk by estimating it and agreeing to keep it below a certain number. Should developers of powerful AI systems do the same? Our take: They should, but with caution, given the uncertainty of risk estimates. Also: well done, Leonie! :)



Krueger AI Safety Lab Reposted

Could you help us build @Cambridge_Uni's #AI research community? We are looking for a Programme Manager who can deliver key programmes, scope new opportunities & ensure that our mission embeds agile project management. 📅 Deadline: 8 July Read more ⬇️ ai.cam.ac.uk/opportunities/…


Krueger AI Safety Lab Reposted

New paper on sandbagging and password-locked models, concurrent with our work arxiv.org/abs/2405.19550

We need trustworthy capability evaluations to ensure the safety of AI systems.🛡️ But what if AI systems can hide (dangerous) capabilities during evaluations? 🕵️ This is the problem of *sandbagging*, which we explore in our new paper: arxiv.org/abs/2406.07358

Tweet Image 1


Krueger AI Safety Lab Reposted

Super proud to have contributed to @AnthropicAI's new paper. We explore whether AI could learn to hack its own reward system through generalization from training. Important implications as AI systems become more capable.

New Anthropic research: Investigating Reward Tampering. Could AI models learn to hack their own reward system? In a new paper, we show they can, by generalization from training in simpler settings. Read our blog post here: anthropic.com/research/rewar…

Tweet Image 1


Krueger AI Safety Lab Reposted

Super proud to have been able to make my little contribution to this monumental work. Huge credit to @usmananwar391 for recognizing the need for this paper and pulling everything together to make it happen

I’m super excited to release our 100+ page collaborative agenda - led by @usmananwar391 - on “Foundational Challenges In Assuring Alignment and Safety of LLMs” alongside 35+ co-authors from NLP, ML, and AI Safety communities! Some highlights below...

Tweet Image 1


New paper from Krueger Lab alum @MicahCarroll Congrats 🎉

Excited to share a unifying formalism for the main problem I’ve tackled since starting my PhD! 🎉 Current AI Alignment techniques ignore the fact that human preferences/values can change. What would it take to account for this? 🤔 A thread 🧵⬇️

Tweet Image 1


Krueger AI Safety Lab Reposted

We released a recent paper about AI risk management and affirmative safety! AI experts have suggested that AI developers should have to make a "positive" or "affirmative" case that their models are safe. What might this actually look like? 🧵 👇

Tweet Image 1

Krueger AI Safety Lab Reposted

Real privilege today to get scholars from @LeverhulmeCFI ,@CSERCambridge, @BennettInst, & @kasl_ai together today for a discussion of Concordia's State of AI Safety in China report with Kwan Yee Ng. Important work, buzzing exchange. concordia-ai.com


Krueger AI Safety Lab Reposted

It's great that governments and researchers are finally waking up to the extreme risks posed by AI. But we're still not doing nearly enough! Our short-but-sweet Science paper, with an all-star author list, argues for concrete steps that urgently need to be taken.

Out in Science today: In our paper, we describe extreme AI risks and concrete actions to manage them, including tech R&D and governance. “For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough.”

Tweet Image 1


Krueger AI Safety Lab Reposted

Out in Science today: In our paper, we describe extreme AI risks and concrete actions to manage them, including tech R&D and governance. “For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough.”

Tweet Image 1

Congrats to @_achan96_ , @DavidSKrueger , @Manderljung and the rest of the team on this @FAccTConference accepted paper

AI agents, which could accomplish complex tasks with limited human supervision, are coming down the pipe. How do we manage their risks? Our new @FAccTConference paper argues that we need visibility---information about the use of agents---and investigates how to obtain it. 🧵

Tweet Image 1


Krueger AI Safety Lab Reposted

Working to make RL agents safer and more aligned? Using RL methods to engineer safer AI? Developing audits or governance mechanisms for RL agents? Share your work with us at the RL Safety workshop at @RL_Conference 2024! ‼️ Updated deadline ‼️ ➡️ 24th of May AoE

Tweet Image 1

Catch Samyak, @DavidSKrueger and others at our @iclr_conf poster tomorrow🚀

"Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks" 🎓 Samyak Jain, et al. 📅 May 8, 4:30 PM 📍 Poster Session 4



We will be at ICLR again this year! 🎉 Catch our poster next week in Vienna @iclr_conf We’ll be in Hall B, booth #228 on Wed 8 May from 4:30-6:30 PM.

🚀Excited to share new work analysing how fine-tuning works mechanistically: arxiv.org/abs/2311.12786 We show that fine-tuning only produces limited “wrappers” on pretrained model capabilities, and these wrappers are easily removed through pruning, probing or more fine-tuning!



Congrats to our affiliate @FazlBarez whose paper has won best poster at Tokyo Technical AI Safety Conference @tais_2024 We have had the pleasure of working with Fazl since February

New Paper 🎉: arxiv.org/pdf/2401.01814… Can language models relearn removed concepts? Model editing aims to eliminate unwanted concepts through neuron pruning. LLMs demonstrate a remarkable capacity to adapt and regain conceptual representations which have been removed 🧵1/8

Tweet Image 1


Watch our alumnus @jesse_hoogland presenting his work on singular learning theory

At #TAIS2024, @jesse_hoogland is about to show how transformers exhibit discrete developmental stages during in-context learning, when trained on language or linear regression tasks. Watch live now: youtube.com/watch?v=6n-kyG…

Tweet Image 1


Krueger AI Safety Lab Reposted

The #AISeoulSummit is just a month away 🇬🇧 🇰🇷 Jointly hosted by the UK & the Republic of Korea, the summit will focus on: 🤝 international agreements on AI safety 🛡️ responsible development of AI by companies 💡 showcasing the benefits of safe AI

Tweet Image 1

Krueger AI Safety Lab Reposted

Big congrats to my student @usmananwar391 for this!

We released this new agenda on LLM-safety yesterday. This is VERY comprehensive covering 18 different challenges. My co-authors have posted tweets for each of these challenges. I am going to collect them all here! P.S. this is also now on arxiv: arxiv.org/abs/2404.09932



Krueger AI Safety Lab Reposted

I'm delighted to have contributed to this new Agenda Paper on AI Safety * Governance of LLMs can be a v powerful tool in helping assure their safety and alignment. It could complement and *substitute* for technical interventions. But LLM governance is currently challenging! 🧵⬇️

I’m super excited to release our 100+ page collaborative agenda - led by @usmananwar391 - on “Foundational Challenges In Assuring Alignment and Safety of LLMs” alongside 35+ co-authors from NLP, ML, and AI Safety communities! Some highlights below...

Tweet Image 1


Krueger AI Safety Lab Reposted

We released this new agenda on LLM-safety yesterday. This is VERY comprehensive covering 18 different challenges. My co-authors have posted tweets for each of these challenges. I am going to collect them all here! P.S. this is also now on arxiv: arxiv.org/abs/2404.09932

I’m super excited to release our 100+ page collaborative agenda - led by @usmananwar391 - on “Foundational Challenges In Assuring Alignment and Safety of LLMs” alongside 35+ co-authors from NLP, ML, and AI Safety communities! Some highlights below...

Tweet Image 1


United States Trends
Loading...

Something went wrong.


Something went wrong.