Jeff Clune
Professor, Computer Science, University of British Columbia. CIFAR AI Chair, Vector Institute. Senior Advisor, DeepMind. ML, AI, deep RL, deep learning, AI-Generating Algorithms (AI-GAs), open-endedness.
- Reposted by Jeff Clune: UBC Computer Science invites applications for up to two full-time tenure-track positions with the following priority areas: visualization, robotics, reinforcement learning, data management, and data mining. Applications are due Wed Dec 10, 2025. Learn more: www.cs.ubc.ca/our-departme...
- Reposted by Jeff Clune: AI is evolving too quickly for an annual report to suffice. To help policymakers keep pace, we're introducing the first Key Update to the International AI Safety Report. 🧵⬇️ (1/10)
- Reposted by Jeff Clune: For all the details, please give the paper a read! Paper: arxiv.org/abs/2507.06466 Infinite thanks to @jeffclune.com and @cong-ml.bsky.social for all their guidance!
- Reposted by Jeff Clune: Read this article on how AI is contributing to its own development, featuring UBC Computer Science Professor @jeffclune.com!
- Reposted by Jeff Clune: 💡The fall SRI Seminar Series kicks off on Wednesday with @jeffclune.com (UBC / Vector / DeepMind): “Open-ended and AI-generating algorithms in the era of foundation models” Wed 12:30 ET. Free, online: https://srinstitute.utoronto.ca/events-archive/seminar-2025-jeff-clune-2
- Thrilled to introduce Foundation Model Self-Play, led by Aaron Dharna. FMSPs combine the intelligence & code generation of foundation models with the curriculum of self-play & principles of open-endedness to explore diverse strategies in multi-agent games. Thread x.com/jeffclune/st...
- I'd post exclusively here if @bsky.app would get rid of the silly character limit
- Nice summary, with coverage of the Darwin Gödel Machine. www.youtube.com/watch?v=C1ku...
- A great summary of the latest in open-endedness research! The next big wave in AI? 🤔📈🚀 richardcsuwandi.github.io/blog/2025/op...
- How it felt to know AGI is coming soon long before the world was paying attention. officechai.com/ai/building-...
- Very nice summary of the Darwin Gödel Machine in Fortune, and an interesting tie-in to Sam's excellent recent blog post. fortune.com/2025/06/19/o...
- I am excited to be a part of @Yoshua_Bengio's new non-profit focused on AI Safety and Existential Risk, joining the great team of Scientific Advisors. This is a critically important mission for humanity.
- Today marks a big milestone for me. I'm launching @law-zero.bsky.social, a nonprofit focusing on a new safe-by-design approach to AI that could both accelerate scientific discovery and provide a safeguard against the dangers of agentic AI.
- Reposted by Jeff Clune: It's Friday Night, which means Trump's weekly science bloodbath. BUT WHY DOES HE DO THIS ON FRIDAY NIGHTS? Simple: He's a coward! Americans overwhelmingly support science! His attack on science is unpopular and he wants to bury it in the news cycle. Not on our watch. #StandUpForScience
- Excited to introduce the Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents. We harness the power of open-ended algorithms to search for agentic systems that get better at coding, including improving their own code.
- Introducing The Darwin Gödel Machine sakana.ai/dgm The Darwin Gödel Machine is a self-improving agent that can modify its own code. Inspired by evolution, we maintain an expanding lineage of agent variants, allowing for open-ended exploration of the vast design space of such self-improving agents.
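The post above describes the core loop: maintain an ever-expanding lineage of agent variants, each able to modify its own code. A minimal sketch of that open-ended loop, assuming hypothetical placeholders (`propose_modification`, `score_on_benchmark`) rather than the actual DGM implementation:

```python
import random

def propose_modification(agent_code: str) -> str:
    # Stand-in for a foundation-model call that edits the agent's own code.
    return agent_code + f"\n# variant {random.randint(0, 9999)}"

def score_on_benchmark(agent_code: str) -> float:
    # Stand-in for evaluating the agent on a coding benchmark.
    return random.random()

# The archive only grows: every variant is kept, enabling open-ended
# exploration rather than greedy hill-climbing on a single lineage.
archive = [{"code": "# seed agent", "score": 0.0}]

for _ in range(20):
    parent = random.choice(archive)              # sample any ancestor
    child_code = propose_modification(parent["code"])
    archive.append({"code": child_code, "score": score_on_benchmark(child_code)})

best = max(archive, key=lambda a: a["score"])
print(f"archive size: {len(archive)}, best score: {best['score']:.2f}")
```

The key design choice, per the post, is that the lineage expands rather than being pruned to the current best, which is what makes the search open-ended.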
- Is there a cancer at the heart of modern AI, lurking just beneath the surface of its dazzling performance? Our research suggests maybe, but also shows elegant solutions are possible (though how to get them at scale remains a mystery). Check out the eye-opening, riveting paper below!
- Could a major opportunity to improve representation in deep learning be hiding in plain sight? Check out our new position paper: Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis. Paper: arxiv.org/abs/2505.11581
- Note: This is not a fundamental dismissal of deep learning. In fact, we have worked on improving and leveraging deep learning for years, and are optimistic about its present and future. These results point to a way it could be substantially improved, which is exciting.
- Why do we tolerate loud motorcycles? I can't walk around with a crazy loud speaker, yet we allow people to have (and companies to sell) insanely loud machines that torture everyone else. They can be made quiet or silent...we should start heavily taxing and fining noise polluting machines.
- Very cool work! Fun to see an algorithm inspired by MAP-Elites help advance numerous open problems in math, computer science, AND improve Google's production infrastructure (data centers, TPU design, and AI training)! (Paraphrasing their paper and tweet).
- Great work AlphaEvolve team! MAP-Elites co-authored with Jean-Baptiste Mouret
- I greatly enjoyed “The Spectrum of AI Risks” panel at the Singapore Conference on AI. Thanks @teganmaharaj.bsky.social for great moderating, Max Tegmark for the invitation, and the organizers and other panelists for a great event! PS. Do I really have sad resting panel face? 😐
- I'm giving a talk at 11:30 today in the #ICLR World Models workshop. "Open-ended Agent Learning in the Era of Foundation Models and Foundation World Models." Drop by if you are interested! sites.google.com/view/worldmo...
- Tom @rockt.ai did a great job in his #ICLR2025 keynote on open-endedness of explaining the ideas we are all so passionate about. A huge thanks for the kind words and for featuring our work, including Jenny Zhang's OMNI. cc @kennethstanley.bsky.social @joelbot3000.bsky.social
- Very excited for this keynote by @_rockt! Awesome to see open-endedness go from a niche (😉) area to a keynote at #ICLR ! 🌱🌿🌳🌲🍀🌍✨ 📈 🧬🧪 cc @joelbot3000.bsky.social @kennethstanley.bsky.social
- Since the dawn of my career, I've heard scientists joke after every AI advance, “Now we just need to get AI to write the paper.” Now The AI Scientist does all the research AND writes the paper! Wild times! 🧪🔬👩🔬🤖
- Introducing The AI Scientist-v2, which produced the 1st fully AI-generated paper to pass peer review at a workshop level (at #ICLR2025) ‼️ Tech Report: pub.sakana.ai/ai-scientist... GitHub: github.com/SakanaAI/AI-... This work is a proud collaboration between Sakana AI, UBC, and Oxford University.
- Reposted by Jeff Clune: Collecting a diamond in #Minecraft is “a very hard task”, says Dr. Jeff Clune (@jeffclune.com) of @cs.ubc.ca, who was part of a team that trained a program to find diamonds using videos of human play. “This represents a major step forward for the field.” www.nature.com/articles/d41...
- Reposted by Jeff Clune: Trump is the greatest gift to China in history.
- Awesome work. Great to see Dreamer in Nature! Congrats Danijar Hafner et al. Nature news article on it with quotes from yours truly. Below is the quote I provided that did not make it into the article. www.nature.com/articles/d41...
- I love this work. In fact, the Dreamer work is some of my favorite research in all of AI in the last few years. In 2018 I moderated a panel and asked Satinder Singh, a leading RL researcher, the following: 1/
- “Just as Fermi asked, ‘Where are all the aliens?’, I’ll ask you: Where are all the world models? We know they should work well, we know humans use them, yet no one has convincingly shown them to work. Why not?” 2/
- Danijar and his colleagues answered the call, finally delivering on the longstanding expectations we as a community had for the value of this type of approach. I congratulate them!
- Reposted by Jeff Clune: I may be tired and a little hoarse, but as I said again and again on the Senate floor, this is a moment where we cannot afford to be silent, when we must speak up.
- Reposted by Jeff Clune: When whites and minorities drive at identical speeds (according to objectively measured data from Lyft) Florida police are 24-33% more likely to issue speeding citations to minority drivers and charge them 23-34% greater fines. These are not small effects!
- It is an honor to receive a Killam Accelerator Research Fellowship. Thank you to everyone involved!
- Congratulations to the Science recipients of the 2024 Faculty Research Awards: Jeff Clune @cs.ubc.ca Alannah Hallas & Alison Lister @ubcphas.bsky.social Takamasa Momose & Tao Huan @ubcchem.bsky.social Andrew Trites @ubcoceans.bsky.social prizes.research.ubc.ca/news-announc...
- What a remarkable milestone in history! Since at least 2010, I've heard the joke amongst AI scientists: “Now if only we could get AI to write the paper.” Fast forward 14 years and it's possible! Exciting times! 🧪 🧫 🔬 🚀 📈
- The AI Scientist Generates its First Peer-Reviewed Scientific Publication We’re proud to announce that a paper produced by The AI Scientist-v2 passed the peer-review process at a workshop in ICLR, a top AI conference. Read more about this experiment → sakana.ai/ai-scientist...
- Your chance to work with one of the all time great minds in science! 🚀🚀🚀🚀🚀🚀
- I’m thrilled to share that I just joined @LilaSciences as SVP of Open-Endedness! You can join my team here: For Research Scientist: job-boards.greenhouse.io/lila/jobs/78... For Research Engineer: job-boards.greenhouse.io/lila/jobs/78...
- Honored to serve on the Government of Canada's Safe & Secure AI Advisory Group to keep it "well informed on risks associated with AI systems" w/ AI luminaries, e.g. @yoshuabengio.bsky.social Joelle Pineau, Elissa Strome, @davidduvenaud.bsky.social & many others ised-isde.canada.ca/site/advisor...
- My guest lecture for the Stanford CS course Self-Improving AI Agents is online, titled "Open-ended Agent Learning in the Era of Foundation Models" (with more emphasis on the AI Scientist and ADAS than prior versions). Thanks profs @Azaliamirh & @achowdhery! www.youtube.com/watch?v=EZBu...
- It was an honor to be on Quirks and Quarks (the CBC science show) with @cong-ml.bsky.social talking about The AI Scientist and the impact of AI on science. Science is being transformed by the AI revolution cbc.ca/listen/live-...
- Great input from CIFAR's Elissa Strome & others. Thanks to producer Amanda Buckiewicz! More on The AI Scientist: sakana.ai/ai-scientist with excellent collaborators @_chris_lu_ @RobertTLange @jfoerst.bsky.social and @hardmaru.bsky.social
- Introducing Automated Capability Discovery! ACD automatically identifies surprising new capabilities and failure modes in foundation models, via "self-exploration" (models exploring their own abilities). Led by @cong-ml.bsky.social & @shengranhu.bsky.social 🔬🤖🧠🔎 [1/9]
- ACD automatically creates a concise "Capability Report" of discovered capabilities and failure modes, enabling quick inspection and easier dissemination of results or flagging issues pre-deployment. [2/9]
- ACD mimics community exploration: endlessly generating tasks (in code with automated scoring) probing for new capabilities or weaknesses—covering topics from string games to complex puzzles. In a GPT-4o self-eval, ACD uncovered thousands of capabilities (visualized here)! [3/9]
- Please check out our paper: arxiv.org/abs/2502.07577 Website: www.conglu.co.uk/ACD/ All code is open-source: github.com/conglu1997/ACD Please let us know what you think! [9/9]
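The ACD thread above describes "self-exploration": a model endlessly generates tasks (in code, with automated scoring), attempts them, and the outcomes are compiled into a capability report. A minimal sketch of that loop, assuming hypothetical placeholder functions (`generate_task`, `attempt`) rather than the actual ACD codebase:

```python
import random

def generate_task(seen: list) -> dict:
    # Stand-in for a foundation model proposing a novel task plus an
    # automated scoring function for it.
    tid = len(seen)
    return {"id": tid,
            "prompt": f"task-{tid}",
            "scorer": lambda answer: random.random() > 0.5}

def attempt(task: dict) -> str:
    # Stand-in for the subject model attempting the generated task.
    return f"answer to {task['prompt']}"

# Each task's automated scorer sorts the outcome into a discovered
# capability or a failure mode, building the "Capability Report".
report = {"capabilities": [], "failures": []}
tasks = []
for _ in range(10):
    task = generate_task(tasks)
    tasks.append(task)
    ok = task["scorer"](attempt(task))
    (report["capabilities"] if ok else report["failures"]).append(task["id"])

print(f"{len(report['capabilities'])} capabilities, "
      f"{len(report['failures'])} failure modes")
```

In the real system the generated tasks are executable code with their own scoring logic, so the loop needs no human in it; this sketch only mirrors that structure.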
- Reposted by Jeff Clune: 🤔 What if AI could explain its thought process before taking action? Our latest ANDERS blog covers Vector Researchers @jeffclune.com & @shengranhu.bsky.social’s work on "thought cloning" - teaching AI to express its reasoning process in language we can understand vectorinstitute.ai/thought-clon...
- Nice to see international collaborations of this scale on AI safety! Also nice that our work (four papers) contributed to the discussion (full disclosure: that's both on safety and on creating powerful general agents, including mentions of The AI Scientist and SIMA). tl;dr: it's complicated
- Today, we are publishing the first-ever International AI Safety Report, backed by 30 countries and the OECD, UN, and EU. It summarises the state of the science on AI capabilities and risks, and how to mitigate those risks. 🧵 Full Report: assets.publishing.service.gov.uk/media/679a0c... 1/21
- Operator is the descendant of VPT (work our team did at OpenAI on learning to use a mouse and keyboard to perform long-horizon tasks, demonstrated in Minecraft). As we wrote in that blog post, VPT is "a step towards general computer-using agents." Released ~2.5 years later. openai.com/index/introd...
- We live in interesting times! VPT: openai.com/index/vpt