hardmaru
Co-Founder & CEO, Sakana AI 🎏 → @sakanaai.bsky.social
sakana.ai/careers
- Our journey at Sakana AI is just getting started. We are looking for people to help us pioneer the next generation of AI—building from Japan to the world. Join us: sakana.ai/careers
- I founded Sakana AI after my time at Google, so it is incredibly meaningful to be able to partner with them now. It feels like a special connection to be working together again to advance the AI ecosystem in Japan. sakana.ai/google#en
- One of my favorite findings: Positional embeddings are just training wheels. They help convergence but hurt long-context generalization. We found that if you simply delete them after pretraining and recalibrate for <1% of the original budget, you unlock massive context windows. Smarter, not harder.
- Introducing DroPE: Extending Context by Dropping Positional Embeddings We found embeddings like RoPE aid training but bottleneck long-sequence generalization. Our solution’s simple: treat them as a temporary training scaffold, not a permanent necessity. arxiv.org/abs/2512.12167 pub.sakana.ai/DroPE
- Reminded me of my older NeurIPS 2021 paper, where we removed the positional encoding entirely; by doing so, an agent can process an arbitrarily long list of noisy sensory inputs in an arbitrary order. I even made a fun browser demo to play with the agent back then: attentionneuron.github.io
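The core idea in the posts above lends itself to a small illustration. Below is a minimal sketch, not the DroPE implementation: a toy self-attention module where RoPE is applied behind a flag during pretraining, then switched off so the same weights can be briefly recalibrated without any positional signal. The module names, shapes, and the one-line "recalibration" stand-in are all illustrative assumptions.

```python
# Minimal sketch (not the DroPE implementation): attention where rotary
# position embeddings can be switched off after pretraining, so the model
# can be briefly recalibrated without any positional signal.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rotary(x: torch.Tensor) -> torch.Tensor:
    """Apply a standard RoPE rotation to a (batch, heads, seq, dim) tensor."""
    b, h, t, d = x.shape
    half = d // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, dtype=x.dtype) / half))
    angles = torch.arange(t, dtype=x.dtype)[:, None] * freqs[None, :]  # (t, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class SelfAttention(nn.Module):
    def __init__(self, dim: int, heads: int, use_rope: bool = True):
        super().__init__()
        self.heads, self.use_rope = heads, use_rope
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, t, self.heads, d // self.heads)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))
        if self.use_rope:              # positional signal acts as a training scaffold
            q, k = rotary(q), rotary(k)
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out(y.transpose(1, 2).reshape(b, t, d))


# After pretraining with RoPE, drop it and recalibrate briefly (hypothetical step):
attn = SelfAttention(dim=64, heads=4, use_rope=True)
attn.use_rope = False                  # "delete" the positional embedding
x = torch.randn(2, 128, 64)            # stand-in for a long-context batch
print(attn(x).shape)                   # the same weights now run without RoPE
```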
- We’re hiring. sakana.ai/careers/#sof...
- We are taking our technology far beyond competitive programming to unlock a new era of AI-driven discovery. We are hiring. Join our team in Tokyo. sakana.ai/careers/#sof...
- When agents compete for limited resources, intelligence reorganizes around survival, not elegance.
- Survival of the fittest code! Our paper explores LLMs driving an evolutionary arms race in Core War, where assembly programs fight each other. We task LLMs with evolving "Warriors" in a virtual machine, producing chaotic, self-modifying code dynamics. Blog: sakana.ai/drq Paper: pub.sakana.ai/drq/
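For readers curious about the mechanics, here is a minimal sketch of an LLM-driven evolutionary loop in the spirit of the paper. The real system evolves Redcode "Warriors" inside a Core War virtual machine; `llm_propose_mutation` and `battle` below are hypothetical stand-ins for the LLM call and the tournament, so only the mutate-compete-select loop is illustrated here.

```python
# Minimal sketch of an LLM-driven evolutionary loop over fighting programs.
# `llm_propose_mutation` and `battle` are hypothetical placeholders: the real
# system evolves Redcode warriors inside a Core War virtual machine.
import random


def llm_propose_mutation(program: str) -> str:
    """Placeholder for an LLM call that rewrites a warrior program."""
    return program + f"\n; mutation {random.randint(0, 9999)}"


def battle(prog_a: str, prog_b: str) -> int:
    """Placeholder fitness: +1 if prog_a wins, -1 if it loses, 0 for a tie."""
    return random.choice([-1, 0, 1])


def evolve(population: list[str], generations: int = 10) -> list[str]:
    for _ in range(generations):
        # Every warrior proposes a mutated child via the LLM.
        children = [llm_propose_mutation(p) for p in population]
        candidates = population + children
        # Round-robin tournament: fitness is the total score against all others.
        scores = {i: 0 for i in range(len(candidates))}
        for i, a in enumerate(candidates):
            for j, b in enumerate(candidates):
                if i < j:
                    r = battle(a, b)
                    scores[i] += r
                    scores[j] -= r
        # Keep the top half: selection pressure from a limited number of slots.
        ranked = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
        population = [candidates[i] for i in ranked[: len(population)]]
    return population


seed = ["MOV 0, 1   ; the classic Imp"]
print(evolve(seed * 4)[0])
```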
- So proud of Team Sakana AI for pulling this off! We managed to get an agent to rank #1 in a difficult heuristic optimization contest. We leaned heavily into test-time inference using a mix of frontier models. The agent spent $1,300 to autonomously discover an algorithm that beat the human baseline.
- Happy New Year! ⛩️
- Sakana AI’s office looks like this.
- Software Engineering as a profession will continue to fundamentally change in 2026. Humans will need to learn to co-adapt to this evolving “alien technology”, which comes with no real manual, and figure out how to operate it. What a time to be alive ✨ twitter.com/karpathy/sta...
- I doubt that anything resembling genuine AGI is within reach of current AI tools—Terence Tao mathstodon.xyz/@tao/1157223...
- “iRobot Corp., the company that revolutionized robot vacuum cleaners in the early 2000s with its Roomba model, filed for bankruptcy and proposed handing over control to its main Chinese supplier.” 😥 www.bloomberg.com/news/article...
- My interview with Nikkei Business has been published. Japan’s organizational structures are the crystallization of years of accumulated wisdom and should not be forcibly flattened. By combining AI systems with different strengths, AI should serve not as a replacement for that existing accumulation but as a “companion” that works alongside people. I spoke about what AI should look like when it builds on an organization’s strengths.
- “Why AGI Will Not Happen” by Tim Dettmers. timdettmers.com/2025/12/10/w... This essay is worth reading. It discusses the diminishing returns (and risks) of scaling, and the contrast between West and East: the “winner takes all” approach of building the biggest thing versus a long-term focus on practicality.
- “The purpose of this blog post is to address what I see as very sloppy thinking, thinking that is created in an echo chamber, particularly in the Bay Area, where the same ideas amplify themselves without critical awareness.”
- We’re in an “LLM bubble”, not an AI bubble.
- This is my 10th time attending the #NeurIPS conference. The first time was in 2016 ✈️
- When participating in peer review, always aim to provide high-quality, constructive feedback designed to improve the work. Write reviews that you can proudly stand behind; authors will respect valuable feedback—even when their paper is rejected or your identity is revealed.
- I’m co-organizing an “AI for Science: Algorithms to Atoms” social event during #NeurIPS2025 with Yann LeCun, Anima Anandkumar, Bill Dally, and Max Welling. If you want to talk about AI Scientist, World Models, the future of AI-driven discovery, come by on Dec 5 3:30pm PT! luma.com/AI-for-Scien...
- “In my view, AI is ultimately going to be a normal technology. In 20 years, our kids will just be using a chat bot like it’s a fax machine. It won’t be magical anymore. This will just be integrated and adapted into our collective system.” Gave my 2c at #BloombergNewEconomy Forum
- Just realized I knew all about hedging ‘AI risks’ >15 years ago 😅
- Excited to announce our book “Neuroevolution: Harnessing Creativity in AI Agent Design” by Sebastian Risi, Yujin Tang, Risto Miikkulainen, and myself. We explore decades of work on evolving intelligent agents and show how neuroevolution can drive creativity in deep learning, RL, LLMs, and AI Agents!
- Excited to announce Sakana AI’s Series B! 🐟 sakana.ai/series-b From day one, Sakana AI has done things differently. Our research has always focused on developing efficient AI technology sustainably, driven by the belief that resource constraints—not limitless compute—are key to true innovation.
- Great to see Tarin Clanuwat featured for her amazing work. She has a deep love for Japanese classical literature and is using AI to build bridges to that past for everyone. www.tokyoupdates.metro.tokyo.lg.jp/post-1670/ We’re lucky to have her driving this at Sakana AI.
- To me, the field of A.I. is a branch of Philosophy, not Science. I would even call it “Applied Philosophy”.
- The US government should subsidize Open AI rather than OpenAI
- Excited to release our new work: Petri Dish Neural Cellular Automata! pub.sakana.ai/pdnca We investigate how multi-agent NCAs can develop into artificial life 🦠 exhibiting complex, emergent behaviors like cyclic dynamics, territorial defense, and spontaneous cooperation.
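As a rough illustration of the substrate, here is a minimal single-grid neural-cellular-automata update step, not the PDNCA code: each cell perceives its neighborhood with fixed filters, a small network proposes a residual update, and a random mask makes updates asynchronous. The channel counts and layer sizes are arbitrary choices.

```python
# Minimal sketch of one neural-cellular-automata update step (not the PDNCA code):
# each cell perceives its neighborhood via fixed filters, then a small network
# proposes a residual update, applied stochastically so cells fire asynchronously.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NCA(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
        identity = torch.zeros(3, 3)
        identity[1, 1] = 1.0
        kernels = torch.stack([identity, sobel_x, sobel_x.t()])          # (3, 3, 3)
        # One (identity, sobel_x, sobel_y) triple per channel, depthwise layout.
        self.register_buffer("filters", kernels.repeat(channels, 1, 1)[:, None])
        self.update = nn.Sequential(
            nn.Conv2d(3 * channels, 64, 1), nn.ReLU(), nn.Conv2d(64, channels, 1)
        )
        self.channels = channels

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # Depthwise "perception": identity + Sobel x/y applied to every channel.
        perception = F.conv2d(grid, self.filters, padding=1, groups=self.channels)
        delta = self.update(perception)
        # Stochastic update mask: roughly half the cells update each step.
        mask = (torch.rand_like(grid[:, :1]) < 0.5).float()
        return grid + delta * mask


nca = NCA()
grid = torch.zeros(1, 16, 32, 32)
grid[:, :, 16, 16] = 1.0          # a single "seed" cell
for _ in range(20):
    grid = nca(grid)              # patterns emerge from repeated local updates
print(grid.abs().mean())
```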
- Proud to release ShinkaEvolve, our open-source framework that evolves programs for scientific discovery with very good sample-efficiency! 🐙🧠 Paper: arxiv.org/abs/2509.19349 Blog: sakana.ai/shinka-evolve/ GitHub Project: github.com/SakanaAI/Shi...
- Just received my copy of “What Is Intelligence?” by @blaiseaguera.bsky.social 🧠🪱 Thanks for sending it to Japan! 🗼 whatisintelligence.antikythera.org
- Why Greatness Cannot Be Planned. Both the English and Japanese editions have now found a home in the Sakana AI library ✨ @sakanaai.bsky.social
- Our new GECCO’25 paper builds on our past work, showing how AI models can be evolved like organisms. By letting models evolve their own merging boundaries, compete to specialize, and find ‘attractive’ partners to merge with, we can create adaptive and robust AI ecosystems. arxiv.org/abs/2508.16204
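To make the “evolve the merging boundaries” idea concrete, here is a minimal sketch of evolving per-layer merge coefficients between two models. The `evaluate` fitness function and the scalar “weights” are hypothetical placeholders; the paper’s actual algorithm operates on full models with richer merge operators and competition dynamics.

```python
# Minimal sketch of evolving layer-wise merge coefficients between two models.
# `evaluate` is a hypothetical fitness function (e.g. validation accuracy of the
# merged model); here the "models" are toy dicts of scalar weights.
import random


def merge(weights_a: dict, weights_b: dict, alphas: dict) -> dict:
    """Per-layer interpolation: alpha=1 keeps model A, alpha=0 keeps model B."""
    return {
        name: alphas[name] * weights_a[name] + (1 - alphas[name]) * weights_b[name]
        for name in weights_a
    }


def evaluate(merged: dict) -> float:
    """Placeholder fitness; a real run would score the merged model on a task."""
    return -sum(abs(v - 0.5) for v in merged.values())


def evolve_merge(weights_a, weights_b, pop_size=8, generations=20, sigma=0.1):
    layers = list(weights_a)
    population = [{l: random.random() for l in layers} for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda a: evaluate(merge(weights_a, weights_b, a)),
                        reverse=True)
        parents = scored[: pop_size // 2]                 # truncation selection
        children = [
            {l: min(1.0, max(0.0, p[l] + random.gauss(0, sigma))) for l in layers}
            for p in parents
        ]
        population = parents + children                   # elitism + mutation
    return max(population, key=lambda a: evaluate(merge(weights_a, weights_b, a)))


# Toy "models": one scalar weight per layer, just to exercise the loop.
a = {"layer1": 0.9, "layer2": 0.1, "layer3": 0.7}
b = {"layer1": 0.2, "layer2": 0.8, "layer3": 0.3}
print(evolve_merge(a, b))
```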
- “1910: The Year the Modern World Lost Its Mind” is a good piece comparing the anxieties of the early 1900s, an era of great and rapid technological change, to the present time. www.derekthompson.org/p/1910-the-y...
- Andrew Ng’s piece on 🇺🇸 vs 🇨🇳 competition in AI is worth reading. Full article: www.deeplearning.ai/the-batch/is...
- ICML’s statement about subversive hidden LLM prompts. We live in a weird timeline… icml.cc/Conferences/...