Dileep George @dileeplearning
AGI research @DeepMind.
Ex-cofounder & CTO of Vicarious AI (acquired by Alphabet),
Cofounder of Numenta
Triply EE (BTech IIT-Mumbai, MS&PhD Stanford). #AGIComics
blog.dileeplearning.com
- Reposted by Dileep George @dileeplearning: 🧵1/12 What if the DMN, limbic system, hippocampus, neural oscillations, gradients, dementia syndromes, mixed pathology, and aphantasia all fall out of the same generative brain computation? 🤯 #endalz Introducing #SLOD (preprint): a new #NeuroAI framework w/ @drbreaky.bsky.social

- Reposted by Dileep George @dileeplearning: 6/12 The clone-structured cognitive graph (CSCG) framework pioneered by @dileeplearning.bsky.social offers a natural computational mapping for how latent diffusion could be implemented biologically along hippocampal circuitry.
- Reposted by Dileep George @dileeplearning: My latest on Substack -- a write-up of the talk I gave at NeurIPS in December. aiguide.substack.com/p/on-evaluat...
- Reposted by Dileep George @dileeplearning: 4/4 “Language is a mechanism to control other people’s mental simulations.” Dileep George, @dileeplearning, of @GoogleDeepMind at the Simons Institute workshop on The Future of Language Models and Transformers. Video: simons.berkeley.edu/talks/dileep...
- Reposted by Dileep George @dileeplearning: 1/4 Do LLMs understand? "They understand in a way that’s very different from how humans understand," Dileep George, @dileeplearning.bsky.social, of Google DeepMind at the Simons Institute workshop on The Future of Language Models and Transformers. Video: simons.berkeley.edu/talks/dileep...
- the one where #AGIComics figures out what academic training is all about … www.agicomics.net/c/artificial...
- Check out my blog on the risks of *not* building AGI! blog.dileeplearning.com/p/if-no-one-...
- Can AIs be conscious? Should we consider them as persons? Here are my current thoughts..... blog.dileeplearning.com/p/ai-conscio...
- blog.dileeplearning.com/p/quick-note... TLDR: It was fun and the process felt 'magical' at times. If you have lots of small project ideas you want to prototype, vibe-coding is a fun way to do that as long as you are willing to settle for 'good enough'.
- Those who think there's an AI bubble are unaware of a recent breakthrough.... www.agicomics.net/c/ag-breakth...
- New and improved and 10000% vibe-coded! Check out www.agicomics.net
- Reposted by Dileep George @dileeplearning: 1/4) I’m excited to announce that I have joined the Paradigms of Intelligence team at Google (github.com/paradigms-of...)! Our team, led by @blaiseaguera.bsky.social, is bringing forward the next stage of AI by pushing on some of the assumptions that underpin current ML. #MLSky #AI #neuroscience
- Reposted by Dileep George @dileeplearning: Jesus Christ.
- Reposted by Dileep George @dileeplearning: 1/ 🚨 New preprint! 🚨 Excited and proud (& a little nervous 😅) to share our latest work on the importance of #theta-timescale spiking during #locomotion in #learning. If you care about how organisms learn, buckle up. 🧵👇 📄 www.biorxiv.org/content/10.1... 💻 code + data 🔗 below 🤩 #neuroskyence
- #AGIComics now has a website! And it is 100% vibe coded! Check out agicomics.net
- Reposted by Dileep George @dileeplearning: 12 leading neuroscientists tackle a big question: Will we ever understand the brain? Their reflections span philosophy, complexity, and the limits of scientific explanation. www.sainsburywellcome.org/web/blog/wil... Illustration by @gilcosta.bsky.social & @joanagcc.bsky.social
- 🎯
- Hmm…I don’t think it’s impossible. Evolution could create structures in the brain that are in correspondence with structure in the world.
- Ohh ok I realize that @tyrellturing.bsky.social mentioned evolution. Fine then. But then which neuroscientist believes this?
- This paper turned up on a feed, I was intrigued by it and started reading... ...but then I was quite baffled, because our CSCG work seems to have tackled many of these problems in a more general setting and it's not even mentioned! So I asked ChatGPT... ...and I'm impressed by the answer! 1/🧵
- It is quite impressive that ChatGPT picked up these nuances, pulled a relevant quote from the paper, and even emphasized portions of the response. 2/
- I didn't mention partial observability specifically, so it is impressive that this was picked up. Looks like we did something right in our CSCG paper in making this explicit? 3/
- Here's the CSCG paper: www.nature.com/articles/s41... And here's the CML paper: www.nature.com/articles/s41... (a minimal sketch of the core CSCG idea follows below)
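For readers unfamiliar with CSCGs, here is a minimal sketch of the core idea (illustrative toy code, not taken from either paper): every observation symbol gets several "clone" hidden states and a fixed, deterministic emission map, so all of the structure lives in the learned transitions, and the belief over clones is what disambiguates aliased observations under partial observability. The sizes, the two toy sequences, and the random transition matrix below are made-up placeholders; a real CSCG learns its transitions with EM.

```python
import numpy as np

n_obs_symbols = 4   # distinct observation symbols (aliased across locations)
n_clones = 5        # clones per observation symbol
n_states = n_obs_symbols * n_clones

# Deterministic emission map: state s always emits symbol state_to_obs[s].
state_to_obs = np.repeat(np.arange(n_obs_symbols), n_clones)

rng = np.random.default_rng(0)
# Transition matrix: random here just to show the mechanics; a real CSCG
# learns this with EM (Baum-Welch) from observation sequences.
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)

def forward(obs_seq):
    """Forward pass: the belief is always restricted to the clones of the
    current symbol, so context determines which clone is active."""
    alpha = (state_to_obs == obs_seq[0]).astype(float)
    alpha /= alpha.sum()
    for o in obs_seq[1:]:
        alpha = alpha @ T                      # propagate belief over states
        alpha = alpha * (state_to_obs == o)    # mask to clones of symbol o
        alpha /= alpha.sum()
    return alpha

# Symbol 2 is aliased: it appears after different contexts in the two
# sequences, and the resulting belief over its clones differs.
print(forward([0, 1, 2]).round(3))
print(forward([3, 0, 2]).round(3))
```

With learned rather than random transitions, these clone posteriors sharpen, which is how the CSCG paper recovers latent graph structure from aliased observation streams.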
- Wow, very cool to see this work from Alla Karpova's lab. She had shown me the results when I visited @hhmijanelia.bsky.social and I was blown away. www.biorxiv.org/content/10.1... 1/
- Some of our work could explain this kind of latent graph learning and schema-like abstraction. 2/ arxiv.org/abs/2302.07350
- Reposted by Dileep George @dileeplearning: 𝗛𝗼𝘄 𝘀𝗵𝗼𝘂𝗹𝗱 𝘄𝗲 𝗱𝗲𝗳𝗶𝗻𝗲 𝗮𝗻𝗱 𝗱𝗲𝘁𝗲𝗿𝗺𝗶𝗻𝗲 𝗮 𝗯𝗿𝗮𝗶𝗻 𝗿𝗲𝗴𝗶𝗼𝗻'𝘀 𝗶𝗺𝗽𝗼𝗿𝘁𝗮𝗻𝗰𝗲? We introduce the idea of "importance" in terms of the extent to which a region's signals steer/contribute to brain dynamics as a function of brain state. Work by @codejoydo.bsky.social elifesciences.org/reviewed-pre...
- It's kinda obvious. #AGIComics has already figured out which brain region is the most important. 😇
- Ohh...yes...this is exactly what I think after reading some of the "deep research" reports. ...written by a committee
- Reposted by Dileep George @dileeplearning: jumping on the Gemini 2.5 bandwagon... it's an incredible model. really feels like an(other) inflection point. talking to Claude 3.7 feels like talking to a competent colleague who knows about everything, but makes mistakes. Gemini 2.5 feels like talking to a world-class expert with A+ intuitions
- Give me 10 billion dollars and I’ll do it. 1 billion for developing the hardware and 9 billion to pay for my opportunity cost 😇
- Nope. It is an engineering problem. Give me an algorithm you think is not being scaled because of a hardware mismatch, and I can make the hardware (chip + interconnect + datacenter) given enough money. Purely an engineering problem.