Milad Khademi Nori
Postdoctoral Fellow, Deep Learning, @TorontoMet, PhD from @QueensU
- "We found several sparse autoencoder features suggestive of internal representations of emotion active on cases of answer thrashing and other instances of apparent distress during reasoning. Page 162: www-cdn.anthropic.com/0dd865075ad3...
- OpenAI connected GPT-5 to an autonomous lab, so it could propose experiments, run them at scale, learn from the results, and decide what to try next. That closed loop brought protein production cost down by 40%. Read more: openai.com/index/gpt-5-...
- Reposted by Milad Khademi Nori: Reproducibility and open science are great, but they don't necessarily equate to rigor. You can share a study perfectly and still draw weak conclusions. True rigor lives in the questions we ask, the designs we choose, and the inferences we make.
- Regarding AI Scientists, a 2024 Nature paper says: "we are far from automating theoretical discovery" and "We believe that although the theorist is not in danger of being replaced by AI systems in the near future, the combination of human expertise and AI algorithms will doi.org/10.1038/s422...
- Reposted by Milad Khademi Nori: I know this is yesterday's news and feels relatively unimportant now, but I just want to note that this is ongoing and X and xAI continue to do absolutely nothing about it despite being able to end it by pressing a single button.
- Google researcher:
- Reposted by Milad Khademi Nori: Apple's real 'edge' might be Edge AI. On-device continual learning is still a tough nut to crack (catastrophic forgetting, energy consumption, etc.), but Apple may have the right pieces in place to get there first.
- This post, exceptionally, isn't about AI. Elected on a NoWar campaign, Trump is now threatening to turn my homeland Iran into another Libya or Iraq. This is definitely not good. In the last decade, the USA spent trillions of American taxpayers' dollars that should've been spent on American
- Happy 2026! New Year's wish: meaningful contributions in continual learning.
- Higher education in Canada 🇨🇦 in need of alignment: www.theglobeandmail.com/business/com...
- Haven't visited my homeland in the last 7 years! (Nasir al-Mulk Mosque, Iran.)
- One stark difference between how a human processes such situations and how AI does is that a human doesn't need to rely on so much internal monologue before uttering the response: they just know the situation, without writing two paragraphs in their head to set themselves up for the final answer.
- It's questionable to conflate weights and synapses, but still an interesting comparison:
- Cool AI history from 2019:
- Reposted by Milad Khademi Nori: One of the underrated papers this year: "Small Batch Size Training for Language Models: When Vanilla SGD Works, and Why Gradient Accumulation Is Wasteful" (arxiv.org/abs/2507.07101) (I can confirm this holds for RLVR, too! I have some experiments to share soon.)
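For context on what the paper contrasts, here is a rough sketch (mine, not the authors' code) of the two update schemes: several small vanilla-SGD steps versus one gradient-accumulated step over the same micro-batches. The model, data, and learning rate are placeholders.

```python
import torch
import torch.nn as nn

def sgd_small_batches(model, micro_batches, loss_fn, lr=1e-2):
    # One optimizer step per micro-batch: the "vanilla SGD" regime.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in micro_batches:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def sgd_grad_accumulation(model, micro_batches, loss_fn, lr=1e-2):
    # Accumulate gradients over all micro-batches, then take a single step,
    # emulating one large-batch update at the same compute cost.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    opt.zero_grad()
    k = len(micro_batches)
    for x, y in micro_batches:
        (loss_fn(model(x), y) / k).backward()
    opt.step()

# Toy usage with placeholder data.
model = nn.Linear(16, 1)
data = [(torch.randn(4, 16), torch.randn(4, 1)) for _ in range(8)]
sgd_small_batches(model, data, nn.functional.mse_loss)
sgd_grad_accumulation(model, data, nn.functional.mse_loss)
```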
- LLM coders are getting better to the point of self-improvement. Maybe there should be a benchmark for how much progress is being made toward building recursively self-improving AGI?!
- Reposted by Milad Khademi Nori: I'd like to propose the following norm for peer review of papers. If a paper shows clear signs of LLM-generated errors that were not detected by the author, the paper should be immediately rejected. My reasoning: 1/ #ResearchIntegrity
- Gemini will close its gap with ChatGPT in six months!
- I will begin taking machine intelligence seriously when they can write a good 300-page novel that gets a 4.0+ score on Goodreads from 100 independent experts. These are called language models, yet they can't do linguistic tasks.
- "Shopify CEO Tobi Lütke noted that Claude Opus 4.5 'feels very different for coding than anything that came before'. Software engineer Boris Cherny provided a stark data point for this shift, stating: 'The last month was my first month as an engineer that
- I'm frankly cautious of these "everything's about to change" and "AI is gonna make programmers 10x more productive" takes! The reason: I see no great new products. Where are the dozens of great new products?
- Merry Christmas, and wishing you a happy 2026 in advance 😍! May 2026 be the year of AGI and continual learning!
- It feels great to be appreciated!
- I usually don't tweet about non-AI topics, but I couldn't resist! This is a textbook example of strawmanning consumer capitalism and scapegoating it for the struggles of the middle class. IMO, the professor has a communism bias and hence underestimates the demand side of the economy!
- Reposted by Milad Khademi Nori: Interested in applying to come to Canada for a fully funded PhD or Postdoc? Canada Impact+ Research Training Awards: nserc-crsng.canada.ca/en/canada-i... Reach out if you're interested in opportunities for using large-scale electrophysiology to understand neural computation and movement.
- I bet that within two years swarms of people will form emotional bonds with AI! The evidence? 👇
- Glad to see Canada 🇨🇦 leap towards AI sovereignty, particularly when it's an endeavor by my alma mater:
- I have been there, multiple times. I usually don't write proposals if the grant amount doesn't justify the effort.
- Goal selection through the lens of subjective functions: arxiv.org/abs/2512.15948 I welcome any feedback on these preliminary ideas.
- People are convinced LLMs are more than artificial intelligence:
- This is a world-class case of the Eliza effect. www.persuasion.community/p/my-chatgpt...