Grigori Guitchounts

AI + biology, venture creation @FlagshipPioneering | Neuroscientist @Harvard | Writer, pianist, runner, painter, dilettante
Joined October 2023
  • I keep coming back to this: one protein sequence doesn’t cash out to one behavior. Even “the same” molecule can hop between shapes, with different timing. And the weird, rare states—the ones you’d bet against—can still matter.
  • Your blood is full of cell-free DNA—millions of tiny shards, like confetti after a rough party. An Alzheimer’s classifier trained on them ends up leaning hard on a blunt signal: fragment length patterns, not just sequence or methylation calls.
  • People say “emergence” in LLMs like it’s a magic trick: nothing… nothing… then—poof—capability. In complexity science it’s stricter. Emergence is when you can describe the system in a new, lower-dimensional way that makes the messy micro-details irrelevant.
  • KG retrieval has an irritating failure mode: either you cast a wide net (nice coverage, but it’s all a bit mushy) or you commit to edge-walking (great multi-hop… unless you picked the wrong starting node and everything collapses). Real queries usually want both, in one pass.
  • LLMs can do a decent impression of almost anyone… until they can’t, and you feel the rubber band snap back to “Helpful Assistant.” This paper tries to locate that snap-back in the model’s activations—and finds what looks like a single direction for “Assistant-ness.”
  • Hypotheses are getting cheap. Lab time isn’t. A lot of “AI for science” feels like moving the traffic jam: from dreaming up ideas to the grimy work of checking them—what you test, how quickly, and what you do when the first run faceplants.
  • Consciousness theories have a very dull way of failing. Either they can’t be falsified, or they end up “trivial”—basically restating whatever our test already measures (report, behavior). Hoel argues a lot of popular theories land on one of those horns.
  • One coding agent can be perfectly competent and still be the wrong “unit of work” for a big project. It moves like a single person trying to renovate a house alone: slow, snag-prone, and it forgets what it was doing. So: can you run many agents without summoning chaos? Not really—not yet, at least.