phil-fradkin
Born too late to explore the Earth
Too early to explore the Galaxy
Just in time to model the cell and climb some rocks
PhD @ UofT and Vector Institute
- Reposted by phil-fradkin: 📢 Interested in doing a PhD in generative models 🤖, AI4Science 🧬, Sampling 🧑‍🔬, and beyond? I am hiring PhD students at Imperial College London for the next application cycle. 🔗 See the call below: joeybose.github.io/phd-positions/ ✨ And a light expression of interest: forms.gle/FpgTiuatz9ft...
- Reposted by phil-fradkin: Excited for this to be out officially! It was a great team effort and has a lot of useful tidbits for studying isoform function. www.nature.com/articles/s41...
- Reposted by phil-fradkin: Very excited that our most significant work, a collaboration w/ Dr. Can Cenik at UT Austin on translational gene regulation, was finally published in Nature Biotechnology as a dual set of studies: Paper 1 -- an AI model trained to predict translation rates from mRNA sequences: rdcu.be/exN1l
- Reposted by phil-fradkin: We're excited to release 𝐦𝐑𝐍𝐀𝐁𝐞𝐧𝐜𝐡, a new benchmark suite for mRNA biology containing 10 diverse datasets with 59 prediction tasks, evaluating 18 foundation model families. Paper: biorxiv.org/content/10.1... GitHub: github.com/morrislab/mR... Blog: blank.bio/post/mrnabench
- Reposted by phil-fradkin: We are excited to introduce mRNABench, a comprehensive benchmarking suite that we used to evaluate the representational capabilities of 18 families of nucleotide foundation models on mature-mRNA-specific tasks. Paper: doi.org/10.1101/2025... Code: github.com/morrislab/mR... A 🧵
- Reposted by phil-fradkin: New work from the lab trying to wrap our heads around the massive complexity of the human transcriptome revealed by long-read RNA-seq! Fun collab with Gloria Sheynkman. www.biorxiv.org/content/10.1...
- Reposted by phil-fradkin: Please check out our new approach to modeling somatic mutation signatures. DAMUTA has independent Damage and Misrepair signatures whose activities are more interpretable and more predictive of DNA repair defects than COSMIC SBS signatures 🧬🖥️🧪 www.biorxiv.org/content/10.1...
- Reposted by phil-fradkin: #MLCB2025 will be Sept 10-11 at @nygenome.org in NYC! Paper deadline June 1st & in-person registration will open in May. Please sign up for our mailing list groups.google.com/g/mlcb/ for future announcements. More details at mlcb.github.io. Please RP!
- Reposted by phil-fradkin: The Illustrated DeepSeek-R1. Spent the weekend reading the paper and sorting through the intuitions. Here's a visual guide and the main intuitions to understand the model and the process that created it. newsletter.languagemodels.co/p/the-illust...
- Reposted by phil-fradkin: Where RNA Science Meets AI, May 4–8, 2025, Ascona. Invited speakers: @evamarianovoa.bsky.social @fabiantheis.bsky.social @rivaselenarivas.bsky.social, Sterling Churchman, Barbara Treutlein, Rahul Satija. Registration open: www.rna-ai.org @hagentilgner.bsky.social @quaidmorris.bsky.social
- Thanks to the FM4Science workshop at #NeurIPS for recognizing MolPhenix as best paper! We had so much fun working on this with Puria (co-first author), @karush17.bsky.social, and Frederik, co-supervised by Maciej and @dom-beaini.bsky.social arxiv.org/abs/2409.08302 @valenceai.bsky.social
- Excited to be presenting Orthrus with Ruian Shi and Keren Isaev @karini925.bsky.social today! We will be presenting our spotlight at the workshop on AI for new drug modalities #NeurIPS2024. Come chat about a new approach to mRNA representation learning!
- Link to the updated pre-print! www.biorxiv.org/content/10.1...
- I’ll be at #NeurIPS presenting two new papers on self-supervised approaches for cellular representation learning! 1. MolPhenix (main track): Multi-modal learning of joint representations between molecular structures & phenomic data
- 2. Orthrus (spotlight @ AIDrugX): Contrastive learning for mRNA representations with biologically inspired augmentations Looking forward to seeing friends and meeting new folks. Happy to chat about these mythically named methods and other ideas for cellular rep. & gen. learning!
- Reposted by phil-fradkin: My conclusion: We should pay attention to train/test splits, not blindly follow standard benchmarks (which are often deeply flawed in many applied-ML areas), and not hype up early results. We should be more collaborative, be generous with credit, give the benefit of the doubt, and be less adversarial.