Austin Wang
Stanford CS PhD student working on ML/AI for genomics with @anshulkundaje.bsky.social
austintwang.com
- (1/10) Excited to announce our latest work! @arpita-s.bsky.social, @amanpatel100.bsky.social, and I will be presenting DART-Eval, a rigorous suite of evals for DNA Language Models on transcriptional regulatory DNA at #NeurIPS2024. Check it out! arxiv.org/abs/2412.05430
- I think that’ll be interesting to look more into! The profile information does not convey overall accessibility since it’s normalized, but maybe this sort of multitasking could help.
- Thank you for the kind words! Yes, ChromBPNet uses unmodified models, which include profile data and a bias model. However, these evaluations use only the count head.
- (2/10) DNALMs are a new class of self-supervised models for DNA, inspired by the success of LLMs. They are often pre-trained solely on genomic DNA, without considering any external annotations.
- (3/10) However, DNA is vastly different from text: it is far more heterogeneous, imbalanced, and sparse. Imagine a blend of several different languages interspersed with a load of gibberish.
- (4/10) An effective DNALM should: • Learn representations that accurately distinguish different types of functional DNA elements • Serve as a foundation for downstream supervised models • Outperform models trained from scratch
- (5/10) Rigorous evaluations of DNALMs, though critical, are lacking. Existing benchmarks: • Focus on surrogate tasks tenuously related to practical use cases • Suffer from inadequate controls and other dataset design flaws • Compare against outdated or inappropriate baselines
- (6/10) We introduce DART-Eval, a suite of five biologically informed DNALM evaluations focused on transcriptional regulatory DNA, ordered by increasing difficulty.
- (7/10) DNALMs struggle with the more difficult tasks. Furthermore, small models trained from scratch (<10M params) routinely outperform much larger DNALMs (>1B params), even after LoRA fine-tuning! Our results on the hardest task: counterfactual variant effect prediction.
- (8/10) This indicates that DNALMs inconsistently learn functional DNA. We believe the culprit is not architecture, but rather the sparse and imbalanced distribution of functional DNA elements. Given their resource requirements, current DNALMs are a hard sell.
- (9/10) How do we train more effective DNALMs? Use better data and objectives: • Nailing short-context tasks before long-context ones • Sampling data to account for class imbalance • Conditioning on cell type context These strategies use external annotations, which are plentiful!
- (10/10) Come check out our poster (tomorrow, Dec 11 at 11 AM) or read the paper for more details! arxiv.org/abs/2412.05430 github.com/kundajelab/D... neurips.cc/virtual/2024... #machinelearning #NeurIPS2024 #genomics
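For readers unfamiliar with the counterfactual variant effect setup mentioned in the thread: a common zero-shot approach is to score a variant by comparing the language model's likelihood of the sequence carrying the alternate allele against the sequence carrying the reference allele. Here is a minimal sketch in Python, using a toy per-base probability table as a hypothetical stand-in for a DNALM; all names and numbers are illustrative, not from the paper.

```python
import math

# Toy stand-in for a DNA language model's per-base probabilities.
# A real DNALM would produce context-dependent probabilities from a
# transformer; this fixed table is purely illustrative.
TOY_BASE_PROBS = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}

def log_likelihood(seq: str) -> float:
    """Sum of per-base log-probabilities under the toy model."""
    return sum(math.log(TOY_BASE_PROBS[base]) for base in seq)

def variant_effect_score(ref_seq: str, alt_seq: str) -> float:
    """Zero-shot variant effect score: log-likelihood ratio of the
    alternate-allele sequence vs. the reference-allele sequence.
    A negative score means the model prefers the reference."""
    return log_likelihood(alt_seq) - log_likelihood(ref_seq)

ref = "ACGTACGT"
alt = "ACGTACGG"  # T -> G substitution at the final position
print(variant_effect_score(ref, alt))  # negative: G is rarer than T here
```

In practice the two sequences would be long genomic windows differing at a single position, and the scores would be benchmarked against experimentally measured variant effects.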