Will this recipe work for other organisms? We think it depends on genome size and proportion of nucleotides under selection, which drives the value of the self-supervised stage and training data scale. An exciting question for future work!
This was a massive effort, driven by the incredible work of Calico intern Kuan-Hao Chao (@kuanhaochao.bsky.social). Huge thanks to him, Majed Mohamed Magzoub, and Johannes Linder!
My take: While MPRAs are powerful, they lose vital genomic context like local chromatin and post-transcriptional regulation. For modeling complex gene regulation in vivo, models trained on endogenous sequences are essential.
Each wins on its "home field":
* MPRA-trained models excel at predicting MPRA data, including variant sequences.
* Shorkie excels at predicting expression from promoters in their natural genomic context and eQTLs.
How does Shorkie compare to models trained on massively parallel reporter assays (MPRAs)?
This translates to variant effect prediction: Shorkie accurately predicts the impact of cis-eQTLs, outperforming alternative models at classifying influential regulatory variants.
Shorkie also captures dynamic regulatory changes. Using new time-course RNA-seq data from TF inductions, we showed Shorkie can track how the importance of specific TF motifs changes over time.
This pre-training strategy makes a huge difference. Shorkie substantially outperforms the same model trained from scratch, boosting gene-level expression prediction from a Pearson's R of 0.74 to 0.88.
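For readers unfamiliar with the metric: gene-level Pearson's R is just the linear correlation between predicted and measured expression across genes. A minimal computation with toy vectors (not the paper's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4], [1, 3, 2, 4])  # 0.8: strong but imperfect agreement
```

An R of 1.0 would mean predictions rank and scale perfectly with measurements, so moving from 0.74 to 0.88 closes a substantial fraction of the remaining gap.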
But which genomes work best? We trained on different phylogenetic levels, from closely related S. cerevisiae strains to the entire fungal kingdom. The Saccharomycetales order was the sweet spot, providing the right balance of diversity and conserved regulatory grammar for the model to learn from.
Our hypothesis: jumpstart supervised learning with self-supervision. Before predicting chromatin and expression, we first asked our model to predict masked-out nucleotides across many related genomes, so it learns conserved elements like genes and their promoters.
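As a rough sketch of the masked-nucleotide objective (the function name, mask token, and mask rate here are illustrative, not Shorkie's actual implementation):

```python
import random

MASK = "N"  # placeholder mask token

def mask_sequence(seq, mask_rate=0.15, seed=0):
    """Randomly hide a fraction of nucleotides, BERT-style.

    Returns the masked sequence plus the positions and original
    bases the model would be trained to reconstruct.
    """
    rng = random.Random(seed)
    seq = list(seq)
    targets = {}
    for i, base in enumerate(seq):
        if rng.random() < mask_rate:
            targets[i] = base
            seq[i] = MASK
    return "".join(seq), targets

masked, targets = mask_sequence("ACGTACGTACGTACGTACGT")
# Training minimizes cross-entropy between model(masked)[i] and targets[i]
# at each masked position i; all other bases serve as context.
```

Because conserved elements (genes, promoters, TF motifs) are the easiest bases to reconstruct from context across related genomes, the model is pushed to internalize exactly the regulatory grammar the supervised stage later needs.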
However, yeast's small genome provides limited data, making it tough for deep learning models to learn complex regulatory rules from scratch.
At Calico, we've been studying S. cerevisiae for years to understand replicative aging. Along the way, we've generated rich datasets to probe its regulatory networks, which helped make this work possible.
Excited to share our new paper on predicting gene expression in yeast! We introduce "Shorkie," a supervised ML model that builds off a self-supervised foundation to interpret regulatory DNA.
Preprint: www.biorxiv.org/content/10.1...
The poster abstract deadline for the @keystonesymposia.bsky.social AI in Molecular Biology meeting in Santa Fe is coming up on August 21st, so get your submissions in!
www.keystonesymposia.org/conferences/...
We've done some experiments, but the metrics aren't conclusive, so choose your own adventure! We've released these models open source, open weight for all to use. github.com/calico/borzo...
We hypothesized that training with cell-type-specific and 3' data might make these models particularly effective for transfer to datasets with similar properties.
Transfer learning has emerged as a key application for multitask sequence models like these. For more, check out another recent paper from Han Yuan, whose analysis explores various transfer strategies and shows how powerful this approach can be. www.biorxiv.org/content/10.1...
Hence the name: Borzoi Prime, to emphasize their 3' expertise!
Indeed, he discovered the new models better predict alternative polyadenylation and QTL variants that affect where transcripts get cleaved and polyadenylated. This key regulatory layer influences cell-type-specific protein production.
Drawing on his expertise and interest in isoform regulation, Johannes hypothesized that single-cell RNA-seq's 3' sequencing protocols might reveal additional capabilities in these models.
Using single-cell eQTL studies, he evaluated the cell-type-specific variant effect predictions and found good concordance.
As cell-type-specific applications emerged, Johannes Linder took a fresh look.
We trained these models in early 2023 (which is why they're algorithmically similar to the originals), but initial metrics were underwhelming, so we shelved them.
Side note: want your amazing data included in future training runs of open source, open weight models? Make and release BigWig tracks!
We curated several cell atlas collections to produce pseudobulk coverage tracks. Thank you to the CZI Tabula projects and the BICCN Brain Cell Atlas for making this possible!
A limitation of the first Borzoi training run was the absence of cell-type-specific RNA-seq tracks; most are heterogeneous bulk samples.
We're excited to share a follow-up Borzoi training run and an analysis of the capabilities that emerged. www.biorxiv.org/content/10.1...
Alongside the manuscript and analysis, we released Borzoi predictions for 19.5 million common and low-frequency UK Biobank variants. Code for scoring additional variants with Borzoi is available here: github.com/calico/baske...
Moving forward, we suspect there are further improvements available. The Borzoi predictions cover most body tissues, but they aren't yet zoomed into specific cell types. Alternative nonlinear heritability models may usurp S-LDSC for fitting variant priors.
Generally, we found that Borzoi predictions improve fine-mapping clarity and gene prioritization. We're using Sniff to better analyze aging-related trait GWAS at Calico.