
Goodfire Announces Collaboration to Advance Genomic Medicine with AI Interpretability
Published September 9, 2025
Goodfire is excited to announce a collaboration with Mayo Clinic that seeks to unlock new frontiers in genomic medicine through AI interpretability. The collaboration aims to combine Goodfire's work in AI model interpretability with Mayo Clinic's medical expertise and investment in AI.
AI interpretability is a field devoted to understanding what AI models learn and how they produce their outputs, rather than treating them as black boxes.
A New Paradigm for Scientific Discovery
This collaboration centers on a fundamentally new approach to scientific research: reverse-engineering advanced genomics foundation models to understand the biological insights they've captured. Rather than simply generating sequences or making predictions, Goodfire is focused on peering inside these models to understand what they've learned about genomic relationships, disease mechanisms, and biological processes.
Unlike text-based models whose outputs are human-readable, genomics models operate in the “language” of DNA, making both their inputs and internal representations less immediately interpretable. Interpretability techniques have already been applied to scientific foundation models such as Evo 2, with a focus on extracting novel insights from their rich internal representations.
Goodfire's interpretability researchers, in tandem with Mayo Clinic's medical AI team, are attempting to reveal the conceptual frameworks these models have developed. These frameworks may capture biological relationships and patterns beyond current human understanding, such as novel biomarkers for disease. A better understanding of how medical AI models produce their outputs may also help validate model predictions and improve their accuracy.
“Generative AI systems have made incredible strides in modeling complex biological systems, but many clinical use cases remain blocked by a disconnect from real-world understanding,” said Dan Balsam, CTO of Goodfire. “We are excited to apply interpretability to bridge the understanding gap and potentially unlock a new generation of diagnostic tools and personalized treatments.”
Responsible Innovation at the Forefront
This collaboration operates under rigorous data privacy protocols and Mayo Clinic's established data governance frameworks. Beyond privacy protections, this work seeks to advance responsible AI by making model decision-making transparent and explainable. By revealing how genomics models arrive at conclusions, we aim to identify spurious correlations, reduce algorithmic bias, train better models, and ensure AI-driven insights are scientifically sound and clinically relevant—all centered on improving patient outcomes.
Looking Ahead
This collaboration could position Goodfire to unlock biological insights that reshape our understanding of disease and treatment. By combining Mayo Clinic's clinical expertise with Goodfire's interpretability innovations, we're attempting to advance both scientific discovery and responsible AI development in service of human health.
Mayo Clinic has a financial interest in the technology referenced in this press release. Mayo Clinic will use any revenue it receives to support its not-for-profit mission in patient care, education and research.