
Goodfire Announces Collaboration to Advance Genomic Medicine with AI Interpretability

Goodfire is excited to announce a collaboration with Mayo Clinic that seeks to unlock new frontiers in genomic medicine through AI interpretability. The collaboration aims to combine Goodfire's work on interpreting AI models with Mayo Clinic's medical expertise and investment in AI.

AI interpretability is a field devoted to understanding what AI models learn and how they produce their outputs, rather than treating them as black boxes.

A New Paradigm for Scientific Discovery

This collaboration centers on a fundamentally new approach to scientific research: reverse-engineering advanced genomics foundation models to understand the biological insights they've captured. Rather than simply using these models to generate sequences or make predictions, Goodfire is focused on peering inside them to understand what they've learned about genomic relationships, disease mechanisms, and biological processes.

Unlike text-based models whose outputs are human-readable, genomics models operate in the “language” of DNA, making both their inputs and internal representations less immediately interpretable. Interpretability techniques have already been applied to scientific foundation models like Evo 2, where they have been used to extract novel insights from the models' rich internal representations.
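One widely used technique for this kind of work is the sparse autoencoder (SAE), which decomposes a model's dense internal activations into a much larger set of sparsely active features that are often individually interpretable. The sketch below is a minimal, hypothetical illustration of the idea in PyTorch, not Goodfire's actual pipeline: the dimensions, the L1 sparsity penalty, and the `train_step` helper are illustrative assumptions, and the random tensor stands in for activations that would really be collected from a genomics model such as Evo 2.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Maps dense model activations to a wide, sparsely active feature basis."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, acts: torch.Tensor):
        # ReLU keeps only a small number of features active per input.
        features = torch.relu(self.encoder(acts))
        recon = self.decoder(features)
        return recon, features

def train_step(sae, acts, optimizer, l1_coeff=1e-3):
    recon, features = sae(acts)
    # Reconstruction loss keeps features faithful to the original activations;
    # the L1 penalty pushes most feature activations toward zero (sparsity).
    loss = ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage: in practice, `acts` would be activation vectors captured
# from an internal layer of a genomics foundation model as it processes DNA.
d_model, d_features = 4096, 32768
sae = SparseAutoencoder(d_model, d_features)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(64, d_model)  # placeholder batch of activations
print(train_step(sae, acts, optimizer))
```

Once trained, researchers inspect each learned feature by finding the inputs that activate it most strongly; for a genomics model, that might mean asking which DNA sequences, genes, or motifs a given feature responds to.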

Goodfire's interpretability researchers, working in tandem with Mayo Clinic's medical AI team, are attempting to reveal the conceptual frameworks these models have developed. These frameworks may capture biological relationships and patterns beyond current human understanding, such as novel biomarkers for disease. A better understanding of how medical AI models produce their outputs may also help validate model predictions and improve their accuracy.

“Generative AI systems have made incredible strides in modeling complex biological systems, but many clinical use cases remain blocked due to a disconnect from real-world understanding,” said Dan Balsam, CTO of Goodfire. “We are excited to apply interpretability to bridge the understanding gap and potentially unlock a new generation of diagnostic tools and personalized treatments.”

Responsible Innovation at the Forefront

This collaboration operates under rigorous data privacy protocols and Mayo Clinic's established data governance frameworks. Beyond privacy protections, this work seeks to advance responsible AI by making model decision-making transparent and explainable. By revealing how genomics models arrive at conclusions, we aim to identify spurious correlations, reduce algorithmic bias, train better models, and ensure AI-driven insights are scientifically sound and clinically relevant—all centered on improving patient outcomes.

Looking Ahead

This collaboration positions Goodfire to pursue biological insights that could reshape our understanding of disease and treatment. By combining Mayo Clinic's clinical expertise with Goodfire's interpretability innovations, we're attempting to advance both scientific discovery and responsible AI development in service of human health.



Mayo Clinic has a financial interest in the technology referenced in this press release. Mayo Clinic will use any revenue it receives to support its not-for-profit mission in patient care, education and research.
