Understanding the nature of intelligence

Goodfire is an AI interpretability research lab focused on understanding and intentionally designing advanced AI systems. We believe that advances in interpretability will unlock the next frontier of safe and powerful foundation models.

Training or fine-tuning an AI model?
Get in touch

Research

Fundamental mechanistic interpretability research to understand and steer models

Mapping the Latent Space of Llama 3.3 70B

December 23, 2024

Understanding and Steering Llama 3 with Sparse Autoencoders

September 25, 2024

Blog

Progress updates and product releases from the Goodfire team
Interpreting Evo 2: Arc Institute's Next-Generation Genomic Foundation Model

February 20, 2025

Myra Deng, Daniel Balsam, Liv Gorton, Nicholas Wang, Nam Nguyen

Announcing Open-Source SAEs for Llama 3.3 70B and Llama 3.1 8B

January 10, 2025

Daniel Balsam, Thomas McGrath, Liv Gorton, Nam Nguyen, Myra Deng

Goodfire Ember: Scaling Interpretability for Frontier Model Alignment

December 23, 2024

Daniel Balsam, Myra Deng, Nam Nguyen, Liv Gorton, Thariq Shihipar

Careers

We're looking for agentic, mission-driven, and kind people to help us build the future of interpretability. If you believe understanding AI systems is critical for our future, join us.


Contact Us

Developing an AI model? We partner with companies to interpret foundation models across architectures and modalities.