Google DeepMind researchers introduce Gemma Scope 2, an open suite of interpretability tools that exposes how Gemma 3 language models process and represent information across all layers, from 270M to 27B parameters.
Its core goal is simple: give AI safety and alignment teams a practical way to trace model behavior back to internal features instead of relying solely on input-output analysis. When a Gemma 3 model jailbreaks, hallucinates, or exhibits sycophantic behavior, Gemma Scope 2 lets researchers inspect which internal features fired and how those activations flowed through the network.
What’s Gemma Scope 2?
Gemma Scope 2 is a comprehensive, open suite of sparse autoencoders and related tools trained on the internal activations of the Gemma 3 model family. Sparse autoencoders (SAEs) act as a microscope on the model: they decompose high-dimensional activations into a sparse set of human-inspectable features that correspond to concepts or behaviors.
Training Gemma Scope 2 required storing around 110 petabytes of activation data and fitting over 1 trillion total parameters across all interpretability models.
The suite targets every Gemma 3 variant, including the 270M, 1B, 4B, 12B, and 27B parameter models, and covers the full depth of the network. This matters because many safety-relevant behaviors only appear at larger scales.
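To make the "microscope" idea concrete, here is a minimal sketch of how a sparse autoencoder decomposes one activation vector into a few interpretable features and reconstructs it. The dimensions, the ReLU encoder, and the top-k sparsification are illustrative assumptions, not the actual Gemma Scope 2 architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 64   # width of a residual-stream activation (hypothetical)
d_sae = 512    # number of learned features, typically >> d_model
k = 8          # features allowed to fire per activation

# Randomly initialized weights stand in for a trained SAE.
W_enc = rng.normal(0, 0.02, (d_sae, d_model))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.02, (d_model, d_sae))
b_dec = np.zeros(d_model)

def encode(x):
    """Map an activation to sparse, non-negative feature magnitudes."""
    acts = np.maximum(W_enc @ x + b_enc, 0.0)  # ReLU encoder
    # Keep only the k strongest features; zero out the rest.
    acts[np.argsort(acts)[:-k]] = 0.0
    return acts

def decode(features):
    """Reconstruct the original activation from the sparse features."""
    return W_dec @ features + b_dec

x = rng.normal(size=d_model)   # a fake activation vector
features = encode(x)           # at most k nonzero entries
x_hat = decode(features)       # approximate reconstruction
```

In a trained SAE, each of the `d_sae` feature directions tends to align with a human-recognizable concept, so inspecting which entries of `features` are nonzero tells you what the model was "thinking about" at that layer.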
What's new compared with the original Gemma Scope?
The first Gemma Scope release focused on Gemma 2 and already enabled research on model hallucination, on identifying secrets known by a model, and on training safer models.
Gemma Scope 2 extends that work in four main ways:
- The tools now span the entire Gemma 3 family up to 27B parameters, which is required to study emergent behaviors observed only in larger models, such as the behavior previously analyzed in the 27B-parameter C2S-Scale model for scientific discovery tasks.
- Gemma Scope 2 includes SAEs and transcoders trained on every layer of Gemma 3. Skip transcoders and cross-layer transcoders help trace multi-step computations that are distributed across layers.
- The suite applies the Matryoshka training technique so that SAEs learn more useful and stable features, mitigating some flaws identified in the previous Gemma Scope release.
- There are dedicated interpretability tools for Gemma 3 models tuned for chat, which make it possible to analyze multi-step behaviors such as jailbreaks, refusal mechanisms, and chain-of-thought faithfulness.
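Where an SAE reconstructs the same activation it reads, a transcoder predicts a *different* one, which is what lets it trace computation across layers. The sketch below illustrates a skip transcoder: sparse features plus an affine skip path that carries the linear part of the mapping directly. All names, dimensions, and weights are hypothetical, not Gemma Scope 2's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 64   # activation width at both layers (hypothetical)
d_feat = 256   # transcoder feature count (hypothetical)

# Randomly initialized weights stand in for a trained transcoder.
W_enc = rng.normal(0, 0.02, (d_feat, d_model))
b_enc = np.zeros(d_feat)
W_dec = rng.normal(0, 0.02, (d_model, d_feat))
W_skip = rng.normal(0, 0.02, (d_model, d_model))
b_dec = np.zeros(d_model)

def skip_transcoder(x_early):
    """Predict a later layer's activation from an earlier layer's."""
    # Sparse, inspectable features explain the nonlinear part of the step.
    feats = np.maximum(W_enc @ x_early + b_enc, 0.0)
    # The skip path carries the easy, linear part of the computation,
    # so the features only need to account for what is left over.
    x_later_pred = W_dec @ feats + W_skip @ x_early + b_dec
    return x_later_pred, feats

x = rng.normal(size=d_model)
x_later_pred, feats = skip_transcoder(x)
```

Because the prediction is expressed through named features, a researcher can ask which features at an early layer drive a given later-layer behavior, rather than treating the intervening computation as a black box.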
Key Takeaways
- Gemma Scope 2 is an open interpretability suite for all Gemma 3 models, from 270M to 27B parameters, with SAEs and transcoders on every layer of both pretrained and instruction-tuned variants.
- The suite uses sparse autoencoders as a microscope that decomposes internal activations into sparse, concept-like features, plus transcoders that track how those features propagate across layers.
- Gemma Scope 2 is explicitly positioned for AI safety work: studying jailbreaks, hallucinations, sycophancy, refusal mechanisms, and discrepancies between internal state and communicated reasoning in Gemma 3.
Check out the Paper, Technical details, and Model Weights.










