Open Source AI Research News

SAEs trained on the same data don’t learn the same features

2024-12-13 1 min read

In this post, we show that when two TopK SAEs are trained on the same data, with the same batch order but different random initializations, many latents in the first SAE have no close counterpart in the second, and vice versa. Indeed, only about 53% of the features are shared across the two runs. Furthermore, many of these unshared latents are interpretable. We find that narrower SAEs have higher feature overlap across random seeds, and that the overlap decreases as the SAE size increases.
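One natural way to quantify this kind of feature overlap is to match each latent's decoder direction to its nearest neighbor (by cosine similarity) among the other SAE's latents, and count how many matches exceed a similarity threshold. The sketch below is illustrative only: the shapes, random stand-in weights, and the 0.7 threshold are assumptions, not the post's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in decoder weight matrices for two SAEs trained from different
# random seeds; rows are latent directions in the model's activation space.
# (Hypothetical sizes -- real SAEs are far larger.)
n_latents, d_model = 512, 64
W_a = rng.normal(size=(n_latents, d_model))
W_b = rng.normal(size=(n_latents, d_model))

def shared_fraction(W1, W2, threshold=0.7):
    """Fraction of latents in W1 whose best cosine-similarity match
    among W2's latents exceeds `threshold` (threshold is an assumption)."""
    U1 = W1 / np.linalg.norm(W1, axis=1, keepdims=True)
    U2 = W2 / np.linalg.norm(W2, axis=1, keepdims=True)
    sims = U1 @ U2.T            # pairwise cosine similarities
    best = sims.max(axis=1)     # best match for each latent in W1
    return float((best > threshold).mean())

print(shared_fraction(W_a, W_b))
```

For the random stand-in weights above the fraction is near zero, since independent Gaussian directions in high dimensions are nearly orthogonal; two SAEs that learned the same features would score close to 1.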
Source: EleutherAI Blog. Word count: 429 words.