Synthetic data should make up only a portion of pretraining. How to avoid collapse: ToEdit (token-level editing of human data), https://arxiv.org/pdf/2412.14689. A sketch of the idea follows these notes.

Phi-style synthetic data: Cosmopedia, used to pretrain SmolLM; useful for on-device AI and small LLMs.
Cosmopedia: how to create large-scale synthetic data for pre-training Large Language Models
https://huggingface.co/blog/cosmopedia

Unconditional generation:
FaithfulSAE: Towards Capturing Faithful Features with Sparse Autoencoders
Sparse Autoencoders (SAEs) have emerged as a promising solution for decomposing large language model representations into interpretable features, but Paulo and Belrose (2025) highlighted that SAEs trained on the same data can learn different features across seeds.
https://www.arxiv.org/abs/2506.17673

https://arxiv.org/pdf/2510.18554
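For the ToEdit note: the paper's core move is to keep human text but resample individual tokens that a prior model predicts with very high probability, yielding semi-synthetic data instead of fully model-generated text. A minimal sketch under those assumptions; `MODEL_NAME`, `P_THRESHOLD`, and `token_level_edit` are my illustrative choices, not the paper's exact setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # assumption: any small causal LM works for the demo
P_THRESHOLD = 0.99    # assumption: illustrative threshold, not the paper's tuned value

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def token_level_edit(text: str) -> str:
    """Resample tokens the prior model finds too predictable; keep the rest."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    logits = model(ids.unsqueeze(0)).logits[0]      # (seq_len, vocab)
    probs = torch.softmax(logits, dim=-1)
    edited = ids.clone()
    for i in range(1, len(ids)):                    # logits[i-1] predicts token i
        p_next = probs[i - 1]
        if p_next[ids[i]] >= P_THRESHOLD:
            # over-confident position: resample from the model's distribution
            edited[i] = torch.multinomial(p_next, num_samples=1).item()
    return tokenizer.decode(edited)

print(token_level_edit("The quick brown fox jumps over the lazy dog."))
```

With a high threshold, only the most predictable tokens get touched, which is the point: the edited text stays close to the human distribution.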
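For the Cosmopedia note: the blog's recipe seeds prompts with web extracts and varies the target audience and style to get diverse textbook-like synthetic text. A minimal sketch of just the prompt-construction step; the template wording and the `AUDIENCES`/`STYLES` lists are illustrative assumptions, not the blog's exact prompts:

```python
import random

# Illustrative audience/style axes; Cosmopedia varies similar dimensions
# to push generations apart and reduce near-duplicates.
AUDIENCES = ["young children", "high school students", "college students", "professionals"]
STYLES = ["textbook chapter", "blog post", "wikihow-style guide"]

def build_prompt(seed_extract: str) -> str:
    """Turn one web extract into a generation prompt (hypothetical template)."""
    audience = random.choice(AUDIENCES)
    style = random.choice(STYLES)
    return (
        f"Here is an extract from a webpage:\n{seed_extract}\n\n"
        f"Write a detailed {style} on a topic related to the extract, "
        f"aimed at {audience}. Do not copy the extract verbatim."
    )

print(build_prompt("Photosynthesis converts light energy into chemical energy."))
```

Each prompt then goes to a strong instruction-tuned LLM; the seed/audience/style combination is what keeps millions of generations from collapsing onto the same few topics.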
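For the FaithfulSAE note: the baseline object is a standard sparse autoencoder over LM activations, trained with reconstruction loss plus an L1 sparsity penalty; FaithfulSAE's contribution is training such SAEs on the model's own generated data, which this sketch does not reproduce. Dimensions and the sparsity weight below are assumptions:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Plain ReLU SAE: wide feature dictionary, linear encode/decode."""
    def __init__(self, d_model: int = 768, expansion: int = 8):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_model * expansion)
        self.decoder = nn.Linear(d_model * expansion, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))   # sparse feature activations
        return self.decoder(f), f

def sae_loss(x, x_hat, f, l1_weight=1e-3):
    recon = ((x - x_hat) ** 2).mean()          # reconstruction term
    return recon + l1_weight * f.abs().mean()  # plus L1 sparsity penalty

sae = SparseAutoencoder()
acts = torch.randn(32, 768)   # stand-in for residual-stream activations
x_hat, feats = sae(acts)
print(sae_loss(acts, x_hat, feats).item())
```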