TORA

Creator
Seonglae Cho
Created
2025 Oct 9 23:17
Edited
2025 Oct 20 23:55
Refs

Token Spacing and Residual Alignment

Rare concepts appear infrequently in training data, so diffusion models often fail to render them. Their meaning, however, is already latent in the text embeddings. Scaling up the variance of the text embeddings widens the semantic spacing between tokens, making rare meanings easier to surface. The PCA-based principal-component spaces are separated for Token Spacing and Residual Alignment so the two adjustments stay balanced. TORA is a method that extracts hidden rare meanings in Diffusion Transformers through these simple embedding adjustments.
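
As a rough illustration of the two ideas (not the paper's exact procedure), the sketch below scales prompt-embedding variance to widen token spacing and uses an SVD-based PCA split so the residual part can be left aligned while only the principal part is adjusted. The function names, the scale factor, and the number of components are assumptions for the example.

```python
import torch

def widen_token_spacing(text_emb: torch.Tensor, scale: float = 1.5) -> torch.Tensor:
    """Scale the variance of prompt token embeddings around their mean.

    text_emb: (num_tokens, dim) output of a text encoder fed to a
    Diffusion Transformer. scale > 1 widens the spacing between tokens.
    """
    mean = text_emb.mean(dim=0, keepdim=True)   # per-dimension mean over tokens
    return mean + scale * (text_emb - mean)     # amplify deviations from the mean

def pca_split(text_emb: torch.Tensor, k: int = 8):
    """Split centered embeddings into a top-k PCA part and a residual part.

    Returns (principal, residual) so each can be adjusted with a
    different strength before recombining.
    """
    centered = text_emb - text_emb.mean(dim=0, keepdim=True)
    # Right singular vectors give the principal directions of the token cloud.
    _, _, vt = torch.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                              # (k, dim) top-k components
    principal = centered @ basis.T @ basis      # projection onto the PCA subspace
    residual = centered - principal             # everything outside that subspace
    return principal, residual

# Usage sketch: widen spacing along the principal subspace, keep the residual aligned.
emb = torch.randn(77, 4096)                     # placeholder prompt embedding
principal, residual = pca_split(emb, k=8)
mean = emb.mean(dim=0, keepdim=True)
adjusted = mean + 1.5 * principal + residual    # scale only the principal part
```

Scaling only the principal part is one way to keep the adjustment "balanced": the dominant semantic directions are spread apart while the residual component is passed through unchanged.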
