A Google-designed ASIC optimized for large-scale matrix multiplication and energy efficiency.
- Systolic array + pipelining to minimize memory traffic: operands stream through a grid of multiply-accumulate units, so partial sums stay on-chip instead of round-tripping to memory
- Ahead-of-time compilation (XLA) predetermines memory access patterns, letting the TPU use compiler-managed scratchpads instead of hardware caches (see the sketch below)
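A minimal sketch of the ahead-of-time path, assuming JAX is installed: `jax.jit(...).lower(...).compile()` traces a matrix multiply and compiles it with XLA before execution. On a TPU backend the same program is lowered onto the MXU systolic array, but the snippet runs on any JAX backend; the shapes and dtype below are arbitrary choices, not from this note.

```python
import jax
import jax.numpy as jnp

def matmul(a, b):
    # A single dot product; XLA schedules its memory movement at compile time.
    return jnp.dot(a, b)

a = jnp.ones((1024, 1024), dtype=jnp.bfloat16)
b = jnp.ones((1024, 1024), dtype=jnp.bfloat16)

# Ahead-of-time: trace -> lower to StableHLO -> compile with XLA, then execute.
compiled = jax.jit(matmul).lower(a, b).compile()
out = compiled(a, b)
print(out.shape)  # (1024, 1024)
```

Because shapes and access patterns are fixed at compile time, XLA can plan scratchpad usage statically rather than relying on a cache hierarchy at runtime.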
TPU Versions
TPU Deep Dive
I've been working with TPUs a lot recently and it's fun to see how they have such different design philosophies compared to GPUs.
https://henryhmko.github.io/posts/tpu/tpu.html
Apple says its AI models were trained on Google's custom chips
Apple is using chips designed by Google in building its advanced AI models, according to a paper published on Monday.
https://www.cnbc.com/2024/07/29/apple-says-its-ai-models-were-trained-on-googles-custom-chips-.html

Project Suncatcher
A research effort to scale machine learning compute in space: a network of solar-powered satellites equipped with TPUs performing large-scale AI computation.
Meet Project Suncatcher, a research moonshot to scale machine learning compute in space.
https://blog.google/technology/research/google-project-suncatcher/


Seonglae Cho