TPU

Creator: Seonglae Cho
Created: 2020 May 17 9:25
Editor: Seonglae Cho
Edited: 2025 Jul 6 18:29

Refs
An ASIC developed by Google, optimized for large-scale matrix multiplication and energy efficiency.
  • Systolic Array + Pipelining to minimize memory access
  • Ahead-of-Time compilation (XLA) to predetermine memory access patterns, utilizing scratchpads instead of caches
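The systolic-array idea above can be sketched in code. This is a minimal, hypothetical simulation (the function name `systolic_matmul` is illustrative, not a real TPU API): each processing element (PE) at grid position (i, j) holds an output-stationary partial sum and performs one multiply-accumulate per cycle, so operands stream through the array without repeated trips to main memory. The sketch models only the per-PE accumulation, not the skewed wavefront timing of real hardware.

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    At time step t, PE (i, j) receives A[i, t] streaming in from the left
    and B[t, j] streaming in from above, multiplies them, and adds the
    result to its locally held partial sum. After k steps, each PE holds
    one finished element of C, so no intermediate values ever return to
    memory, which is the source of the TPU's memory-access savings.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for t in range(k):            # one cycle of the systolic pipeline
        for i in range(n):        # every PE works in parallel in hardware;
            for j in range(m):    # the nested loops here only emulate that
                C[i, j] += A[i, t] * B[t, j]
    return C
```

Because the dataflow is fixed ahead of time, the compiler (XLA) can schedule every operand movement statically, which is why scratchpads suffice where a GPU would need caches.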
TPU Versions
TPU v4
TPU v5e
TPU v5p
TPU v6 Trillium
https://jax-ml.github.io/scaling-book/
https://jax-ml.github.io/scaling-book/tpus/
Compute primitives and memory layouts for different hardware backends. Image by Chen et al., 2018.
TPU Deep Dive
I've been working with TPUs a lot recently and it's fun to see how different their design philosophy is compared to GPUs.
https://henryhmko.github.io/posts/tpu/tpu.html
github.com
https://github.com/kimbochen/md-blogs/tree/main/tpuv4_v5e
Apple says its AI models were trained on Google's custom chips
Apple is using chips designed by Google in building its advanced AI models, according to a paper published on Monday.
https://www.cnbc.com/2024/07/29/apple-says-its-ai-models-were-trained-on-googles-custom-chips-.html

Backlinks

GCP Component · jax.profiler · AI Compiler Optimization · AI Framework

Copyright Seonglae Cho