AI Accelerator

Creator: Seonglae Cho
Created: 2022 Aug 10 14:46
Edited: 2025 May 22 23:32
Refs
Native FP8/LogFMT quantization and high-precision accumulation registers are needed. Adaptive Routing, Virtual Output Queuing (VOQ), and end-to-end lossless flow control are required. Hardware error detection beyond ECC (Error-Correcting Code) is also needed, and hardware-level acquire/release consistency and ordering guarantees improve memory-semantic communication by removing fence overhead.
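The quantization half of this note can be sketched in NumPy. The following is a minimal illustration under stated assumptions, not a hardware model: `quantize_fp8_e4m3` is a hypothetical helper that rounds values to an E4M3-style grid (3 mantissa bits, roughly a ±448 range), and the dot product over the quantized inputs is then accumulated in FP32, which is the "high-precision accumulation" the note calls for.

```python
import numpy as np

def quantize_fp8_e4m3(x: np.ndarray) -> np.ndarray:
    """Round to the nearest FP8 E4M3-style value: 3 mantissa bits after
    the implicit leading 1, clipped to the E4M3 max normal (+/-448).
    Illustrative only; subnormals and the NaN encoding are ignored."""
    x = np.clip(x, -448.0, 448.0)
    m, e = np.frexp(x)          # x = m * 2**e with |m| in [0.5, 1)
    m = np.round(m * 16) / 16   # keep 4 significand bits (1 implicit + 3 stored)
    return np.ldexp(m, e)

rng = np.random.default_rng(0)
a = rng.normal(size=1024).astype(np.float32)
b = rng.normal(size=1024).astype(np.float32)

# Inputs are quantized to FP8, but the dot product is accumulated in FP32:
qa, qb = quantize_fp8_e4m3(a), quantize_fp8_e4m3(b)
acc_fp32 = float(np.sum(qa.astype(np.float32) * qb.astype(np.float32)))

ref = float(np.dot(a, b))  # full-precision reference
print(f"FP8 inputs, FP32 accumulate: {acc_fp32:.4f} (reference {ref:.4f})")
```

Accumulating in the input precision instead would compound a relative rounding error of up to 2⁻⁴ per element across the whole reduction, which is why accelerators pair low-precision multipliers with wider accumulation registers.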
AI Accelerator Companies

AI Accelerators
Geekbench AI

New Geekbench AI benchmark can test the performance of CPUs, GPUs, and NPUs
Performance test comes out of beta as NPUs become standard equipment in PCs.
How We’ll Reach a 1 Trillion Transistor GPU
Advances in semiconductors are feeding the AI boom

The chip is the core component of an AI accelerator.

Nvidia On the Mountaintop
Nvidia has gone from the valley to the mountain-top in less than a year, thanks to ChatGPT and the frenzy it inspired; whether or not there is a cliff depends on developing new kinds of demand that…
The Hardware Lottery
How hardware and software determine what research ideas succeed and fail.
Korean researchers power-shame Nvidia with new neural AI chip — claim 625 times less power draw, 41 times smaller
Claim Samsung-fabbed chip is the first ultra-low power LLM processor.

Backlinks

MLC LLM
