GPUs (graphics processing units) are commonly used for training artificial intelligence (AI) programs, a task their parallel processing power makes much faster. Now a California startup has introduced the world's first trillion-transistor chip, built specifically to train AI programs.
Cerebras Systems, a California-based company, has built a 46,225 square millimeter processing chip called the Cerebras Wafer Scale Engine (WSE).
This chip is 56 times larger than the largest GPU. According to Cerebras, the WSE has 3,000 times more on-chip memory and 10,000 times more memory bandwidth than GPU-based AI accelerators. The WSE has 400,000 compute cores and 18 gigabytes of local, distributed memory, and the on-chip network connecting these cores delivers an aggregate bandwidth of 100 petabits per second.
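As a quick sanity check on these figures, the short Python sketch below works out what they imply: the die area of the largest GPU the comparison is based on, and how much memory each core gets. The 400,000-core count used here is the commonly reported figure for the WSE; treat the derived values as rough estimates, not vendor specifications.

```python
# Sanity-check the article's comparisons using its own numbers.
wse_area_mm2 = 46_225        # WSE die area, from the article
size_ratio = 56              # "56 times larger than the largest GPU"

# Implied die area of the largest GPU in the comparison.
gpu_area_mm2 = wse_area_mm2 / size_ratio
print(f"Implied largest GPU die: {gpu_area_mm2:.0f} mm^2")

# How the 18 GB of local memory spreads across the cores
# (assumes the commonly reported 400,000-core count).
memory_bytes = 18 * 1024**3
cores = 400_000
per_core_kb = memory_bytes / cores / 1024
print(f"Memory per core: {per_core_kb:.0f} KB")
```

The implied GPU die comes out around 825 mm², in the range of the largest GPU dies of the time, and each core gets roughly 47 KB of local memory, which is consistent with the chip's distributed-memory design.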