CPU and GPU SRAM caches are not shrinking, which could increase chip cost or reduce performance

Why it matters: An interesting article posted at WikiChip discusses the severity of SRAM scaling problems in the semiconductor industry. TSMC is reporting that its SRAM transistor scaling has completely flatlined, to the point where SRAM caches are staying the same size across multiple nodes, even as logic transistor densities continue to improve. This isn't ideal, and it will force processor SRAM caches to take up more room on a chip die. That in turn could increase manufacturing costs and prevent certain chip architectures from becoming as small as they potentially could be.

Nearly all processors rely on some form of SRAM caching. Caches act as high-speed storage with very fast access times, thanks to their strategic placement right next to the processing cores. Having fast, easily accessible storage can significantly improve processing performance and means the cores spend less time waiting for data.

At the 68th Annual IEEE International Electron Devices Meeting (IEDM), TSMC revealed major problems with SRAM scaling. The company's next node in development for 2023, N3B, will have the same SRAM transistor density as its predecessor N5, which is used in CPUs like AMD's Ryzen 7000 series.

Another node currently in development for 2024, N3E, isn't much better, featuring a measly 5% reduction in SRAM transistor size…


For a broader perspective, WikiChip shared a graph of TSMC's SRAM scaling history from 2011 to 2025. The first half of the graph, covering TSMC's 16nm and 7nm days, shows that SRAM scaling was not an issue back then and that cell sizes were shrinking at a rapid pace. But once the graph hits 2020, scaling basically flatlines, with three generations of TSMC logic nodes using nearly identical SRAM sizes: N5, N3B and N3E.

With logic transistor density still increasing at a rapid pace (up to 1.7x in the case of N3E) but SRAM transistor density not following the same path, SRAM will start consuming a lot of die space as time goes on. WikiChip demonstrated this with a hypothetical 10 billion transistor chip implemented on several nodes. On N16 (16nm), just 17.6% of the die area consists of SRAM transistors; on N5, that rises to 22.5%, and on N3, to 28.6%.
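The underlying arithmetic is straightforward: if logic density keeps improving while SRAM density stays flat, SRAM's share of the die grows on every new node. The sketch below is a back-of-the-envelope model, not WikiChip's actual methodology; the transistor counts and relative densities are illustrative assumptions, not TSMC figures.

```python
# Back-of-the-envelope model: share of die area consumed by SRAM when
# logic density keeps scaling but SRAM density stays flat.
# All transistor counts and densities below are illustrative assumptions.

def sram_area_fraction(logic_transistors, sram_transistors,
                       logic_density, sram_density):
    """Return the fraction of total die area occupied by SRAM.

    Densities are in transistors per unit area; the units cancel out,
    so only the relative values matter.
    """
    logic_area = logic_transistors / logic_density
    sram_area = sram_transistors / sram_density
    return sram_area / (logic_area + sram_area)

# Hypothetical chip: 7B logic transistors plus 3B SRAM transistors.
logic_t, sram_t = 7e9, 3e9

# Baseline node: both densities normalized to 1.0.
base = sram_area_fraction(logic_t, sram_t, logic_density=1.0, sram_density=1.0)

# Next node: logic density improves 1.7x, SRAM density does not move.
next_node = sram_area_fraction(logic_t, sram_t, logic_density=1.7, sram_density=1.0)

print(f"baseline node: SRAM is {base:.1%} of die area")
print(f"next node:     SRAM is {next_node:.1%} of die area")
```

With these made-up numbers, SRAM's share jumps from 30% to roughly 42% after a single node shrink, even though the chip's transistor mix hasn't changed at all.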

WikiChip also reports that TSMC isn't the only manufacturer with this problem. Intel has likewise seen noticeable slowdowns in SRAM transistor shrinkage on its Intel 4 process.

Unless this is somehow remedied, we could soon see SRAM caches consuming as much as 40% of a processor's die area. That could force chip architectures to be reworked and add to development costs. Another way manufacturers might cope is to lower cache capacity altogether, which would reduce performance. There are various memory replacements being explored, including MRAM, FeRAM, and NRAM, to name a few. But for now, it remains a problem with no clear answer in the immediate future.
