Compared to what's currently being used, Micron says its forthcoming tech is "expected to achieve greater than 30% improvement in frames per second for ray tracing and rasterization workloads."
Take 24, increase it by 30%, and you get a value of 31.2, so it's obvious where Micron is getting its performance claims from. But let's say you could take that GDDR7 and add it to a current graphics card — would games and benchmarks be 30% faster, like Micron says? To find out, I ran a series of tests on a current graphics card, changing the clocks on its VRAM across the widest range I could manage. Everything else remained the same, so the performance differences visible are purely from altering the memory clocks.
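The arithmetic behind that figure can be sketched in a couple of lines (assuming, as above, a 24 Gbps-per-pin GDDR6X baseline and taking Micron's 30% uplift at face value; the bus width is just an illustrative example):

```python
# Micron's headline number: take today's 24 Gbps per pin and add 30%.
gddr6x_rate = 24.0                    # Gbps per pin, current high-end GDDR6X
gddr7_rate = gddr6x_rate * 1.30       # Micron's claimed ~30% uplift
print(f"{gddr7_rate:.1f} Gbps")       # 31.2 Gbps

# For context: total bandwidth on a hypothetical 256-bit bus,
# in GB/s (divide by 8 to convert bits to bytes).
bus_width = 256
print(f"{gddr6x_rate * bus_width / 8:.0f} GB/s")  # 768 GB/s
print(f"{gddr7_rate * bus_width / 8:.1f} GB/s")   # 998.4 GB/s
```

That 30% jump in raw bandwidth is real, but as the testing below shows, it doesn't automatically translate into a 30% jump in frame rates.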
So does that mean Micron is lying through its back teeth over these claims? Not really, because I strongly suspect that there are "ray tracing and rasterization workloads" that will show a significant increase in frame rates when using faster VRAM, and once GDDR7 is out in the real world, there's bound to be at least one game that's used as a marketing tool to highlight the gains that GDDR7 brings.