High Bandwidth Memory (HBM). AMD revealed details about its high-bandwidth memory interface for GPUs, starting with the company’s next-generation Radeon 300 series products and, eventually, APUs that combine central processing and graphics.
- 1 Advantages
- 2 News
- 3 Manufacturing
- 4 See also
- 5 Source
AMD Chief Technology Officer Joe Macri said the technology had been in development for seven years. A great advantage for AMD is that rival chip manufacturer Nvidia is “at least a year behind” on high-bandwidth memory, so there will be a big difference when this product reaches the market.
The technology involves “stacking” DRAM chips to provide up to three times the performance of GDDR5 memory when used by the GPU. AMD can also squeeze far more HBM into the space occupied by conventional GDDR5 memory: 1 GB of HBM fits into a 5 mm by 7 mm footprint, whereas 1 GB of GDDR5 needed a 24 mm by 28 mm footprint.
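The footprint savings implied by those figures can be checked with a quick calculation (a minimal sketch using only the dimensions quoted above):

```python
# Board-area comparison for 1 GB of memory, using the figures above.
hbm_mm2 = 5 * 7      # 1 GB of HBM: 5 mm x 7 mm
gddr5_mm2 = 24 * 28  # 1 GB of GDDR5: 24 mm x 28 mm
ratio = gddr5_mm2 / hbm_mm2

print(f"HBM: {hbm_mm2} mm^2, GDDR5: {gddr5_mm2} mm^2, ~{ratio:.0f}x less area")
```

That works out to roughly a 19x reduction in board area for the same capacity.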
AMD’s new memory technology should prove an almost immediate advantage for the company in the consumer market. AMD has worked for years to make HBM a reality and is likely to enjoy a time-to-market advantage for end products using the technology. HBM helps solve what will become a major problem for graphics memory: scaling performance and power.
Applications include graphics cards, PC APUs, and even some enterprise workloads. The technology is well suited to consumer workloads, but AMD or its partners will have to work hard to make a dent in the HPC market. The new memory interface requires a new specification and the development of a “new type of memory chip with low power consumption and ultra-wide bus width”, which AMD carries out in collaboration with Hynix and other partners, according to what Hot Hardware reported.
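The “ultra-wide bus” is what makes the difference. As a rough illustration (assuming first-generation HBM’s published 1024-bit-per-stack interface at 1 Gb/s per pin, versus a typical GDDR5 chip’s 32-bit interface at 7 Gb/s per pin; the helper name is my own):

```python
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: pin count * per-pin rate / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

hbm_stack = bandwidth_gbs(1024, 1.0)  # one first-gen HBM stack
gddr5_chip = bandwidth_gbs(32, 7.0)   # one typical GDDR5 chip

print(f"HBM stack: {hbm_stack:.0f} GB/s, GDDR5 chip: {gddr5_chip:.0f} GB/s")
```

Even at a much lower per-pin clock, the wide interface lets one HBM stack deliver several times the bandwidth of a GDDR5 chip, which is also where the power savings come from.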
HBM DRAM chips are stacked vertically; through-silicon vias (TSVs) and microbumps (µbumps) connect one DRAM die to the next, then to a logic die, and ultimately to the interposer. TSVs and µbumps also connect the SoC/GPU to the interposer, and the whole assembly sits on the same package substrate. The end result is a single package in which the GPU/SoC and high-bandwidth memory modules reside together.
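The stack-up described above can be summarized as a simple bottom-to-top layer list (an illustrative sketch, assuming a four-high DRAM stack; the names are mine, not AMD’s):

```python
# Bottom-to-top layering of one HBM stack inside the package, per the text above.
hbm_package = [
    "package substrate",   # carries the whole assembly
    "silicon interposer",  # TSVs/microbumps tie the die stacks to it
    "logic (base) die",    # bottom of the DRAM stack
    "DRAM die 0",
    "DRAM die 1",
    "DRAM die 2",
    "DRAM die 3",
]
# The GPU/SoC sits beside the DRAM stack, on the same interposer and substrate.
print(" -> ".join(hbm_package))
```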