Micron Technology announced that it has begun shipping its first 8-high 24 GB HBM3 Gen2 memory products, delivering bandwidth greater than 1.2 TB/s and per-pin speeds above 9.2 Gb/s, a 50% improvement over currently shipping HBM3 solutions. Performance per watt is 2.5 times that of previous generations, setting new records for the key AI data center metrics of performance, capacity, and power efficiency. Micron says these improvements reduce training times for large language models such as GPT-4, provide efficient infrastructure for AI inference, and deliver superior total cost of ownership (TCO).
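The headline figures are internally consistent, which a quick calculation shows. The sketch below assumes the standard 1024-bit HBM3 interface per cube; the announcement itself does not state the bus width.

```python
# Sanity check: per-pin rate x bus width should land near the quoted >1.2 TB/s.
PIN_RATE_GBPS = 9.2          # per-pin data rate, Gb/s (from the announcement)
BUS_WIDTH_BITS = 1024        # assumed standard HBM3 interface width per cube

bandwidth_gb_per_s = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8  # bits -> bytes
print(f"{bandwidth_gb_per_s:.1f} GB/s")  # 1177.6 GB/s, i.e. ~1.2 TB/s
```

At 9.2 Gb/s per pin the cube just reaches the 1.2 TB/s mark, which is why the announcement phrases both numbers as "greater than".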
Micron's HBM solution is built on its 1-beta (1β) DRAM process node, which allows 24 Gb DRAM die to be assembled into an 8-high cube within an industry-standard package footprint. In addition, Micron's 12-high stack, with 36 GB of capacity, will begin sampling in the first quarter of 2024. Compared with existing competing solutions, Micron offers 50% more capacity at a given stack height.
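The stack capacities follow directly from the die density, as a short check illustrates:

```python
# Cube capacity = die capacity (in gigabits) x stack height, converted to bytes.
DIE_CAPACITY_GBIT = 24  # 1-beta DRAM die, from the announcement

for stack_height in (8, 12):
    capacity_gbyte = DIE_CAPACITY_GBIT * stack_height / 8
    print(f"{stack_height}-high: {capacity_gbyte:.0f} GB")
# 8-high: 24 GB (shipping now), 12-high: 36 GB (sampling Q1 2024)
```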
The improvements HBM3 Gen2 brings in performance per watt and pin speed are critical for managing the extreme power demands of today's AI data centers. Micron doubles the number of through-silicon vias (TSVs) versus competing HBM3 products and reduces thermal impedance through a five-fold increase in metal density.
As part of HBM3 Gen2 product development, the collaboration between Micron and TSMC lays the groundwork for smooth introduction and integration of compute systems for AI and HPC applications. TSMC has received samples of Micron's HBM3 Gen2 memory and is working closely with Micron on further evaluation and testing, which will benefit customers' next-generation HPC innovations.
Micron's HBM3 Gen2 solution addresses the growing demands that generative AI places on multimodal, multi-trillion-parameter models. With 24 GB of capacity per cube and per-pin speeds above 9.2 Gb/s, it cuts training time for large language models by more than 30%, thereby reducing TCO. It also significantly increases the number of queries that can be served per day, so trained models can be used more efficiently. The best-in-class performance per watt of Micron HBM3 Gen2 memory translates into real cost savings for modern AI data centers: for an installation of 10 million GPUs, a saving of 5 watts per HBM cube is projected to save as much as $550 million in operating expenses over five years.
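The $550 million figure can be roughly reproduced from the stated assumptions. The sketch below assumes one HBM cube per GPU and an electricity price of $0.25/kWh; neither assumption appears in the announcement, and the real model likely also folds in cooling overhead.

```python
# Rough reconstruction of the quoted five-year savings estimate.
GPU_COUNT = 10_000_000
WATTS_SAVED_PER_CUBE = 5      # from the announcement
CUBES_PER_GPU = 1             # assumption: one HBM cube per GPU
YEARS = 5
PRICE_PER_KWH = 0.25          # USD; assumed electricity price

hours = YEARS * 365 * 24
energy_kwh = GPU_COUNT * CUBES_PER_GPU * WATTS_SAVED_PER_CUBE / 1000 * hours
savings_usd = energy_kwh * PRICE_PER_KWH
print(f"${savings_usd / 1e6:.0f} million")  # ~$548 million, close to the quoted $550M
```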
According to Micron, HBM3 Gen2 was designed and its process developed in the United States, with memory manufacturing in Japan and advanced packaging in Taiwan.
Micron previously announced a 96 GB DDR5 module based on 24 Gb 1-alpha (1α) DRAM die for high-capacity server solutions; today it is introducing a solution based on 1