Intel launches new generation AI chips Xeon 6 and Gaudi 3 to challenge NVIDIA's dominant position
Intel has launched the new-generation Xeon 6 processor and Gaudi 3 AI accelerator, aiming to strengthen its competitiveness in artificial intelligence and high-performance computing and to challenge NVIDIA's market dominance. The Xeon 6 is equipped with performance cores that double AI vision-processing performance, while Gaudi 3 delivers roughly 20% higher throughput. The new products target a range of application scenarios, helping customers improve performance and security and meet the growing demand for AI.
According to Zhitong Finance APP, on Monday local time Intel (INTC.US) officially launched the new Xeon 6 processor and Gaudi 3 AI accelerator, aiming to enhance the company's competitiveness in artificial intelligence (AI) and high-performance computing (HPC) and potentially challenge NVIDIA's dominant position in the field.
Intel had previously introduced the Gaudi 3 accelerator for enterprise AI at its Vision 2024 conference in April, and in June the company showcased the Xeon 6 with P-cores.
The Xeon 6 processor is equipped with performance cores (P-cores), which can double AI vision-processing performance, while the Gaudi 3 accelerator's throughput has increased by 20%, making both products better suited to complex computing workloads. Xeon 6 also offers more cores and greater memory bandwidth, addressing applications ranging from edge computing to the data center.
Justin Hotard, Executive Vice President of Intel's Data Center and AI Business Unit, said that AI demand is driving significant changes in data centers, which require more choice in hardware, software, and development tools across the industry. With Xeon 6 and Gaudi 3, Intel aims to help customers improve performance and security and achieve more efficient AI solutions.
Gaudi 3, designed for large-scale generative AI, features 64 tensor processor cores and 8 matrix multiplication engines, greatly accelerating deep neural network computation.