Wallstreetcn
2024.05.09 01:49

AI 2.0 Networking Era Begins? "Ultra Ethernet Representative" Arista Beats Expectations, Stock Surges 6% Overnight

Ethernet is becoming one of the most critical infrastructure components for AI data centers. Arista's upward revision of its full-year guidance is seen as a sign of increased confidence in the future.

Author: Li Xiaoyin

Source: Hard AI

Boosted by the demand for AI training, the Q1 financial report and performance guidance of the computer networking company Arista exceeded expectations.

The financial report shows that Arista's first-quarter revenue increased by 16.3% year-on-year, reaching $1.571 billion, surpassing expectations; the company's first-quarter net profit surged by 46% year-on-year, reaching $637.7 million.

In terms of guidance, the company raised its 2024 revenue growth forecast from 10%-12% to 12%-14%, and expects second-quarter revenue of $1.62 billion to $1.65 billion, above the Wall Street consensus of $1.62 billion.

Overnight, the company's stock price surged over 6% to $291.67 per share. Year-to-date, the stock has risen nearly 26%.

Outside analysts point out that Arista does not usually raise full-year guidance as early as the end of Q1, so this raise is being read as a sign of stronger confidence in the company's outlook.

Furthermore, after completing the previous $2 billion share repurchase plan, Arista has also announced a new $1.2 billion share repurchase plan.

AI Cluster Era Coming? Ethernet Becomes "Hot"

Founded in Delaware in October 2004 and listed in 2014, Arista is a leading player in data-driven, large-scale data center, campus, and routing interconnection environments.

As a B2B network switch and router manufacturer, Arista mainly serves cloud service providers including Microsoft, Meta, and others.

In the financial report, Arista stated that the high performance requirements for training large AI models have boosted the demand for the company's hardware products from cloud service providers.

Jayshree Ullal, Chairman and CEO of Arista, stated during the financial report conference call that Ethernet is becoming one of the most critical infrastructure components in AI data centers.

Ullal explained:

"AI applications cannot run in isolation; they require seamless communication between compute nodes."

"With the development of generative AI training, thousands of individual iterations are now required. Any slowdown caused by network congestion severely impacts application performance, leading to inefficient waiting time and a decrease in processor performance by 30% or more."

In simple terms, AI workloads cannot tolerate network latency, because a training job completes only once all flows are successfully delivered across the GPU cluster; any failure or delay on a single link drags down the efficiency of the entire job. Shortening training time therefore requires building horizontally scalable AI networks that keep GPU utilization high.

The need for seamless communication between backend GPU and AI accelerator nodes and frontend systems such as CPUs, storage, and IP/WAN is driving large-scale Ethernet to become the preferred choice for horizontally scaling AI training workloads.

Arista, known as the "Ultra Ethernet Representative," holds a significant advantage in this field.

According to earlier Wallstreetcn reporting, in head-to-head bids against rival InfiniBand for five AI network clusters, Arista won the four clusters that chose Ethernet.

Analysts note that in these four clusters, Arista has moved from validation to pilot trials, connecting thousands of GPUs this year, with production deployments of 10,000 to 100,000 GPU nodes expected by 2025.

Earlier, Arista also built a 24,000-node GPU cluster for a cloud service customer based on its flagship 7800 AI Spine product, causing a stir in the industry.

Beyond hardware, Arista has also built a dedicated AI architecture, NetDL, which provides network-wide visibility by integrating network data with NIC data, allowing operators to identify hosts with misconfigurations or abnormal behavior and pinpoint performance bottlenecks.