A look at several typical high-speed interconnection scenarios from Credo
Credo, a core AWS supplier, reported second-quarter revenue up 64% year-on-year to $72 million, exceeding analysts' expectations, with adjusted earnings per share of $0.07 and third-quarter revenue guidance of $115 million to $125 million. The company's CEO said growing AI demand has driven an inflection point in revenue, and the stock surged over 30% in after-hours trading. Credo's positioning in artificial intelligence infrastructure is seen as increasingly attractive, with further revenue growth expected in coming quarters.
“AWS AEC Core Supplier Credo's Performance Exceeds Expectations”
AWS core supplier Credo released its latest quarterly results, with both performance and outlook significantly exceeding expectations, as data center interconnect enters a phase of accelerating Ethernet build-out.
Financial Report Highlights:
Revenue Growth: Second-quarter revenue increased by 64% year-on-year, reaching $72 million, far exceeding analysts' expectations of $66.5 million.
Earnings Performance: Adjusted earnings per share (EPS) were $0.07, in line with market expectations, but the strong revenue performance further boosted market confidence.
Third Quarter Guidance: Revenue is expected to be between $115 million and $125 million, far exceeding analysts' expectations of $86.04 million. Adjusted gross margin is expected to be between 61%-63%, demonstrating strong profitability.
Market Demand: Company President and CEO Bill Brennan stated that with the growth in AI deployment demand and deepening customer relationships, the company has reached a turning point in revenue growth, with demand even exceeding initial expectations.
After the earnings report was released, Credo's stock price surged over 30% in after-hours trading, reflecting strong market optimism about its performance and outlook. Investors are broadly positive on Credo's prospects, especially since its positioning in artificial intelligence infrastructure is considered more attractive than that of some giants (such as Nvidia). Company executives made clear that this growth is driven mainly by AI deployments and deepening customer relationships. With the widespread adoption of artificial intelligence, demand for fiber and Ethernet connectivity solutions has surged, and Credo has successfully seized this market opportunity. In addition, the company expects revenue to grow further in coming quarters, indicating that the continued expansion of AI-related infrastructure demand will provide stronger momentum for its business.
01 In-depth Analysis of the AEC (Active Electrical Cable) Industry and Supply Chain
1. What is AEC? Active Electrical Cable (AEC) is a solution for high-performance data transmission achieved by embedding electronic components (such as signal amplifiers and equalizers) in traditional copper cables.
Core Value: Provides high bandwidth and low latency transmission capabilities. Compared to fiber optics, AEC is more cost-effective for short-distance transmission and is more efficient and reliable. It is particularly suitable for short-distance high-density connections in data centers, AI clusters, and high-performance computing (HPC).
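To make the short-distance copper argument concrete, here is a minimal back-of-the-envelope sketch of how the embedded equalization in an AEC extends usable copper reach. The attenuation and loss-budget figures are illustrative assumptions, not Credo or IEEE specifications.

```python
# Illustrative sketch: why re-timing/equalization inside the cable (the
# "active" part of an AEC) extends usable copper reach.
# All numbers below are assumptions for illustration, not vendor specs.

ATTENUATION_DB_PER_M = 5.0   # assumed copper loss at the signaling Nyquist frequency
PASSIVE_BUDGET_DB = 16.0     # assumed loss a host SerDes can equalize on its own
ACTIVE_GAIN_DB = 20.0        # assumed extra loss compensated by the AEC's embedded DSP

def max_reach_m(budget_db: float, loss_db_per_m: float) -> float:
    """Maximum cable length for which total channel loss stays within the budget."""
    return budget_db / loss_db_per_m

passive_reach = max_reach_m(PASSIVE_BUDGET_DB, ATTENUATION_DB_PER_M)
active_reach = max_reach_m(PASSIVE_BUDGET_DB + ACTIVE_GAIN_DB, ATTENUATION_DB_PER_M)

print(f"Passive copper reach (assumed): ~{passive_reach:.1f} m")
print(f"AEC reach with embedded equalization (assumed): ~{active_reach:.1f} m")
```

Under these assumed numbers the active cable roughly doubles the usable reach, which is the basic reason AEC can cover in-rack and adjacent-rack runs that passive copper cannot.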
2. AEC Industry Chain Key Links
(1) Upstream: Raw Materials and Electronic Component Supply
- High-quality copper cables: The physical medium used for data transmission.
- Semiconductor chips: Including embedded signal amplifiers, equalizers, and SerDes chips.
- Connectors and packaging: Support for structured and standardized interfaces for cables.
Core Participants:
- Semiconductor Manufacturers: Such as Broadcom, Marvell, etc., providing chip technology support.
- Copper Cable Suppliers: Such as Prysmian Group, Belden, etc.
- Connector Manufacturers: Such as Amphenol, Molex, etc.
Competitive Barriers: High-purity copper materials and high-performance chip technology are key barriers, requiring highly specialized manufacturing capabilities and R&D investment.
(2) Midstream: AEC Product Design and Manufacturing
Main Functions: In short-distance transmission, embedded chips perform signal amplification, equalization, and error correction to ensure data integrity, and a plug-and-play modular design enables quick deployment by customers.
Core Participants:
- Credo Technology: Market leader with innovative technologies such as "ZeroFlap."
- Broadcom: Entering the market through mature SerDes technology.
- Spectra7 Microsystems: Focused on consumer-grade and data center-grade AEC solutions.
- Other Emerging Manufacturers: Including startups such as Silicon Valley Analog in China.
Competitive Barriers: AEC design requires highly optimized signal processing technology and the co-development of hardware and software; manufacturers must engage deeply in customers' design processes to deliver customized solutions for specific scenarios.
(3) Downstream: Applications and Deployment
Main Application Scenarios: Data centers, AI clusters, high-performance computing (HPC), etc.
- Supports large-scale parallel computing demands.
- Supports high-density AI training and inference scenarios, reducing network latency.
- Enables high-speed communication between GPUs.
- Short-distance interconnection from GPU to switch.
- Consumer electronics (less common): Such as ultra-high-definition displays and virtual reality devices.
Core Participants: Hyperscale cloud service providers (such as Amazon AWS, Microsoft Azure, Google Cloud), AI startups (such as xAI), and network equipment manufacturers (such as Cisco, Arista).
Competitive Barriers: Cost and performance: AEC must demonstrate significant cost and performance advantages over fiber optics in short-distance deployments. Network integration: Customers require high compatibility with their existing network architecture, and any performance or quality issues will affect adoption rates.
3. Core Functions Achieved by the AEC Industry
- Signal Integrity: Embedded signal amplification and equalization ensure that signals do not degrade during high-speed transmission.
- Low Power Consumption: Compared to fiber optic solutions, AEC consumes less power, making it suitable for high-density deployments (see the illustrative comparison after this list).
- High Reliability: For example, Credo's "ZeroFlap" technology avoids the link-flap (intermittent link drop) problem common in short-distance connections.
- Cost Advantage: Copper cables are low-cost and easy to maintain, offering significant economic benefits in short-distance transmission.
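As a rough illustration of the low-power point above, the following sketch compares rack-level link power under assumed per-port figures. The port count and wattages are placeholders for illustration, not measured vendor data, and real figures vary by speed, vendor, and form factor.

```python
# Rack-level power comparison with assumed per-port figures, purely to show
# why "low power" matters at high port density.

PORTS_PER_RACK = 512      # assumed number of high-speed links in a dense AI rack
AEC_W_PER_PORT = 6.0      # assumed active-copper power per cable end, watts
OPTICS_W_PER_PORT = 14.0  # assumed pluggable optical module power per end, watts

aec_total = PORTS_PER_RACK * AEC_W_PER_PORT
optics_total = PORTS_PER_RACK * OPTICS_W_PER_PORT

print(f"AEC:    {aec_total / 1000:.1f} kW of link power per rack")
print(f"Optics: {optics_total / 1000:.1f} kW of link power per rack")
print(f"Assumed saving: {(optics_total - aec_total) / 1000:.1f} kW per rack")
```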
4. Ranking of Each Link Based on Certainty of Competitive Barriers
The midstream link has the highest competitive barrier, with companies like Credo dominating the design and manufacturing end due to strong R&D capabilities and customer collaboration advantages.
The upstream link has a lower entry barrier due to the abundance of raw materials and standardized suppliers, but core chip manufacturing still requires highly specialized capabilities.
The competitive barriers in the downstream link mainly lie in customer stickiness and network integration needs, with successfully entering the supply chains of ultra-large customers being key.
5. Competitive Comparison of Core Participants in the AEC Industry
6. Future Development Trends and Opportunities
Technological Trends: High-speed (800G and above) and low-power designs are the future development focus. In short-distance networks, the mixed application of copper cables and optical fibers will continue, but the reliability advantages of AEC may prompt its replacement of optical fibers in more scenarios.
Market Opportunities: With the acceleration of AI clusters and high-density data center construction, the demand for the AEC market will significantly increase. The rise of emerging AI companies provides greater market penetration opportunities for AEC.
The AEC industry relies on the development of data centers and AI networks, and the technological accumulation and depth of customer cooperation of leading companies like Credo in the midstream link are key to their success. As market demand grows, AEC is expected to capture a larger market share in short-distance high-performance networks, while also needing to address the dual challenges of technological innovation and industry competition.
02 Several Typical High-Speed Interconnection Scenarios in Data Centers
High-speed interconnection is the foundation of data centers and AI clusters, primarily aimed at supporting high bandwidth, low latency, and high reliability data transmission. In data centers, AI clusters, and high-performance computing (HPC), interconnection scenarios can be categorized into GPU interconnection, GPU to switch interconnection, switch interconnection, and storage node interconnection. Below is an in-depth analysis of these typical interconnection scenarios and their main tools and participants.
1. GPU Interconnection (GPU-to-GPU)
- GPU interconnection mainly occurs within a single computing node or between nodes for collaborative GPU communication.
- In AI training and high-performance computing, GPUs need to share large-scale data and communicate at high speed, and interconnection performance directly affects computing efficiency.
Main Interconnection Tools
- NVLink (NVIDIA proprietary technology): A high-speed interconnection bus providing high bandwidth and low latency. The fourth-generation NVLink offers up to 900 GB/s of bandwidth for multi-GPU communication (see the transfer-time sketch after this list). Used for direct connections between NVIDIA GPUs, forming a computing unit within a single node.
- PCIe (Peripheral Component Interconnect Express): A high-speed interface standard that supports communication between GPUs and CPUs, as well as between GPUs. The latest generations, PCIe Gen 5/Gen 6, deliver roughly 128 GB/s over an x16 link, suitable for short-distance interconnection. Widely used in multi-vendor GPU solutions.
- InfiniBand: A network protocol and hardware standard providing ultra-low-latency GPU interconnection. Commonly used for GPU connections across nodes and widely adopted in AI training clusters.
- Credo's AEC: Provides short-distance, high-density GPU interconnection, supporting 800G and higher bandwidth. Low power consumption and high signal integrity make it suitable for GPU interconnection within nodes.
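As referenced above, here is a quick sketch of what the quoted bandwidth figures imply for a single large GPU-to-GPU transfer. The 10 GB transfer size and the single-link assumption are illustrative; the bandwidths are the headline figures cited in the list above.

```python
# How long does it take to move a 10 GB tensor between two GPUs over each link
# type, using the aggregate bandwidth figures quoted above?

TENSOR_GB = 10.0  # assumed transfer size, for illustration

links = {
    "NVLink (4th gen, ~900 GB/s)": 900.0,
    "PCIe x16 (~128 GB/s)": 128.0,
}

for name, bw_gb_s in links.items():
    ms = TENSOR_GB / bw_gb_s * 1000
    print(f"{name}: {ms:.1f} ms for a {TENSOR_GB:.0f} GB transfer")
```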
Main Participants
- NVIDIA: Provides GPU hardware (such as the H100) and proprietary interconnection technology (NVLink).
- AMD: Supports GPU interconnection through Infinity Fabric and PCIe.
- Credo: Provides high-performance cable solutions for multi-GPU interconnection scenarios.
- Mellanox (now a subsidiary of NVIDIA): Core supplier of InfiniBand network solutions.
2. GPU to Switch Interconnection (GPU-to-TOR Switch)
- The connection between GPU nodes within a rack and the top-of-rack (TOR) switch is key to AI cluster and data center communication.
- Such connections need to support high-density deployment, low power consumption, and low latency.
Main Interconnection Tools
- Active Electrical Cable (AEC): Typical example: Credo's 800G ZeroFlap AEC. Low power consumption and high reliability, better suited than traditional optical fiber to short-distance, high-density connections.
- DAC (Direct Attach Copper): Low cost, but limited by signal integrity, with a transmission distance typically not exceeding 5 meters. Used in power-sensitive, small-scale deployments.
- Optical Transceivers: Used for longer-distance interconnections between racks (>5 meters). Typical rates support 400G/800G (a minimal media-selection sketch follows this list).
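As noted above, here is a minimal media-selection sketch based on the reach guidance in this section. The thresholds are illustrative assumptions; in practice signal integrity, power, and port density also push deployments from DAC toward AEC at the same distance.

```python
# Minimal distance-based media selection. Thresholds are illustrative
# assumptions, loosely following this section's guidance (DAC up to roughly
# 5 m, optics beyond that); real choices also weigh signal integrity and power.

def pick_medium(distance_m: float) -> str:
    """Pick a link medium for a GPU-to-TOR run of the given length."""
    if distance_m <= 3.0:   # assumed passive-copper (DAC) comfort zone at 800G
        return "DAC (passive copper)"
    if distance_m <= 7.0:   # assumed active-copper (AEC) reach
        return "AEC (active electrical cable)"
    return "Optical transceiver + fiber"

for d in (1.0, 2.5, 5.0, 10.0, 30.0):
    print(f"{d:>5.1f} m -> {pick_medium(d)}")
```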
Key Participants
- Credo: Provides high-reliability AEC solutions for short-distance GPU-to-switch interconnection.
- Broadcom: Major supplier of optical modules and Ethernet switch chips.
- Arista Networks: Core supplier of TOR switches, offering high-density Ethernet solutions.
- Cisco: Top-tier network equipment provider, offering comprehensive support for GPU-to-TOR connections.
- Amphenol: One of the main suppliers of DAC.
3. Switch-to-Switch Interconnection
- Switch-to-switch interconnection occurs between TOR and Spine switches or between Spine switches.
- The Spine-Leaf architecture is the core of modern data centers, supporting large-scale horizontal expansion (a simple sizing calculation follows these points).
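As referenced above, a small sizing calculation for a two-tier leaf-spine fabric: every leaf (TOR) connects to every spine, so the inter-switch link count is simply leaves times spines. The switch counts below are assumed; the link speed follows the rates discussed in this section.

```python
# Leaf-spine sizing arithmetic with assumed switch counts.

LEAVES = 64        # assumed number of TOR/leaf switches
SPINES = 16        # assumed number of spine switches
UPLINK_GBPS = 800  # per-link speed, per the rates discussed in this section

links = LEAVES * SPINES                      # one uplink from each leaf to each spine
aggregate_tbps = links * UPLINK_GBPS / 1000  # total leaf-to-spine capacity

print(f"Leaf-to-spine links: {links}")
print(f"Aggregate leaf-to-spine capacity: {aggregate_tbps:.0f} Tbps")
```

The quadratic growth of link count with fabric size is why the cost and power of each individual link (optics vs. DAC vs. AEC) matters so much at scale.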
Key Interconnection Tools
- Optical Modules (fiber connections): Used for medium- to long-distance (10 meters to several kilometers) switch interconnections. Mainstream rates support 400G, 800G, and 1.6Tbps.
- DAC: Used for short-distance interconnections within or between racks (typically <5 meters). Low cost but limited by distance.
- AEC: Gradually replacing DAC and fiber in short-distance applications, providing higher signal integrity and lower power consumption.
Key Participants
- Cisco, Arista Networks, Juniper Networks: Provide core switch solutions supporting various interconnection methods.
- Credo: Optimizes short-distance switch interconnections with high-bandwidth AEC solutions.
- Broadcom, Marvell: Supply high-performance optical modules and switch chips.
4. GPU-to-Storage Interconnection
- GPUs require fast access to distributed storage, especially in AI training, where large-scale datasets must be processed in real time.
- Storage nodes and GPUs achieve rapid data transfer through high-speed interconnections.
Key Interconnection Tools
- NVMe-oF (NVMe over Fabrics): Provides high-performance storage access over Ethernet or InfiniBand, supporting low-latency, high-bandwidth transmission (see the rough transfer-time estimate after this list).
- InfiniBand: Provides ultra-low-latency interconnection from GPU to storage; widely used in high-performance computing and AI clusters.
- Optical Modules: Used for long-distance connections, supporting high-speed access to distributed and cloud storage.
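As referenced above, a rough estimate of how long it takes to stream a training dataset from remote storage over a single high-speed link. The dataset size and transport-efficiency factor are assumptions for illustration.

```python
# Rough storage-access arithmetic: time to stream a dataset from a remote
# NVMe-oF target over one network link. All inputs are illustrative.

DATASET_TB = 50.0   # assumed dataset size
LINK_GBPS = 400.0   # 400G Ethernet/InfiniBand link, per the rates in this section
EFFICIENCY = 0.85   # assumed protocol/transport efficiency

effective_gb_per_s = LINK_GBPS / 8 * EFFICIENCY   # gigabytes per second
seconds = DATASET_TB * 1000 / effective_gb_per_s

print(f"~{seconds / 60:.1f} minutes to stream {DATASET_TB:.0f} TB over one {LINK_GBPS:.0f}G link")
```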
Key Participants
- NetApp, Pure Storage: Providers of distributed storage systems.
- NVIDIA: Optimizes GPU-to-storage connections through InfiniBand.
- Credo, Broadcom: Support data transmission needs with high-speed interconnection solutions.
Core Functions and Technology Trends of Interconnection Architecture
Core Functions
- High Bandwidth: Supports the extremely high data throughput requirements of AI and HPC.
- Low Latency: Optimizes the efficiency of real-time computing and training.
- High Reliability: Ensures stability during long-term operation, reducing link fluctuations or breaks.
- Cost-Effectiveness: Reduces the cost and power consumption of interconnection devices in high-density deployments.
Technology Trends
- Higher Speeds: Progressing from 800G to 1.6Tbps (see the lane arithmetic after this list).
- Low Power Design: Power optimization becomes key, especially in high-density short-distance connections.
- Modularity and Compatibility: Supports interconnection of different devices and protocols, enhancing system flexibility.
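As referenced in the "Higher Speeds" item, the lane arithmetic behind the 800G-to-1.6T progression is simply lanes multiplied by per-lane rate. The 800G breakdown (8 x 100G electrical lanes) reflects common practice today; the 1.6T options shown are the configurations most often discussed, listed here as assumptions rather than a fixed roadmap.

```python
# Port speed = number of lanes x per-lane rate.

configs = {
    "800G": [(8, 100)],            # 8 electrical lanes at 100G each
    "1.6T": [(16, 100), (8, 200)], # commonly discussed lane options (assumed)
}

for port, options in configs.items():
    for lanes, lane_gbps in options:
        print(f"{port}: {lanes} lanes x {lane_gbps}G = {lanes * lane_gbps}G")
```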
- Typical Interconnection Scenario Analysis indicates that leading companies like Credo play a significant role in GPU interconnection and GPU-to-switch connections, especially by providing optimal solutions for short-distance interconnection through AEC products.
- Key Participants range from hardware suppliers (such as Broadcom, Credo) to system integrators (such as NVIDIA, Cisco), forming an efficiently coordinated industry chain.
- Future Directions will focus on breakthroughs in higher speeds and lower-power technologies, as well as more efficient distributed interconnection architectures, driving the continuous upgrade of data centers and AI clusters.
Source: Bayesian Beauty, Original Title: "A Look at Several Typical High-Speed Interconnection Scenarios from Credo"
Risk Warning and Disclaimer
The market has risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment goals, financial situation, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article are suitable for their specific circumstances. Investing on this basis is at one's own risk.