Jensen Huang previews "unprecedented" new chip products; the next-generation Feynman architecture may become the focus

Wallstreetcn
2026.02.19 07:32

Jensen Huang previewed the release of a "world's first" new chip product at this year's GTC conference. Analysts believe the new product may be a derivative of the Rubin series or a more revolutionary Feynman-architecture chip. The market expects the Feynman architecture to be deeply optimized for inference workloads.

NVIDIA CEO Jensen Huang revealed in an interview with media outlet Wccftech that the company will launch a "completely new chip product that the world has never seen" at this year's GTC conference. The statement has sparked significant market interest in NVIDIA's next-generation product roadmap, with analysts suggesting the new products may be derivatives of the Rubin series or more revolutionary Feynman-architecture chips.

Jensen Huang stated:

"We have prepared several completely new chips that the world has never seen. This is no easy task, as all technologies are approaching physical limits."

Considering that NVIDIA just showcased the fully production-ready Vera Rubin AI series at CES 2026, including six newly designed chips, the market expects this GTC to bring even more cutting-edge technology. For investors closely watching the AI infrastructure race, this means NVIDIA may once again set new industry technology standards.

The NVIDIA GTC keynote speech will be held on March 15 in San Jose, California, where the next phase of the AI infrastructure race will become a core topic.

New Products Point to Two Major Directions

According to Wccftech, although Jensen Huang did not specify the details of the products, market analysis points to two main directions based on the description of "never seen before."

The first possibility is a derivative chip from the Rubin series, such as the previously leaked Rubin CPX. NVIDIA recently launched the Vera Rubin AI series at CES 2026; its six chips, including the Vera CPU and Rubin GPU, have entered full production.

The second possibility is more disruptive: NVIDIA may unveil the next-generation Feynman-architecture chip ahead of schedule. Feynman is regarded in the industry as a "revolutionary" product, potentially adopting more extensive SRAM integration and even incorporating an LPU (Language Processing Unit) via 3D stacking, although this technical route has not been officially confirmed.

Shifting Computing Demands Drive Product Evolution

NVIDIA currently faces a market environment in which computing demands are shifting rapidly. Jensen Huang's remarks reflect the company's clear judgment on the direction of technological evolution.

During the Hopper and Blackwell eras, pre-training was the primary demand; however, with the launch of Grace Blackwell Ultra and Vera Rubin, inference capability has become core, with latency and memory bandwidth becoming major bottlenecks. This shift in demand directly influences NVIDIA's product design direction.

The market expects the Feynman architecture to be deeply optimized for inference scenarios. NVIDIA is exploring ways to break through existing performance bottlenecks via larger-scale SRAM integration and possible LPU integration, which would have a significant impact on cloud service providers and enterprise customers that rely on AI inference capabilities.

Additionally, Jensen Huang emphasized the importance of broader partnerships and investment strategies in the interview. He stated, "NVIDIA has excellent partners and outstanding startups, and we are investing across the entire AI stack. AI is not just a model; it is a complete industry encompassing energy, semiconductors, data centers, cloud, and applications built on top of it." This statement shows that NVIDIA is transforming from a pure chip supplier into an AI ecosystem builder. Through acquisitions and partnerships, the company is trying to maintain its leading position in the AI infrastructure race.