Nvidia’s new business set to grow its share price

Nvidia’s new businesses: the new revenue streams that will decide its future share price

This article covers Nvidia’s layout beyond semiconductors: the new businesses, largely unknown to the public, that lie outside its well-known current fields. These will be the company’s new revenue sources.

Why this matters: the new businesses discussed in this article will determine the boundaries of Nvidia’s future market-value growth. What new records its stock price can set depends entirely on these new businesses, not on its existing lines of business.

What are Nvidia’s existing businesses?

The current business is mainly divided into five parts:

  • Data center: including artificial intelligence, cloud computing, and big data; this is the focus of current public attention, and the reason Nvidia has risen to the top of the world’s listed companies by market value.
  • Automotive: including automotive chips, in-vehicle systems, self-driving systems, etc.
  • Gaming and graphics cards: including online gaming, e-sports, etc.; this is the original product line the company is known for today, and most of its other product lines derive from it.
  • Professional graphics: mainly the Quadro product series, high-end graphics products designed for professionals rather than the general public.
  • Others: including game consoles, embedded systems, OEM business, cryptocurrency-specific chips, etc.

Get to know Nvidia first

My blog posts

If you are not yet familiar with these Nvidia businesses, I recommend reading my related articles below before continuing:

Nvidia in my two books

I have discussed Nvidia (ticker: NVDA) in my two recent books:

In my book “The Rules of Super Growth Stocks Investing“:

  • Section 3-3, pp. 190-191
  • Section 3-7

In my book “The Rules of 10 Baggers“:

  • Section 1-1, pp. 23-24
  • Section 5-6, pp. 242-246, an entire subsection dedicated to this company
  • Section 6-2, pp. 282-285
  • Section 7-1, pp. 348-285

What is Nvidia’s pipeline of new businesses?

Note that each of the businesses below sits at the heart of a major current or future technology trend. Most importantly, they all address huge markets; but they also face strong opponents.

Venture capital

Nvidia’s Venture Capital Introduction

Since Nvidia came to dominate the artificial-intelligence chip industry, the company has changed its earlier strategy and begun actively participating in venture capital, establishing a dedicated venture arm, NVentures, to enter the market. It has invested in or acquired many related start-ups, many times more than in all the years since its founding.

Nvidia invested in nearly 30 startups in 2023, more than double its 2022 count. Since the generative-AI boom broke out in 2023, Nvidia has participated in a total of 74 financings, with a cumulative investment of more than US$10.9 billion. Nvidia provides funds to encourage these companies to purchase and use its graphics processing units (GPUs), consolidating its leadership in AI semiconductors.

Operational highlights

The key point: apart from ARM, Figure AI, and CoreWeave, which are covered in this article, almost all the companies Nvidia has invested in or acquired either do business with Nvidia or are related to artificial intelligence. They include biotechnology companies, healthcare companies, financial-services companies, AI developers, large-language-model developers, self-driving-car companies, and robotics companies.

Current value

Nvidia’s financial reports show that as of January 2024, the value of Nvidia’s investments was approximately US$1.55 billion, up from only US$300 million a year earlier.

Software and Cloud computing

How big is the market for software?

Remember: without software, it would be difficult for Nvidia’s stock price to skyrocket.

The company’s chief financial officer, Colette Kress, once revealed that the software business currently generates annual revenue in the hundreds of millions of dollars. Although tiny compared with the chip business, it positions Nvidia well for long-term growth: by leasing computing power from cloud service providers, Nvidia can sell its own AI software on top of it.

NVIDIA’s own AI models

In addition to being the leader in AI chips, Nvidia has been launching various AI models for different industries and purposes. Besides the Nemotron model, which has been well received since launch and has beaten GPT-4o and Claude 3.5 on some benchmarks, Nvidia released Fugatto, a new AI model that generates music and audio and can both modify existing sounds and create new ones. It even launched a Hindi AI model for India. Nor has it forgotten its core business: it launched ChipNeMo, a chip-design AI model used by chip designers.

In January 2025, Nvidia launched the Nemotron series of large language models, built on Llama, to help developers build and deploy AI agents for a range of applications, including customer support, fraud detection, and supply-chain and inventory-management optimization.

Also in January 2025, Nvidia introduced World Foundation Models (WFMs), developed specifically for the Cosmos platform. They accept text, image, or video prompts and output virtual world states as video, predicting and generating “physically aware” footage for the particular needs of autonomous driving and robotics applications. The models can be fine-tuned for specific applications and are released openly for developers to use.

“Nvidia is launching the first wave of Cosmos WFM for physics-based simulation and synthetic data generation,” Nvidia said. “Researchers and developers, regardless of company size, can build and test the software under Nvidia’s permissive open model that allows commercial use.” The Cosmos model is freely available for use.

Cosmos Platform

The Cosmos platform, built on these world foundation models, lets developers generate large amounts of realistic, physically correct synthetic data to train and evaluate existing models. Developers can also fine-tune custom Cosmos models, for example to simulate industrial environments such as warehouses and factories, or various road-driving environments. NVIDIA provides the Cosmos models under an open model license to accelerate development in the robotics and self-driving-car community.

AI Agent Platform AI Blueprints

AI Blueprints is NVIDIA’s agentic AI application platform, built on NVIDIA AI Enterprise and NVIDIA’s Omniverse platform, allowing developers to create and launch their own custom AI agents.

AI agents are specialized AI programs that can perform multi-step tasks across different applications. Companies such as Google and Microsoft are betting big on AI agents as the next big shift in enterprise and consumer AI because they can automate more mundane tasks, such as importing information from an email into a spreadsheet.
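To make the idea concrete, here is a minimal Python sketch of the kind of mundane task an agent automates: pulling structured fields out of an email and appending them to a spreadsheet-style CSV. The email text, field names, and regex patterns are all invented for illustration; a real AI agent would plan and execute such steps itself rather than follow hard-coded rules.

```python
import csv
import io
import re

def extract_order(email_body: str) -> dict:
    """Pull order fields out of a plain-text email using simple patterns."""
    patterns = {
        "order_id": r"Order #(\w+)",
        "amount": r"Total:\s*\$([\d.]+)",
        "customer": r"From:\s*(.+)",
    }
    fields = {}
    for key, pattern in patterns.items():
        m = re.search(pattern, email_body)
        fields[key] = m.group(1).strip() if m else ""
    return fields

email = """From: Ada Lovelace
Subject: Your receipt
Order #A1b2C3
Total: $42.50
"""

row = extract_order(email)

# Append the extracted row to an in-memory CSV "spreadsheet".
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["order_id", "amount", "customer"])
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

The point of agents is that the extraction logic above is replaced by a model that reads the email, decides what the fields are, and calls the spreadsheet tool on its own.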

Neural Inference Microservices (NIM)

On March 19, 2024, Nvidia announced a series of new generative-AI services called NIM (Neural Inference Microservices), which let enterprises quickly build and deploy their own AI-assistant applications on in-house Nvidia GPU infrastructure.

These “NVIDIA NIM” microservices are pre-trained AI models that, after optimization and tuning, can run on the hundreds of millions of CUDA-enabled GPUs in clouds, data centers, workstations, and PCs; deployment takes minutes rather than the weeks traditional deployments often require.

Developers simply call standardized APIs and combine NIM with their own company data to create highly customized, safe, and controllable generative-AI applications.
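As a concrete illustration of what “calling a standardized API” looks like, here is a minimal Python sketch. It assumes a NIM container running locally and exposing an OpenAI-compatible chat endpoint; the URL, port, and model name are illustrative assumptions, not details from the article.

```python
import json
from urllib import request

# Assumed default endpoint of a locally running NIM container.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, user_prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload,
    the style of standardized API that NIM exposes."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "max_tokens": 128,
    }

payload = build_chat_request("meta/llama3-8b-instruct",
                             "Summarize our Q3 sales data.")

# Prepare the HTTP request (sending it requires a live NIM endpoint).
req = request.Request(
    NIM_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)  # uncomment against a running container
# print(json.load(response)["choices"][0]["message"]["content"])
```

Because the API shape is standardized, the same client code works whether the model runs on a workstation GPU, in a data center, or in the cloud.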

Nvidia AI Enterprise

In March 2021, Nvidia released Nvidia AI Enterprise, a comprehensive software suite of enterprise-grade artificial-intelligence tools and frameworks that runs virtualized on VMware’s vSphere.

It virtualizes artificial-intelligence workloads: the product provides enterprises with the complete software environment needed to develop a wide range of AI solutions, such as advanced diagnostics in healthcare, smart factories in manufacturing, and fraud detection in financial services.

Through Nvidia AI Enterprise, the hundreds of thousands of enterprises that use vSphere for compute virtualization can immediately use the same tools they use to manage large data centers and hybrid-cloud environments to support artificial intelligence, powering high-end AI at enterprise scale.

It provides compatibility for a variety of accelerated CUDA applications, AI frameworks, pre-trained models, and software development kits running in the hybrid cloud, with full GPU virtualization to support training large-scale deep-learning models.

For CUDA, please see my post “How does CUDA strengthen the moat of Nvidia’s monopoly?”

DGX Cloud computing power rental

In June 2023, Nvidia began leasing its self-developed artificial-intelligence solutions to customers eager to use its hardware and software. This cloud computing service, called DGX Cloud, includes its high-performance AI hardware, including the H100 and A100 (currently in short supply), and will even include the computing power of future, yet-to-be-released GPUs.

In May 2024, concerned that customers were not building new data centers fast enough to house the AI chips they had ordered, and were therefore delaying chip purchases from Nvidia, the company announced an investment plan of up to US$9 billion in an attempt to take a share of the cloud-services market; it mainly targets large cloud providers such as Amazon, Microsoft, Google, and Oracle.

The investment will help it gain market share for its own cloud service, DGX Cloud. Chief Financial Officer Colette Kress said investments in cloud computing will help support DGX Cloud.

As of the end of 2023, Nvidia’s cloud business only contributed about US$1 billion, which is about 1% of the company’s total revenue. In comparison, Nvidia’s GPUs sold for US$47.5 billion in 2023!

Invest in CoreWeave

Besides launching DGX Cloud to rent Nvidia computing power to customers, Nvidia has also invested heavily in CoreWeave. The main reason is that CoreWeave owns large numbers of Nvidia AI chips: its core business is renting out the computing power of its huge data centers, which are built mostly from Nvidia AI chips.

Software business

In mid-June, HPE announced a cooperation plan called “NVIDIA AI Computing by HPE”: it will jointly develop enterprise-grade AI solutions with Nvidia, combining NVIDIA AI computing software with HPE private-cloud services to help enterprises accelerate the adoption of generative AI on their own private clouds. This is the most typical case of Nvidia expanding its software business; note that Nvidia provides the software and sells it to large enterprises such as Hewlett Packard Enterprise.

High-speed Ethernet

Acquisition of Mellanox

Nvidia not only produces data-center GPUs for artificial intelligence (AI); another key to its success is InfiniBand, a technology it gained when it acquired Mellanox in early 2020.

How important is Mellanox, the semiconductor company Nvidia acquired? About half of the world’s data centers and supercomputers use Mellanox technology.

In addition to InfiniBand networking technology that helps simplify the adoption of the latest and greatest AI hardware, Nvidia also acquired some Ethernet technology from Mellanox.

NVLINK

NVLink is a high-speed GPU interconnect protocol developed by Nvidia to connect multiple GPUs, or to connect GPUs to other devices (such as CPUs and memory). It allows point-to-point communication between GPUs with higher bandwidth and lower latency than the traditional PCIe bus, providing higher performance and efficiency for multi-GPU systems.

Despite its far higher bandwidth, NVLink also uses much less energy than PCIe per bit transferred. Together, NVLink and NVSwitch bring higher communication bandwidth and lower latency to GPU clusters and deep-learning systems, improving overall system performance and efficiency.
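To make the bandwidth gap concrete, here is a back-of-envelope Python sketch. The peak figures (900 GB/s for fourth-generation NVLink on a Hopper GPU, roughly 128 GB/s for a PCIe Gen 5 x16 slot in both directions combined) are published headline rates used purely for illustration, and the 40 GB gradient volume is an invented workload.

```python
# Back-of-envelope comparison of interconnect transfer time for a
# communication-heavy multi-GPU training step. Peak rates, illustrative only:
NVLINK4_GBPS = 900     # total NVLink bandwidth per Hopper GPU (GB/s)
PCIE5_X16_GBPS = 128   # PCIe Gen 5 x16, both directions combined (GB/s)

gradients_gb = 40      # hypothetical data volume exchanged per training step

nvlink_s = gradients_gb / NVLINK4_GBPS
pcie_s = gradients_gb / PCIE5_X16_GBPS

print(f"NVLink transfer: {nvlink_s * 1000:.1f} ms per step")
print(f"PCIe transfer:   {pcie_s * 1000:.1f} ms per step")
print(f"Speedup: {pcie_s / nvlink_s:.1f}x")
```

At these rates the same exchange finishes roughly seven times faster over NVLink, which is why interconnect bandwidth, not just raw GPU speed, dictates multi-GPU training throughput.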

NVSWITCH

NVSwitch is a technology released by Nvidia in 2018, aiming to solve the full connection problem between multiple GPUs in a single server. NVSwitch allows up to 16 GPUs in a single server node to be fully interconnected, meaning each GPU can communicate directly with other GPUs without going through a CPU or other intermediary.

Expanding into high-speed Ethernet

In April 2024, market reports said Nvidia would also start producing its own Ethernet switches, mainly because it holds the Mellanox trump card. Nvidia’s Spectrum-X Ethernet platform hopes to build on InfiniBand’s early success to provide better transmission efficiency for its data-center customers.

The technology possessed by Mellanox is the basis for Nvidia to expand to Ethernet, and it is also the reason why potential opponents in the market are uneasy.

Competitors

Competitors in this field fall into two main camps. The first consists of vendors of enterprise communications systems such as routers and switches: Cisco, Arista Networks, and Huawei.

The second and stronger opponent is Avago (now Broadcom), which owns a great deal of wired and wireless communications technology and, through its acquisition of Broadcom, gained a huge portfolio of patents and customers. Broadcom has decades of experience in Ethernet switching technology, and the top eight data-center customers all use its solutions.

Robots

How big is the market?

Digitimes believes that with technological breakthroughs and cost reductions in AI, sensors, materials, etc., coupled with the impact of labor shortages, the global humanoid robot industry is accelerating development, and humanoid robots will usher in explosive growth in the next five to ten years. It is estimated that it will create an output value of nearly US$26 billion by 2035.

NVIDIA’s view

Jen-Hsun Huang recently said in an interview that robotics technology will achieve major breakthroughs in the next 2-3 years, predicting that humanoid robots will become as common as today’s cars. He said that humanoid robots will be ubiquitous in 100 years and may become the most prolific machine system in human history.

Company’s solutions

At the GTC conference in March 2024, Jensen Huang demonstrated a variety of humanoid robots and announced Project GR00T, which pairs the new Jetson Thor computing platform with the latest Blackwell AI-chip architecture for professional robotics. The Isaac Manipulator foundation model for robotic arms and the Isaac Perceptor AI visual-recognition system for autonomous mobile robots (AMRs) improve robots’ understanding of human behavior, enabling them to perform complex tasks and interact with people more naturally.

Nvidia says Jetson Thor, a small computer designed for humanoid robots, has entered “full production.” Announced in March 2024 and built on Nvidia’s Blackwell architecture, it delivers 800 trillion 8-bit floating-point operations per second of AI performance and can run the multimodal AI models that humanoid robots depend on.

Humanoid robot

Robots are a combination of AI, sensors, and actuators. Nvidia has established a research department, GEAR, dedicated to intelligent applications across multiple modalities and scenarios. It will accelerate R&D on intelligent agents that can adapt to varied environments, possess a wide range of skills, and operate effectively in both virtual and real worlds, promoting the continued progress of the entire AI and robotics industry.

The company is going deeper into robotics with the launch of Project GR00T, a new foundation model for humanoid robots. A foundation model is an AI system trained on large amounts of data that can be used for tasks ranging from generating sentences to generating video and images. According to Nvidia, GR00T will help humanoid robots understand natural language and imitate movements by observing human behavior, quickly learning coordination, dexterity, and other skills in order to navigate, adapt to, and interact with the real world.

Invest in Figure AI

Jeff Bezos, Nvidia, and other major technology players are investing in humanoid-robot startup Figure AI in order to find new applications for artificial intelligence.

Isaac platform

Nvidia released the latest version of NVIDIA Isaac Sim at the annual Computer Vision and Pattern Recognition conference (CVPR). Zhuji Dynamics also announced that it will use the upgraded Isaac platform to further improve reinforcement learning and build the generalization capabilities of general-purpose robots.

Factory robot

Huang noted that manufacturers like Foxconn are using these tools to plan and operate factories more efficiently. He showed how Foxconn is using Nvidia’s Omniverse, Isaac and Metropolis to create digital twins, combining visual AI and robotics development tools to enhance robotic facilities.

Non-GPU processors

Custom chips (ASICs)

In February 2024, it was reported that Nvidia had established a new custom-chip (ASIC) business unit to focus on providing ASIC services to large cloud-computing companies such as Google, Amazon, Microsoft, and Meta. The move is not surprising, for two reasons: to capture new markets and to protect itself from being displaced. These large customers have already developed their own ASICs, such as Google’s TPU and Microsoft’s Maia AI accelerator and Cobalt CPU for Azure, so Nvidia must act to avoid losing their orders.

Nvidia’s main advantages are its high-speed interconnect NVLink, CUDA, and the Omniverse software suite, but this move may conflict with customers’ interests. For manufacturers, Nvidia’s GPUs are expensive, and no one wants to be locked in to Nvidia; moreover, an ASIC customized to one’s own needs performs much better for that workload. Nvidia must therefore weigh whether the investment is worthwhile.

For a dedicated discussion of ASICs, please see my post “ASIC market is getting bigger, and related listed companies in the US and Taiwan”.

Data center server rack

In March 2024, Nvidia designed its own server rack for the first time, launching the GB200. This is equivalent to launching a data-center product that competes directly with Nvidia’s OEM partner manufacturers, such as Supermicro, Dell, and HPE.

It is just like Microsoft developing its own Surface hardware line: by producing its own Windows machines and selling them to consumers, Microsoft competes head-on in the market with its former OEM partners.

This is one of several moves aimed at leveraging Nvidia’s strong position to open up new revenue and profit streams, and it could make it harder for its chip customers to switch to alternatives. By designing its own GB200 server rack, Nvidia protects its vested interests, locks in customers, and raises their switching costs.

The GB200 sells for US$70,000, and a full server cabinet costs over one million US dollars. Nvidia will use differentiation strategies on the GB200 to encourage customers to buy.

On the bright side, Nvidia can control the quality of its server products end to end and build a better reputation for its chips in servers; it can focus on higher-end data-center servers and offer better choices to customers with high-end needs. The biggest benefit, of course, is that it opens up a larger revenue source for Nvidia.

CPU

Graphics-chip maker Nvidia has released the company’s first server CPU, Grace, designed on ARM’s instruction set (note that it is a CPU, not a GPU). The chip is designed to work closely with Nvidia graphics chips to handle new computing problems involving models with enormous parameter counts, with better performance.

A system using Nvidia Grace chips will compute ten times faster than one combining Nvidia GPUs with Intel CPUs. The product launched in early 2023, with the Swiss National Supercomputing Centre and the U.S. Department of Energy’s Los Alamos National Laboratory as its first users.

NVIDIA’s latest data-center server product, the GB200, in fact pairs NVIDIA’s Grace CPU with its Blackwell-series GPUs; its predecessor, the GH200, pairs the Grace CPU with the H200 GPU.

Personal Computer

Latest developments

Supply-chain news confirms that after several failed attempts over the past decade, Nvidia will re-enter the PC market in 2025 with a personal-computer platform that integrates an ARM-architecture CPU with its own GPU, targeting both the consumer and commercial PC markets. A consumer-grade platform is due in September 2025, followed by a commercial platform in March 2026.

According to Tom’s Hardware, Nvidia’s first ARM-based SoC for Windows devices will launch in 2025. Nvidia will launch two chips in this product series: the N1X at the end of 2025, and the N1 in 2026.

The report pointed out that Nvidia expects to ship 3 million N1X chips in the fourth quarter of 2025 and 13 million N1 chips in 2026, and Nvidia will cooperate with Taiwan’s IC design giant MediaTek to produce these chips.

This is not Nvidia’s first foray into the PC-processor market. Although its GeForce GPUs have long been standard equipment in high-end gaming PCs, and were Nvidia’s largest revenue source before the AI era, its many attempts at PC processors have all ended in failure.

Multiple failed attempts at Windows-on-Arm PCs

As early as 2011, Microsoft launched the WOA (Windows on Arm) development plan, working with Qualcomm, Nvidia, and Texas Instruments to create Windows RT, a version of Windows for the Arm architecture. In 2012, Microsoft launched its first Surface RT 2-in-1 device, equipped with Nvidia’s Tegra 3 processor, which was also Nvidia’s first quad-core processor.

However, the device was soon written off by Microsoft: incompatibility with existing x86 applications, a high price, frequent system lag, and many other defects led to a US$900 million loss. The 2013 follow-up, the Surface 2, carried Nvidia’s Tegra 4 processor, but the market response was still lackluster.

Join Google’s Chromebooks

Nvidia also tried putting Tegra in other notebooks, such as Google Chromebooks: in 2014, Acer launched the first Tegra-powered Chromebook. But that market remained dominated by Intel and AMD, and Tegra never gained much of a presence.

AI PC

Nvidia tried to produce non-x86 Windows PCs in its early years and failed miserably. But today’s Nvidia is no longer the same company.

At the end of May 2024, there were rumors in the industry that Nvidia was preparing to launch a chip that combines the next generation Arm core with the Blackwell GPU architecture, which may intensify competition in the Windows on Arm field.

AI Supercomputer PROJECT DIGITS

At CES 2025, Nvidia unveiled a desktop computer called Project DIGITS: a machine for artificial-intelligence development, priced at US$3,000.

The machine uses Nvidia’s latest “Blackwell” AI chip, but it also contains a new central processor (CPU) that Nvidia created together with MediaTek.

Unlike conventional desktops, this product is aimed mainly at AI researchers, data scientists, and students. Nvidia calls Project DIGITS a personal AI supercomputer. The computer carries the new Nvidia GB10 Grace Blackwell superchip, delivering up to 1 PFLOPS of AI performance for prototyping, fine-tuning, and running large AI models.

Users can develop and run model inference using their own desktop systems and deploy models on accelerated cloud or data center infrastructure.

Metaverse

How big is the market?

Nvidia had estimated that selling software to companies developing AI or virtual reality (VR) applications was a potential $300 billion revenue opportunity.

Omniverse Platform

Long before Zuckerberg’s Meta announced that the company was transforming and investing in the metaverse, Nvidia launched the Omniverse platform.

Nvidia’s Omniverse platform is a product platform that maps directly and completely onto the metaverse. With industry-leading products in graphics computing (GPUs), artificial intelligence (AI), and data centers, Nvidia is certainly qualified to claim the top seat in this field.

Digital twin

Nvidia has launched a universal AI data-center reference design to strengthen data-center infrastructure and promote innovation in edge AI and digital-twin technologies.

Factory automation

All factories will become robotic factories: factories that orchestrate robots, which in turn build products that are themselves robotic. Jensen Huang promotes digital-twin technology in a virtual world Nvidia calls Omniverse. To demonstrate its potential, Huang showed off Earth-2, a digital twin of the Earth, and how it can help model more complex weather patterns and other demanding tasks.

The factory-automation system Quanta helped Giant Bicycles build uses Nvidia’s Omniverse platform for smart warehousing, integrating many robotic arms and software systems. Hon Hai’s Mexico factory works with Siemens to build smart factories on the Omniverse platform, improving manufacturing efficiency while saving cost and energy.

For the robotics industry, please see my posts below:

Sovereign artificial intelligence

A whole new field

On May 22, 2024, Nvidia’s CFO told investors on a post-earnings conference call that the company is creating a new business worth billions of dollars: rapidly expanding sovereign artificial intelligence. This new business, built almost from scratch, greatly excited investors who had thought Nvidia had little room left to grow.

What is Sovereign AI?

As countries around the world invest in sovereign AI, Nvidia’s data-center revenue sources are becoming more diversified. Sovereign AI refers to a country’s ability to produce AI using its own infrastructure, data, workforce, and business networks.

Many countries are building domestic computing capacity through various models. Some cooperate with state-owned telecommunications providers or utilities to build and operate sovereign AI clouds, while others sponsor local cloud partners to provide shared AI computing platforms for public- and private-sector use.

How big is the market?

Starting from zero revenue in 2023, Nvidia’s CFO expects sovereign-AI revenue this year to reach anywhere from hundreds of millions of US dollars up to US$9 billion. “The importance of AI has attracted the attention of every country.”

At the second-quarter earnings call on August 28, 2024, Nvidia’s CFO went further: countries adopting their own AI applications and models would bring Nvidia roughly tens of billions of dollars in revenue in the fiscal year ending January 2025.

Several typical cases

One example is Swiss government-controlled Swisscom: the group recently announced that its Italian subsidiary will build Italy’s first and most powerful Nvidia supercomputer in order to develop the first large language model trained natively on Italian.

Singapore is a big investor in sovereign AI. Its supercomputing center is being upgraded with Nvidia’s latest chips, and the state-owned telecom Singtel is working with Nvidia to expand data centers across Southeast Asia. Singapore is also promoting a large-model training project for Southeast Asian languages built on Nvidia hardware.

French President Emmanuel Macron has called on Europe to establish partnerships and purchase more GPUs, with the goal of raising Europe’s share of global GPUs from the current 3% to 20% by 2030 or 2035.

Kenya signed an agreement with G42, an artificial intelligence company backed by Microsoft and the United Arab Emirates, to build a US$1 billion data center in the country and train a model in Swahili and English.

Conclusion

According to reports from the Wall Street Journal and other media in early September 2024, Nvidia CEO Jensen Huang is personally promoting the company’s new strategy.

I am the author of the original text; the essence of this article was originally published in Smart Magazine, August 2024 issue.

Nvidia's new business
credit: Ideogram

Related articles

Disclaimer

  • The content of this site is the author’s personal opinions and is for reference only. I am not responsible for the correctness, opinions, and immediacy of the content and information of the article. Readers must make their own judgments.
  • I shall not be liable for any direct or indirect losses, damages, or other legal liability arising from readers’ direct or indirect reliance on, or reference to, the information on this site, or from any investment decisions based on it.