The reasons for Nvidia’s monopoly and the challenges it faces


Nvidia discussion in my books

I have discussed the company Nvidia (ticker: NVDA) in two recent books:

In my book “The Rules of Super Growth Stocks Investing“:

  • Section 3-3, pages 190-191

In my book “The Rules of 10 Baggers“:

  • Section 1-1, pages 23-24
  • Section 5-6, pages 242-246 (an entire subsection dedicated to this company)
  • Section 6-2, pages 282-285
  • Section 7-1, pages 348-285

Posts about Nvidia

A reminder: investors who are not yet familiar with Nvidia are advised to first read the following important posts I have written about the company:

The origin of this post

A few days ago, Barron’s ran an article whose author argued that, for the foreseeable future, Nvidia will continue to lead the field of artificial intelligence chips. After reading it, I found the argument sound and close to my own view. So in addition to quoting his points, today’s article adds my personal views, listing the reasons Nvidia can continue its dominance.

Deep Dive on Hardware performance

Two major determining factors

Computing power vs. efficiency

It should be noted that strong computing power and high computing efficiency are two different concepts. The process node and transistor count represent computing power, while the number of CUDA cores represents computing efficiency.

Memory and bandwidth

As for the memory and bandwidth dedicated to the GPU, they determine how efficiently the GPU runs. GPU memory capacity determines the maximum amount of data the GPU can hold at one time, and memory bandwidth determines how fast data moves between that memory and the GPU itself.

Products from three companies


AMD’s previous flagship chip, the MI250X, was released at the end of 2021. It uses a 7nm process and has 58.2 billion transistors, 128GB of GPU memory, a memory bandwidth of 3.2768 TB/s, and peak FP16 performance of 369 TF, with only 60 compute units.


Intel’s current flagship chip, Ponte Vecchio, was also released in 2021. It uses a 7nm process and claims 102 billion transistors, the most of any chip in the world. It has 128GB of GPU memory, 3.2 TB/s of memory bandwidth, peak FP16 performance of 184 TF, and 102 compute units.


Take the A100 GPU released by Nvidia in May 2020 as an example. Built on a 7nm process and the Ampere architecture, it has 54 billion transistors and 6,912 CUDA cores, offers up to 80GB of GPU memory, and delivers 2 TB/s of memory bandwidth, among the fastest in the world. Peak FP16 (half-precision floating point) performance on the Tensor Cores commonly used for large-model training and inference reaches 312 TF, or 624 TF with sparse computing.
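To put the three chips’ headline numbers side by side, here is a quick back-of-the-envelope comparison using only the figures quoted above. The “TF per billion transistors” ratio is my own illustrative metric, not a vendor benchmark:

```python
# Headline specs quoted above: transistors (billions), peak FP16 TFLOPS,
# GPU memory (GB), and memory bandwidth (TB/s) for each chip.
chips = {
    "Nvidia A100":         {"transistors_b": 54.0,  "fp16_tf": 312, "mem_gb": 80,  "bw_tbs": 2.0},
    "AMD MI250X":          {"transistors_b": 58.2,  "fp16_tf": 369, "mem_gb": 128, "bw_tbs": 3.2768},
    "Intel Ponte Vecchio": {"transistors_b": 102.0, "fp16_tf": 184, "mem_gb": 128, "bw_tbs": 3.2},
}

for name, s in chips.items():
    # Peak FP16 TFLOPS per billion transistors: a rough efficiency ratio
    ratio = s["fp16_tf"] / s["transistors_b"]
    print(f"{name:22s} {s['fp16_tf']:4d} TF / {s['transistors_b']:5.1f}B transistors "
          f"= {ratio:.2f} TF per billion")
```

On this crude measure the A100’s raw numbers are not the largest, which is exactly the point the next section makes: hardware specifications alone do not decide the outcome.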

Get rid of misconceptions


But there are two misconceptions here. The first is that Intel’s and AMD’s chips were released a year later than the A100; their real target was actually the H100, which Nvidia released in early 2022, and Nvidia’s lineup has since been updated to the H200.

The second misconception is that hardware specifications fully equal a chip’s overall capability. The software ecosystem is the second key factor that determines how a chip performs and is used.

Software determines everything

Without good software to drive it, even strong hardware is not a good system for customers. Key pieces of Nvidia’s ecosystem include the CUDA platform, NVLink, and Tensor Cores.

The CUDA platform is the core of Nvidia’s ecosystem. It improves the chip’s parallel computing capability, and through software it improves the GPU’s energy efficiency, so the same work consumes less energy.

In addition, the CUDA platform supports a wide range of applications, including scientific computing, deep learning, machine learning, image processing, and video processing. Most developers today rely heavily on the CUDA platform and its development tools.

This is the ecosystem barrier CUDA creates. Other platforms have software ecosystems of their own: AMD for its GCN architecture, Intel for its Xe architecture, some even offering developers “one-click” migration tools. But it is hard to compete with Nvidia.


CUDA is part of the moat, and other technologies such as NVLink are also critical. ​

No company trains large models on a single GPU; each builds a computing cluster out of at least hundreds, or even tens of thousands, of cards. NVLink is a connection technology that enables high-speed, low-latency interconnection between GPUs. Without it, the cluster cannot achieve a 1+1>3 effect: communication delay between GPUs rises, task execution becomes less efficient, chip power consumption grows, and overall system operating costs go up.
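Why interconnect speed matters more as clusters grow can be sketched with a purely illustrative toy model. The per-step compute and communication times below are made-up numbers for illustration, not measurements of NVLink or any real system:

```python
# A purely illustrative toy model: each training step does some compute,
# then pays a communication cost that grows with the number of GPUs.
# All numbers here are hypothetical, chosen only to show the shape of the effect.
def cluster_efficiency(n_gpus, compute_s, comm_s_per_gpu):
    """Fraction of each step spent on useful compute rather than communication."""
    total = compute_s + comm_s_per_gpu * n_gpus
    return compute_s / total

# Compare a fast interconnect (low per-GPU communication cost) with a slow one.
for n in (8, 1024, 10000):
    fast = cluster_efficiency(n, compute_s=1.0, comm_s_per_gpu=0.00001)
    slow = cluster_efficiency(n, compute_s=1.0, comm_s_per_gpu=0.001)
    print(f"{n:6d} GPUs: fast link {fast:.1%} useful, slow link {slow:.1%} useful")
```

At small scale the two links barely differ; at ten thousand GPUs the slow link spends most of each step waiting, which is the efficiency and cost penalty the paragraph above describes.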

Overall operation cost of use

Large-model training consumes enormous amounts of energy. Assuming about 13 million unique visitors use ChatGPT every day, the daily electricity bill comes to roughly US$50,000. Without NVLink, this cost would rise exponentially.
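Working the two figures above through, the electricity cost is tiny per visitor but substantial in aggregate; this is my own arithmetic on the quoted numbers:

```python
# Figures quoted above: ~13 million unique daily visitors, ~US$50,000/day in electricity.
daily_visitors = 13_000_000
daily_power_usd = 50_000

per_visitor = daily_power_usd / daily_visitors   # electricity cost per visitor per day
annual_usd = daily_power_usd * 365               # implied annual electricity bill

print(f"~${per_visitor:.4f} of electricity per visitor per day")
print(f"~${annual_usd:,} in electricity per year")
```

That is under half a US cent per visitor, yet over US$18 million a year before any interconnect inefficiency multiplies it.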

To a certain extent, buying chips is like buying a car: the purchase price is only the first cost, and subsequent fuel, maintenance, and insurance make up the bulk of the total. That is why Jensen Huang said, “The most important thing about an artificial intelligence system is not the cost of the hardware, but the cost of training and using artificial intelligence.”

Therefore, although AMD and Intel price some chips below Nvidia’s, from a long-term cost perspective Nvidia’s chips, with their better software ecosystem, collaboration, and supporting tools, remain the most cost-effective choice.

Taken together, whether in hardware performance or software ecosystem, in development and deployment tools, in long-term operating costs or in the range of applications that can be built, Nvidia is the most cost-effective choice among comparable competitors, and far ahead.

Reasons Nvidia will continue to dominate

The product is mature and complete

Nvidia has the most mature artificial intelligence products. The company has spent more than a decade resolving software and driver issues through its software ecosystem, CUDA. This means it has already solved technical problems that less experienced vendors may still face.

Platform neutrality

Nvidia is a neutral hardware platform, independent of the various cloud vendors’ platforms, which are essentially software. Customers can flexibly move Nvidia-supported workloads from one cloud to another. Rival AI chips from Amazon or Google, by contrast, lock users into those companies’ cloud platforms, reducing the flexibility to switch to providers offering cheaper services or better technology.

Complete software development tools

Developers stick with Nvidia’s products because of its decades-long platform stability, huge market share, access to industry-specific tools, and reputation for backward compatibility.

Jensen Huang, CEO of Nvidia, said: “All technological inventions built on Nvidia are accumulated over many years.”

Best overall performance

Then there’s performance. When customers evaluate the combined portfolio of software, system hardware, and networking hardware, Nvidia still provides the best overall capability.

Nvidia’s main challenges


To prevent China from surpassing the United States, the U.S. continues to widen its containment of China, with the semiconductor embargo an important part. Artificial intelligence chips are the focus of the U.S. government’s embargo, and Nvidia is the company hit hardest, because more than a quarter of its revenue comes from China. To comply with the ever-tightening chip embargo rules, Nvidia has twice been forced to launch downgraded versions of its chips for the Chinese market.

The U.S. ban will only become more stringent; Nvidia’s nightmare will not stop, and the share of revenue affected will only grow.

In December 2023, U.S. Commerce Secretary Gina Raimondo warned that export rules would be revised at any time to prevent China from using American technology to develop AI.

China’s revenue accounts for up to 40%

In December 2023, Jensen Huang acknowledged that Nvidia serves Chinese customers through Singapore. Chinese giants with branches in Singapore, including Douyin parent ByteDance, Tencent Holdings, and Alibaba Group, are the main sources of this revenue. According to official filings, Singapore accounted for approximately 15% of Nvidia’s revenue in the third quarter of 2023.

Therefore, adding Singapore’s 15%, the Chinese market actually accounts for about 40% of Nvidia’s total revenue. This is why Nvidia is willing to make repeated exceptions and launch special product versions just for China, to work around the U.S. Department of Commerce’s repeatedly tightened rules.
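The arithmetic behind the 40% figure uses only the two shares quoted in this article (direct China sales of roughly a quarter of revenue, plus the Singapore channel):

```python
# Shares quoted in this article: direct China sales at roughly 25% of revenue,
# plus ~15% booked through Singapore, largely from Chinese customers.
china_direct = 0.25
via_singapore = 0.15

china_related = china_direct + via_singapore
print(f"Implied China-related share of revenue: {china_related:.0%}")
```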

Shortage on supply

I mentioned in the article “The artificial intelligence bubble in the capital market is forming” that Nvidia’s artificial intelligence chips have been in short supply for the past two years. This sweet burden has cost Nvidia a great deal of revenue.

Building the most complex artificial intelligence systems usually takes tens of thousands of Nvidia’s most advanced H100 GPUs. A Barron’s columnist once pointed out that each Nvidia H100 chip costs about US$3,320 to manufacture but sells for US$25,000 to US$30,000. According to analysts’ estimates, Nvidia’s annual production capacity is about 1.2 million units, yet it still struggles to meet demand.

In particular, TSMC’s capacity is limited and its prices are high; more importantly, Nvidia will never get the first batch of TSMC’s latest process, because Apple’s order volume is much larger than Nvidia’s. This has forced Nvidia to turn to Samsung, and some media have speculated that the large chip customer Intel’s foundry division declined to name is Nvidia.

Competitors abound

Nvidia’s opponents mainly fall into three categories listed below:

  • The first is old rivals: AMD, Intel, Qualcomm, and other traditional large chip designers.
  • The second is small designers specializing in artificial intelligence chips that have emerged only in recent years, including Groq, Graphcore, Cerebras, Tenstorrent, KAIST’s C-Transformer, Cambricon, Horizon Robotics, etc.
  • The third is Nvidia’s very large customers that have designed their own artificial intelligence chips in recent years. They all have deep resources and talent, plus enormous demand for Nvidia’s artificial intelligence chips. These include Microsoft, Alphabet, Meta, and Apple in the United States, and Huawei, Alibaba, Baidu, and Tencent in China. Basically, these companies develop artificial intelligence chips mainly for their own use, not for sale, at least so far. The exception is Huawei, which is discussed again below; for more on Huawei, I recommend my earlier post “How does the all-powerful Huawei make money?“.

But regardless of type, none of these opponents will become significant or powerful enough to threaten Nvidia in the short term, or even the medium term.

Readers can refer to my previous post “Major artificial intelligence companies in the US stock market“. But this area is not today’s topic; if I have the opportunity and time, I will write a separate article to discuss it in depth.


On December 6, 2023, during a visit to Singapore, Jensen Huang stated for the first time that Huawei is very difficult to deal with, and Nvidia has listed Huawei as one of its “very formidable” main competitors in artificial intelligence chips.

On February 23, 2024, in a filing with the U.S. Securities and Exchange Commission, Nvidia for the first time identified Huawei as a top competitor in multiple categories, including artificial intelligence chips. Nvidia stated that Huawei competes in supplying artificial intelligence chips such as GPUs, CPUs, and networking chips, and also positioned Huawei as a cloud services company that has designed its own hardware and software to improve artificial intelligence computing.

In the same SEC filing, Nvidia also named Intel, Advanced Micro Devices, Broadcom, and Qualcomm as competitors, along with several large cloud computing companies such as Amazon and Microsoft.


AMD is Nvidia’s long-time rival, but for years had little success. In recent years, however, AMD has been catching up on all fronts; it is no longer what it was before. For this part, please refer to my other post “Why is AMD’s performance so jaw-dropping?“.

Triggering antitrust investigation

Nvidia’s dominant market share in graphics and artificial intelligence chips has caused unease among governments around the world. As of today, France, the European Union, and China are known to have begun antitrust investigations into Nvidia. For Nvidia, this will be a long road.


Thanks to artificial intelligence and the emergence of ChatGPT, Nvidia earned more in 2023 than in its entire first 25 years since listing.

Nvidia’s monopoly is real, but don’t expect its stock price to triple or quadruple again as it did before.


  • The content of this site reflects the author’s personal opinions and is for reference only. I am not responsible for the accuracy, opinions, or timeliness of the articles and information; readers must make their own judgments.
  • I shall not be liable for any direct or indirect losses, damages, or other legal liabilities arising from readers’ reliance on or reference to the information on this site in any investment decision.
