Data center, a rapidly growing semiconductor field

What is a data center?

A data center is not a chip; it is a suite of solutions. It is mainly a collection of hardware and software that enterprises use to process data in the background. This article discusses only the hardware part, not the software part. For details, please refer to the “Composition of Data Center” section below.

Every technology company or large enterprise you and I know needs data center solution products, market demand is growing rapidly year after year, and so far there is no end in sight.

What are the application areas?

The application fields of the data center are extensive, including:

  • Cloud computing
  • Big data
  • Artificial intelligence
  • Media streaming
  • E-commerce
  • Databases
  • Any web service, such as traditional web servers and email servers

A star field of the semiconductor industry

The data center is one of the few semiconductor fields that is still growing rapidly. The components of a modern, complete data center system are described in the “Composition of Data Center” section below.

Current trend

To meet the needs of massive data computing, the data center has undergone great changes in recent years, which of course has created many business opportunities. The major trends are the following.

Serverless computing

Serverless computing, also known as Function as a Service (FaaS), is a model of cloud computing. Building on Platform as a Service (PaaS), serverless computing offers an even finer-grained model: end customers do not need to deploy, configure, or manage servers, because the server resources required for code execution are provided entirely by the cloud platform.
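As a minimal sketch of this idea (the handler signature and event shape below are hypothetical, loosely modeled on common FaaS platforms rather than any specific cloud's API), the developer writes only a function; provisioning, scaling, and billing are the platform's problem:

```python
# Minimal sketch of a FaaS-style workload: the developer writes only a
# function; the cloud platform provisions and scales the servers that
# run it. The event shape and handler signature are hypothetical.

def handler(event: dict) -> dict:
    """Receives a request event, returns a response dict."""
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}

# Locally we can invoke the function directly; in production the
# platform invokes it on demand and bills only for execution time.
print(handler({"name": "data center"}))
```

The point of the model is that nothing outside this function exists from the developer's point of view: there is no server to patch, size, or keep warm.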

Storage and Computing Separation

Storage-compute separation architectures arose because:

  • The data retention period does not match the server refresh cycle.
  • Performance, reliability, and resource utilization are difficult to balance.
  • New distributed applications, such as serverless computing, require shared storage.
  • The “data center tax” (host CPU cycles consumed by infrastructure work such as networking and storage I/O) leads to inefficiencies in data-intensive applications.

In a converged architecture, scaling compute forces storage to scale with it (and vice versa), leaving resources underutilized and requiring extra work such as data migration. And whether a server's local storage is tuned for performance or for capacity, it cannot deliver high performance, high reliability, and high resource utilization all at the same time.

On the other hand, the rise of new distributed applications, represented by serverless applications, has acted like a catalyst, accelerating demand for shared access to business data and bringing the storage-compute separation architecture more user attention. For example, applications such as distributed databases have turned to containerized deployment. The converged storage-compute architecture struggles to meet these new decentralized applications' requirements for CPU, memory, storage capacity/IOPS/bandwidth, and network I/O/bandwidth.
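The contrast can be sketched in a few lines of code. With a shared storage tier, compute workers are stateless, so any worker can serve any request and compute scales independently of data. The class names below are illustrative, not a real storage API:

```python
# Illustrative sketch of storage-compute separation: one shared storage
# tier, and stateless compute workers that can be added or removed
# independently of it. Class names are hypothetical.

class SharedStorage:
    """Stands in for a remote storage tier (object store, NoF target, ...)."""
    def __init__(self):
        self._objects = {}
    def put(self, key, value):
        self._objects[key] = value
    def get(self, key):
        return self._objects[key]

class ComputeWorker:
    """Stateless: holds no data of its own, only a handle to shared storage."""
    def __init__(self, storage):
        self.storage = storage
    def process(self, key):
        return self.storage.get(key).upper()

storage = SharedStorage()
storage.put("order-1", "pending")

# Scaling compute is just adding workers; no data migrates to new nodes,
# and every worker sees the same shared data.
workers = [ComputeWorker(storage) for _ in range(3)]
results = {w.process("order-1") for w in workers}
print(results)
```

In a converged design, by contrast, the data would live inside each worker, and adding or replacing a worker would mean copying data around, which is exactly the migration cost described above.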

The main competitive advantage of Snowflake (ticker: SNOW), a Buffett holding, is precisely its focus on the storage-compute separation architecture. For details, please see my article “What company is Snowflake owned by Buffett? Where is its value?”

The need for high-speed data transfer

The rapid development of high-performance hardware, including the new NVMe/CXL protocols, dedicated data processors (such as DPUs), and new network technologies such as RDMA and NVMe over Fabrics (NoF), has laid the foundation for the new storage-compute separation architecture. Compared with the traditional storage-compute separation architecture, the new one has more new features: diskless servers, diversified network protocols, specialized data processors, and storage systems with extremely high storage density.

New products appear

The best proof of the rapid changes described above is that manufacturers are competing to launch a large number of DPU products. A DPU mainly offloads operations such as data storage and access from the server CPU onto a dedicated data processor, freeing up the server's computing power and improving the energy efficiency of the architecture as a whole.
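The offloading idea can be illustrated with a toy sketch. Here the “DPU” is simulated by a separate single-worker executor (a real DPU is dedicated hardware, not a thread): the host submits storage-related work, in this case a checksum, and stays free for application logic:

```python
# Toy sketch of DPU-style offloading: the host submits storage-related
# work (a checksum here) to a separate executor standing in for a
# dedicated data processor, then continues with application logic.
# A real DPU is hardware on the data path, not a host thread.

import zlib
from concurrent.futures import ThreadPoolExecutor

dpu = ThreadPoolExecutor(max_workers=1)  # stand-in for the DPU

def checksum(block: bytes) -> int:
    """Work we want off the host CPU's critical path."""
    return zlib.crc32(block)

block = b"some block of data headed for storage"
future = dpu.submit(checksum, block)  # offloaded: host CPU is now free

# ... the host would do application work here instead of hashing ...

print(f"checksum computed off the host path: {future.result():#010x}")
dpu.shutdown()
```

The benefit in a real system is the same shape: data-movement chores (checksums, encryption, packet processing) run on silicon built for them, so server cores are spent on the application instead.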

Storage systems with a diskless architecture are replacing servers' local hard disks. In recent years, Western Digital's (ticker: WDC) OpenFlex, Vast Data's Ceres, and Huawei's OceanStor Micro have been representative storage systems under the new storage-compute separation architecture.

Composition of Data Center

Server CPU

This is the part ordinary people most easily understand: the powerful server-grade central processing unit. A server-grade CPU usually must be multi-core, with enormous computing power and low latency.

For a long time, x86-based processors have held the largest share of the server CPU market, especially those from Intel (ticker: INTC), which at one point held nearly 99% of the x86 server CPU market.

However, the market conditions have changed a lot in recent years. Please refer to my previous posts “How does Intel make money? and the benefits to invest in it” and “Intel’s current difficult dilemma“.

Another point of interest is that Nvidia (ticker: NVDA), the dominant player in overall data center solutions in recent years, has had no server CPU product so far. Nvidia did not announce its first data center CPU, Grace, until 2021, and the product is expected to ship in early 2023.

Discrete GPU

The discrete graphics processing unit (GPU) in the data center is not used for graphics display but mainly for training artificial intelligence models on large amounts of data. Please refer to my previous article “Artificial intelligence investment trap“. This is also Nvidia’s strength and the main reason it has dominated overall data center solutions in recent years.

AMD (ticker: AMD) has the second-largest discrete GPU business thanks to its acquisition of ATI many years ago. Although Intel built graphics processors into its CPUs very early on, it did not launch a “discrete” GPU until 2022; in this respect, it can only be regarded as a beginner.

Programmable processor

The representative programmable processor is the FPGA (Field-Programmable Gate Array). Programmable processors have very high barriers to entry, a wide range of applications, and steady market demand, yet there are very few large-scale manufacturers: basically only two major ones, plus a few smaller second-tier players. The FPGA's disadvantage is that it is expensive; its advantage is that its configuration can be modified at any time, so you can regard it as a processor chip that can be reprogrammed in the field.

There are two dominant manufacturers in this field: the largest, Xilinx, was acquired by AMD in 2021, and the second-largest, Altera, was acquired by Intel as early as 2015.

High Bandwidth Data Processor

A data center contains many processors with different tasks. Since data must move quickly among this large number of processors and components, another class of data processors is required to handle the job.

To meet this need, Nvidia acquired Mellanox Technologies, a well-known Israeli chip design house, in 2019 to complete this piece of the puzzle, immediately making Nvidia a leader in the data center field. From this alone, you can see how important this part is to the data center.

Network communication

This part is mainly responsible for the data center's connection and data transmission to the outside world, that is, the internet. With the growth and popularization of networks and mobile devices, application scenarios keep widening, and the data they generate is growing geometrically.

It is conceivable that the amount of data a data center needs to handle is astronomical. This part is highly specialized, so it has long been dominated by a few manufacturers such as Cisco (ticker: CSCO), Huawei, Juniper Networks (ticker: JNPR), Arista (ticker: ANET), and Hewlett Packard Enterprise (ticker: HPE), which acquired Aruba.

Main vendors on data center

Major players

Intel, Nvidia, and AMD have all long listed the data center as a core business, and all emphasize that it will be the focus of their future development. The main reasons are as follows:

  • High profits: data center products command far higher margins. Shipments of general-purpose processors are very large, but so is the competition, which drives profits down year after year; this is also one of the main reasons Intel's stock price has fallen for a long time and repeatedly bottomed out.
  • The data center is still developing at high speed, and the prospects are promising.
  • Entry barriers are high, and there are few capable players with considerable market share. At present there are only three worldwide: Intel, Nvidia, and AMD. Moreover, each of the three is strong in its own field; strictly speaking, none of them possesses all the technologies or a complete solution.

Key suppliers and business performance

Please note that, unlike at Intel, the data center business is already the largest revenue contributor at both Nvidia and AMD.

Nvidia BU performance (Q3 2022):

| Business unit | Revenue (US$ billion) | Annual growth rate |
| --- | --- | --- |
| Total revenue | 5.93 | -16.5% |
| Data center | 3.833 | +30.55% |
| Professional visualization | 0.2 | -65.34% |
| OEM & other | 0.073 | -68.8% |

AMD’s data center business actually includes the following two divisions: Data Center and Embedded.

AMD BU performance (Q3 2022):

| Business unit | Revenue (US$ billion) | Annual growth rate |
| --- | --- | --- |
| Total revenue | 5.6 | +29.93% |
| Client | 1.02 | -40% |
| Data center | 1.61 | +45% |


Intel’s data center business actually includes the following two business divisions: Data center and Artificial Intelligence (DCAI) and Network and Edge Business (NEX).

Intel BU performance (Q3 2022):

| Business unit | Revenue (US$ billion) | Annual growth rate |
| --- | --- | --- |
| Total revenue | 15.3 | -17.3% |

Market trend

Beyond these three major manufacturers, the most important recent trend is that large customers are developing their own chips to meet their special needs, because:

  • Off-the-shelf products are difficult to customize, and customers are unwilling to let vendors learn too many of their secrets (such as data volumes, algorithms, and application fields).
  • With very few suppliers, off-the-shelf products leave little room for bargaining and carry very high prices (just look at Nvidia's share price in recent years); for large customers with huge purchase volumes, they are simply not cost-effective.
  • The internet and technology change rapidly, and large customers are reluctant to let suppliers determine their fate and project schedules.

As for how many large customers are currently doing this, the answer is a lot. For details, please refer to the description of my previous article “Intel’s current difficult dilemma“.

(Data center photo credit: The Verge)

I am the author of the original text; an abridged version of this article was originally published in Smart monthly magazine.

  • The content of this site is the author’s personal opinions and is for reference only. I am not responsible for the correctness, opinions, and immediacy of the content and information of the article. Readers must make their own judgments.
  • I shall not be liable for any damages or other legal liabilities for the direct or indirect losses caused by the readers’ direct or indirect reliance on and reference to the information on this site, or all the responsibilities arising therefrom, as a result of any investment behavior.