The Nvidia/Arm deal could create the dominant ecosystem for the next computer era

The next strategic inflection point in computing will be the cloud expanding to the edge, involving highly parallel computer architectures connected to hundreds of billions of IoT devices. Nvidia is uniquely positioned to dominate that ecosystem, and if it does indeed acquire Arm within the next few weeks as expected, full control of the Arm architecture will virtually guarantee its dominance.

Every 15 years or so, the computer industry goes through a strategic inflection point, or, as Jefferies US semiconductors analyst Mark Lipacis calls it, a tectonic shift that dramatically transforms the computing model and realigns the industry's leadership. In the ’70s the industry shifted from mainframe computers, where IBM was the dominant company, to minicomputers, which DEC (Digital Equipment Corporation) dominated. In the mid-’80s the tectonic shift was PCs, where Intel and Microsoft defined and controlled the ecosystem. Around the turn of the millennium, the industry shifted again, to a cell phone and cloud computing model; Apple, Samsung, TSMC, and Arm benefited the most on the phone side, while Intel remained the major beneficiary of the move to cloud data centers. As the chart below shows, Intel and Microsoft (a.k.a. “Wintel”) were able to extract the majority of the operating profits in the PC era.

Above: Source: Jefferies, company data

According to research from investment bank Jefferies, the dominant players in each previous ecosystem have accounted for 80% of the profits: Wintel in the PC era and Apple in the smartphone era, for example. These ecosystems did not happen by accident; they are the result of a multi-pronged strategy by each company that dominated its respective era. Intel invested vast sums of money and resources into developer support programs, large developer conferences, software technologies, VC investments through Intel Capital, marketing support, and more. The result of the Wintel duopoly can be seen in the chart above. Apple has done much the same, with its annual developer conference, development tools, and financial incentives. In the case of the iPhone, the App Store has played an additional role, making the product so successful, in fact, that it is now the target of complaints by the developers who played a key role in cementing Apple’s dominance of the smartphone ecosystem. The chart below shows how Apple has the lion’s share of the operating profits in mobile phones.

Above: Source: Jefferies, company data

Intel maintained dominance of the data center market for decades, but that dominance is now under threat for several reasons. One is that the type of software workload mobile devices generate is changing. The vast amounts of data these phones generate require a more parallel computational approach, and Intel’s CPUs are designed for single-threaded applications. Starting 10 years ago, Nvidia adapted its GPU (graphics processing unit) architecture, originally designed as a graphics accelerator for 3D games, into a more general-purpose parallel processing engine. Another reason Intel is under threat is that the much larger volume of chips sold in the phone market has given TSMC a competitive advantage: TSMC was able to ride that learning curve to get ahead of Intel in process technology. Intel’s 7nm process node is now over a year behind schedule. Meanwhile, TSMC has shipped over a billion chips on its 7nm process, is getting good yields on 5nm, and is sampling 3nm parts. Nvidia, AMD, and other Intel competitors all manufacture their chips at TSMC, which gives them a major competitive advantage.

Nvidia’s domain

Parallel computing concepts are not new and have been part of computer science for decades, but they were originally relegated to highly specialized tasks, such as supercomputer simulations of nuclear weapons or weather forecasting, and programming parallel software was very difficult. This all changed with the CUDA software platform that Nvidia launched 13 years ago and which is now in its 11th generation. Nvidia’s proprietary CUDA platform lets developers leverage the parallel architecture of Nvidia’s GPUs for a wide range of tasks. Nvidia also seeded computer science departments at universities with GPUs and CUDA, and over many iterative improvements the technology has evolved into the leading platform for parallel computing at scale. This has caused a tectonic shift in the AI industry, moving it from a “knowledge-based” to a “data-based” discipline, which we see in the growing number of AI-powered applications. When you say “Alexa” or “Hey Siri,” the speech recognition is being processed and interpreted by a parallel processing software algorithm most likely powered by an Nvidia GPU.
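To see why this programming model matters, here is a minimal, illustrative sketch in plain Python (not Nvidia's actual API) of the data-parallel style CUDA popularized: a "kernel" function is written for a single element, and the runtime applies it to every element independently. On a GPU, thousands of hardware threads do this simultaneously; the thread pool below merely mimics the structure.

```python
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i, a, x, y, out):
    # Each logical "thread" handles exactly one element,
    # mirroring how a CUDA kernel is written per-element.
    out[i] = a * x[i] + y[i]

def saxpy(a, x, y, workers=4):
    # "Launch" one kernel invocation per element; a GPU does
    # this across thousands of hardware threads at once.
    out = [0.0] * len(x)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda i: saxpy_kernel(i, a, x, y, out), range(len(x))))
    return out

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

Because every element is computed independently, the same code scales from four CPU threads to the thousands of threads a GPU provides, which is the essence of the parallel workloads (speech recognition, neural network inference) described above.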

A leading indicator for computer architecture usage is cloud data instances. The number of these instances represents the usage demand for applications at the leading CSPs (cloud service providers), such as Amazon AWS, Google Cloud Platform, Microsoft Azure, and Alibaba Cloud. Data from the top four CSPs show Intel’s CPU share staying flat to down, AMD growing quickly, and Arm, with Graviton, getting some traction. What is very telling is that demand for dedicated accelerators is very strong and is dominated by Nvidia.

Above: Source: Jefferies, company data

Nearly half of Nvidia’s sales revenues are now driven by data centers, as the chart above shows. As of June this year, Nvidia’s dedicated accelerator share in cloud data instances is 87%. Nvidia’s accelerators have accounted for most of the data center processor revenue growth for the past year.

The company has created a hardware-software ecosystem comparable to Wintel, but in accelerators. It has reaped the rewards of the superior performance of its architecture and of creating the highly popular CUDA software platform, backed by a sophisticated developer tools and ecosystem support program, a highly attended annual GPU Technology Conference, and even an active investment program, Inception GPU Ventures.

Where Arm comes in

But Nvidia has one competitive barrier remaining that prevents it from complete domination of the data center ecosystem: It has to interoperate within the Wintel ecosystem because the CPU architecture in data centers is still x86, whether from Intel or AMD.

Arm’s share of the server chip market is still minute, but the architecture itself has been extremely successful, and with TSMC as a manufacturing partner it is rapidly overtaking Intel in raw performance in market segments outside of mobile phones. Arm’s weakness is that its hardware-software ecosystem is fragmented, with Apple and Amazon taking mostly proprietary software approaches and companies such as Ampere and Cavium being too small to create a large industry ecosystem comparable to Wintel.

Nvidia and Arm announced in June that they will collaborate to make Arm CPUs interoperate with Nvidia accelerators. First, this collaboration gives Nvidia the ability to add CPU computing capabilities to its data center business. Second, and more importantly, it puts Nvidia in a strong position to create a hardware-software ecosystem around Arm that would be a serious threat to Intel.

The coming shift

The reason such a partnership is particularly important today is that the computer industry is going through its next strategic inflection point. This new tectonic shift will have major repercussions for the industry and the competitive landscape. And if historical trends continue, the market a merged Nvidia/Arm would address will be at least 10 times larger than today’s mobile phone or cloud computing markets. It is an understatement to say that the stakes are huge.

There are several forces driving this new shift. One is the emergence of faster 5G networks designed to support a far larger number of devices. A key feature of 5G networks is edge computing, which will put high-performance computing right at the very edge of the network, one hop away from the end device. Today’s mobile phones are still tied to a descendant of the old client-server architecture established in the ’90s with networked PCs. That legacy results in high-latency networks, which is why we experience those annoying delays on video calls.

Next-generation networks will have high-performance computers with parallel accelerators at the very edge of the network. The endpoints — including autonomous vehicles, industrial robots, 3D or holographic communications, and smart sensors everywhere — will require much tighter integration with new protocols and software architectures. This will enable much faster, extremely low-latency communications through a distributed computing architecture model. The amounts of data produced, and needing processing, will increase by orders of magnitude, driving demand for parallel computing even further.

Nvidia’s roadmap

Nvidia has already made clear that cloud-to-edge computing is on its roadmap:

“AI is erupting at the edge. AI and cloud native applications, IoT and its billions of sensors, and 5G networking now make large-scale AI at the edge possible. But it needs a scalable, accelerated platform that can drive decisions in real time and allow every industry to deliver automated intelligence to the point of action — stores, manufacturing, hospitals, smart cities. That brings people, businesses, and accelerated services together, and that makes the world a smaller, more connected place.”

Last year Nvidia also announced a collaboration with Microsoft on the Intelligent Edge.

This is why it makes strategic sense for Nvidia to buy Arm, and why it would pay a very high price to own the technology. Ownership of Arm would give Nvidia control over every aspect of its ecosystem and far greater control of its destiny. It would also eliminate Nvidia’s dependence on the Intel compute stack, greatly strengthening its competitive position. By owning Arm instead of just licensing it, Nvidia could add special instructions to create even tighter integration with its GPUs. To get the highest performance, one needs to integrate the CPU and GPU on one chip, and since Intel is developing its competing Xe line of accelerators, Nvidia needs to have its own CPU.

Today Nvidia leads in highly parallel compute, and Intel is trying to play catch-up with its Xe lineup. But as we learned from the PC Wintel days, the company that controls the ecosystem has a tremendous strategic advantage, and Nvidia is executing well to position itself as the dominant player in the next era of computing. Nvidia has a proven track record of creating an impressive ecosystem around its GPUs, which puts it in a very competitive position to create a complete ecosystem for edge computing, one that includes the CPU.

Michael Bruck is a Partner at Sparq Capital. He previously worked at Mattel and at Intel, where he was Chief of Staff to then-CEO Andy Grove, before heading Intel’s business in China.
