GTC 2018: Nvidia aims to leverage on AI, machine learning

  • GTC expected to announce new products, services to enhance its lead
  • Company gaining from cloud, machine learning, cryptocurrency developments


THIS week, I find myself in sunny San Jose, California. Now if you’re wondering where that is, I wouldn’t blame you; there is even a song about it – Dionne Warwick’s Do You Know the Way to San Jose, written in 1968!

San Jose is situated about 80km southeast of downtown San Francisco. The city is part of the renowned strip called Silicon Valley, home to the tech giants, including Mountain View (Google), Menlo Park (Facebook), Cupertino (Apple), Palo Alto (HP) and Sunnyvale (Yahoo!).

San Jose has come a long way and isn’t a small town anymore. It’s not only renowned geographically but is thriving with people, technology and innovation. The city is home to about one million people, making it the third most populous city in California, as well as home to many large tech companies.

And this is where Nvidia Corp is hosting its annual GTC (GPU Technology Conference) from March 26 to March 30, where developers, business executives, analysts and press members come together to get a glimpse of what is expected to come from the leading graphics chip designer.

The dominance of Nvidia in this industry is quite significant. The company has approximately 72.8% of the world’s graphics processing units (GPUs) market share as of the third quarter of 2017, with rival AMD commanding the rest, according to one analyst’s estimates.

The company is based in Santa Clara, California, a town adjacent to San Jose. It has risen from its 1990s beginnings to become a giant today by betting big on specialised chips designed specifically to process high-resolution graphics and, more recently, real-time imaging in augmented and virtual reality (AR/VR), self-driving vehicles, as well as artificial intelligence and machine learning.

The story of Nvidia began in 1993, when three engineers – Taiwan-born Jen-Hsun (Jensen) Huang, Chris Malachowsky and Curtis Priem – believed that graphics acceleration at the chip level was going to drastically change the PC-based world.

Armed with a mere US$40,000 (RM155,937) in the bank along with pure grit and determination, Nvidia was born. The company grew steadily, introduced a slew of products targeted at graphics processing and, by 1999, listed as a public company on the Nasdaq. (US$1 = RM3.898)

While the company has consistently introduced a variety of products focused on the consumer gaming market, its stock, though steadily growing, merely hovered in the US$30 to US$40 range for some 17 years.

But in the past two years, Nvidia’s stock has soared. Today (March 28) it trades at about US$240, giving the company a market capitalisation of between US$140 billion and US$150 billion.

Its most recent earnings report, for the fourth quarter ended January 2018, clocked US$2.91 billion in revenue, up 34% from US$2.17 billion a year earlier and up 10% from US$2.64 billion in the previous quarter. For the full fiscal 2018, revenue was US$9.71 billion, up 41% from US$6.91 billion a year earlier.

So, what has changed? Why has the company become so valuable only in the past two to three years?

The GPU vs CPU

Before getting into how Nvidia has become such a valuable company, we need to examine what it does in the first place. We all know how Nvidia specialises in making GPUs but how does that differ from CPU (central processing units) makers such as Intel Corp?

In simple terms, the GPU differs from the more traditional CPU in that it is designed with many processing cores optimised for parallel processing: executing narrowly focused computing tasks simultaneously and repetitively, so that more workloads can be processed at the same time.

Comparatively speaking, CPUs have far fewer processing cores and are optimised as general-purpose processors, handling everything from input/output commands to storage and memory management, software functions and more. Because of this, the CPU can execute a broader range of computing tasks than the narrowly focused GPU.

Put simply, a GPU can be likened to a specialist performing a relatively simple task over and over again, thereby accelerating the process by up to 100x over a CPU; the CPU, by contrast, is a generalist, able to handle more complex computing tasks but not as quickly as a GPU.

This is why GPUs are great for high-powered, lightning-quick graphics rendering, such as the real-time computation of polygons in three-dimensional games, while the CPU handles all other computing functions not related to graphics.
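The serial-versus-parallel distinction above can be sketched in plain Python. This is an illustration only, not real GPU code (that would typically be written in something like CUDA, where each array element maps to its own thread); the pixel-brightening task and function names here are hypothetical:

```python
# Illustrative sketch only: the same element-wise task expressed
# serially (CPU-style) and as one data-parallel step (GPU-style).
# On a real GPU, each pixel would be handled by its own core.

def brighten_serial(pixels, amount):
    """CPU-style: visit each pixel one after another."""
    out = []
    for p in pixels:
        out.append(min(p + amount, 255))
    return out

def brighten_parallel(pixels, amount):
    """GPU-style: one simple operation applied to every element.
    map() models the idea; a GPU would run these simultaneously."""
    return list(map(lambda p: min(p + amount, 255), pixels))

pixels = [10, 100, 200, 250]
assert brighten_serial(pixels, 20) == brighten_parallel(pixels, 20) == [30, 120, 220, 255]
```

Both functions produce identical results; the difference on real hardware is that the GPU executes the per-pixel operation across thousands of cores at once, which is where the speedup comes from.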

How did this splitting of workloads come about?

Historically, as consumer computer games became more complex due to the real-time, highly computational nature of graphics rendering, the GPU became more significant alongside the CPU. But in today’s computing world one cannot do without the other: the two processors complement each other.

Trends supporting Nvidia

It turns out that GPUs are good for much more than graphics rendering, something that Nvidia realised in the past five years. In fact, since 2014, Nvidia has been focused on becoming a platform-based company, specifically in four markets: gaming, professional visualisation, data centres and automotive.

By concentrating on each of these markets, the company has been able to segment itself into various related growth areas, thereby growing its revenue and profitability.

Underpinning these four markets are three broader trends that have boosted the share price and overall profile of the company: the growth of cloud computing, the rise of cryptocurrency mining and the use of GPUs in artificial intelligence (AI), particularly machine and deep learning.

All these areas are well suited to the high-speed, parallel-processing nature of GPUs, and the increased usage of GPUs has fundamentally benefited the company in the past two to three years.

For example, Nvidia’s Q4 earnings report showed that its data centre business has hit a US$2 billion annual revenue run rate and is showing no signs of slowing down as cloud providers gobble up the company's GPUs.  

Its revenue from GPU chips supplied to the data centre business more than doubled to US$606 million in Q4 fiscal 2018, from US$296 million in the corresponding period a year before.

And tech portal ZDNet reports that all major hardware vendors are banking on using Nvidia’s Volta V100 GPU accelerator, including IBM, Dell, HP, Supermicro, Lenovo and Huawei.

Besides this, Nvidia also supplies its chips to public cloud players, namely Amazon Web Services (AWS) and Microsoft Corp’s cloud arm, Microsoft Azure.

Research firm Gartner notes that GPUs have also found a use in the mining of cryptocurrencies; the recent rise in the value of various cryptocurrencies, especially bitcoin and Ethereum’s Ether, has created additional demand for high-end, GPU-based add-in cards, a sentiment company officials agree with.

“Strong demand in the cryptocurrency market exceeded our expectations,” Nvidia’s chief financial officer (CFO) Colette Kress said on a conference call, Reuters reports. “While the overall contribution of cryptocurrency to our business remains difficult to quantify, we believe it was a higher percentage of revenue than the prior quarter.”

Of course, the crown jewel of Nvidia’s business is still very much its graphics GPU business. The company recorded US$1.74 billion in revenue from the gaming segment alone in Q4, which ended in January 2018, up from US$1.03 billion a year before. This accounts for more than half the revenue Nvidia made in fiscal 2017.




Last year, Huang made a number of announcements, the biggest of which was undoubtedly Nvidia’s new Volta GPU platform, which the company claims is the world’s most powerful GPU computing architecture, made specifically for handling artificial intelligence and high-performance computing (HPC).

Much will be expected from this year’s conference. All eyes will be trained on what Huang has to say, and the kind of announcements Nvidia will make – particularly on its roadmap for AI, machine learning and cryptocurrency mining.

Products aside, attendees will also be looking at how Nvidia plans to scale its business and partnerships to drive further growth. The stakes are high: the broad trends, as I argued earlier, all depend heavily on GPUs, and if Nvidia wants to keep its leadership in this area, it will need to evolve to stay ahead of the game.

And there’s no better time to do it than at this year’s GTC conference.

Edwin Yapp reports from GTC 2018 in San Jose, at the invitation of Nvidia Corp. All editorials are independent. He is contributing editor to Digital News Asia and an executive consultant at Tech Research Asia, an advisory firm that translates technology into business outcomes for executives in Asia Pacific.


Related stories:

GTC 2017: Nvidia heralds the age of AI

GTC 2017: Nvidia aims to spread knowledge of Deep Learning

Harnessing the potential of artificial intelligence and machine learning


