Though no new processor was announced, upgrades aim to deepen AI involvement
Announcements include autonomous vehicle simulator, deep learning and high-end graphics platforms
LEADING computing architecture company Nvidia Corp has taken the wraps off a slew of new products, which the company believes will help it continue to dominate the graphics processing unit (GPU) business while furthering its efforts to sink its teeth deeper into artificial intelligence (AI) and machine learning systems.
Speaking at its annual GTC (GPU Technology Conference) convention in downtown San Jose on March 28, chief executive and co-founder Jen-Hsun (Jensen) Huang (pic above) said the GPU may have begun life as a specialised card that accelerated graphics, but the rise of virtual reality (VR), AI and machine learning has changed this.
“It couldn’t have come at a better, more important time,” Huang declared in his GTC keynote. “All of these new applications are coming up at a time when we need supercomputers to make them work.
“Supercomputing is the pillar of science. It’s about people thinking about VR, AI and autonomous machines and how they can collaborate with humans. The timing couldn’t be more perfect,” he said.
In his keynote address to over 8,000 people at the ongoing GTC convention this week, Huang said one of Nvidia’s best decisions came 15 years ago, when it decided that the GPU, the graphics accelerator of the past, should become a general purpose processor.
“[We reasoned that] if we wanted to create virtual reality, we had to simulate reality, simulate life. And the simulation of a real world requires a general purpose supercomputing architecture [which can be found in GPUs].”
Huang said GPUs do specific tasks very well, but have also become more flexible in their usage over the years. Nvidia, he argued, has been able to advance the flexibility of the GPU architecture without sacrificing what it does best: accelerating graphics to the extreme.
Another piece of evidence Huang pointed to for the GPU going mainstream is that there are almost one million GPU developers today, a figure that has grown tenfold in the last five years.
The announcements at GTC 2018 follow one Huang made last year, when he declared that the era of AI had arrived and that Nvidia’s efforts would shift towards the creation of new supercomputers.
Huang had said then that hundreds of millions of people around the world rely on AI-powered search, language translation and speech recognition services, and that investments in AI startups in 2016 alone rose to US$5 billion, a clear indication of the vast interest in the space.
“AI is driving the greatest technology advances in human history,” said Huang during his GTC keynote. “It will automate intelligence and spur a wave of social progress unmatched since the industrial revolution.”
Deep learning, high-end graphics
There was no new chip announced at GTC 2018, disappointing some industry observers who had come to expect one. In delivering his keynote this year, Huang instead stuck to a script similar to last year’s on how AI is changing the world, but expanded his views on the matter.
Huang said one of the applications today that benefits most from GPU acceleration is deep learning. But for deep learning to happen, Huang said the world needs “supercharged” computers.
“If you take a look at the last five years, there is no doubt that GPU computing is the right answer to this,” he argued.
Deep learning is a subset of AI, in which learning is achieved in a multi-layer, hierarchical method instead of a linear one. A computer is first fed data over and over again -- a process known as machine learning. Once these "answers" are derived through the machine learning process, they are fed to the next layer. Analysis is done at this second layer, which is then passed on to a third layer.
The multi-layer processing continues across the network until the best output is determined. Because this essentially mimics how humans think, deep learning is also known as neural network learning or deep neural networks.
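The layer-by-layer processing described above can be sketched in a few lines of code. This is an illustrative toy example only, with random weights standing in for a trained network; it simply shows how each layer’s output becomes the next layer’s input.

```python
import numpy as np

# Illustrative sketch of hierarchical, multi-layer processing (not Nvidia
# code): each layer transforms the output of the previous one. Weights
# are random here, purely for demonstration.
rng = np.random.default_rng(0)

def relu(x):
    # A common non-linearity applied between layers
    return np.maximum(0.0, x)

# Three layers: 4 input features -> 8 hidden -> 8 hidden -> 3 outputs
layer_shapes = [(4, 8), (8, 8), (8, 3)]
weights = [rng.standard_normal(shape) * 0.1 for shape in layer_shapes]

def forward(x, weights):
    """Pass data through each layer in turn; the output of one
    layer is re-fed as the input of the next."""
    for w in weights:
        x = relu(x @ w)
    return x

sample = rng.standard_normal(4)      # one input vector
output = forward(sample, weights)    # final-layer activations
print(output.shape)                  # (3,)
```

Training such a network consists of adjusting the weights at every layer until the final output best matches the desired answer, which is what makes the process so computationally demanding.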
In view of this, Nvidia has unveiled a new deep learning computing platform, which the company claims will deliver a 10x performance boost on workloads compared with the previous generation released six months ago.
The new platform will be powered by a 2x memory boost to Nvidia’s flagship GPU, the Tesla V100, and a new GPU interconnect fabric called Nvidia NVSwitch. The company said NVSwitch enables up to 16 Tesla V100 GPUs to communicate simultaneously at a speed of 2.4 terabytes per second.
Also new is the Nvidia DGX-2, the first single server the company claimed is capable of delivering two petaflops of computational power. Nvidia said DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of data centre space, while being 60x smaller and 18x more power efficient.
Besides these new products, the Santa Clara, California-based company also announced what the company calls the biggest advance in computer graphics since the introduction of programmable shaders nearly two decades ago – the Nvidia Quadro GV100 GPU, powered by Nvidia’s new RTX technology.
Huang said Nvidia RTX – when combined with the powerful Quadro GV100 GPU – makes computationally intensive ray tracing possible in real time when running professional design and content creation applications.
The company claimed media and entertainment professionals can see and interact with their creations with correct light and shadows, and do complex renders up to 10x faster than with a CPU alone. Product designers and architects can create interactive, photoreal visualisations of massive 3D models – all in real time, Nvidia claimed.
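At the heart of ray tracing is a simple geometric test, repeated billions of times: does a ray of light hit an object, and where? The sketch below shows that core ray-sphere intersection test in plain Python; it is illustrative only, and bears no relation to how RTX hardware actually implements it.

```python
import math

# Ray-sphere intersection: the basic building block of ray tracing.
# Real-time pipelines evaluate enormous numbers of such tests per frame
# on dedicated hardware; this toy version just shows the mathematics.

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # Substituting the ray equation into the sphere equation yields a
    # quadratic in t, the distance along the ray.
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                        # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearest intersection
    return t if t >= 0 else None

# A ray from the origin along +z toward a sphere at z=5 with radius 1
hit = ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # 4.0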
Nvidia also made significant steps in its automotive business by introducing a cloud-based system for testing autonomous vehicles using photo-realistic simulation called Nvidia Drive Constellation.
In a media briefing before the keynote address, Danny Shapiro, senior director for Nvidia’s automotive division, said Drive Constellation is an autonomous vehicle simulator using virtual reality.
“We’re able to build virtual worlds in a data centre and drive billions of miles to test autonomous vehicle algorithms in the safety of a simulator,” he explained.
The system comprises two servers. The first runs Nvidia Drive Sim software, which simulates a self-driving vehicle’s sensors, such as cameras, lidar (light detection and ranging) and radar – all of which happens in the cloud.
“So we can simulate photorealistic feeds from all kinds of camera angles as well as lidar and radar,” he said.
The output of these sensors is then fed into a second server, which contains the Nvidia Drive Pegasus AI car computer that runs the complete autonomous vehicle software stack and processes the simulated data as if it were coming from the sensors of a car driving on the road.
This method is known as ‘hardware-in-the-loop,’ noted Shapiro.
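The closed loop described above can be illustrated with a toy sketch: one component synthesises sensor frames, another consumes them exactly as it would real sensor input. The function and class names below are hypothetical, invented for illustration; they are not the Drive Constellation API.

```python
from dataclasses import dataclass

# Toy sketch of the hardware-in-the-loop idea (names are hypothetical,
# not Nvidia's API): a simulator stage produces sensor frames and a
# driving-stack stage consumes them as if they came from real sensors.

@dataclass
class SensorFrame:
    camera: str
    lidar: str
    radar: str

def simulate_sensors(scenario: str, step: int) -> SensorFrame:
    """Stands in for the simulation server: renders virtual
    camera, lidar and radar feeds for a given scenario."""
    return SensorFrame(
        camera=f"{scenario}-camera-{step}",
        lidar=f"{scenario}-lidar-{step}",
        radar=f"{scenario}-radar-{step}",
    )

def drive_stack(frame: SensorFrame) -> str:
    """Stands in for the vehicle computer: it cannot tell simulated
    frames from real ones, so the full software stack is exercised."""
    return "brake" if "hazard" in frame.camera else "cruise"

# Closed loop: feed simulated frames to the stack, step after step
commands = [drive_stack(simulate_sensors("night-rain-hazard", t))
            for t in range(3)]
print(commands)  # ['brake', 'brake', 'brake']
```

Because the driving stack sees the same data format either way, the same software can be validated against billions of simulated miles before it ever touches a real road.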
Shapiro said this gives users of the system the ability to test in rain or snow, during day or night driving, simulate various hazard scenarios, and refine the algorithms repeatedly to make sure they are safe before deployment.
“This is going to be an extremely valuable tool for users,” he said.
Edwin Yapp reports from GTC 2018 in San Jose, at the invitation of Nvidia Corp. All editorials are independent. He is contributing editor to Digital News Asia and an executive consultant at Tech Research Asia, an advisory firm that translates technology into business outcomes for executives in Asia Pacific.
Related stories from GTC 2018:
GTC 2018: Nvidia aims to leverage on AI, machine learning
GTC 2018: Autonomous vehicle development should continue, says Nvidia boss
Nvidia to further develop future AI talents in Singapore