- GPU-accelerated Microsoft Cognitive Toolkit available
- On the Microsoft Azure cloud and on-premises with Nvidia DGX-1
Nvidia has announced a collaboration with Microsoft to accelerate AI in the enterprise, optimising the first purpose-built enterprise AI framework to run on the company's Tesla GPUs in Microsoft Azure or on-premises. Enterprises now have an AI platform that spans from their own data centres to Microsoft's cloud.
“We stand at the beginning of the next era, the AI computing era, powered by a new computing model,” said Jen-Hsun Huang, CEO and founder of Nvidia. “Our close collaboration with Microsoft means companies have the fastest AI platform, the most scalable solution with Nvidia DGX-1 and Tesla GPUs, and the best tools to transform any product or service.”
“We’re working hard to empower every organisation with AI, so that they can make smarter products and solve some of the world’s most pressing problems,” said Harry Shum, executive vice president of Microsoft’s Artificial Intelligence and Research Group. “By working closely with Nvidia and harnessing the power of GPU-accelerated systems, we’ve made Cognitive Toolkit and Microsoft Azure the fastest, most versatile AI platform. AI is now within reach of any business.”
This jointly optimised platform runs the new Microsoft Cognitive Toolkit (formerly CNTK) on Nvidia GPUs, including the Nvidia DGX-1 supercomputer, which uses Pascal GPUs with NVLink, and on Azure N-Series virtual machines, currently in preview. According to the two companies, this combination delivers unprecedented performance and ease of use for deep learning workloads.
As a result, companies can harness AI to make better decisions, bring new products and services to market faster and provide better customer experiences, and adoption is spreading across industries. In just two years, the number of companies Nvidia collaborates with on deep learning has grown 194x to more than 19,000. Industries such as healthcare, life sciences, energy, financial services, automotive and manufacturing are gaining deeper insight into vast amounts of data.
The Microsoft Cognitive Toolkit trains and evaluates deep learning algorithms, scaling efficiently in a range of environments — from a CPU, to GPUs, to multiple machines — while maintaining accuracy. Nvidia and Microsoft worked closely to accelerate the Cognitive Toolkit on GPU-based systems and in the Microsoft Azure cloud, offering startups and major enterprises:
- Versatility: The Cognitive Toolkit lets customers use one framework to train models on-premises with the Nvidia DGX-1 or other GPU-based systems, and then run those models in the cloud on Azure. This scalable, hybrid approach lets enterprises rapidly prototype and deploy intelligent features.
- Performance: When compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster. For example, the DGX-1 is 170x faster than CPU servers for the Cognitive Toolkit.
- Availability: Azure N-Series virtual machines are currently in preview for Azure customers and will be generally available in the near future. Azure GPUs can be used to accelerate both training and model evaluation, and with thousands of customers in the preview, businesses of all sizes are already running workloads on Tesla GPUs in Azure N-Series VMs.