- Seeks to enable companies across different verticals to apply AI in their workflows
- AI to go beyond the lab and into the field to increase the efficiency of work processes
Artificial intelligence (AI) is becoming mainstream in the consumer world, with applications such as voice interfaces, personal assistants and image tagging.
However, the implications of AI go beyond mainstream consumer use cases to fields including genomic sequencing analytics, climate research, medical science, autonomous driving and robotics.
These technology advancements and breakthroughs have been – and continue to be – made possible by deep learning.
“AI has been part of our roadmap for the past five years, but things have really started to accelerate in the past couple of years. Nvidia has come up with the right kind of technology to train the datasets,” said HPE Asia Pacific High Performance Computing head Laurent Herviou during an interview at the Nvidia AI conference in Singapore.
“We can clearly see that there is a lot of interest in the market for AI; it is essentially a paradigm shift in the IT industry. To meet the needs of our customers, we have jointly developed platforms and solutions that integrate AI with the latest Nvidia GPU accelerators,” he said.
HPE announced new purpose-built platforms and services capabilities to help companies simplify the adoption of AI, with an initial focus on a key subset of AI known as deep learning.
Inspired by the human brain, deep learning is typically implemented for challenging tasks such as image and facial recognition, image classification and voice recognition.
To take advantage of deep learning, enterprises need high-performance computing infrastructure to build and train learning models that can manage large volumes of data and recognise patterns in audio, images, video, text and sensor data.
Many organisations lack several integral requirements to implement deep learning, including expertise and resources; sophisticated and tailored hardware and software infrastructure; and the integration capabilities required to assimilate different pieces of hardware and software to scale AI systems.
Hewlett Packard Labs’ AI Research team has also built a deep learning cookbook, a set of tools that helps enterprise customers estimate the performance of various hardware platforms, characterise the most popular deep-learning frameworks, and select the ideal hardware and software stacks to fit their individual needs.
One use case included in the cookbook relates to the HPE Image Classification Reference Designs. These reference designs provide customers with infrastructure configurations optimised for training image classification models for various use cases, such as licence plate verification and biological tissue classification.
HPE also announced its AI Innovation Centres, designed for longer-term research projects. The innovation centres will serve as platforms for research collaboration between universities, enterprises on the cutting edge of AI research, and HPE researchers.
The centres, located in Houston, Palo Alto and Grenoble, will give researchers from academia and enterprises access to infrastructure and tools to continue their research initiatives.
In its mission to help make AI real for its customers, HPE offers flexible consumption services for HPE infrastructure, which avoid over-provisioning, increase cost savings, and scale up and down as needed to accommodate deep learning deployments.
“At the moment, everything is still at a very early stage for everyone. Everyone understands that this is a revolution. There are a lot of potential applications that need to be defined and conceptualised,” said Herviou.