As AI infrastructure evolves, it depends increasingly on high-performance computing resources. The Deep Neural Networks (DNNs) being built and trained today are more complex than ever before. These are the DNNs that make headlines for outclassing humans at tasks like image recognition and beating world-class Go players. Training these models is a hugely compute-intensive process, so much can be gained by reducing training times with optimal hardware and software platforms.

Penguin Computing provides specialized hardware and software technologies that accelerate DNN training and orchestrate DNN container deployment. We partner with Red Hat to deploy DNNs in containers using the Red Hat OpenShift Container Platform. Once a DNN model has been well trained and is ready for deployment, OpenShift can deploy it to a myriad of different hardware platforms around the world. When the new container arrives on an edge inferencing system, it is quickly loaded and begins making inferences on the data it receives. As the deployed DNN model sends inference results back to the training platform, the model can be refined and optimized, allowing for continuous improvement of inferencing models. With a container-based system, managing DNN models becomes easy, since you can update and roll back models as needed across hundreds or thousands of devices anywhere, as in the sketch below.
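To make that update-and-rollback workflow concrete, here is a minimal sketch using the official Kubernetes Python client (OpenShift is built on Kubernetes, so the same Deployment API applies). The deployment name, namespace, container name, and image tags are hypothetical placeholders; this is an illustrative sketch, not Penguin Computing's or Red Hat's published tooling.

```python
# Minimal sketch: update or roll back the model image behind a
# model-serving Deployment. Names and image tags are hypothetical.
from kubernetes import client, config


def set_model_image(deployment: str, namespace: str, image: str) -> None:
    """Point the model-serving Deployment at a new (or previous) model image."""
    config.load_kube_config()  # uses the local kubeconfig / `oc login` context
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        # Strategic merge patch: containers are matched by
                        # name, so only the image of "model-server" changes.
                        {"name": "model-server", "image": image}
                    ]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)


# Roll out a retrained model to the clusters running this Deployment...
set_model_image("dnn-inference", "edge-ai", "registry.example.com/dnn-model:v2")

# ...and roll back to the previous version if the new model misbehaves.
set_model_image("dnn-inference", "edge-ai", "registry.example.com/dnn-model:v1")
```

Because the Deployment controller performs a rolling update, pods running the old model are replaced gradually, and the same patch call with the previous tag restores the earlier model without rebuilding anything.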

Penguin Computing, a subsidiary of SMART Global Holdings, specializes in innovative Linux infrastructure, including Open Compute Project (OCP) and EIA-based high-performance computing (HPC) on-premises and in the cloud, AI, software-defined storage (SDS), and networking technologies. These are coupled with professional and managed services, including sys-admin-as-a-service, storage-as-a-service, and hosting, as well as highly rated customer support.