AI and deep learning promise applications across science, quantum-scale computation, and many discoveries yet to come. Machine learning and artificial intelligence have become a major industry focus over the last two years, and the top three manufacturers (AMD, NVIDIA, and Intel) are deploying their advanced hardware and software in supercomputers and a range of new applications. Intel, for its part, has been pushing updated support for its Gaussian & Neural Accelerator (GNA) toward the mainline Linux code.
Intel’s GNA coprocessor was introduced with Cannon Lake in 2018. Cannon Lake was built on Intel’s 10nm process and was the first of the company’s mobile parts to feature AVX-512. The architecture was short-lived, one of Intel’s shortest, and only a single mobile processor was ever released. Cannon Lake was superseded by Intel’s Ice Lake architecture, and the line was discontinued in early 2020.
GNA coprocessors are currently found in Gemini Lake, Elkhart Lake, Ice Lake, and other platforms. Their purpose is to offload low-power inference tasks such as speech recognition and noise reduction, freeing CPU resources for other processing at the same time. Over the last year, an Intel engineer has continuously updated and improved the GNA coprocessor support in Linux to help enable future technologies.

The GNA driver integration is now on its fourth revision, and Intel has reworked the code to fit the Linux Direct Rendering Manager (DRM) framework. DRM engineers insisted on this integration so that the GNA driver sits within the AI and DRM areas of the mainline Linux kernel and its subsystems.
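For context, drivers that live under the DRM subsystem expose character device nodes under /dev/dri, which is how user space typically discovers them. The short sketch below simply lists those nodes; whether and where a GNA driver would register one is an assumption made for illustration, not something the patches spell out.

    # List DRM device nodes; drivers integrated into the DRM subsystem normally
    # register card* and renderD* entries here. Which node a GNA driver would
    # expose is an assumption for illustration, not confirmed by the patches.
    import os

    dri_path = "/dev/dri"
    if os.path.isdir(dri_path):
        for node in sorted(os.listdir(dri_path)):
            print(os.path.join(dri_path, node))  # e.g. /dev/dri/card0, /dev/dri/renderD128
    else:
        print("No DRM device nodes found at", dri_path)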
The GNA software stack accepts models from TensorFlow, Caffe, PaddlePaddle, PyTorch, MXNet, Keras, and ONNX, which Intel optimizes for CPU, iGPU, GPU, VPU, and FPGA targets. Intel’s GNA library also plugs into the company’s OpenVINO toolkit, letting developers use the deep learning development kit to streamline development and deploy to multiple platforms at once. It offers performance and portability along with a broad support base, optimized APIs, and integrations, and it runs across Windows, macOS, and Linux. On Linux, version 3 of the library introduced updated support for new development and deep learning scenarios.
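As a rough illustration of how that toolkit is used, the sketch below relies on OpenVINO’s Python API to check whether a GNA device is exposed and, if so, compile a model for it. The model file name and the dummy input are placeholders, and exact API details can vary between OpenVINO releases.

    # Minimal OpenVINO sketch: detect the GNA device and compile a model for it.
    # "model.xml" is a placeholder for an OpenVINO IR model converted from
    # TensorFlow, ONNX, PyTorch, etc.; adjust paths and shapes to your setup.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GNA']

    # Fall back to the CPU plugin if no GNA device is exposed on this machine.
    device = "GNA" if "GNA" in core.available_devices else "CPU"

    model = core.read_model("model.xml")
    compiled = core.compile_model(model, device)

    # Run one inference with dummy data shaped like the model's first input.
    dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
    request = compiled.create_infer_request()
    request.infer([dummy])
    print("Output shape:", request.get_output_tensor(0).data.shape)

If a model or its data types are not supported by the GNA plugin, OpenVINO reports an error when the model is compiled, so falling back to the CPU device as shown is a common pattern during development.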
News sources: Phoronix, Intel