Hardware acceleration for artificial intelligence

Document Type: Essay

Subject Area: Computer Science

Document 1

As described in a periodical review by Yoshida, modern artificial intelligence has no hardware monoculture equivalent to the traditional x86 central processing unit (CPU), which has dominated the desktop computing chipset market for decades. This trend is driven by the adaptation of new artificial intelligence architectures to specialized roles in competitive cloud-to-edge ecosystems, such as computer vision and machine learning.

Figure 1: Routes to key markets for AI chips

The evolution of artificial intelligence accelerator chips

Artificial intelligence accelerator chips have evolved rapidly. This evolution can be better understood by focusing on the opportunities that have emerged in the marketplace and their associated challenges.

These include AI tiers, AI tasks, and AI tolerance. Comparative industry benchmarks have become a major factor in determining performance precision and the price-performance profile that will survive in a highly competitive market. The modern industry is moving toward workload-optimized artificial intelligence architectures. Such architectures allow users to adopt the most scalable, fastest, lowest-cost cloud platforms, hardware, and software to run a wide range of AI tasks, such as inference, training, development, and operationalization, in any tier.

State of the art

One of the most notable developments in AI tiers among state-of-the-art appliances is Nvidia's latest enhancement to the Jetson Xavier line of AI systems-on-chip (SoCs).
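The distinction between training and inference workloads mentioned above can be made concrete with a minimal sketch. This toy single-neuron model (all names and parameters here are illustrative, not tied to any specific platform) shows why the two phases stress hardware differently: training loops over gradient updates, while inference is a single forward pass.

```python
import numpy as np

# Minimal sketch: "training" fits weights iteratively; "inference"
# only applies them once. Accelerators optimize these phases differently.

rng = np.random.default_rng(0)

# Toy dataset: the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

# Training phase: gradient descent on a single sigmoid neuron.
w = rng.normal(size=2)
b = 0.0
for _ in range(5000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid activation
    grad = p - y                   # gradient of the cross-entropy loss
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

# Inference phase: one forward pass, no gradients needed.
def infer(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

preds = (infer(X) > 0.5).astype(int)
print(preds.tolist())  # the neuron has learned AND: [0, 0, 0, 1]
```

In production the training loop typically runs once in a data-center tier, while the cheap `infer` step is what gets deployed to an edge tier, which is why the two are benchmarked and priced separately.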

The Isaac software development kit recently released by Nvidia has been instrumental in building robotics algorithms that run on the dedicated hardware of new robots. To fuel the development of AI applications, hardware-friendly algorithms, domain-specific architectures, and emerging technologies are necessary. This paper addresses the challenges of AI acceleration and neuromorphic computing in three aspects: (1) solving the memory challenges for AI hardware acceleration, (2) solving the computing challenges for AI applications, and (3) novel architecture design for neuromorphic computing with emerging technologies.

Figure 4: AI hardware acceleration infrastructure (Mensor)

Solving the memory challenges for AI hardware acceleration

Various methods are available to boost energy efficiency and memory performance in modern AI hardware acceleration platforms.

These include a processing-in-memory (PIM) accelerator built on emerging nonvolatile devices (PRIME). The heterogeneous architecture HEMERA is considered the most suitable for achieving the required energy efficiency and improved performance. A runtime framework is also proposed to manage the acceleration of different workloads efficiently.

Novel architecture design for neuromorphic computing with emerging technologies

Neuromorphic computing borrows from the human brain's highly energy-efficient information-processing capability. The Scalable Energy-Efficient Architecture Lab (SEAL) has proposed a number of novel neuromorphic computing architectures enabled by emerging circuit technologies and devices. Tesla is also building an AI processor for its self-driving electric cars, and Apple is working on an AI processor designed to power Siri and Face ID.
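To make the brain-inspired processing model concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit that neuromorphic chips typically implement in silicon. The parameters are illustrative, not from any particular chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic building block
# most neuromorphic hardware implements. Event-driven spiking like this is
# the source of the energy efficiency noted above: work happens only when
# a spike fires, not on every clock cycle.

def lif_run(input_current, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    v = 0.0          # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in       # leak a little, then integrate input
        if v >= threshold:        # fire when the threshold is crossed...
            spikes.append(t)
            v = 0.0               # ...and reset the membrane potential
    return spikes

# A constant weak input makes the neuron fire periodically.
print(lif_run([0.3] * 12))  # → [3, 7, 11]
```

With no input the neuron stays silent and consumes no "work" at all, which is the property neuromorphic architectures exploit for energy efficiency.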

Common AI hardware accelerator technologies are discussed below.

GPU

The graphics processing unit (GPU) is a single-chip processor whose primary role is to manage and accelerate the performance of graphics and video. Features of GPUs include 2D and 3D graphics, polygon rendering, digital output for monitor display, and support for graphics-intensive software. In 1999, Nvidia released the GeForce 256 GPU, which could process 10 million polygons per second and comprised more than 22 million transistors, according to a journal report (3). In (1), it was also noted that the GeForce 256 is a single-chip processor with integrated transform, lighting effects, drawing and BitBLT support, triangle clipping, and rendering engines.

FPGA

FPGAs have been around since the 1980s. Unlike other chips, they are programmable on demand.
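The reason GPUs accelerate both graphics and AI is data parallelism: the same small operation applied independently to millions of elements (pixels, vertices, matrix entries). The vectorized sketch below mimics that programming model in NumPy; on an actual GPU, each element would map to its own hardware thread. The function name and values are illustrative only.

```python
import numpy as np

# GPUs gain their speed from data parallelism: one small operation
# applied independently to many elements. NumPy's vectorized form
# mimics the model; on a real GPU each element gets its own thread.

def brighten(pixels, gain=1.2):
    """Per-pixel brightness scale: the kind of embarrassingly
    parallel map a GPU executes in a single kernel launch."""
    return np.clip(pixels * gain, 0.0, 1.0)

# A tiny 2x2 "frame" of normalized pixel intensities.
frame = np.array([[0.1, 0.5],
                  [0.9, 0.2]])
out = brighten(frame)
print(out)  # 0.9 * 1.2 = 1.08 is clipped back to 1.0
```

The same shape of computation — an elementwise or matrix operation with no dependence between outputs — is exactly what neural-network inference and training consist of, which is why the GPU transferred so naturally from graphics to AI.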

FPGAs can be thought of as boards containing low-level chip primitives such as AND and OR gates, as noted in (1). These chips are configured using a hardware description language (HDL) and programmed to match the specifications of a dedicated application or task. FPGAs accelerate applications by serving as coprocessors alongside standard CPUs. However, the problem of interfacing FPGAs with CPUs is not yet completely solved, although hardware design has become comparatively easier through the use of high-level synthesis tools. Heterogeneous computing is all about complexity: it demands specialized skills in both hardware and software design across all platforms, as well as a balanced approach to ensure that each task is performed by the appropriate processor.
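The idea of configuring a chip out of low-level gate primitives can be sketched in miniature. Real FPGA designs are written in an HDL such as Verilog or VHDL; the Python simulation below only illustrates the concept, treating a gate netlist as data that "programs" the fabric before it is evaluated. The netlist format and circuit are invented for illustration.

```python
# FPGAs are configured by wiring together low-level gate primitives.
# Real designs use an HDL (Verilog/VHDL); this sketch only simulates
# the idea: a netlist of AND/OR gates described as data, then
# evaluated like a configured fabric.

GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b}

def evaluate(netlist, inputs):
    """Evaluate a gate netlist. Each entry maps an output wire to
    (gate_type, input_wire_a, input_wire_b)."""
    wires = dict(inputs)
    for out, (gate, a, b) in netlist:
        wires[out] = GATES[gate](wires[a], wires[b])
    return wires

# A 2-of-3 majority circuit built only from AND and OR gates:
# maj = (x AND y) OR ((x OR y) AND z)
majority = [
    ("p", ("AND", "x", "y")),
    ("q", ("OR", "x", "y")),
    ("r", ("AND", "q", "z")),
    ("maj", ("OR", "p", "r")),
]

print(evaluate(majority, {"x": 1, "y": 0, "z": 1})["maj"])  # → 1
```

Changing the `majority` list reconfigures the "chip" for a different task without touching `evaluate` — a loose analogue of reprogramming an FPGA on demand, as the paragraph above describes.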

Hardware acceleration in artificial intelligence applications has been spearheaded by technology giants such as Nvidia, Intel, Google, and Microsoft, among others. Various companies have developed their own hardware acceleration technologies, including GPUs, FPGAs, and ASICs. These technologies have proven useful in machine learning, language recognition, vision, and video processing. Efforts have also been made to build machines that can learn about their environment on their own, although most still draw on models of the human brain and on existing CPUs and processing environments. Merging some of these technologies, such as FPGAs, with the CPU remains a challenge.

References

Holzinger, Philipp, Marc Reichenbach, et al. "Heterogeneous Computing Utilizing FPGAs." Journal of Signal Processing Systems, 31 May 2018: 1-13.

Intel. "AI Inferencing for Computer Vision Solutions using OpenVINO™ Toolkit."
