Hardware Acceleration

Hardware Acceleration Definition

Hardware acceleration refers to the process by which an application offloads certain computing tasks to specialized hardware components within the system, achieving greater efficiency than is possible in software running on a general-purpose CPU alone.


What is Hardware Acceleration?

Hardware acceleration spans a spectrum from the full flexibility of general-purpose processors, such as CPUs, to the full efficiency of completely customized hardware, such as ASICs, with devices like GPUs and FPGAs sitting in between. Moving a compute-intensive function from software onto dedicated hardware can improve its efficiency by orders of magnitude. For example, visualization work may be offloaded onto a graphics card to enable faster, higher-quality playback of videos and games, while also freeing up the CPU to perform other tasks.

There is a wide variety of dedicated hardware acceleration systems. One common form is tethering hardware acceleration: when a device acts as a Wi-Fi hotspot, tethering operations are offloaded onto a Wi-Fi chip, reducing system workload and increasing energy efficiency. Hardware graphics acceleration, also known as GPU rendering, can work server-side, using buffer caching and modern graphics APIs to deliver interactive visualizations of high-cardinality data. AI hardware acceleration is designed for applications such as artificial neural networks, machine vision, and machine learning workloads, and is often found in the fields of robotics and the Internet of Things.

Systems often provide the option to enable or disable hardware acceleration. For instance, hardware acceleration is enabled by default in Google Chrome, but it can be turned off in the browser settings under “Use hardware acceleration when available” (a relaunch is required for the change to take effect). To determine whether hardware acceleration is working properly, developers may perform a browser hardware acceleration test, which can reveal compatibility issues.

The most common types of hardware used for acceleration include:

  • Graphics Processing Units (GPUs): originally designed for rendering images and graphics, GPUs are now used for calculations involving massive amounts of data, accelerating portions of an application while the rest continues to run on the CPU. The massive parallelism of modern GPUs allows users to process billions of records in near real time.
  • Field Programmable Gate Arrays (FPGAs): semiconductor integrated circuits, typically specified with a hardware description language (HDL), that allow the user to reconfigure most of the device’s electrical functionality after manufacturing. FPGAs can be used to accelerate parts of an algorithm, sharing the computation between the FPGA and a general-purpose processor.
  • Application-Specific Integrated Circuits (ASICs): integrated circuits customized for a particular purpose or application; because an ASIC performs only its one function, it can achieve higher speed and efficiency than general-purpose hardware. Maximum complexity in modern ASICs has grown to over 100 million logic gates.
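The offload pattern these accelerators share can be sketched in plain Python: the host program keeps control flow on the CPU and dispatches only the data-parallel kernel to whatever accelerator is present. This is an illustrative sketch under stated assumptions; the `MockGPU` class and `detect_accelerator` function are hypothetical stand-ins, not a real device API (a real system would use CUDA, OpenCL, or a vendor SDK).

```python
# Illustrative sketch of the CPU-offload pattern. All names here are
# hypothetical; real code would talk to a driver or vendor SDK.

def cpu_scale(values, factor):
    """Fallback path: run the kernel on the general-purpose CPU."""
    return [v * factor for v in values]

class MockGPU:
    """Stand-in for an accelerator that executes data-parallel kernels."""
    def run_kernel(self, values, factor):
        # On real hardware, each element would be handled by its own thread.
        return [v * factor for v in values]

def detect_accelerator():
    # A real implementation would probe installed devices/drivers;
    # this sketch always "finds" the mock device.
    return MockGPU()

def scale(values, factor):
    """Offload to the accelerator when one is present, else stay on the CPU."""
    gpu = detect_accelerator()
    if gpu is not None:
        return gpu.run_kernel(values, factor)
    return cpu_scale(values, factor)

print(scale([1, 2, 3], 10))  # [10, 20, 30]
```

The key design point is that the host keeps a CPU fallback: the accelerated path is an optimization, and the application still works when no accelerator is detected.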

What is Tethering Hardware Acceleration?

Tethering hardware acceleration refers to moving tethering traffic onto hardware via a direct path between the modem and peripherals, improving a device’s performance and decreasing its power consumption. Implementing this offload requires hardware that can transfer network packets directly between Wi-Fi/USB and the modem, bypassing the main processor. Tethering itself may be accomplished over Bluetooth, over wireless LAN, or by a physical cable.

Hardware Acceleration vs Software Acceleration

Software acceleration refers to the technique of implementing as many system functions as possible in software running on a general-purpose processor, optimizing the code itself to reduce a program’s execution time rather than delegating performance-critical functions to specialized hardware. While software acceleration remains advantageous for a limited number of special-purpose applications, the advent of contemporary tools such as field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) has lifted the restriction of hardware acceleration to fully fixed algorithms, making hardware acceleration advantageous for a wider variety of common, graphically intensive tasks.

When to Use Hardware Acceleration

Hardware acceleration is employed to improve application performance throughout a variety of fields, with applications including but not limited to:

  • Computer graphics via Graphics Processing Unit (GPU)
  • Digital signal processing via Digital Signal Processor
  • Analog signal processing via Field-Programmable Analog Array
  • Sound processing via sound card
  • Computer networking via network processor and network interface controller
  • Cryptography via cryptographic accelerator and secure cryptoprocessor
  • Artificial Intelligence via AI accelerator
  • In-memory processing via network on a chip and systolic array
  • Any given computing task via Field-Programmable Gate Arrays (FPGA), Application-Specific Integrated Circuits (ASICs), Complex Programmable Logic Devices (CPLD), and Systems-on-Chip (SoC)
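As a concrete, everyday instance of the cryptography entry above: Python’s `hashlib` module is backed by OpenSSL, which on many CPUs dispatches hashing and encryption to dedicated instructions (e.g., SHA extensions, AES-NI) when they are available. The calling code is identical either way; whether hardware acceleration is actually used depends on the CPU and the OpenSSL build, so this example only illustrates the transparency of the accelerated path.

```python
import hashlib

# Hash a message. OpenSSL may route this through hardware SHA instructions
# when the CPU supports them, entirely transparently to the caller.
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

This transparency is typical of acceleration in mature libraries: the application programs against a stable API, and the library (or OS driver) chooses the fastest available implementation at runtime.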

Does HEAVY.AI Offer a Hardware Acceleration Solution?

The HEAVY.AI platform is designed to overcome the scalability and performance limitations of legacy analytics tools when faced with the scale, velocity, and location attributes of today’s big datasets, offering GPU-accelerated big data exploration at the speed of thought.

HeavyDB harnesses the massive parallelism of modern GPUs to return SQL query results in milliseconds, allowing users to interactively query, visualize, and power data science workflows over billions of records. HeavyDB delivers this extreme big data analytics performance with a combination of native SQL support, rapid query compilation, query vectorization, and advanced three-tier memory management.