CPU: The Central Processing Unit (CPU) is the main chip in a computer, phone, TV, and similar devices; it executes instructions and dispatches work to the other components on the motherboard.
GPU: The Graphics Processing Unit (GPU) is a chip originally built to process graphics tasks. A general-purpose GPU (GPGPU) applies that same highly parallel hardware to general-purpose computing tasks that would otherwise be processed by the CPU.
FPGA: The Field Programmable Gate Array (FPGA) is also a silicon-based semiconductor device, but it is built around a matrix of configurable logic blocks (CLBs) connected through programmable interconnects.
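To make the CLB idea concrete, here is a minimal Python sketch (an illustration only, not real FPGA tooling) that models one logic block as a k-input lookup table (LUT): "programming" the device fills the table, after which any Boolean function of its inputs is evaluated by a single table read.

```python
from itertools import product

# Sketch of one configurable logic block modeled as a k-input lookup table.
# Configuring it (filling the table) plays the role of hardware programming.
class LUT:
    def __init__(self, k, func):
        self.k = k
        # Configuration step: record the output for every input combination.
        self.table = {bits: func(*bits) for bits in product((0, 1), repeat=k)}

    def __call__(self, *bits):
        # Evaluation after configuration is just a single table lookup.
        return self.table[bits]

# "Program" a 2-input LUT to behave as an XOR gate.
xor = LUT(2, lambda a, b: a ^ b)
```

Real CLBs also contain flip-flops and carry logic, and the programmable interconnect routes signals between many such blocks; this sketch covers only the lookup-table part.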
ASIC: An Application-Specific Integrated Circuit (ASIC) is a silicon chip designed for one specific logic function, which is fixed in the hardware design. The computing capability and efficiency of an ASIC can be customized to the algorithm's requirements, but cannot be changed afterward.
Differences between CPU, GPU, FPGA, and ASIC
Based on the preceding information, we have a preliminary understanding of CPU, GPU, FPGA, and ASIC.
The following describes the differences between CPU, GPU, FPGA, and ASIC in terms of computing performance, power consumption, flexibility, latency, and application scenarios.
Computing Performance
CPU: The CPU uses the von Neumann architecture, in which its core stores programs and data in the same memory. Its serial and parallel computing capability depends on the number of cores, so the CPU's computing performance is relatively low.
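As a rough illustration of the von Neumann point above, the following toy interpreter (a hypothetical three-instruction machine, not any real ISA) keeps instructions and data in one shared memory and executes them strictly one at a time:

```python
# Toy von Neumann machine: one memory holds both the program and the data,
# and the fetch-decode-execute loop is inherently serial.
def run(memory):
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]        # fetch from the same memory that holds data
        pc += 1
        if op == "LOAD":            # decode and execute, one instruction at a time
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            return acc

# Program in cells 0-2, data in cells 4-5 (cell 3 unused).
program = [("LOAD", 4), ("ADD", 5), ("HALT", 0), None, 2, 3]
# run(program) → 5
```

Each step depends on the previous one, which is why per-core execution is serial and overall CPU throughput scales mainly with the core count.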
GPU: The GPU also uses the von Neumann architecture. However, more than 80% of a GPU's die area is occupied by ALU computing units, versus less than 20% on a CPU. GPU computing performance is therefore much higher, especially for floating-point computation.
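The kind of workload where that ALU-heavy layout pays off is data-parallel floating-point math, where the same operation is applied independently to every element. The sketch below (plain serial Python, purely illustrative) shows SAXPY (a·x + y): on a GPU each element could be handled by a separate ALU at once, whereas a CPU with a handful of cores works through the loop mostly serially.

```python
# SAXPY (a*x + y), a textbook data-parallel floating-point kernel.
# Every output element depends only on its own inputs, so a GPU can map
# each iteration to its own ALU; here it runs serially for illustration.
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]
```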
FPGA: The FPGA uses neither instructions nor shared memory; it implements software algorithms directly in hardware through programming, which raises computing efficiency by an order of magnitude over both the CPU and GPU. Because there is no memory mechanism, however, the workloads an FPGA can handle on its own are limited, and a CPU must be coordinated with it for some complex computations.
ASIC: Currently, ASIC chips are developed from CPLD and FPGA implementations, so they offer computing performance comparable to FPGAs. An ASIC can be understood as an FPGA whose algorithm logic has been compiled and fixed for a specific scenario.
Overall, in terms of computing performance: CPU < GPU < FPGA ≈ ASIC.
Power Consumption
CPU: When processing a large number of complex logic operations, the CPU must invoke different instruction sets and store data in its cache. As a result, the CPU is inevitably large in size and high in power consumption.
GPU: Like the CPU, the GPU still accesses a cache during processing. In addition, because the GPU has more cores, higher parallel computing capability, and higher speed, it is larger and consumes more power than the CPU. A GPU generally requires a dedicated fan for heat dissipation.
FPGA: Because the FPGA has no instructions and needs no shared memory system, the FPGA chip is small in size and low in power consumption.
ASIC: Similar to the FPGA, the ASIC has only a specific logic algorithm and does not require a shared memory architecture. Therefore, the ASIC consumes less power.
Overall, in terms of power consumption: ASIC < FPGA < CPU < GPU.
Flexibility
CPU: The CPU is positioned as a general-purpose processor, so it offers good flexibility; it remains the mainstream processor.
GPU: Initially, GPUs were mainly used for graphics computing. With each generation, however, they have moved beyond graphics processing to complex general-purpose computation. The GPU ecosystem is now mature; GPUs are the mainstream heterogeneous computing component and offer good flexibility.
FPGA: The FPGA is a semi-customized chip that must be programmed (developed) anew before each use, so its flexibility is limited.
ASIC: As mentioned earlier, an ASIC can be understood as an FPGA on which the logic algorithm has already been fixed. An ASIC is therefore a fully customized chip that applies only to specific scenarios, so its flexibility is poor.
Overall, in terms of flexibility: ASIC < FPGA < GPU ≈ CPU.
Latency
CPU: During computation, the CPU must fetch a large number of instructions and cache data, so its latency is high.
GPU: Similar to the CPU, the GPU also fetches many instructions and caches data during computing. However, because the GPU has faster cache access, its latency is lower than that of the CPU.
FPGA: The FPGA does not need to fetch instructions or cache data when performing operations. Therefore, FPGA latency is an order of magnitude lower than that of the CPU.
ASIC: ASICs are similar to FPGAs in that they do not need to fetch instructions or cache data when performing operations. Therefore, ASIC latency is likewise an order of magnitude lower than that of the CPU.
Overall, in terms of latency: ASIC ≈ FPGA < GPU < CPU.
Application Scenarios
CPU: The CPU processes complex logical operations and is mainly applied to traditional data center servers.
GPU: GPUs have excellent graphics processing capabilities and are mainly used in image classification, Safe City, and autonomous driving.
FPGA: FPGAs can implement computing in specific scenarios based on algorithm logic. Therefore, FPGAs are mainly used in scenarios such as deep learning and big data analysis.
ASIC: ASIC is mainly used for data inference, assisted driving, and AI synthesis.
Summary
The following table summarizes the differences between CPUs, GPUs, FPGAs, and ASICs.
Type | CPU | GPU | FPGA | ASIC |
---|---|---|---|---|
Flexibility | General-purpose | General-purpose | Semi-customized | Fully customized |
Computing performance | Low | Middle | High | High |
Power consumption | Middle | High | Low | Low |
Latency | High | Middle | Low | Low |
Cost | High | High | Middle | Low |
Application scenarios | Wide scope ● Traditional data centers ● PCs | Wide scope ● Image classification ● Safe City | Medium scope ● Deep learning ● Big data analysis | Narrow scope ● Data inference ● Assisted driving |
The original text of this article comes from huawei.com.