gpu

A specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images intended for output to a display device.

GPUs are designed to perform parallel processing, making them highly efficient for tasks involving large datasets and complex computations. Unlike CPUs, which are optimized for general-purpose computing and sequential tasks, GPUs excel at handling the matrix operations and parallel workloads inherent in graphics rendering, machine learning, and scientific simulations. They consist of thousands of smaller cores, allowing them to process multiple operations simultaneously, significantly boosting performance for computationally intensive applications.
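To make the data-parallel pattern concrete, here is a minimal Python sketch of a per-pixel operation, the kind of work a GPU runs as one thread per pixel across its thousands of cores. The tiny "image" and the `brighten` helper are invented for illustration; the point is that no pixel depends on any other, so sequential and parallel execution give identical results.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel, gain=1.5):
    """Scale one RGB pixel, clamping each channel to 255.
    Each call is independent -- exactly the kind of work a GPU
    assigns to one thread per pixel."""
    return tuple(min(255, int(c * gain)) for c in pixel)

# A tiny stand-in "image": a flat list of RGB tuples.
image = [(10, 20, 30), (200, 180, 160), (90, 90, 90)]

# Sequential (CPU-style) and parallel (GPU-style) passes agree
# because no pixel depends on any other.
sequential = [brighten(p) for p in image]
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(brighten, image))

print(sequential == parallel)  # True
```

A real GPU pushes this idea to millions of pixels per frame, with the hardware scheduling the independent computations across its cores.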

In the context of AI infrastructure, GPUs are crucial for training and running machine learning models. Their parallel processing capabilities enable faster model training, allowing researchers and engineers to iterate more quickly and deploy models more efficiently. The demand for GPUs has surged with the growth of AI, leading to advancements in GPU technology and the development of specialized hardware optimized for AI workloads.
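The workhorse operation behind that training workload is matrix multiplication, which parallelizes well because every output cell is an independent dot product. A minimal pure-Python sketch (a GPU would compute many of these cells simultaneously instead of looping):

```python
def matmul(a, b):
    """Multiply matrices a (m x k) and b (k x n).
    Each output cell out[i][j] is an independent dot product,
    so a GPU can assign one thread per cell and compute them
    all at once; here we simply loop over the cells."""
    m, k, n = len(a), len(b), len(b[0])
    out = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            out[i][j] = sum(a[i][p] * b[p][j] for p in range(k))
    return out

# 2x2 example; ML frameworks dispatch this same kernel to a GPU
# at vastly larger sizes during training.
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```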

```mermaid
graph LR
  Center["gpu"]:::main
  Rel_asic["asic"]:::related -.-> Center
  click Rel_asic "/terms/asic"
  Rel_asus_rog["asus-rog"]:::related -.-> Center
  click Rel_asus_rog "/terms/asus-rog"
  classDef main fill:#7c3aed,stroke:#8b5cf6,stroke-width:2px,color:white,font-weight:bold,rx:5,ry:5;
  classDef pre fill:#0f172a,stroke:#3b82f6,color:#94a3b8,rx:5,ry:5;
  classDef child fill:#0f172a,stroke:#10b981,color:#94a3b8,rx:5,ry:5;
  classDef related fill:#0f172a,stroke:#8b5cf6,stroke-dasharray: 5 5,color:#94a3b8,rx:5,ry:5;
  linkStyle default stroke:#4b5563,stroke-width:2px;
```

🧒 Explain Like I'm 5

If a [CPU](/en/terms/cpu) is a single genius mathematician who can solve any complex problem one by one, a GPU is a stadium full of thousands of people who can all solve simple addition problems at the exact same time. This makes the GPU perfect for drawing millions of pixels on a screen.

🤓 Expert Deep Dive

GPUs utilize a SIMT (Single Instruction, Multiple Threads) architecture. They maximize memory bandwidth to handle the massive data throughput required for texture sampling and AI tensor operations. Modern 'Unified Memory' architectures (like Apple Silicon) allow GPUs to access the same high-speed memory as the CPU directly.
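To make the SIMT model concrete, here is a hedged Python sketch simulating one "warp": a single instruction is issued once and every lane executes it in lockstep on its own operand. The warp size and lane data are made up for illustration (real NVIDIA warps are 32 threads).

```python
WARP_SIZE = 8  # real warps are 32 threads; 8 keeps the demo small

def execute_warp(instruction, lane_data):
    """Simulate SIMT execution: one instruction (a function) is
    fetched and decoded once, then applied by every lane to its
    own data in lockstep."""
    return [instruction(x) for x in lane_data]

# Each lane holds different data but runs the *same* instruction.
lanes = list(range(WARP_SIZE))  # [0, 1, ..., 7]
result = execute_warp(lambda x: x * x, lanes)
print(result)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

This is also why branch divergence is costly on real hardware: if lanes in a warp take different branches, the single shared instruction stream must execute both paths while masking off the inactive lanes.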