Intel may be developing a plan for its future GPUs that would allow it to compete more effectively with AMD and Nvidia, according to hints from a new patent on a chiplet design.
Future graphics processing units (GPUs) could be divided into several chiplets instead of being designed as a single monolithic chip.
Intel is considering an innovative approach for its future GPUs: a non-monolithic architecture composed of independent chiplets. The idea surfaced with the recent granting of a patent to Intel for a "disaggregated GPU architecture," which, if brought to market, would be the first commercially available GPU architecture built from logical chiplets.
Traditionally, GPUs on the market are monolithic: all components are integrated on a single graphics die. A disaggregated architecture instead splits that die into several chiplets, which could offer significant advantages. Even so, the approach is unlikely to appear in the next generation of Arc graphics cards, known as Battlemage, expected in early 2025.
A disaggregated GPU promises greater design flexibility and improved energy efficiency, both crucial for high-performance graphics cards that draw large amounts of power. The challenge lies in providing interconnects between the chiplets fast enough that splitting a monolithic chip does not degrade performance.
Although other companies such as AMD and Nvidia have explored chiplet designs, their progress has been uneven. It is also unclear how much Intel will invest in its discrete Arc GPU line, as the company has said little on the subject. As the technology matures, chiplet designs are likely to become more common, not only from Intel but also from its competitors.