GPUs, originally designed for rendering three-dimensional graphics, have proven highly effective at accelerating parallel computing. In the era of rapidly developing artificial intelligence systems, they are in great demand. AMD CEO Lisa Su expects the situation to begin changing within five years, with GPUs no longer the only components finding worthwhile applications in AI.

Image Source: AMD

She shared her thoughts with The Wall Street Journal. “GPUs are now the architecture of choice for large language models because they are very efficient at parallel computing, but they offer only limited programming freedom. Do I believe they will still be the architecture of choice in five-plus years? I think everything will change,” said the AMD CEO. In her view, GPUs will not be abandoned within five years, but AI components of a different kind will steadily gain popularity. More targeted chips will be smaller, cheaper and more energy efficient.

Examples of such chips already exist. Cloud giants like AWS (Amazon) and Google develop them for their own needs, deploying them in common workloads. GPUs remain the more versatile computing tools, but the constant push for higher performance makes it difficult to optimize their power consumption and reduce their cost. Broadcom is already helping Google create custom chips, and there will only be more such examples.

For developers of specialized accelerators, it is important to read market conditions and find the right balance between programming flexibility and chip efficiency, as well as to ensure compatibility with the software ecosystem in use. Specializing a chip too narrowly, too soon, could cost its developer dearly. Lisa Su added that when it comes to computing, there are no one-size-fits-all solutions. According to her, other architectures will coexist with GPUs in the future; it will all depend on how the models evolve.
