As AMD tries to increase its presence in the AI market, the manufacturer is not only producing hardware for these workloads but has also decided to develop language models. The result is the announcement of its first small language model, AMD-135M.

Image Source: AMD

The new AMD-135M small language model belongs to the Llama family and is aimed at private business deployments. It is unclear whether the model has anything to do with AMD's recent acquisition of the startup Silo AI (the deal is not yet finalized and still has to be approved by various regulators, so probably not). Either way, it is a clear step toward meeting specific customer needs with a pre-trained model built by AMD on its own hardware.

A distinguishing feature of AMD’s language model is its use of speculative decoding. In this technique, a smaller “draft model” quickly generates several candidate tokens, which are then passed to a larger, more accurate “target model” that verifies them in a single forward pass and accepts or corrects them. On the one hand, this approach yields several tokens per target-model pass and so speeds up generation; on the other hand, it increases power consumption because of the extra data movement.
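To make the mechanics concrete, here is a minimal, self-contained sketch of the greedy accept/reject loop behind speculative decoding. The `draft` and `target` callables are toy stand-ins for real language models (the names and the toy arithmetic are illustrative assumptions, not AMD’s implementation); in a production system the target model would score all candidate positions in one batched forward pass.

```python
# Minimal sketch of greedy speculative decoding.
# "draft" and "target" are toy stand-ins for real language models:
# each maps a token sequence to the next token (greedy decoding).

def speculative_decode(draft, target, prompt, k=4, max_new=16):
    """The small draft model proposes k tokens per round; the large
    target model verifies them and keeps the agreeing prefix."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # 1. Draft model proposes k candidate tokens autoregressively.
        proposal, ctx = [], seq[:]
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target model checks candidates in order; in a real system
        #    all k positions are scored in one batched forward pass.
        accepted, ctx = [], seq[:]
        for t in proposal:
            want = target(ctx)
            if want == t:
                accepted.append(t)     # draft guessed right: keep it
                ctx.append(t)
            else:
                accepted.append(want)  # first mismatch: take the target's
                break                  # token and discard the rest
        seq.extend(accepted)
    return seq[:len(prompt) + max_new]

# Toy usage: the target counts up mod 10; the draft mostly agrees.
target = lambda ctx: (ctx[-1] + 1) % 10
draft = lambda ctx: (ctx[-1] + 1) % 10 if ctx[-1] != 5 else 0
print(speculative_decode(draft, target, [0], k=4, max_new=8))
```

When the draft agrees with the target, each verification round commits several tokens at once, which is where the throughput gain comes from.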

The AMD language model comes in two versions: AMD-Llama-135M and AMD-Llama-135M-code. Each is optimized for specific tasks, with inference throughput raised via speculative decoding. The AMD-Llama-135M base model was trained from scratch on 670 billion tokens of general data, a process that took six days on four eight-way AMD Instinct MI250 nodes.
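For readers who want to try the base model, a minimal sketch follows using Hugging Face transformers; the repo id amd/AMD-Llama-135m is an assumption for illustration, so check AMD’s model card for the actual identifier.

```python
# Sketch: loading the base model with Hugging Face transformers.
# The repo id below is an assumed identifier, not confirmed by the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "amd/AMD-Llama-135m"  # assumed repo id; adjust if it differs
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tok("The quick brown fox", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```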

The AMD-Llama-135M-code model, in turn, was refined with additional training on a further 20 billion tokens focused specifically on program code. This fine-tuning took four days on the same set of AMD hardware. The company believes that further optimization of its models could yield additional gains in performance and efficiency.
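Given its size, one plausible role for the code variant is as the draft model in speculative decoding. The sketch below uses the assisted-generation path in Hugging Face transformers (its implementation of this technique, via the assistant_model argument) to pair it with a larger code model; both repo ids are assumptions for illustration, and assisted generation requires the draft and target to share a compatible tokenizer.

```python
# Sketch: using the small code model as the draft ("assistant") model for
# speculative decoding against a larger target via transformers'
# assisted generation. Both repo ids are assumed, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "codellama/CodeLlama-7b-hf"  # assumed larger target model
draft_id = "amd/AMD-Llama-135m-code"     # assumed draft-model repo id

tok = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id)
draft = AutoModelForCausalLM.from_pretrained(draft_id)

inputs = tok("def fibonacci(n):", return_tensors="pt")
# assistant_model enables draft-then-verify generation in transformers.
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```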
