In 2015, Amazon acquired Annapurna Labs, whose engineers are now developing the company's own computational accelerators for artificial-intelligence workloads. This came to light this week when the head of the relevant AWS division spoke with journalists during a visit to a laboratory in Texas.

Image source: AWS

As Reuters notes, about six engineers at this research center are working on specialized chips for Amazon's needs. Their work is kept strictly confidential, but AWS already has prototypes of the accelerators running in a server rack in the laboratory. The effort is led by Rami Sinno, who previously worked at Arm, Calxeda, Freescale Semiconductor, Marvell and Intel.

Amazon intends to reduce its dependence on Nvidia, which has virtually monopolized the market for compute accelerators; for a major cloud provider like AWS, such an initiative could yield significant savings in the long run. Microsoft and Alphabet (Google) are widely understood to be developing their own accelerators as well. According to Rami Sinno, AWS customers are increasingly demanding cheaper alternatives to Nvidia's solutions.

According to AWS Vice President of Compute and Networking David Brown, the company believes it can improve the price-performance ratio of its compute components by 40% to 50% compared with Nvidia's offerings. AWS currently controls almost a third of the cloud-services market, with Microsoft Azure accounting for roughly 25%. Amazon has already deployed 80,000 of its proprietary AI-acceleration chips across its infrastructure. Its related Graviton processors number 250,000 in service, but they lack specialized features for accelerating AI workloads.
