Not all developers of artificial intelligence systems are happy with the industry's heavy dependence on Nvidia hardware. Apple, for example, develops its own systems using both its M-family processors and accelerators from alternative suppliers, including Amazon, as representatives of the two companies acknowledged this week.

Image source: Amazon

A rare public account of cooperation between technology giants at this level came at the annual AWS re:Invent conference held by Amazon's cloud division. Apple, as an AWS customer, was represented on stage by Benoit Dupin, its senior director of machine learning and artificial intelligence, who worked at Amazon until 2014.

"We have a strong partnership and a robust infrastructure to serve our customers around the world," Dupin said of Apple's collaboration with AWS. He added that Apple has relied on AWS infrastructure for more than ten years to run the Siri voice assistant as well as the Apple Maps and Apple Music services. Apple built its search services on Amazon's Inferentia and Graviton chips, and the move to them delivered a 40% increase in efficiency.

Apple also intends to pre-train its own language models on Amazon's Trainium2 chips, which is expected to improve the efficiency of the process, in both throughput and energy consumption, by up to 50%. According to AWS representatives, Apple was among the first customers to begin testing Trainium processors. It is known from open sources, however, that Apple also used Google's Tensor Processing Units (TPUs) to train its Apple Intelligence system.

Unlike the makers of leading chatbots such as OpenAI's ChatGPT, Apple does not build its artificial intelligence on large clusters of Nvidia accelerators. Instead, it uses the chip inside an iPhone, iPad, or Mac to perform as many tasks as possible locally on the device, and sends more complex requests to Apple servers built on its own M-series chips.
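To make that hybrid pattern concrete, here is a minimal sketch in Swift of how such on-device/server routing could look. Everything in it is hypothetical: the `OnDeviceModel` protocol, the complexity heuristic, and the server endpoint are illustrations of the general idea, not Apple's actual APIs or implementation.

```swift
import Foundation

// Hypothetical sketch of a hybrid on-device/server inference router.
// None of these types reflect Apple's real APIs; they only illustrate
// the pattern described above: handle what you can locally, and
// escalate the rest to a server running on the vendor's own silicon.

struct InferenceRequest {
    let prompt: String
    // Stand-in for whatever heuristic decides a request is too complex
    // for the local model (here, simply the prompt length).
    var estimatedComplexity: Int { prompt.count }
}

protocol OnDeviceModel {
    func canHandle(_ request: InferenceRequest) -> Bool
    func respond(to request: InferenceRequest) -> String
}

struct TinyLocalModel: OnDeviceModel {
    let complexityThreshold = 200  // hypothetical cutoff

    func canHandle(_ request: InferenceRequest) -> Bool {
        request.estimatedComplexity < complexityThreshold
    }

    func respond(to request: InferenceRequest) -> String {
        "local answer for: \(request.prompt)"
    }
}

struct HybridRouter {
    let local: OnDeviceModel
    let serverURL: URL  // placeholder endpoint, not a real service

    func respond(to request: InferenceRequest) async throws -> String {
        // Prefer the on-device model whenever it can handle the request.
        if local.canHandle(request) {
            return local.respond(to: request)
        }
        // Otherwise forward the prompt to the server-side model.
        var urlRequest = URLRequest(url: serverURL)
        urlRequest.httpMethod = "POST"
        urlRequest.httpBody = request.prompt.data(using: .utf8)
        let (data, _) = try await URLSession.shared.data(for: urlRequest)
        return String(decoding: data, as: UTF8.self)
    }
}
```

The design choice the sketch highlights is that the routing decision lives entirely on the device: short, simple requests never leave it, which is the privacy and latency argument usually made for this architecture.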
