Meta’s New Custom AI Chip Aims to Accelerate Ranking and Recommendation Models
Meta has unveiled its new custom AI chip, the Meta Training and Inference Accelerator (MTIA), designed to balance compute power, memory bandwidth, and memory capacity for serving ranking and recommendation models. The chip is part of Meta's broader custom silicon efforts, which also include exploring other hardware systems.
Alongside these hardware innovations, Meta has invested heavily in the software needed to harness its infrastructure efficiently. The company is also acquiring AI chips from vendors such as Nvidia, with plans to accumulate the equivalent of 600,000 H100 chips this year alone.
The MTIA chip, manufactured by Taiwan Semiconductor Manufacturing Co (TSMC) on its 5nm process, delivers three times the performance of Meta's first-generation processor. It is already deployed in Meta's data centers, actively serving AI applications, and the company has several programs underway to expand MTIA's scope, including support for generative AI workloads.