According to sources cited by Reuters, Meta, the company behind Facebook, is testing its first in-house chip for training AI models, part of its Meta Training and Inference Accelerator (MTIA) program. The custom processor is a significant development for Meta, which aims to reduce its reliance on external chip suppliers such as Nvidia.
The training chip is a dedicated accelerator built specifically for AI workloads, which should make it more energy-efficient than the general-purpose GPUs commonly used for AI processing. The chips are being manufactured in partnership with Taiwan's TSMC. Meta has already deployed a small number of them and plans to ramp up production if the current testing phase, which began after the chip's first tape-out, proves successful.
For those unfamiliar, tape-out is the final stage of chip design, in which the verified design is sent to a fabrication plant to produce sample silicon. A tape-out can cost tens of millions of dollars and take three to six months. If the samples test successfully, the chip proceeds to mass production; otherwise, the design must be revised and taped out again.
Meta has been working on the MTIA line for several years, with some setbacks along the way. Last year, however, the company began using an MTIA inference chip in the AI systems that power content recommendations on Facebook and Instagram. Meta plans to begin using its in-house training chips by 2026, initially for content recommendation systems and eventually for generative AI products such as chatbots.
Just last week, Meta said that its first inference chip, now in production use for content recommendations, has been a success after years of development challenges. The company forecasts total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditures, much of it directed at AI infrastructure.
The arrival of DeepSeek's low-cost, high-performance models has raised questions about the return on the enormous sums US tech giants are spending on GPUs. It will be interesting to see how Meta's custom training and inference chips fare in terms of efficiency and cost savings. If the in-house silicon delivers, Meta stands to advance its AI capabilities while loosening its dependence on external hardware.