Microsoft Unveils First AI Chip, Maia 100, and Cobalt CPU

Microsoft on Wednesday announced the long-awaited Azure Maia 100, a custom cloud-computing chip it says is well suited to workloads such as generative AI, at Ignite, its annual developer conference.

According to the company, the Maia 100 is the first in a planned line of Maia AI accelerators. Microsoft describes it as “one of the largest chips on 5-nanometer process technology”: it packs 105 billion transistors, and its smallest features measure five billionths of a meter.

The company also unveiled the Azure Cobalt 100, its first internally developed microprocessor for cloud computing. Like the Maia, it is the first in a planned line of processors. It is built on the ARM instruction-set architecture, which ARM Holdings licenses to many companies, including Apple and Nvidia.

Microsoft says the Cobalt 100, a 64-bit processor with 128 compute cores on die, cuts power consumption by 40% compared with the other ARM-based chips Azure has been using. The company says Cobalt already powers Azure SQL and Microsoft Teams.

According to Microsoft, both chips, Maia 100 and Cobalt 100, are fed by 200 gigabit-per-second networking and can deliver 12.5 gigabytes of data throughput per second.
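As a rough sanity check on those figures, gigabits convert to gigabytes by dividing by 8. A minimal sketch (the relationship between the two quoted numbers is an assumption on our part; the article does not say how the 12.5 GB/s throughput figure was measured):

```python
def gbps_to_gigabytes_per_sec(gigabits_per_second: float) -> float:
    """Convert a link speed in gigabits/s to gigabytes/s (8 bits per byte)."""
    return gigabits_per_second / 8

# 200 Gb/s of raw networking bandwidth corresponds to 25 GB/s.
print(gbps_to_gigabytes_per_sec(200))   # 25.0
# The quoted 12.5 GB/s matches 100 Gb/s, i.e. half the raw bandwidth --
# plausibly a per-direction or effective-throughput figure, though the
# article does not specify.
print(gbps_to_gigabytes_per_sec(100))   # 12.5
```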

Microsoft is the last of the three major cloud vendors to offer custom silicon for AI and the cloud. Google’s Tensor Processing Unit, or TPU, introduced in 2016, kicked off the race to bespoke silicon; Amazon soon followed with a slate of chips including Inferentia, Trainium, and Graviton.

Rumors about Microsoft’s chip efforts have circulated for years, fueled by occasional disclosures such as the leak of a company planning document last summer.

Microsoft made clear that it still partners with Nvidia and AMD to provide chips for Azure. Next year, it plans to add Nvidia’s latest “Hopper” GPU, the H200, as well as AMD’s rival GPU, the MI300.

In addition to supporting apps such as GitHub Copilot, Microsoft’s chips will power generative AI from OpenAI, the AI company in which Microsoft has invested $11 billion in exchange for exclusive rights to applications such as ChatGPT and GPT-4.

Last week in San Francisco, during OpenAI’s developer conference, Satya Nadella, the CEO of Microsoft, promised to provide “the best compute” for OpenAI “as you aggressively push forward on your roadmap.”

At the same time, OpenAI and Microsoft are working to persuade companies to adopt generative AI. Last month, Nadella told Wall Street that Microsoft’s generative AI business is growing rapidly: the number of paying GitHub Copilot users rose 40% in the September quarter compared with the previous quarter.

According to Nadella, the company has over 1 million paid Copilot users across more than 37,000 organizations that subscribe to Copilot for Business, with significant traction outside the US.

Microsoft also revealed at Ignite that it is extending Copilot to Azure. The company said Copilot for Azure will give system administrators an “AI companion” that will “help generate deep insights instantly.”
