Qualcomm announced Monday that it will launch new artificial intelligence accelerator chips, marking new competition for Nvidia, which has so far dominated the market for AI semiconductors.
The AI chips are a shift for Qualcomm, which has so far focused on semiconductors for wireless connectivity and mobile devices, not massive data centers.
Qualcomm said that both the AI200, which will go on sale in 2026, and the AI250, planned for 2027, can come in a system that fills up a full, liquid-cooled server rack.
Qualcomm is matching Nvidia and AMD, which offer their graphics processing units, or GPUs, in full-rack systems that allow as many as 72 chips to act as one computer. AI labs need that computing power to run the most advanced models.
Qualcomm’s data center chips are based on the AI parts in Qualcomm’s smartphone chips, called Hexagon neural processing units, or NPUs.
“We first wanted to prove ourselves in other domains, and once we built our strength over there, it was pretty easy for us to go up a notch into the data center level,” Durga Malladi, Qualcomm’s general manager for data center and edge, said on a call with reporters last week.
Qualcomm’s entry into the data center world marks new competition in the fastest-growing market in technology: equipment for new AI-focused server farms.
Nearly $6.7 trillion in capital expenditures will be spent on data centers through 2030, with the majority going to systems based around AI chips, according to a McKinsey estimate.
The industry has been dominated by Nvidia, whose GPUs hold over 90% of the market so far and whose sales have pushed the company to a market cap of over $4.5 trillion. Nvidia’s chips were used to train OpenAI’s GPTs, the large language models used in ChatGPT.
But companies such as OpenAI have been looking for alternatives, and earlier this month the startup announced plans to buy chips from the second-place GPU maker, AMD, and potentially take a stake in the company. Other companies, such as Google, Amazon and Microsoft, are also developing their own AI accelerators for their cloud services.
Qualcomm said its chips are focused on inference, or running AI models, rather than training, which is how labs such as OpenAI create new AI capabilities by processing terabytes of data.
The chipmaker said that its rack-scale systems would ultimately cost less to operate for customers such as cloud service providers, and that a rack uses 160 kilowatts, comparable to the high power draw of some Nvidia GPU racks.
Malladi said Qualcomm would also sell its AI chips and other parts separately, especially for clients such as hyperscalers that prefer to design their own racks. He said other AI chip companies, such as Nvidia or AMD, could even become clients for some of Qualcomm’s data center parts, such as its central processing unit, or CPU.
“What we have tried to do is make sure that our customers are in a position to either take all of it or say, ‘I’m going to mix and match,’” Malladi said.
The company declined to comment on the price of the chips, cards or rack, or on how many NPUs could be installed in a single rack. In May, Qualcomm announced a partnership with Saudi Arabia’s Humain to supply data centers in the region with AI inferencing chips, and it will be Qualcomm’s customer, committing to deploy as many systems as can use up to 200 megawatts of power.
Qualcomm said its AI chips have advantages over other accelerators in terms of power consumption, cost of ownership, and a new approach to the way memory is handled. It said its AI cards support 768 gigabytes of memory, which is higher than offerings from Nvidia and AMD.
Qualcomm’s design for an AI server, called AI200.
Qualcomm

