TechRadar Pro
Pizza-sized chips are the future of AI accelerators, researchers concur – but heat remains a huge problem
Wayne Williams
25 June 2025
Wafer-scale processors could outperform GPUs
(Image credit: Cerebras)
AI energy demands could be lowered by large single-wafer chips
Researchers say these can overcome the limitations faced by GPUs
Cerebras and Tesla already use these huge chips, with special cooling systems to manage heat
Engineers at the University of California Riverside are exploring a new approach to artificial intelligence hardware that could tackle both performance and sustainability.
In a peer-reviewed paper, published in the journal Device, the team investigated the potential of wafer-scale accelerators – giant computer chips that operate on entire silicon wafers rather than the small chips used in today’s GPUs.
“Wafer-scale technology represents a major leap forward,” said Mihri Ozkan, a professor of electrical and computer engineering at UCR and lead author of the paper. “It enables AI models with trillions of parameters to run faster and more efficiently than traditional systems.”
Like monorails
These chips, like Cerebras’ Wafer-Scale Engine 3 (WSE-3), which we’ve covered previously, contain up to 4 trillion transistors and 900,000 AI-focused cores on a single unit. Another wafer-scale processor, Tesla’s Dojo D1, houses 1.25 trillion transistors and close to 9,000 cores per module.
The processors remove the delays and energy losses common in systems where data travels between multiple chips.
“By keeping everything on one wafer, you avoid the delays and power losses from chip-to-chip communication,” Ozkan said.
Traditional GPUs are still important due to their lower cost and modularity, but as AI models grow in size and complexity, the chips begin to encounter performance and energy barriers.
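The "power losses from chip-to-chip communication" Ozkan describes can be sketched with a toy calculation. The per-bit energy figures below are illustrative order-of-magnitude assumptions (on-die links are commonly cited as far cheaper per bit than off-package serdes links), not measurements of any specific product:

```python
# Toy model: energy spent moving data on-wafer vs. between discrete GPUs.
# Both per-bit energies are illustrative assumptions, not vendor specs.
ON_WAFER_PJ_PER_BIT = 0.1    # assumed cost per bit for an on-wafer hop
OFF_CHIP_PJ_PER_BIT = 10.0   # assumed cost per bit for a chip-to-chip hop

def transfer_energy_joules(gigabytes: float, pj_per_bit: float) -> float:
    """Energy to move `gigabytes` of data at `pj_per_bit` picojoules per bit."""
    bits = gigabytes * 1e9 * 8
    return bits * pj_per_bit * 1e-12

traffic_gb = 1000.0  # 1 TB of activations shuffled during training (assumed)
on_wafer = transfer_energy_joules(traffic_gb, ON_WAFER_PJ_PER_BIT)
off_chip = transfer_energy_joules(traffic_gb, OFF_CHIP_PJ_PER_BIT)

print(f"on-wafer : {on_wafer:.1f} J")
print(f"off-chip : {off_chip:.1f} J ({off_chip / on_wafer:.0f}x more)")
```

Under these assumed figures, the same traffic costs two orders of magnitude more energy when it has to leave the package, which is the intuition behind keeping everything on one wafer.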
“AI computing isn’t just about speed anymore,” Ozkan explained. “It’s about designing systems that can move massive amounts of data without overheating or consuming excessive electricity.”
Wafer-scale systems have important environmental benefits too. Cerebras’ WSE-3, for example, can perform up to 125 quadrillion operations per second, while using far less energy than GPU setups.
“Think of GPUs as busy highways – effective, but traffic jams waste energy,” Ozkan said. “Wafer-scale engines are more like monorails: direct, efficient, and less polluting.”
One major challenge remains, however: the age-old issue of heat. Wafer-scale chips can draw up to 10,000 watts of power, nearly all of which turns into heat, so they require advanced cooling systems to prevent overheating and maintain performance.
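Putting the article's own figures together — up to 125 quadrillion operations per second and up to 10,000 watts of draw — gives a feel for both the efficiency and the cooling burden. The wafer size below assumes a standard 300 mm wafer dissipating that power evenly, which is an illustrative assumption rather than a detail from the paper:

```python
import math

# Figures quoted in the article for wafer-scale parts.
ops_per_second = 125e15   # up to 125 quadrillion operations/s (Cerebras WSE-3)
power_watts = 10_000      # up to 10 kW, nearly all dissipated as heat

# Efficiency: operations delivered per joule (i.e. per watt-second).
ops_per_joule = ops_per_second / power_watts
print(f"{ops_per_joule / 1e12:.1f} trillion ops per joule")

# Heat flux, assuming a standard 300 mm wafer sheds the 10 kW evenly.
wafer_diameter_cm = 30.0                       # assumed wafer size
area_cm2 = math.pi * (wafer_diameter_cm / 2) ** 2
print(f"~{power_watts / area_cm2:.0f} W/cm^2 of heat to remove")
```

Roughly 14 W/cm² across an entire dinner-plate-sized wafer is well beyond what passive heatsinks handle at that total scale, which is why both Cerebras and Tesla turned to liquid cooling.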
Cerebras uses a glycol-based cooling loop built into the chip, while Tesla has a liquid system that spreads coolant evenly over the chip’s surface.
Via Tech Xplore
Wayne Williams
Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.