Next-Generation MacBook Pros Rumored to Feature 'Very High-Bandwidth' RAM

Apple's next-generation 14-inch and 16-inch MacBook Pro models with M2 Pro and M2 Max chips will be equipped with "very high-bandwidth, high-speed RAM," according to information shared by MacRumors Forums member Amethyst, who accurately revealed details about the Mac Studio and Studio Display before those products were announced.

The current 14-inch and 16-inch MacBook Pro models are equipped with LPDDR5 RAM from Samsung, with the M1 Pro chip providing up to 200 GB/s of memory bandwidth and the M1 Max chip topping out at 400 GB/s. Speculatively, the next MacBook Pro models could be equipped with Samsung's latest LPDDR5X RAM, which offers up to 33% more memory bandwidth with up to 20% lower power consumption. That would translate to roughly 266 GB/s of memory bandwidth for the M2 Pro and roughly 533 GB/s for the M2 Max.
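
As a rough back-of-the-envelope check (an illustration, not figures from the report), those estimates follow from scaling the current bandwidth by the LPDDR5-to-LPDDR5X transfer-rate uplift, assuming the memory bus width stays the same:

```python
# Back-of-the-envelope estimate of speculative M2 Pro/Max memory bandwidth.
# Assumes bandwidth scales linearly with the LPDDR5 (6400 MT/s) ->
# LPDDR5X (8533 MT/s) transfer-rate uplift and an unchanged bus width.

LPDDR5_MT_S = 6400
LPDDR5X_MT_S = 8533
uplift = LPDDR5X_MT_S / LPDDR5_MT_S  # ~1.33

current = {"M1 Pro": 200, "M1 Max": 400}  # GB/s, Apple's stated figures

for chip, bw in current.items():
    print(f"{chip}: {bw} GB/s -> ~{bw * uplift:.0f} GB/s with LPDDR5X")

# M1 Pro: 200 GB/s -> ~267 GB/s with LPDDR5X
# M1 Max: 400 GB/s -> ~533 GB/s with LPDDR5X
```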

Bloomberg's Mark Gurman expects the next MacBook Pros to have few other changes beyond the M2 Pro and M2 Max chips. At this point, it seems likely that the laptops will be announced in November at the earliest with press releases on the Apple Newsroom site. Apple has launched new Macs in November multiple times in recent years, including the original 16-inch MacBook Pro in 2019 and the first three Macs with the M1 chip in 2020.

The current 14-inch and 16-inch MacBook Pro models with the M1 Pro and M1 Max chips were released in October 2021 and featured a complete redesign with a notch in the display and additional ports like HDMI, MagSafe, and an SD card reader.


Top Rated Comments

roach1245
20 months ago
Some notes on memory bandwidth of the Apple Silicon machines (which is already extremely high) vs the new Ryzen / Intel CPUs that were released a few days / weeks ago:

Ran a simple benchmark on a new Ryzen 7950x desktop build (64GB RAM) here in the lab (the build will be returned to the supplier) vs my M1 Max laptop (64GB RAM).

Task: Take about 10000 parquet files (10.6GB total) and append them into 1 dataframe (> 400 million observations) in memory.

Hypothesis: The Ryzen 7950x should be way faster - at first thought - because it has 16 cores (versus 8 M1 Max performance cores) that are also clocked way higher.

Result: They are equally as fast because the Ryzen CPU is bottlenecked by memory bandwidth (very fast cores but just 2 memory channels on the CPU).

The files: [screenshot attachment]

The task is most efficiently done in parallel using all available cores; I used both polars (Rust) and pandas-Modin (C++) to do this as fast as possible.
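
For reference, a minimal polars sketch of that kind of job (the directory path and glob are hypothetical, not from the post):

```python
# Minimal sketch of the task described above: scan a directory of parquet
# files and concatenate them into one in-memory dataframe. polars spreads
# the file reads and decoding across all available cores.
import polars as pl

# "data/*.parquet" is a hypothetical location for the ~10,000 files.
df = pl.scan_parquet("data/*.parquet").collect()

print(df.shape)  # (rows, columns) of the combined dataframe
```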

When using all 8 performance cores on my M1 Max, memory bandwidth to the CPU is at about 120 GB/s (theoretical max is 200 GB/s).

Yet the Ryzen 7950x can do 81 GB/s of memory bandwidth at most, as the memory runs at 5200 MT/s: 5200 × 8 bytes × 2 memory channels / 1024 = 81.25 GB/s (you can stretch this to about 100 GB/s if you heavily overclock). Thus, despite the 7950x's 16 faster cores, it's only as fast as my M1 Max with 10 cores in this task, because about 6 Ryzen cores are enough to saturate that 81 GB/s of bandwidth. The other 10 cores are starved of input and just idle.
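
That channel arithmetic is easy to reproduce; here is a quick sketch covering the configurations mentioned in this thread:

```python
# Theoretical peak DRAM bandwidth: transfer rate (MT/s) x 8 bytes per
# transfer per channel x number of channels, expressed here in GiB/s.
def peak_bandwidth_gib_s(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 8 * channels / 1024

print(peak_bandwidth_gib_s(5200))              # 81.25 -> DDR5-5200, 2 channels
print(peak_bandwidth_gib_s(6400))              # 100.0 -> roughly the overclocked case
print(peak_bandwidth_gib_s(3600))              # 56.25 -> 4 DIMMs dropping to 3600 MT/s
print(peak_bandwidth_gib_s(3200, channels=8))  # 200.0 -> 8-channel workstation parts
```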

This is not new; others have run similar tasks with similar results, e.g. https://tlkh.dev/benchmarking-the-apple-m1-max, whose author finds that

"... adding more cores on the 5600X does not help (2 cores are enough to maximize memory bandwidth), while 10 cores on the M1 Max is the optimal configuration".

The M1 Ultra has 20 cores and 800 GB/s of memory bandwidth and thus runs way faster than the Ryzen 7950X, as none of its 20 cores are starved. The gap grows even larger when the Ryzen 7950X is decked out with 128GB of DDR5 RAM across 4 DIMM slots, which forces the memory down to 3600 MT/s, a meager 56.25 GB/s of bandwidth. 5 Ryzen cores can fully consume that; the other 11 cores will just idle.

This is also echoed at http://hrtapps.com/blogs/20220427/, which similarly highlights the importance of memory bandwidth (in computational fluid dynamics in this case) and finds that:

"M1 Ultra has an extremely high 800 GB/sec memory bandwidth.... which leads to a level of CPU performance scaling that I don’t even see on supercomputers, and is the result of a SoC (system on a chip) design"

The new Intel Raptor Lake CPUs also have only 2 memory channels and top out at about 120 GB/s of memory bandwidth when heavily overclocked, so there won't be a difference there either.

So just a heads up: the new Ryzen/Intel CPUs are good for gaming and workflows that are not particularly memory dependent. But if you're doing data analysis or other scientific HPC work that is CPU-and-memory bound (so not GPU machine learning), you'll very quickly run into memory bandwidth limits. In that case you're better off sticking with Apple's M1 / M2 chips, or with the AMD / Intel CPUs that have more than 2 memory channels and thus more memory bandwidth. Those are also way more expensive, e.g. the AMD Threadripper Pro 5965WX with 24 cores and 8 memory channels at about 200 GB/s max memory bandwidth, for which you pay $2400 just for the chip itself and another $1000 for a compatible motherboard.
Score: 73 Votes
TheYayAreaLiving
20 months ago

Also 'very high prices'
Prices not so high. Remembering the golden era. Apple Online Store, 2002.

Attachment Image
Score: 30 Votes
applicious84
20 months ago

Some notes on memory bandwidth of the Apple Silicon machines (which is already extremely high) vs the new Ryzen / Intel CPUs that were released a few days / weeks ago: […]
You do realize this forum is for fanboys and not tech nerds, right? I'll read this on AnandTech and Tom's Hardware, my friend ;)

Oh, and much appreciated. Nice post :)
Score: 17 Votes
Lakersfan74
20 months ago
Waiting for the "no need to upgrade from M1" posts.
Score: 16 Votes
Mrkevinfinnerty
20 months ago
Also 'very high prices'
Score: 13 Votes
temende
20 months ago
Nice improvement, but what matters more is memory latency, not bandwidth.
Score: 11 Votes
