I love Apple's products, have much of their lineup, and some may even call me a diehard Apple fanboy.
But despite this, just like when they launched their anemic, port-challenged Retina MacBook, I can only say: WTF!?! Come on Apple, what the hell is the big idea here? It's a three year old GPU!
And Nvidia has one of its best lineups ever. The new generation of Maxwell GPUs would be awesome in the 15 inch MBP and would absolutely smoke the M370X.
Sure, AMD may have an advantage in OpenCL, but Nvidia is no slouch there either. And CUDA has become a standard in itself in some industries.
Apple tends to stick with a chassis design for at least 4-5 years. With the Macbook Pro (Retina, 15-inch) they pretty much locked themselves into a certain TDP range for the dGPU (I'd reckon around 40-50 W) with the current chassis and cooling solution. If you look at GK107 and Cape Verde alongside the Maxwell options, you'll see that NVIDIA doesn't really offer anything that hits the sweet spot as far as price and/or TDP are concerned.
That being said, I don't love that this is the 5th generation of MacBook Pro (Retina, 15-inch) to be based on Intel 22 nm CPUs paired with TSMC 28 nm GPUs. And if the rumor that Intel is shooting for a September-November 2015 launch window for Skylake-H is true, then it looks like they may have decided to cancel Broadwell-H entirely.
GM107 costs 20-25% more to make (depending on exact yields) or so. And uses less power. And is faster.
Intel hasn't managed to make a chip over 150mm^2 on their 14nm process that is commercially buyable. Well, maybe some FPGA partner made a large one.... But, when you sell each chip for more than the wafer probably costs, yield doesn't matter nearly as much.
GM107 may well provide better performance per watt than Cape Verde or GK107, but AFAIK, the TDP is way higher. As in most online sources claim something like 75 W for the GTX 950M, vs. only 45 W for a Radeon HD 7870M, which the R9 M370X is essentially a rebadge of. This is in line with the type of scaling you'd expect given the transistor counts seeing as everything is on the same process anyway.
The 7770 is at 1 GHz, the 750 Ti runs at 980-1150 MHz, with the average being 1140 MHz across all the games tested. And the 750 Ti is about 50% faster (the 7770 is at ~66% of its performance... 100 divided by 66 == about 1.5).
So, at an average of 1150 MHz, the chip draws 52 watts while gaming. IF you think they cannot easily bin and reduce clockspeeds to get under 45 W, you're crazy :)
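The binning argument can be put in rough numbers. A minimal sketch, assuming dynamic power scales somewhere between linearly with clock (voltage held fixed) and cubically (voltage reduced along with clock), and using the 52 W / 1150 MHz figures quoted above, which are the commenter's numbers rather than measured data:

```python
# Rough power-scaling sketch for the downclocking argument above.
# Assumption: dynamic power scales between linearly with frequency
# (voltage held fixed) and cubically (voltage reduced along with clock).
P0, F0 = 52.0, 1150.0   # quoted gaming power (W) and average clock (MHz)
TARGET = 45.0           # the mobile thermal budget under discussion

for f in (1100, 1000, 900):
    p_linear = P0 * (f / F0)        # pessimistic bound: no voltage drop
    p_cubic = P0 * (f / F0) ** 3    # optimistic bound: V tracks f
    print(f"{f} MHz: {p_cubic:.1f}-{p_linear:.1f} W (target {TARGET} W)")
```

Even under the pessimistic linear bound, a modest clock reduction to around 1 GHz lands at the 45 W mark; with any voltage scaling at all it gets there comfortably.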
Macbooks are not gaming machines. People don't buy them to play games on them. It's pointless to choose a card based solely on gaming performance. They went with the better compute card, simple as that.
This is pure CHEAPNESS and cynicism on Apple's part, putting a 3 year old outdated GPU in their most expensive laptop. The 950M is easily 30+% faster within the same power constraints as this chip. To add insult to injury, Apple could've dumped these chips in the 1st gen 2012 Retina MacBook. AMD must be giving these away for free.
Yeah but look at the bright side. They won't be in there for long, and will hopefully be thrown in the trash in less than a year when Apple updates their MBP to Skywell CPUs.
Think of it as charity. Apple throwing some spare change at AMD, to make sure they won't go tits up before they launch their Zen/K12 CPUs.
A market with competition for GPUs and CPUs is worth a lot more for Apple than what half a year with a mediocre GPU in the top of the line MacBook will cost them.
or maybe it's Nvidia pissing Apple off. Or AMD supporting open standards (OpenCL) much better than Nvidia. AMD's pricing was likely better, also.
And, Apple could pay a few billion dollars and get PowerVR to develop a larger GPU for them. Actually, they probably would just buy PowerVR outright I think.
Not too long ago you could get an entry level 15" MacBook Pro for $1600. Today, Apple is gouging customers some $400 more for essentially the same product. And the storage options offer almost nothing over smaller screen MacBooks. Now, the high-end 15" MacBook Pro seems to mirror the high price and ho-hum components of the low-end model. No doubt about it... the wheels have come off the MacBook bus. Wait a year to see if they can correct their series of mistakes... then move on if they don't.
I was planning on upgrading my Gen 2 retina MBP but not now. I do realize the benefits of the amazingly fast ssd and I think developers will do interesting things with Force Touch but I do care about gpu and this is just ridiculous.
Strange decision, can only imagine it being made for financial reasons. Would have thought that power efficiency vs performance would be the most important consideration, something that AMD have been struggling with lately, but apparently not. My rMBP 15" already sounds like a hair dryer when playing games on it with the 750M.
Looking at something like 3DMark 11 P, the 950M gets 4500, the 8870/M270X gets 3000, and the 750M 2500. They could clearly have gone for the 950M in terms of performance; OpenCL is probably a bit closer, but even then the 950M is faster. This probably has something to do with Apple going with AMD for their iMac as well, to keep AMD "in" the game and negotiate a bit better with Nvidia next time.
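Normalizing those quoted 3DMark 11 Performance scores against the 750M makes the gaps easier to read. The scores are the commenter's figures, not independent benchmarks:

```python
# Relative standings implied by the 3DMark 11 P scores quoted above.
scores = {"GTX 950M": 4500, "R9 M370X class": 3000, "GTX 750M": 2500}

baseline = scores["GTX 750M"]
for gpu, score in scores.items():
    print(f"{gpu}: {score / baseline:.2f}x the GTX 750M")
```

That is, the new part would be ~1.2x the outgoing 750M by these numbers, while a 950M would have been ~1.8x.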
Yeah, it's slow http://barefeats.com/rmbp15.html NOT REALLY. But let's curse Apple before anyone can actually do some proper testing. Now, where are the proper AnandTech tests?
107 Comments
Scabies - Saturday, May 23, 2015 - link
GPU-Z from Bootcamp: http://i.imgur.com/drxycq2.png
jameskatt - Friday, June 5, 2015 - link
You guys are missing the point.
1. THINK WHOLE SYSTEM: You have to look at the GPU choice as part of the MacBook Pro as a WHOLE SYSTEM. Apple doesn't think in terms of individual chip performance. It thinks of things as whole systems. This also means performance on OS X. When you do so, you then realize that the new GPU - even if using a 3-year-old design - is up to 80% faster. Photoshop works really well with the new GPU. http://barefeats.com/rmbp15.html
2. THINK OPENCL 2.0: AMD's GPUs simply do OpenCL compute better than nVidia's GPUs ON MACBOOK PROS RUNNING OS X.
3. THINK METAL: On the eve of Apple's Developer Conference, one iOS technology I would love to see MOVED to OS X is METAL - Apple's version of DirectX. Using METAL instead of OpenCL, compatible iOS apps run GPU tasks up to TEN TIMES FASTER. Given that the Mac Pro has an AMD GPU, I bet that bringing METAL to OS X is a whole lot easier for AMD than nVidia.
hfm - Monday, June 8, 2015 - link
You got your wish. Metal is available on El Capitan.
jameskatt - Tuesday, June 9, 2015 - link
Yippeee! METAL comes to Mac. For games, we can see up to 10x rendering speed increases.
dragonsqrrl - Saturday, May 23, 2015 - link
OH DEAR...
Taneli - Saturday, May 23, 2015 - link
They had Maxwell available but they chose to go with GCN 1.0. Apple clearly doesn't care about GPU performance in current Macs
testbug00 - Saturday, May 23, 2015 - link
Which indicates either that Cape Verde offered stuff Maxwell could not, that Nvidia couldn't make Maxwell cheap enough for Apple (I find that unlikely), or that Apple is irritated with Nvidia.
dragonsqrrl - Monday, May 25, 2015 - link
It's becoming increasingly clear that it was a political decision. There are practically no technical reasons to choose Cape Verde over GM107.
Kaboose - Tuesday, June 9, 2015 - link
WWDC's Metal announcement for El Capitan is probably the main reason. AMD GPUs must see significant performance gains for Apple to switch from Nvidia.
tipoo - Thursday, October 6, 2016 - link
It's this: https://cdn.arstechnica.net/wp-content/uploads/201...
halo37253 - Sunday, May 24, 2015 - link
OpenCL performance... Macs are not gaming machines, and Nvidia's lower-end chips underperform compute-wise.
dragonsqrrl - Monday, May 25, 2015 - link
Maxwell is very competitive in OpenCL performance. And any advantage Cape Verde might have in a particular workload certainly wouldn't be enough to offset the difference in performance per W.
Stuka87 - Tuesday, May 26, 2015 - link
nVidia only supports OpenCL 1.2 (and this is very recent). AMD supported 1.2 many years ago and has full 2.0 compliance currently, as does Intel. Apple wants OpenCL 2.0 support as OS X uses it internally. nVidia has purposely held back its OpenCL development because they'd rather push their own proprietary CUDA.
dragonsqrrl - Tuesday, May 26, 2015 - link
That would be a nice theory, except that AMD only supports OpenCL 2.0 on GCN 1.1 and newer GPUs. GCN 1.0 is limited to OpenCL 1.2.
Dug - Tuesday, May 26, 2015 - link
Wouldn't a variant of the tech allow OpenCL 2.0?
tipoo - Thursday, October 6, 2016 - link
https://cdn.arstechnica.net/wp-content/uploads/201...?
Competitive? That's an Iris Pro beating them.
Taneli - Monday, May 25, 2015 - link
Your source? At least according to Anandtech's review, Maxwell OpenCL compute performance is really good. The consumer parts have only a little DP FP hardware, but that is NOT needed in consumer applications. Not even in Apple's own software.
Morawka - Saturday, May 23, 2015 - link
wow, can't believe Apple used this GPU over Nvidia's offerings. They must really be mad at Nvidia to go with a 4 year old architecture.
they could have got the same performance with 30% less heat and power with nvidia.
Probably just a stopgap till 14nm FF, but I won't ever pay $2k+++ for a laptop with an AMD GPU
Morawka - Saturday, May 23, 2015 - link
on the plus side, all these GPU switch-a-roos are gonna make a hackintosh much easier to build with a wider variety of parts.
ImSpartacus - Saturday, May 23, 2015 - link
That's a nice little fringe benefit.
Taneli - Monday, May 25, 2015 - link
GCN parts work well already, although this probably reduces the need for custom kexts. HD7850 on Yosemite here.
dragonsqrrl - Saturday, May 23, 2015 - link
"they could have got the same performance with 30% less heat and power with nvidia."
Seeing how this is GCN 1.0, the gap is likely significantly larger than that.
tabascosauz - Saturday, May 23, 2015 - link
Be careful with that claim. Cape Verde is not the same animal as Tahiti and Hawaii.
If I remember correctly, the HD 7750 was pretty competitive with the GTX 650 without the huge power consumption problem that is attributed to AMD nowadays. The HD 7770 was faster, but wasn't so far off of that.
dragonsqrrl - Saturday, May 23, 2015 - link
You seem to be getting power consumption and power efficiency confused a bit. A GPU can consume a lot of power and still be incredibly efficient. Likewise the opposite is also true: a GPU can consume little power but still be inefficient. Of course Cape Verde doesn't consume as much power as Tahiti or Hawaii in absolute terms, but it also performs significantly lower. What's important is performance per W, and that won't change much with the M370X.
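The distinction is easy to see with made-up numbers. Both GPUs below are hypothetical; only the ratio matters:

```python
# Power draw vs. efficiency: the high-power part can be the efficient one.
gpus = {
    # name: (performance in arbitrary units, board power in W) - invented
    "big hot GPU": (200, 250),
    "small cool GPU": (40, 60),
}

for name, (perf, watts) in gpus.items():
    print(f"{name}: {perf / watts:.2f} perf/W")
```

The 250 W part wins on performance per watt here (0.80 vs 0.67) despite drawing over four times the power.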
I doubt he is. perf/watt was pretty damn nice with the 7750 series, at least at the time: http://www.techpowerup.com/reviews/ASUS/HD_7750/27...
dragonsqrrl - Tuesday, May 26, 2015 - link
"at least at the time"
Links an article from over 3 years ago where the only competitive reference is Fermi...
Kjella - Saturday, May 23, 2015 - link
Mad? No. This is pure business. AMD is selling themselves cheap to get the PR from having the GPU in the top-model MacBook Pro, while the number of $2500 laptop sales of a single model is presumably rather slim. Same reason they're the ones in all the consoles and still hurting economically: if you lowball the offer you win, at the expense of your margins.
Their problem is that the halo effect isn't much; gamers read benchmarks, and nVidia is making a killing. From January to April the share of GTX 970s in the Steam hardware survey has gone from 1.80% to 2.81%. In one quarter, 1 in 100 Steam users have bought the same $300+ graphics card, and most play on cheap old cards, so that's massive. It's more than the total market share of the AMD Radeon R9 200 Series at 0.94%, which is barely selling anymore with +0.11% in the last quarter. Add in the 960/980/Titan and nVidia has 90-95% of the high end. That HBM card really needs to be a killer if AMD wants to stay in the competition.
MrSpadge - Saturday, May 23, 2015 - link
It does seem strange, indeed. On the other hand:
- AMD desperately needs money, so they probably made a very good price.
- There has hardly been any progress design-wise from AMD, so apart from some features, GCN 1.0, 1.1 and 1.2 perform pretty much the same per clock and shader.
- GM107-based mobile GPUs are likely too fast (= expensive), whereas GM108-based mobile GPUs are crippled by 64-bit DDR3 memory buses.
lefty2 - Saturday, May 23, 2015 - link
SemiAccurate actually predicted this would happen. SemiAccurate claims that Nvidia is not only suing Samsung and Qualcomm but also Apple over its GPU IP patents, and that as retaliation Apple switched to AMD GPUs:
http://semiaccurate.com/2014/09/04/nvidia-sues-sam...
WinterCharm - Saturday, May 23, 2015 - link
Yeah. I cancelled my order over this. Sorry, but Apple needs to get their shit together. I'm an actual pro user who travels a LOT... I *need* a powerful GPU on the go. NEEED. Not "prefer", not "want". If they can't deliver, they don't get my money. Plain and simple.
sabot00 - Saturday, May 23, 2015 - link
Well, if performance is what you need, the M370X is significantly more powerful than the previous GTX750M.
Alexvrb - Saturday, May 23, 2015 - link
I was just thinking that myself. If they kept the old (slower) GPU would he still have ordered it? :/
Morawka - Sunday, May 24, 2015 - link
hell no it's not
Tegeril - Monday, May 25, 2015 - link
It really, really, really is. Way faster.
testbug00 - Saturday, May 23, 2015 - link
You canceled your order over them upgrading the GPU? Why did you order in the first place in that case?
Tams80 - Monday, May 25, 2015 - link
I have a laptop with an M270X (8870M). One reason I got it was because it was considerably better than the MacBook Pro's GTX750M (which is touching on the budget GPU range). As the M370X is essentially an M270X (and hence an 8870M), then no, it would not be a downgrade at all.
techconc - Friday, June 12, 2015 - link
@Tams80
Your comment just isn't logical. Apple improved their GPU with the latest Macbook Pro. I seriously doubt you actually placed an order in the first place. If this was the basis for you canceling your order, then perhaps you have other issues to deal with. Having said that, given Apple's direction of moving to Metal on OS X, your comment seems even more ridiculous. Have you seen what Adobe has been able to do with Metal with After Effects and Illustrator? Clearly, Apple knew where it would get the most performance for that type of work and it happened to be with the AMD part this time. Deal with it.
webdoctors - Saturday, May 23, 2015 - link
It looks like the highest SKU is a $500 premium which gives you +256 GB of SSD and a few hundred MHz on the CPU. That doesn't leave much for the AMD dGPU, knowing the margins Apple desires on their products. They likely wanted a powerful dGPU for essentially free, and it seems only AMD was willing to go that route.
tipoo - Saturday, May 23, 2015 - link
I guess the rationale is that you'd use the Iris Pro when you wanted to sip power, so they could go with a less efficient part... Still, I'm not sure that's a great gambit, as even my 15" with only the Iris Pro hits 99C regularly, so they'll probably hit the thermal wall with this thing as inefficient as it is.
WorldWithoutMadness - Saturday, May 23, 2015 - link
Don't worry. They will go with the old modus operandi.
After a few years, they will recall the product because of overheating, blah blah.
After a few years, that GPU would be practically cheaper and they will get a headliner for service.
KikassAssassin - Saturday, May 23, 2015 - link
It may not be a huge issue in OSX because it does automatic graphics switching, so the dGPU is only turned on when it's needed.
Unfortunately, Bootcamp doesn't support graphics switching in Windows, and if you have a Mac with multiple GPUs, Bootcamp only reveals the more powerful one to Windows. As far as Windows is concerned, the MBP with a dGPU doesn't even have an iGPU, and that Radeon is going to be running 100% of the time, which will have a big impact on battery life.
If you want to run Windows on your MBP, I'd probably avoid this model if battery life matters at all to you.
Meaker10 - Saturday, May 23, 2015 - link
Idle power usage is a lot different to load power usage.
Alexvrb - Saturday, May 23, 2015 - link
As Meaker pointed out already, idle power on this chip is extremely low.
MrCommunistGen - Saturday, May 23, 2015 - link
Ugh... I hope these are at least REALLY well binned chips.
jeffkibuule - Saturday, May 23, 2015 - link
Really hoping we get 14nm FinFET GPUs next year. 28nm is pretty long in the tooth now.
kspirit - Saturday, May 23, 2015 - link
Wow. I didn't expect Apple to do something this messy with their highest end device...
beginner99 - Saturday, May 23, 2015 - link
Yeah, strange. But then Apple fans are mostly not known for their knowledge about hardware, so they can easily get away with it.
odedia - Saturday, May 23, 2015 - link
Question: I will be getting a 15" high end from work. The order was placed right at the crossover point between models. Which model should I be "hoping" for? The new one or the nvidia one?
damianrobertjones - Saturday, May 23, 2015 - link
Cancel and order an MSI GE62
V900 - Saturday, May 23, 2015 - link
You're clearly not the sharpest knife in this comment thread... But you seriously can't read 5 lines of text in order to at least TRY to answer his question?
darwinosx - Saturday, May 23, 2015 - link
Cheap and no support and runs Windows? No thanks.
SpartanJet - Monday, May 25, 2015 - link
Running Windows is a huge plus over that mess of an OS.
V900 - Saturday, May 23, 2015 - link
That's hard to answer without knowing your exact use-case, and of course seeing some real world benchmarks.
I'm definitely leaning towards the new model, since:
A: Despite a (possibly) worse GPU, the other improvements might make up for it. The hard disk is around twice as fast, since the PCIe interface to it is twice as fast as in the old model.
B: A lot can be done in drivers and middleware to make up for a GPU that doesn't look very promising on paper. Parallels would certainly make up for at least some of its deficiencies in the next version, to make for better Windows performance than in Bootcamp.
C: And finally, if you really need the older model, but get the new one, it's a lot easier to sell it/swap it for an older one, than if you want the new one, but got the old one!
Solandri - Saturday, May 23, 2015 - link
Unless you're working with large files using large sequential reads and writes (e.g. real-time video editing), the performance of current SSDs is limited by the IOPS/4k read/write speeds, not by SATA3.
The fact that these things are measured in MB/s is misleading. MB for MB, a jump from 25 MB/s to 50 MB/s in 4k speeds will have a 20x greater impact on wait times than a jump from 500 MB/s to 1 GB/s in sequential speeds.
So outside of specialized use cases, there's little benefit to going to PCIe. Instead, concentrate on finding an SSD with high IOPS. 4k writes seem to be OK, with a few top SSDs managing to surpass 100 MB/s. But 4k reads still lag, peaking at about 40 MB/s. (I don't really criticize Apple for going with PCIe SSDs, because a significant percentage of their customers do in fact use MBPs for video editing. Just don't assume it means the drive is "twice as fast.")
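The "20x" figure above checks out if you compare the time saved moving the same amount of data at the quoted speeds:

```python
# Wait-time impact of the 4k vs. sequential speed jumps quoted above,
# for an identical 1 GB workload.
data_mb = 1024

saved_4k = data_mb / 25 - data_mb / 50      # 25 -> 50 MB/s random 4k
saved_seq = data_mb / 500 - data_mb / 1000  # 500 -> 1000 MB/s sequential

print(f"4k jump saves {saved_4k:.2f} s, sequential jump saves {saved_seq:.3f} s")
print(f"ratio: {saved_4k / saved_seq:.0f}x")  # -> 20x, matching the claim
```

Doubling a speed always halves the time, but halving 41 seconds of 4k-bound waiting saves far more wall-clock time than halving 2 seconds of sequential transfer.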
testbug00 - Saturday, May 23, 2015 - link
Er, the newer GPU is most certainly faster. Apple claimed 80%, so it's probably more realistically 40-50% in the real world.
LukaP - Saturday, May 23, 2015 - link
The new one is still a decent laptop, and the Cape Verde GPU is not really power hungry at all, plus AMD can bin them to hell and back to get the nice ones, so don't worry too much
testbug00 - Saturday, May 23, 2015 - link
If you don't game, the Nvidia model offers slightly better battery life.
If you do game, the AMD model is significantly faster.
The AMD model can also drive higher resolution displays from the DP1.3 port.
repoman27 - Saturday, May 23, 2015 - link
Cape Verde, a GCN 1.0 part, combined with a DSL5520 Thunderbolt 2 controller... There's nothing that supports DisplayPort 1.3 here.
You might be able to do 5120 x 2160 @ 60 Hz, 24 bpp, SST using a single DP 1.2 link, but that was apparently a typo on the spec page, which Apple has corrected to read 5120 x 2880. Although they haven't yet updated their support page ( https://support.apple.com/en-us/HT202856 ) to include the new MacBook Pro (Retina, 15-inch, Mid 2015), the 5120 x 2880 @ 60 Hz resolution is almost assuredly achieved by using both Thunderbolt 2 ports and 2 cables.
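The link-budget arithmetic behind that conclusion is straightforward. A back-of-the-envelope check, counting raw pixel data only (blanking overhead, which pushes the real requirement somewhat higher, is ignored):

```python
# Uncompressed pixel data rate vs. the payload of a 4-lane DP 1.2 link.
DP12_PAYLOAD_GBPS = 5.4 * 4 * 0.8  # HBR2: 5.4 Gbit/s/lane, 4 lanes, 8b/10b

def pixel_rate_gbps(width, height, refresh_hz=60, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

for width, height in ((5120, 2160), (5120, 2880)):
    rate = pixel_rate_gbps(width, height)
    need = "a single link" if rate < DP12_PAYLOAD_GBPS else "two links"
    print(f"{width}x{height}@60Hz: {rate:.2f} Gbit/s -> {need}")
```

5120 x 2160 at ~15.9 Gbit/s squeaks under the ~17.28 Gbit/s DP 1.2 ceiling (before blanking), while 5120 x 2880 at ~21.2 Gbit/s cannot fit on one link — hence both Thunderbolt 2 ports and two cables.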
Dug - Tuesday, May 26, 2015 - link
If you don't receive the latest version, Apple will exchange it for the latest version, because you are within the window between announcement and availability. I can tell you almost every MacBook Pro with Nvidia graphics has failed on us due to the Nvidia GPU.
All of our other Macs have been rock solid.
But take that with a grain of salt, as a lot of our users run Boot Camp. As I understand it, the power and cooling behavior is different there, which may have contributed to the problems (overheating, graphical glitches, and then failure).
darwinosx - Saturday, May 23, 2015 - link
Proving you know nothing about Apple fans.
ciparis - Saturday, May 23, 2015 - link
Please. Plenty of us know quite well exactly what we're getting for our money, in all categories. Whether the trade-offs feel worth it in this category, now that's the question.
theSeb - Saturday, May 23, 2015 - link
That's a silly generalisation. Apple "fans" complain just as much, if not more, about some of these silly hardware decisions.
berniebennybernard - Saturday, May 23, 2015 - link
Considering how AMD has pretty much abandoned the mobile segment (aside from APUs), why Apple went with them instead of a more efficient and powerful Maxwell card such as the 950M makes no sense at all.
mazzy80 - Saturday, May 23, 2015 - link
Upgrading now with the same Haswell CPU and an old GPU part, when Skylake will be out in 2 months, makes no sense at all. Only to have the Force Touch trackpad??? Looks like Apple has secured a good deep discount on Haswell and the AMD GPU to further improve its margins... at the end of the day you're paying $2,500 for 2-year-old mobile tech... that will overheat if pushed hard...
V900 - Saturday, May 23, 2015 - link
Skylake most certainly won't be out in two months. Not for this form factor, anyway. You probably won't see a Skylake Ultrabook till the end of the year.
just4U - Monday, May 25, 2015 - link
What makes you think the 950 would be faster? If you look at the 960 and its overall performance, it's certainly not a must-have, and with the 950 a considerable step down from that, I don't think it would blow this part away... far from it. If you look at bench results for the 270X and 750 Ti, the 950 might be comparable to that, maybe.
just4U - Monday, May 25, 2015 - link
Oh wait a minute... disregard that comment, I guess... these are mobile chips... hmm... but then wouldn't the 950 be a mobile variant as well?
V900 - Saturday, May 23, 2015 - link
I love Apple's products, have much of their lineup, and some may even call me a diehard Apple fanboy. But despite this, just like when they launched their anemic, port-challenged Retina MacBook, I can only say: WTF!?! Come on Apple, what the hell is the big idea here? It's a three-year-old GPU!
And Nvidia has one of its best lineups ever. The new generation of Maxwell GPUs would be awesome in the 15-inch MBP and would absolutely smoke the M370X.
Sure, AMD may have an advantage in OpenCL, but Nvidia is no slouch there either. And CUDA has become a standard in itself in some industries.
repoman27 - Saturday, May 23, 2015 - link
Apple tends to stick with a chassis design for at least 4-5 years. With the MacBook Pro (Retina, 15-inch) they pretty much locked themselves into a certain TDP range for the dGPU (I'd reckon around 40-50 W) with the current chassis and cooling solution. If you look at GK107 and Cape Verde alongside the Maxwell options, you'll see that NVIDIA doesn't really offer anything that hits the sweet spot as far as price and/or TDP are concerned:
GM108 = 1.02b transistors, 79 mm^2
GK107 = 1.27b transistors, 118 mm^2
Cape Verde = 1.5b transistors, 123 mm^2
GM107 = 1.87b transistors, 148 mm^2
GM206 = 2.94b transistors, 227 mm^2
That being said, I don't love that this is the 5th generation of MacBook Pro (Retina, 15-inch) to be based on Intel 22 nm CPUs paired with TSMC 28 nm GPUs. And if the rumor that Intel is shooting for a September-November 2015 launch window for Skylake-H is true, then it looks like they may have decided to cancel Broadwell-H entirely.
testbug00 - Saturday, May 23, 2015 - link
GM107 costs 20-25% more to make (depending on exact yields) or so. And uses less power. And is faster. Intel hasn't managed to make a chip over 150 mm^2 on their 14 nm process that is commercially available. Well, maybe some FPGA partner made a large one... But when you sell each chip for more than the wafer probably costs, yield doesn't matter nearly as much.
repoman27 - Saturday, May 23, 2015 - link
GM107 may well provide better performance per watt than Cape Verde or GK107, but AFAIK the TDP is way higher. Most online sources claim something like 75 W for the GTX 950M, vs. only 45 W for a Radeon HD 7870M, which the R9 M370X is essentially a rebadge of. This is in line with the type of scaling you'd expect given the transistor counts, seeing as everything is on the same process anyway.
testbug00 - Sunday, May 24, 2015 - link
Um, no. A fully enabled GM107 running at well over 1 GHz uses less power than a 640-shader Cape Verde at 1 GHz: http://www.techpowerup.com/reviews/NVIDIA/GeForce_...
The 7770 is at 1 GHz, and the 750 Ti runs at 980-1150 MHz, with the average being 1140 MHz across all the games tested. And the 750 Ti is about 50% faster (the 7770 scores ~66% of it, and 100 divided by 66 is about 1.5).
So, at an average of 1140 MHz, the chip draws 52 watts while gaming. If you think they cannot easily bin and reduce clock speeds to get under 45 W, you're crazy :)
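The "about 50% faster" arithmetic checks out; a one-line sanity check (the 66% figure is the comment's, not re-measured):

```python
# If the HD 7770 lands at ~66% of the GTX 750 Ti in the averaged
# results, the 750 Ti is roughly 1/0.66 = ~1.5x as fast.
hd7770_vs_750ti = 0.66
speedup = 1.0 / hd7770_vs_750ti
print(f"750 Ti is about {speedup:.2f}x the 7770")  # ~1.52x
```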
tipoo - Sunday, May 24, 2015 - link
Those have soft-configurable TDPs.
halo37253 - Sunday, May 24, 2015 - link
Macbooks are not gaming machines. People don't buy them to play games on them. It's pointless to choose a card based solely on gaming performance. They went with the better compute card, simple as that.
loguerto - Saturday, May 23, 2015 - link
Apple is very focused on OpenCL coding; the advantage GCN has on that side is probably why Apple chose AMD over Nvidia.
dm27 - Saturday, May 23, 2015 - link
How then is this MacBook Pro capable of driving a 5K display (5120-by-2880 @ 60 Hz)?
repoman27 - Saturday, May 23, 2015 - link
The same way the 27-inch iMacs and Mac Pros can: by using two cables and two Thunderbolt 2 ports.
Pneumothorax - Saturday, May 23, 2015 - link
This is pure CHEAPNESS and cynicism on Apple's part, putting a 3-year-old outdated GPU in their most expensive laptop. The 950M easily does 30+% better within the same power constraints as this chip. To add insult to injury, Apple could've dumped these chips into the 1st gen 2012 Retina MacBook. AMD must be giving these away for free.
V900 - Saturday, May 23, 2015 - link
Yeah, but look at the bright side. They won't be in there for long, and will hopefully be thrown in the trash in less than a year when Apple updates the MBP to Skylake CPUs. Think of it as charity: Apple throwing some spare change at AMD, to make sure they won't go tits up before they launch their Zen/K12 CPUs.
A market with competition for GPUs and CPUs is worth a lot more to Apple than what half a year with a mediocre GPU in the top-of-the-line MacBook will cost them.
lilmoe - Saturday, May 23, 2015 - link
"Think of it as charity"
Oblivious much?
testbug00 - Saturday, May 23, 2015 - link
Or maybe it's Nvidia pissing Apple off. Or AMD supporting open standards (OpenCL) much better than Nvidia. AMD's pricing was likely better, also. And Apple could pay a few billion dollars and get PowerVR to develop a larger GPU for them. Actually, I think they would probably just buy PowerVR outright.
Tams80 - Monday, May 25, 2015 - link
It's still a good GPU. You're acting like they put in a bottom-of-the-barrel GPU.
TEAMSWITCHER - Wednesday, May 27, 2015 - link
Not too long ago you could get an entry-level 15" MacBook Pro for $1600. Today, Apple is gouging customers some $400 more for essentially the same product. And the storage options offer almost nothing over the smaller-screen MacBooks. Now the high-end 15" MacBook Pro seems to mirror the high price and ho-hum components of the low-end model. No doubt about it... the wheels have come off the MacBook bus. Wait a year to see if they can correct their series of mistakes... then move on if they don't.
krumme - Saturday, May 23, 2015 - link
Cry me a river record
jerrylzy - Saturday, May 23, 2015 - link
The M370X seems to be more powerful than the GT 750M Mac Edition... According to GFXBench 3.1: https://gfxbench.com/compare.jsp?benchmark=gfx31&a...
darwinosx - Sunday, May 24, 2015 - link
I was planning on upgrading my Gen 2 Retina MBP, but not now. I do realize the benefits of the amazingly fast SSD, and I think developers will do interesting things with Force Touch, but I do care about the GPU, and this is just ridiculous.
Zarniw00p - Sunday, May 24, 2015 - link
Cape Verde mobile versions have been GCN 1.1 since 2013.
Zarniw00p - Sunday, May 24, 2015 - link
From AMD's driver: 6821 = Venus XT
And Venus XT = 8870M
So, M370X = 8870M = Venus XT
Zarniw00p - Sunday, May 24, 2015 - link
So, the M370X is actually part of AMD's Solar System chips, and they support GCN 1.1.
Ryan Smith - Sunday, May 24, 2015 - link
Unfortunately not. Solar System parts are not all GCN 1.1; there are a number of GCN 1.0 rebrands in there (like Cape Verde).
Zarniw00p - Monday, May 25, 2015 - link
http://developer.amd.com/tools-and-sdks/opencl-zon...
At least AMD promises OpenCL 2.0 support for the 8800M series... and that, to my knowledge, requires GCN 1.1.
tipoo - Sunday, May 24, 2015 - link
Cinebench R15 – OpenGL 64Bit:
~ GTX 850M : ~80fps
~ GT 750M : ~65fps
~ R9 M370X : ~60fps
The M370X is not only inferior in OpenGL to the 850M, but also to the old 750M …
Laxaa - Monday, May 25, 2015 - link
Looking forward to HBM on mobile. It would be interesting to see, in a couple of years, what performance we could get out of a 14 nm GPU with HBM2.
hanngman - Monday, May 25, 2015 - link
Maybe Apple wants to refresh the 27" Cinema Display, and GCN chips support 5K resolution, as opposed to Maxwell?
daerron - Tuesday, May 26, 2015 - link
Strange decision; I can only imagine it being made for financial reasons. I would have thought that power efficiency vs. performance would be the most important consideration, something that AMD has been struggling with lately, but apparently not. My rMBP 15" already sounds like a hair dryer when playing games on it with the 750M.
nutral - Tuesday, May 26, 2015 - link
Looking at something like 3DMark 11 Performance, the 950M gets 4500, the 8870M/M270 gets 3000, and the 750M 2500. They could clearly have gone for the 950M in terms of performance; OpenCL is probably a bit closer, but even then the 950M is faster. This probably has something to do with Apple going with AMD for the iMac as well, to keep AMD "in" the game and negotiate a bit better with Nvidia next time.
Kaos Sverige - Wednesday, May 27, 2015 - link
Kaos Sverige - Wednesday, May 27, 2015 - link
Yeah, it's slow: http://barefeats.com/rmbp15.html
NOT REALLY.
But let's curse Apple before anyone can actually do some proper testing. Now, where are the proper AnandTech tests?