AMD Zen won't help PC gaming and PC VR as much as we hoped.

RonsonPL
Heroic Explorer
New info just appeared:
http://www.guru3d.com/news-story/new-amd-engineering-sample-zen-processors-get-higher-clocks.html

In short:

It seems the production process is designed for low clocks. 😞
Which is exactly what I feared when I heard about GloFo going for Samsung's 14nm. 😞
Just 3.1GHz on the 4-core part, and power consumption growing rapidly as clocks increase.
4-core = 65W TDP (3.1GHz)
8-core = 95W TDP (3.3GHz)

I hoped Zen would at least be able to outperform 2011's Sandy Bridge overclocked to 4.9GHz on an air cooler, but now even that isn't certain.

Does anyone still wonder why Intel recently delayed 10nm and 7nm and priced Skylake so high?
Intel will continue to have a monopoly in the part of the market that matters to gamers demanding fast CPUs.
A year ago I really hoped Zen would be "an earthquake" causing a serious price downfall in the gaming (enthusiast) CPU market segment. It would lower the prices of the fast CPUs needed for VR, and it would force Intel to lower the prices of its own CPUs.
But while I still think it will be a huge improvement over the module-based architectures AMD currently uses, and I really don't think Zen will be another "Bulldozer fiasco", I've just lost hope for a "revolution". 😞
We also won't see any significant speed boost in the next Intel CPUs (comparing overclocked on air vs. overclocked on air).
If Zen were great, we could have expected +30-40% even in the worst case (single-threaded performance) over the next 3 years, at normal prices.
Now we'll get 10nm "K" CPUs at sick prices, most likely still with some stupid glue instead of solder, which lowers achievable clocks by a lot.
If Zen disappoints, it will have broad consequences for everything PC-related, VR and non-VR alike.
Not an Oculus hater, but not a fan anymore. Still lots of respect for the team-Carmack, Abrash. Oculus is driven by big corporation principles now. That brings painful effects already, more to come in the future. This is not the Oculus I once cheered for.

Anonymous
Not applicable
Sadly, from all I've heard, growth in processor sales at the moment is overwhelmingly in mobile and other low-power devices. Intel have been trying to get a stronger presence there, with ARM dominating the smartphone and tablet market and Intel having more success in the laptop arena.

With AMD scoring a success in the Xbox One and PS4 with a multi-core, low-clock-speed part (and maybe Xbox Scorpio and PS4 Pro? Who knows?), it's hardly surprising they're also going for high core counts and low clock speeds. It's much easier to produce low-power variants, and developers are getting better at optimising a single application for multi-threaded processing.
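For the curious, a minimal sketch of what that kind of single-application multi-threading looks like (the Entity type and sizes here are made up purely for illustration):

```cpp
// Spreading per-entity game updates across all cores with a C++17
// parallel algorithm - the style of workload that favours high core
// counts over high clocks.
#include <algorithm>
#include <execution>
#include <vector>

struct Entity {
    float x = 0.0f, vx = 1.0f;
    void update(float dt) { x += vx * dt; }
};

int main() {
    std::vector<Entity> entities(100000);
    const float dt = 1.0f / 60.0f;
    // The runtime partitions the range over available hardware threads.
    std::for_each(std::execution::par, entities.begin(), entities.end(),
                  [dt](Entity& e) { e.update(dt); });
}
```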

Going for the high clock rate low core count PC market just doesn't seem to make good business sense at the moment.

Anonymous
Not applicable
Was anyone REALLY expecting AMD to do something useful..? 8-10 years ago they were starting to fall behind Intel and Nvidia with their CPUs and GPUs. They've been out of the race for YEARS and I can't see that changing anytime soon, which is a shame because I've always liked them as a company.

RonsonPL
Heroic Explorer
@snowdog
Yeah, I expected them to close in on Intel. After Bulldozer, their CPUs fell so far behind that a 4-year-old Intel CPU was faster than the best AMD CPU. I hoped Zen would slice a few years off that gap, but instead Zen got delayed, and now it seems it will struggle to even beat 2011 CPUs, which would be a total disaster in my book.
I hoped Zen after OC would almost match Haswell after OC. Of course Skylake is better, and by the time Zen hits the market in decent volume (and at decent prices) it will be up against Kaby Lake, but AMD didn't have to match Intel's best to have a big impact on the gaming world.

@AndyW1384

It's most likely down to the process used to manufacture the chip. AMD divided itself into GloFo (production, the foundry) and AMD (everything else). Recently we got the news that GloFo failed so badly that they had to license 14nm from Samsung. And Samsung is not a high-clock-oriented company.
Actually it looks way worse:
- Samsung? Nope.
- GloFo? Same as Samsung.
- But surely Intel? Nope. Over a year ago they showed a roadmap for the upcoming 3-4 years. There was no high-clock production process on it. 😞 Why? Because it would mostly suit gaming - high-end gaming - and Intel doesn't think there's potential there.
- There's also TSMC, but not much hope there either.

What I'm trying to say is this:
- We advance at something like 3% per year now. Having CPUs not even 50% faster than the 5.5-year-old 2600K (OCed vs. OCed) is a tragedy for every game/hardware enthusiast.
- We won't get to 20% per year anytime soon; quite the contrary, it might come to a full stop.
- BUT we WOULD see much bigger progress if it were the focus. It's not. Mobile, low power, IoT etc. - only that matters.


BTW, if anyone's interested in things like this, a link for you:
http://www.semiwiki.com/
I don't understand half of what they write there, and no more than 1% of it is interesting from a gaming/VR perspective, but it's still a page I make a point of visiting at least once a week. Mostly just to get sadder, since most of the news is bad news unless, unlike me, you care about the low-power chip world.

cybereality
Grand Champion
I don't know, it sounds like it can still be alright. I mean, 3.6GHz turbo is not exactly slow. Especially with 8 real cores, and much improved IPC over current AMD architecture. If the price is fair, I think AMD could sell a lot of these.
AMD Ryzen 7 1800X | MSI X370 Titanium | G.Skill 16GB DDR4 3200 | EVGA SuperNOVA 1000 | Corsair Hydro H110i Gigabyte RX Vega 64 x2 | Samsung 960 Evo M.2 500GB | Seagate FireCuda SSHD 2TB | Phanteks ENTHOO EVOLV

RonsonPL
Heroic Explorer
@cybereality

Of course it will be AMD's biggest CPU success since they went down the Bulldozer path. Of course it will change some things and make some positive impact. But it won't be a thing that "could change the world".
BTW, 3.6 is the one-core clock, not the turbo.
Also, 8 cores aren't even remotely as important for 120fps/VR/low-latency gaming as single-threaded performance and everything related to memory access time. And that tends to depend on clocks. We could hope for a miracle where a 3.5GHz Zen matches a 5GHz Sandy from 2011, but it would have to use some low-latency memory solution, and it won't - it will just have DDR4. Either that or some huge on-die cache, which would be very expensive if it were possible at all, and it's not there - we know this from the leaks that have shown up so far.
If it gets really bad and Zen cannot reach even 4.5GHz after OC, it won't drastically influence the prices of CPUs like the 6600K and 6700K, and it won't be worth considering if you already have a Sandy/Ivy/Haswell CPU. Most likely it won't have the performance for AMD to offer 2500K@4.5GHz-level speed at a low price in a CPU for the masses, which would have hugely influenced PC VR in 2017-2019.
I hoped that level of performance would see a 60-70% price drop. It could've happened if all the other CPUs had pulled far ahead. And that is still way, way more than a PS4 Pro or even Scorpio, so it would improve the chances of VR games being designed in a way consoles couldn't handle - which, in other words, would mean progress in VR.

It might be great for multi-core purposes, though. The 8-core doesn't look that bad.
Unfortunately I don't believe any AAA VR game will benefit from 8 cores before 2020, if ever.

cybereality
Grand Champion
Multi-core gaming is going to be big. Developers are still getting up to speed with the new graphics APIs, but the potential is there, even on current hardware. I'd expect to see progress in a few years or less. In any case, most games push the GPU much harder than the CPU, and you'd only be CPU-bound at low resolutions or on older/cheaper systems.
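A toy model of that last point (all numbers invented for illustration): a frame costs whichever of the CPU or GPU takes longer, and lowering the resolution only shrinks the GPU side, so that's where the CPU limit shows up:

```cpp
// Toy frame-time model: each frame is gated by the slower of CPU and GPU.
#include <algorithm>
#include <cstdio>

int main() {
    const double cpu_ms = 8.0;          // per-frame CPU work, resolution-independent
    const double gpu_ms_per_mpix = 6.0; // hypothetical GPU cost per megapixel
    const double resolutions_mpix[] = {0.92, 2.07, 8.29}; // 720p, 1080p, 4K
    for (double mpix : resolutions_mpix) {
        double gpu_ms = gpu_ms_per_mpix * mpix;
        double frame_ms = std::max(cpu_ms, gpu_ms);
        std::printf("%.2f Mpix: %.1f ms/frame (%s-bound)\n", mpix, frame_ms,
                    cpu_ms >= gpu_ms ? "CPU" : "GPU");
    }
}
```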

Felixm477
Expert Protege
The problem here is that some of you demand so much power. CPUs in general have peaked; I haven't even had a need to overclock anymore, as the processors have been decent so far. AMD is struggling because gamers want that e-peen they just have to have, even if they don't use every ounce of processing power. Zen will be able to give us the speed needed to finally enjoy things; it won't beat Intel, but it will be good enough for most. Intel is taking advantage of those users who just have to have the bigger number to feel they own the best.

RonsonPL
Heroic Explorer
@cybereality

Most games push GPUs more than CPUs BECAUSE console CPUs suck, not the other way around. You didn't see Amiga 500 games pushing for 16MB of RAM either, did you? 😉
And while I know what you're saying - Vulkan and DX12 should help fix what is basically a total mess in multi-threading right now - the issue remains.
There are lots of things you cannot execute before you have the results of previous calculations.
You always need to look at the details before concluding what is slowing down what (the CPU holding back the GPU, or the GPU holding back the CPU).
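That dependency limit is just Amdahl's law. If a fraction p of the frame's CPU work can run in parallel and the rest is a serial chain, n cores give a speedup of 1 / ((1 - p) + p/n). With, say, p = 0.6, eight cores only get you 1 / (0.4 + 0.6/8) ≈ 2.1x - and no core count fixes the serial 40%. (The 0.6 is an illustrative number, not a measurement.)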

That's why you see stupid benchmark charts saying memory makes just a 1-2% difference. Whereas when you lower the resolution to 720p, unlock the framerate, and focus on the minimum framerate instead of the average - when you actually build a PROPER testing environment - suddenly you see that faster memory access helps not by 2% but by 30%. Why 2% in all the tests, then? Because testing at ultra settings with 8x AA is just stupid when the game was designed for a tablet CPU in the first place.
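For anyone who wants to test that way themselves, a minimal sketch (the frame times would come from your capture tool; the ones hardcoded here are made up):

```cpp
// Average FPS hides stutter; percentile lows expose it.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> frame_times_ms = // per-frame times from a capture tool
        {8.3, 8.4, 8.2, 35.0, 8.3, 8.5, 8.2, 40.1, 8.4, 8.3};
    std::sort(frame_times_ms.begin(), frame_times_ms.end());
    double sum = 0;
    for (double t : frame_times_ms) sum += t;
    double avg_fps = 1000.0 * frame_times_ms.size() / sum;
    // "1% low": average of the slowest 1% of frames (at least one frame).
    size_t n_worst = std::max<size_t>(1, frame_times_ms.size() / 100);
    double worst_sum = 0;
    for (size_t i = frame_times_ms.size() - n_worst; i < frame_times_ms.size(); ++i)
        worst_sum += frame_times_ms[i];
    double low_fps = 1000.0 * n_worst / worst_sum;
    std::printf("average: %.0f fps, 1%% low: %.0f fps\n", avg_fps, low_fps);
}
```

With the sample numbers above this prints roughly "average: 71 fps, 1% low: 25 fps" - a game a review chart calls smooth and your eyes call a stutter-fest.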
I've been playing games at 120fps for 15 years, and at over 60fps for a few years before that. It's always the same - the majority says they know better, and then it turns out they don't.
Cache sizes? "Don't matter!" Then a game shows up that uses the cache, and suddenly a Duron 800 is 50% slower than an Athlon 800.
The 2MB Core 2 Duo is 10% slower than the 4MB version! But one site tests FEAR with shadows off, and guess what? It's 100fps vs. 160fps. No other site ran even one test showing that... and a few years later the same thing popped up in many games, simply because devs started to utilize the bigger cache.
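Anyone can see the cache effect directly with a pointer-chase probe; a minimal sketch below (the sizes swept and the exact jump points depend on your CPU):

```cpp
// Pointer-chase latency probe: time per dependent load rises sharply once
// the working set outgrows each cache level.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    std::mt19937 rng(42);
    for (size_t kb : {32, 256, 2048, 16384, 131072}) {
        size_t n = kb * 1024 / sizeof(size_t);
        std::vector<size_t> next(n);
        std::iota(next.begin(), next.end(), size_t{0});
        // Sattolo's algorithm: a random single-cycle permutation, so the
        // chase visits every slot before repeating and defeats the prefetcher.
        for (size_t k = n - 1; k > 0; --k) {
            std::uniform_int_distribution<size_t> pick(0, k - 1);
            std::swap(next[k], next[pick(rng)]);
        }
        size_t i = 0;
        const size_t steps = 10'000'000;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t s = 0; s < steps; ++s) i = next[i]; // each load waits on the last
        auto t1 = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / steps;
        // printing i keeps the chase from being optimized out
        std::printf("%8zu KB working set: %5.1f ns per access (i=%zu)\n", kb, ns, i);
    }
}
```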
I remember some idiots testing Far Cry. For almost 2 years they got 45fps everywhere, no matter whether it was a 6800, a 7800 or whatever. Suddenly Core 2 Duo CPUs showed up and... 70fps. Meanwhile the site kept insisting that CPUs don't matter nowadays, everything is GPU-bound, and the 45fps in this particular test was just down to the code (cough... cough... Forza Horizon 3 and "it's because it's designed for 30fps" - yeah, where have I heard that before...).

I was always against what the majority said. They said raising the FSB clock on Athlon XP/64 CPUs didn't matter. But some sites (and I) tested it and concluded otherwise.
Then they said memory doesn't matter, because they tested DDR3 at 1800 and at 1333MHz and there was no difference. And I disagreed.
Same story over and over again. Single-threaded performance and memory access time matter. By a lot. But not if you target 30fps gaming and/or crappy physics.

Since 2006 we have had ZERO progress in memory access time. Whenever we reach the point where it could happen, the voltage gets lowered instead: 1.8V -> 1.5V -> 1.2V.
Even now the best DDR4 is a tiny bit slower than the best DDR3, and it's all the same as DDR2 in 2006.
Of course, higher-voltage memory would consume much more power, but I seriously wouldn't mind +50W if it meant not waiting 10 years for the same performance.
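To put a number on that (typical JEDEC-style timings; exact kits vary): first-word latency is roughly CAS latency divided by the memory clock. DDR2-800 at CL5 is 5 / 400MHz = 12.5ns; DDR3-1600 at CL9 is 9 / 800MHz = 11.25ns; DDR4-2400 at CL17 is 17 / 1200MHz ≈ 14ns. Bandwidth keeps climbing, latency doesn't move.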

Things like this should've been the PC's strength, since you can't expect a console to have memory drawing 60W, not to mention mobiles.
But PC gaming doesn't matter, so we don't even have "gaming-friendly" memory on the market. Micron's HMC has existed for a few years, but since almost no one outside of high-performance gaming really needs it, it stays low-volume and ultra-high-priced. Which doesn't mean it's inherently too expensive to use for gaming - the price would drop like a snowman melting in the Sahara once volumes got high.
That's why, ever since the memory controller moved onto the CPU die, we've been at a full stop. As a lady who works as a CPU designer said in a talk linked here some time ago, we have smaller PCs today compared to 2006, but not exactly much faster ones.
I am certain that a 3-core APU equipped with low-latency HMC memory or something similar would give way better results than a traditional chip, even an 8- or 16-core one. In VR, in 120fps gaming and, most importantly, in physics, which matter so much for motion-controlled VR.

We get nasty pop-in on both consoles and even in offensively bad PC ports like FH3. They f...d up the CPU code, so they... removed half the geometry in the latest patch, because "who needs 60fps anyway" ran into loud and massive complaints from gamers. APIs like Vulkan and DX12 could help (if used properly, unlike in FH3!), but not unless you targeted low latency and high geometry/physics loads from the start.
They will help with draw calls, for example - by a lot - but only if your game issues lots of them. If you use a forward renderer you won't see the sweet +100-200% gains you could get in games like AC: Unity, where the PC gets flogged with draw calls squeezed through an inadequate DX11 API.
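Rough arithmetic on why (purely illustrative numbers, not measurements): if submission costs ~10µs per draw call on the single thread DX11 effectively gives you, 10,000 draws eat 100ms of CPU time per frame - hopeless. Spread the recording across 4 cores under DX12/Vulkan and that drops toward 25ms, before you even count the lower per-call overhead.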

Unfortunately, we are probably still decades away from technology that lets the whole memory fit on the CPU die. We could get some today, but how much? 50MB? 100? Even 200MB is nowhere near what games use now.
BTW, I'd really like to talk with someone who knows this stuff and ask whether an alternative chip design could make a revolution. Physics accelerators (Ageia's PhysX cards, before the company was acquired by Nvidia) had 128MB onboard. Maybe there's a chance here. Maybe if VR gets big enough...
After all, just a year before the first 3D accelerators (Mystique, Voodoo 1), people said the PC was not a good place for 3D gaming and expected it to take years to catch up with gaming consoles. Then "poof" - and suddenly it was the consoles that had to chase the PC.
I hope we'll get such a "poof" for VR soon. I'm so tired of the stagnation. The physics are the same as in 2005. Dragons in Skyrim and dinosaurs in The Climb fly as if on a string or a rail; the Nvidia funhouse for Vive/Move looks cool, but it's still not exactly like the real world; and scripted physics break more and more of my immersion in games once I've had 10 years to accidentally learn where to look for the flaws. I would rather have a game with 2005 graphics but next-gen physics, animations and overall world interaction than a 2025 game with superb graphics and the same old crappy, immersion-breaking animations/interaction as 20 years earlier.


Zoomie
Expert Trustee
You mention physics for things like Skyrim's dragons. For a while it looked like GPUs would take the physics load off the CPU, but that never really happened. What was AMD Zen supposed to improve so dramatically? I'm afraid I haven't really followed this chipset very closely.

Edit: Just did a little reading, and it looks similar to the jump from Nvidia's 900-series GPUs to the 1000 series. Sure, there was a performance gain, but the real gains came in efficiency. A claimed 40% power reduction is nothing to sneeze at.
Any sufficiently advanced technology is indistinguishable from magic. - Arthur C Clarke