Pretty much, yeah. Even if AMD manages to beat Intel on the tech side, they've been written off for too long to recover into a true rival. AMD could come up with something that tripled Intel's performance and they still probably wouldn't take back any notable market share. Intel can just sit back and watch AMD struggle, then release their answer a few months later.
On the graphics side of things, I can't say I like AMD graphics just yet, mainly because of issues I had back when AMD's graphics card division was still the company ATi and Linux drivers were a PITA to find and get working. But I'm willing to give them a fresh shot, since I recently learned that nVidia deviates from the true OpenGL standard so much that developers who program against nVidia's misuse of OpenGL end up creating issues on the competition's hardware.
As far as I am aware (and I've been using AMD desktop GPUs in various forms, from a 7850 to an R9-285 to an R9-380 to an RX-480), AMD's Windows drivers are very stable right now and do not exhibit any surprising behaviour when it comes to OpenGL. There was a bit of wonkiness in the early days of their GCN architecture, but it seems they've worked out most of the bugs by now.

Well, I did run into a bit of weirdness where FSO was being told by the driver that it supported a certain OpenGL extension when it actually didn't, but that became irrelevant when we moved to OpenGL Core (and would only have been relevant before that if we had decided to use explicit version declarations in our shaders on Windows, which we didn't).
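For reference, the robust way to ask a Core context what it supports is to walk the extension list itself rather than trust a single capability flag. A minimal sketch, assuming a GL 3.0+ context is current and an extension loader like GLAD has already resolved glGetStringi (this is a hypothetical helper, not FSO's actual capability code):

[code]
#include <string>
#include <glad/glad.h>  // assumed loader; provides GL types and glGetStringi

// Search the current context's extension list for an exact name match.
bool has_extension(const std::string& name) {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = reinterpret_cast<const char*>(
            glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
        if (ext && name == ext)
            return true;
    }
    return false;
}
[/code]

Of course, that only tells you what the driver claims; as the story above shows, a claimed extension can still misbehave, so the safe move is to test the actual behaviour or avoid the extension entirely.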
I wouldn't be so sure. The first tests of Intel's 7000-series CPUs are coming out, and they show absolutely no improvement over Skylake when run at the same clock speed. This is actually the best chance AMD has had in a long while to catch up; all they need is a range of chips that performs within a couple of percentage points of an i7 or i5 at a lower price point. Right now the enthusiast press is warning people off Intel and recommending they wait until Ryzen drops; this is definitely the best position AMD has been in for years.
Also, the fact that AMD's CPUs haven't been competitive for a rather long time has given Intel much more room to sit on their ass and delay real performance upgrades a while longer. Intel hasn't done anything significant since 2013.

I looked up where my CPU, an i7-2600K, sits nowadays: https://www.cpubenchmark.net/common_cpus.html
My 4th-gen i7-4790K has almost identical specs to a shiny new 7th-gen i7-7700K. The differences are extremely minor, and most come from the integrated graphics, something almost nobody who buys an i7 will actually use. The 7700K has a 100 MHz clock-speed advantage at factory clocks. That's almost nothing considering the 2.5-year gap.
With the same rig, they put up the same numbers in benchmarks; the model that's 2.5 years newer is only slightly better in power efficiency and thermals.
I know diminishing returns make it harder, but it seems like they're not even trying to push performance in their consumer models.
And no, the $1700 6950X and the $1000 6900K do not count as consumer models.
...as long as your code doesn't try to use vendor-specific extensions. If it does, chances are you're going to see bad things on systems not using that vendor's cards.
The issue you mention is more an effect of nVidia making a very strong effort to get game developers to use their proprietary software packages (https://developer.nvidia.com/what-is-gameworks), which to my mind is less of a mark against nVidia and more of a mark against those studios.
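For what it's worth, the first thing worth logging when chasing this kind of vendor-specific weirdness is which driver you're actually talking to. A minimal sketch, assuming a GL context is already current (via SDL, GLFW, or whatever):

[code]
#include <cstdio>
#include <glad/glad.h>  // assumed loader; any header providing glGetString works

// Log the driver identity strings. Handy for bug reports; actually
// branching engine behaviour on these strings is its own maintenance trap.
void log_gl_driver() {
    std::printf("GL_VENDOR:   %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    std::printf("GL_RENDERER: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    std::printf("GL_VERSION:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
}
[/code]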
Very dead.
And I also think that this lack of performance improvement comes directly from Intel's monopoly on the high-performance, high-price part of the CPU market. Why would you modernise anything when what you've got now is still only a dream for about 90% of your customers?
Plus, it would seem that most new game engines get better results from a mediocre CPU & monster GPU than from a monster CPU & mediocre GPU. Of course, there are exceptions (Minecraft?). If you have to choose for cash reasons, investing more in the GPU than the CPU simply seems to be the better option.
As far as I can tell, we're hitting the point at which Moore's law runs head-first into physics constraints. It might be less that Intel is getting complacent and more that it may not even be possible to squeeze more performance out of a single chip. Intel has simply hit the limits of silicon electronics and is just incrementing numbers for marketing purposes (along with some minimal architecture optimizations).
I still can't wrap my head around it. Why then go all the way down to 10 nm in the first place? Why all the talk about 7 and 5 nm? If it's all just optimization at this point, why even bother?
BTW, would there be any benefit to using multiple CPUs? IIRC there are "processor cards" on the market, but they seem to be meant for professional applications such as data analysis and scientific computing. I'd imagine they'd have the same problems as multicore processors, only worse.
IIRC multiple CPUs, back when dual-CPU mobos were a thing, would only get you ~30% more speed? Of course this may have changed since the dual-PIII days.
You've got to be talking about multiple CPUs in desktops, right? Because servers have had multiple CPUs since... hell, probably before I was born! And multi-CPU servers also massively predate multicore CPUs.
As for benefits, it always comes back to the application & the OS. With both set up correctly, you'll get nearly linear performance improvements as you add CPUs; applications written for IBM S/390 & successor systems come to mind. Without both set up correctly, you're generally limited to the single-process/thread performance of a single CPU/core (ofc).
My guess is that the added overhead of intercommunication between two distinct CPUs means they don't see any meaningful performance gains except on embarrassingly parallel workloads, i.e. not the kind of thing any desktop user is likely to want.
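That intuition is basically Amdahl's law: whatever fraction of the work is serial caps the speedup, no matter how many CPUs you bolt on. A toy calculation (illustrative numbers, not a benchmark):

[code]
#include <cstdio>

// Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N),
// where p is the fraction of the workload that can run in parallel.
int main() {
    const double p = 0.5;  // assume only half the work parallelizes
    for (int n : {1, 2, 4, 8}) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%d CPUs -> %.2fx\n", n, speedup);
    }
}
[/code]

With p = 0.5, two CPUs get you about 1.33x, which is right in the neighbourhood of the ~30% figure mentioned above, and going from 4 to 8 CPUs barely moves the needle.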
The question I always have for AMD when they substantially undercut Intel pricing is heat + durability.

If Ryzen is as cheap as it's supposed to be, with the only problem being heat production, you'd be able to get a water cooling tower and still come out ahead. I heard those can be much quieter than fans.
I bought AMD processors a couple times, but I've gone back to Intel in recent builds because I like to achieve decent temperatures without fan speeds that sound like jet engines (even with aftermarket coolers). If AMD processors are now actually performing at the same level as their Intel equivalents, I'm curious if the other features and factors line up as well. That said, any cut into Intel's pricing is good news.
Photonics seems like the only way to progress in that case. Using light instead of electrons could allow higher speeds and even smaller sizes, but the field is relatively young, so we don't know whether it could actually be a practical solution for a desktop computer.

Well, actually... rather not. Electricity "moves" at nearly the speed of light (individual electrons drift at something like a meter per hour, but they "bump" each other, so the wave propagates at close to c), and typical optical fiber wavelengths are around 1.2 to 1.5 µm. Not to mention that the diameter of an optical fiber is at least about 125 µm, and that's the smaller, single-mode kind.
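Quick back-of-the-envelope check on that drift-speed claim, assuming copper's free-electron density and a made-up 1 mm² wire carrying 1 A:

[code]
#include <cstdio>

// Electron drift velocity: v_d = I / (n * q * A).
int main() {
    const double I = 1.0;        // current in amperes (assumed)
    const double n = 8.5e28;     // free electrons per m^3 in copper
    const double q = 1.602e-19;  // elementary charge in coulombs
    const double A = 1.0e-6;     // cross-section in m^2 (1 mm^2, assumed)
    const double v = I / (n * q * A);
    std::printf("drift: %.2e m/s = %.2f m/hour\n", v, v * 3600.0);
    // ~7e-5 m/s, i.e. roughly a quarter of a meter per hour. The signal
    // itself is carried by the field and still propagates near c.
}
[/code]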
I'm curious what the performance would be if someone actually made a fully multicore-ready game engine. Yeah, Unreal Engine can chew through frames with 4 cores, but that doesn't mean it does it optimally.
[...] further subdivide those tasks so that the engine spreads its load optimally across an arbitrary number of cores.
Of course. That's exactly what I had in mind.
Define "optimally".
There isn't a lot you can do to parallelize certain tasks. You can't, for example, run gameplay logic in parallel threads easily; there's a certain this-before-that ordering that has to stay intact if game designers want to make reasonable decisions about how the game flows.
Most game engines these days divide their thread pool such that high-level tasks like gameplay logic, physics, audio and rendering can be worked on in parallel, but there's not a lot you can do to further subdivide those tasks so that the engine spreads its load optimally across an arbitrary number of cores.
The lesson here is that multithreaded programming is hard.
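To make that concrete, here's roughly the shape of the coarse-grained split described above, as a hypothetical sketch (subsystem names invented for illustration; this is not how FSO or Unreal actually structure their frame):

[code]
#include <chrono>
#include <future>
#include <thread>

// Stub subsystems; real engines are vastly more granular than this.
static void update_gameplay() { /* sequential this-before-that logic */ }
static void step_physics()    { std::this_thread::sleep_for(std::chrono::milliseconds(4)); }
static void mix_audio()       { std::this_thread::sleep_for(std::chrono::milliseconds(2)); }
static void render_frame()    { /* consumes the transforms physics produced */ }

// One simulation tick: gameplay stays serial so its ordering guarantees
// hold, physics and audio overlap, and rendering waits only on physics.
static void tick() {
    update_gameplay();
    auto physics = std::async(std::launch::async, step_physics);
    auto audio   = std::async(std::launch::async, mix_audio);
    physics.get();     // rendering depends on physics results
    render_frame();
    audio.get();       // audio just has to land before the next tick
}

int main() {
    for (int i = 0; i < 3; ++i) tick();
}
[/code]

Even in this toy version you can see the problem: you only ever have three-ish tasks in flight, and getting more means splitting the subsystems themselves, which is exactly where the synchronization headaches start.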
There are tons of things that can be parallelized in a game like FreeSpace. Tons. For instance, when a gigantic high-poly masterpiece lumbers on screen, like an SJ Sathanas or a GTVA Colossus, right now all the people on board, all the rivets and bolts, all the Shivans and creepy biomechanical stuff are simulated sequentially. Why not divide the ship's population eight ways, and assign each fraction of the population to a different core on one of these new Ryzen things? If you divided the groups up correctly you'd make it so that the groups don't actually need to interact with each other as much - like, make one group the Science division, one group the Engineering division, and one group the Command division, for those tri-core Phenom people - and then you don't have to deal with any synchronization between the cores, since the dependency graphs are entirely isolated. Within each group, arrange the crew members in a hierarchy - a tree, to use a Computer Science term - with the department head at the top.
I can tell you what would happen if you tried to parallelise the game by dividing the points in each ship between 8 CPU cores, and it basically involves the Colossus trying to perform a constructive proof of the Banach-Tarski theorem.
Yeah, overclocking to 4+ GHz is stupid easy these days (I had my i5 running at 4.5 GHz, with some stability issues, using a cooler designed for quiet operation).
Do you do anything but game? Ryzen.
If not, 7600k.
But Ryzen typically outperforms the i5 chips in gaming workloads.
The 7700k is what you'd want to maximize gaming performance.
On similar logic I can't see AMD ever making inroads into the Dwarf Fortress player market.