There are tons of things that can be parallelized in a game like FreeSpace. Tons. For instance, when a gigantic high-poly masterpiece lumbers on screen, like an SJ Sathanas or a GTVA Colossus, right now all the people on board, all the rivets and bolts, all the Shivans and creepy biomechanical stuff are simulated sequentially.
What? No. That's not what is happening at all. Not even a simulationist nightmare like Star Citizen would do it this way. Either you have no clue at all about game engines, or you're constructing an analogy so deeply flawed that it's invalid by design.
Why not divide the ship's population eight ways, and assign each fraction of the population to a different core on one of these new Ryzen things? If you divided the groups up correctly you'd make it so that the groups don't actually need to interact with each other as much - like, make one group the Science division, one group the Engineering division, and one group the Command division, for those tri-core Phenom people - and then you don't have to deal with any synchronization between the cores, since the dependency graphs are entirely isolated.
This assumes that all those tasks are computationally equivalent (i.e. that they require the same amount of work per frame). That is not the case: in FSO (and I would imagine the same holds true for most other engines), setting up a frame for rendering and actually rendering it require the most computational effort, followed by physics, audio, and gameplay logic.
Here's an illustration: an accumulated graph from our profiler showing how CPU time is divided up between tasks, recorded during one of the WiH cutscenes.
Look at the second row. There's a small purple-ish label there that says "Simulation". On every frame, this takes an average of 3.6 ms to execute (an average that is heavily distorted by the first frame, because that one takes 216 ms to process; it is not uncommon for this step to take only a couple of nanoseconds).
Now look at the long brown bar right next to it that says "Render Frame". That's all the CPU time spent issuing commands to the GPU and waiting for them to finish. It takes an average of 40 ms on my machine (again a distorted average: this step takes anything from 9 ms to 700 ms).
Due to the nature of OpenGL, there is little we can do to split up the tasks that make up frame rendering: some tasks have to be completed before others can be started, and since there's only one GPU in the system, no matter how much we parallelize the task of issuing commands to it, everything still has to wait for the GPU to work through them.