With a single core it's easy enough: 3.0 GHz = 3.0 GHz (non-geeky equation), but with all these new ones that only say they have about 2.2 GHz, is that per core or the total processing power? Does that mean a new quad-core 2.0 GHz CPU has 8.0 GHz total?
No. The clock frequency states the processor's cycle rate: N cycles per second. With a 2.0 GHz clock frequency, each core runs two billion cycles per second (2,000,000,000 Hz).
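To make the arithmetic from the question above concrete (the 2.0 GHz and quad-core figures are taken straight from it), here is a trivial sketch:

```python
# Clock frequency is cycles per second, so 2.0 GHz is 2.0e9 cycles/s.
ghz = 2.0
cycles_per_second = int(ghz * 1_000_000_000)
assert cycles_per_second == 2_000_000_000

# A quad-core 2.0 GHz CPU does NOT have an 8.0 GHz clock:
# each of its four cores still cycles 2.0e9 times per second.
clocks_per_core = [cycles_per_second] * 4
assert all(c == 2_000_000_000 for c in clocks_per_core)
```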
The fact that there are multiple cores does not magically increase the cycle rate.
However, cycle rate is integral to a single core's computing power: essentially, the core can perform N operations per second (this is grossly simplified, since other factors also contribute). Computing power is measured in FLoating-point Operations Per Second (FLOPS); in modern computers it usually lands in the gigaflops range, which should come as no surprise to the attentive reader at this point.
This means that if each core has a computing power of 2 GFLOPS, a dual-core processor has a parallel processing power of 2×2 GFLOPS, a quad-core 4×2 GFLOPS, and so on. This, of course, requires that the program itself supports parallel processing, which means it must use so-called multiple threads: one thread per virtual processor.
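As a rough sketch of what "one thread per virtual processor" means in practice, here is a hedged Python example (the work function and names are made up for illustration, not from any real program) that splits a computation into one chunk per worker, so four workers can use four cores at once:

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_chunk(bounds):
    # Hypothetical stand-in for floating-point-heavy work:
    # sum of squares over a sub-range.
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

def parallel_sum(n, workers):
    # Split [0, n) into one chunk per worker: one "thread" per core.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(heavy_chunk, chunks))

if __name__ == "__main__":
    # Splitting the work does not change the answer, only who computes it.
    assert parallel_sum(1_000_000, 4) == sum(i * i for i in range(1_000_000))
```

Processes are used here instead of threads only because of CPython's global interpreter lock; in C or C++ the same pattern would use actual threads.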
Yes, to complicate matters, some processors support simultaneous multithreading (Intel calls it Hyper-Threading), which lets the operating system see one physical core as two discrete virtual processors. This can be very beneficial in some ways and problematic in others.
For older applications such as FS2_Open, which don't support multi-threaded processing, CPU performance depends on a single core's performance, which can often be expressed more or less directly as the core's clock frequency.
However, other things affect processing power in different scenarios: mainly the speed and size of the processor's memory caches and the bandwidth to system random access memory. Together these form the so-called von Neumann bottleneck. In many cases computing power is limited not by raw processing power but by the ability to transfer data between the system RAM and the processor. Not only that, but processors must cache their own instructions in addition to the data passing through.

The latency of memory operations causes the most significant delays in computing once the amount of data flow crosses a certain threshold. If, for example, the processor needs an instruction that isn't in the instruction cache, it must fetch it from memory, and the thread requiring that instruction stalls for the duration of the fetch. The same goes for data: when a thread loads a word from a memory location (64 bits long in modern processors), processes it, and stores the result back to memory, it stalls until the next piece of data can be read from memory.
The data cache speeds things up by increasing the amount of data that can be stored "closer" to the processor (computationally speaking), instead of moving each word one by one over the system bus between the RAM and the CPU.
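The practical upshot of cache locality is that access *order* matters, not just the amount of work. A minimal Python sketch of the two access patterns (in CPython the interpreter overhead masks much of the real cache effect, so treat this as an illustration of the pattern, not a benchmark):

```python
N = 1000
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    # Visits elements in the order rows are laid out:
    # consecutive accesses hit the same cached chunk.
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_column_major(m):
    # Jumps to a different row on every access:
    # poor spatial locality, more trips to main memory.
    total = 0
    for j in range(len(m[0])):
        for i in range(len(m)):
            total += m[i][j]
    return total

# Both traversals compute the same result; only the access order
# (and therefore the cache behavior) differs.
assert sum_row_major(matrix) == sum_column_major(matrix) == N * N
```

In a language like C, where the loop itself is cheap, the column-major version of this can be several times slower on large arrays purely because of cache misses.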
Here is a diagram of a reasonably modern CPU's cache structure (AMD K8 core):

Now, messing with process priority will not magically increase the core's processing power. It will, however, elevate the process to a higher priority, which prevents threads from other processes from being scheduled on that core. Setting the priority to real-time does exactly that: it gives the whole core to that single process's demands, all the time, until the priority is dropped or the process is terminated.
This might work on a multi-core processor if you set the affinity to a single core, or if the application natively uses only a single core; it leaves the other cores free to do important things like keeping the operating system responsive and working. If, however, you set a multi-threaded process to real-time priority, it can easily hog the whole CPU, prevent the operating system's own processes from running, and crash your computer.
On a single-core system this will happen every time you set a "greedy" process to real-time priority, unless the OS is protected from user errors like this.
In other words: don't do it. It's misguided and silly, and it will only grant you a placebo effect of renders going faster. If other processes are interfering with your renders, you should set their priority lower, not the rendering process's priority higher. Alternatively, set the rendering process's affinity to a core that other processes use less. Either method works.
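For the curious, both of the sane options (lower a competitor's priority, or pin a process to a core) are available from Python's standard library on Linux; this is a Linux-specific sketch, and on Windows you would use Task Manager or `start /affinity` instead:

```python
import os

# Pin THIS process to core 0 only (pid 0 means "the calling process").
# os.sched_setaffinity/os.sched_getaffinity exist only on Linux,
# hence the hasattr guard.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})
    print(os.sched_getaffinity(0))  # now reports only core 0

# Lowering priority uses nice values: a HIGHER nice value means a
# LOWER priority, and an unprivileged process can only go down.
# os.nice(5)  # uncomment to actually drop this process's priority
```

Note the asymmetry: dropping your own priority needs no special rights, but raising a priority (let alone to real-time) requires administrator/root privileges, which is one more hint that it is not the intended everyday tool.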
Incidentally, this is also why so many people see such huge frame-rate drops with FRAPS. Recording does cost some performance by itself, since each frame has to be written to the HDD as a video file, but if you set FRAPS's affinity to a different core than the game you play, it works out much smoother. When the same core tries to run both FS2_Open and FRAPS, for example, the two processes compete for it and both sets of threads slow down; when they run on discrete cores, each is free to use more processing power simultaneously.