Think of a thread as a separate program coexisting within the same memory space as 1) the main process and 2) any other threads.
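A minimal sketch of that shared memory space, using Python's `threading` module (the names here are made up for illustration): a spawned thread sees and mutates the very same objects the main thread created, with no copying involved.

```python
import threading

shared = []  # created by the main thread

def worker():
    # The worker thread touches the same list object as main --
    # threads share one address space, so nothing is copied.
    shared.append("written by worker")

t = threading.Thread(target=worker)
t.start()
t.join()

print(shared)  # the main thread observes the worker's write
```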
So the instructions from one thread might all be run on one core (ignoring the operating system moving threads around cores), while instructions from another thread might all be run on a different core, or time-sliced by the operating system onto the same core. You can even have multithreaded applications on a single core, although they may suffer a slight performance decrease because the context switches keep invalidating the CPU cache.
Because they're in the same space, you've gotta be careful with reading and writing memory, because you can overwrite other threads' data, and there might be no way of telling that this has happened. (The example given by (taylor?) above using the milkshop analogy is really good for seeing how this works)
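To make the overwrite problem concrete, here's a sketch (again Python's `threading`; the names are illustrative) of two threads doing an unprotected read-modify-write on the same counter. `counter += 1` is really three steps — read, add, write — so one thread's write can land between the other's read and write, silently wiping out an update:

```python
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        # NOT atomic: load counter, add 1, store counter.
        # Another thread can store in between, and that
        # update gets silently overwritten.
        counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# You'd expect 200000, but it may come out lower --
# and nothing tells you anything went wrong.
print(counter)
```

Note there's no reliable "expected output" here: whether updates are lost depends on how the scheduler happens to interleave the threads, which is exactly what makes these bugs so nasty to reproduce.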
This is the reason why any multithreaded application needs to be designed from the ground up to have threads, and you've gotta be awake when you're implementing them (there are some very, very easy-to-make mistakes which will cause all sorts of havoc).
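One standard defence against the overwrite problem is to guard every access to shared data with a mutual-exclusion lock (a mutex). A sketch using `threading.Lock` (names are illustrative, not from any particular codebase):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write atomic with
        # respect to the other thread: only one thread can
        # hold the lock (and touch counter) at a time.
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 200000
```

The flip side is that locks introduce their own easy-to-make mistakes (forgetting to take the lock on one code path, or two threads taking two locks in opposite orders and deadlocking), which is part of why threading has to be designed in from the start rather than bolted on.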