Originally posted by Descenterace
I've been fiddling with DX9 for a few months now, writing a space-sim game engine from the bottom up, and I've decided to resurrect a question that has been bothering me.
Is it possible to set a volume of space to be filled with fog, and have it rendered by the hardware?
As far as I can tell, DX9 fog affects the whole visible world. I can't find a way of telling the renderer that this arbitrary area has to be filled with fog and that arbitrary area (e.g. the rest of the world) mustn't be. Am I reduced to using billboarding techniques and alpha blending? Or is there a way of doing this? Because if there is, I'll have found the perfect way of doing explosion effects.
Also, is there a way of simulating a 'line' light source in DX9 without just using a string of point light sources? This is important for lasers and beams; I want them to illuminate the scene realistically.
And finally, how can I do polygon-perfect collision detection? The intersection calculations become prohibitive just for a linear collision. If I include the rotation of the two masses as well, things get ridiculous. I don't want to use the old bounding-sphere method if I can help it. The physics code in my game engine requires the collision detection code to return the barycentric coordinates and face # of the impact point, as well as the normal to the face, in order to calculate the impulse on each mass and the size/position of the dent caused by the collision.
There IS a way to do volume fogging, but it isn't built into the API. The basic idea is that each back face of the fog volume adds fog from the view-point up to it, and each front face subtracts fog from itself back to the view-point (so if the rear of a fog volume is 10 units from the view-point and the front of it is 5 units away, everything behind the volume is seen through 5 units of fog). You can render the fogging info into the alpha channel (if you're planning on using the effect only for explosions, you can get away with just 8-bit precision). Of course, the main problem is interacting correctly with objects inside the fog volume.
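Something like the following is what I mean by accumulating in the alpha channel. Treat it as an untested sketch: the function name and state choices are my own assumptions, not a known-good recipe, and it presumes your shaders write eye-space depth (scaled into [0,1]) to output alpha.

```cpp
#include <d3d9.h>
#include <d3dx9.h>

// Untested sketch: 'device' is a valid IDirect3DDevice9*, 'fogMesh' is a
// closed fog-volume mesh, and the bound shaders output depth as alpha.
void AccumulateFogThickness(IDirect3DDevice9* device, ID3DXMesh* fogMesh)
{
    // Touch only the alpha channel; the colour buffer stays as rendered.
    device->SetRenderState(D3DRS_COLORWRITEENABLE, D3DCOLORWRITEENABLE_ALPHA);
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);

    // Pass 1: back faces ADD their depth (fog from the eye up to the back).
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);    // culls front faces
    device->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);
    fogMesh->DrawSubset(0);

    // Pass 2: front faces SUBTRACT their depth, removing the fog-free
    // stretch between the eye and the front of the volume.
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);   // culls back faces
    device->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_REVSUBTRACT);
    fogMesh->DrawSubset(0);

    // Restore the usual states.
    device->SetRenderState(D3DRS_COLORWRITEENABLE,
        D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
        D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
    device->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);
}
```

After the two passes the alpha channel holds (back depth - front depth), i.e. the fog thickness along each view ray, and you can then blend a fog colour over the scene using that alpha.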
I think there's a sample that comes with the DirectX SDK that implements that idea; as far as I remember it somehow manages to squeeze 12 bits of precision out of the texture interpolator for the fog data, and it explains everything better than I did.
As for collisions, I'm no expert (haven't gone to college yet, so my math is very limited), but the general idea is to rule out as many polygons as possible with simple checks first. For example, FS2 uses BSP data for collisions, but since you're doing (or so it seems) geometry modifications on the fly, that won't work, so try some alternative approach (octrees, maybe?). As I said, I'm no expert, and I'm sure you can find good articles on the subject (this is one of the most common problems in games), so don't take my word for granted.
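For the narrow phase, once the octree (or whatever) has pruned the candidate faces, a ray/triangle test like Möller-Trumbore happens to return exactly the barycentric coordinates you asked for. A rough sketch (Vec3 and its helpers are made up, and treating the collision as a swept point is a big simplification of two rotating meshes):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller-Trumbore ray/triangle intersection. Returns true on hit and
// outputs barycentric (u, v), distance t along the ray, and the
// (unnormalised) face normal. Impact point = (1-u-v)*v0 + u*v1 + v*v2.
bool RayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2,
                 float& u, float& v, float& t, Vec3& normal)
{
    const float EPS = 1e-6f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    normal = cross(e1, e2);

    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to triangle

    float invDet = 1.0f / det;
    Vec3 s = sub(orig, v0);
    u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;

    Vec3 q = cross(s, e1);
    v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;

    t = dot(e2, q) * invDet;
    return t >= 0.0f;                         // hit in front of the origin
}
```

That gives you the barycentric coordinates, the face normal, and the hit distance in one go; the face # is just whichever triangle you were testing when it returned true.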
The last subject would be line light-sources. First of all, there's nothing in the fixed-function pipeline to accommodate line light-sources, which leaves vertex shaders as the only real option.
Ideally, to compute a line light-source you'd integrate the contribution of every point on the line, but that's impossible to do in real time (well, I haven't checked, but I'm pretty sure), so what you should do instead is use the single point that contributes the most brightness, then scale the result down a bit or raise it to some low power. Again, I'm not sure of anything, but that should be about right (anisotropic lighting is simulated in a similar way, and smarter people than I came up with that one).
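For the diffuse term, my guess is that the brightest point is roughly the closest point on the light segment to the vertex, so a per-vertex version might look like the sketch below (done on the CPU purely for readability; in the engine it would live in a vertex shader, and the falloff and every name here are made up):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Diffuse contribution at 'pos' (with unit 'normal') from the light
// segment [a, b]: substitute the closest point on the segment for the
// whole line, then light it as an ordinary point light.
float LineLightDiffuse(Vec3 pos, Vec3 normal, Vec3 a, Vec3 b)
{
    Vec3 ab = sub(b, a);
    float t = dot(sub(pos, a), ab) / dot(ab, ab);  // project onto the line
    t = std::min(1.0f, std::max(0.0f, t));         // clamp to the segment
    Vec3 closest = add(a, scale(ab, t));

    Vec3 toLight = sub(closest, pos);
    float dist = std::sqrt(dot(toLight, toLight));
    if (dist < 1e-4f) return 1.0f;                 // vertex sits on the light

    float ndotl = std::max(0.0f, dot(normal, scale(toLight, 1.0f / dist)));
    float diffuse = ndotl / (1.0f + dist * dist);  // made-up falloff

    // The "raise by some low power" fudge from above, to widen the falloff.
    return std::pow(diffuse, 0.75f);
}
```

Don't take the closest-point substitution as gospel either; it ignores the rest of the segment entirely, which is exactly what the low-power fudge is trying to paper over.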