Hard Light Productions Forums
Modding, Mission Design, and Coding => FS2 Open Coding - The Source Code Project (SCP) => Topic started by: Descenterace on September 06, 2003, 04:50:40 am
-
I've been fiddling with DX9 for a few months now, writing a space-sim game engine from the bottom up, and I've decided to resurrect a question that has been bothering me.
Is it possible to set a volume of space to be filled with fog, and have it rendered by the hardware?
As far as I can tell, DX9 fog affects the whole visible world. I can't find a way of telling the renderer that this arbitrary area has to be filled with fog, and this arbitrary area (e.g. the rest of the world) mustn't be. Am I reduced to using billboarding techniques and alpha blending? Or is there a way of doing this? Because if it does work, I'll have found the perfect way of doing explosion effects.
Also, is there a way of simulating a 'line' light source in DX9 without just using a string of point light sources? This is important for lasers and beams; I want them to illuminate the scene realistically.
And finally, how can I do polygon-perfect collision detection? The intersection calculations become prohibitive just for a linear collision. If I include the rotation of the two masses as well, things get ridiculous. I don't want to use the old bounding-sphere method if I can help it. The physics code in my game engine requires the collision detection code to return the barycentric coordinates and face # of the impact point, as well as the normal to the face, in order to calculate the impulse on each mass and the size/position of the dent caused by the collision.
-
It may be expensive, but to limit which objects are fogged you can check whether each object is inside a fog cloud and then enable or disable fog based on the result.
-
Originally posted by Descenterace
I've been fiddling with DX9 for a few months now, writing a space-sim game engine from the bottom up, and I've decided to resurrect a question that has been bothering me.
Is it possible to set a volume of space to be filled with fog, and have it rendered by the hardware?
As far as I can tell, DX9 fog affects the whole visible world. I can't find a way of telling the renderer that this arbitrary area has to be filled with fog, and this arbitrary area (e.g. the rest of the world) mustn't be. Am I reduced to using billboarding techniques and alpha blending? Or is there a way of doing this? Because if it does work, I'll have found the perfect way of doing explosion effects.
Also, is there a way of simulating a 'line' light source in DX9 without just using a string of point light sources? This is important for lasers and beams; I want them to illuminate the scene realistically.
And finally, how can I do polygon-perfect collision detection? The intersection calculations become prohibitive just for a linear collision. If I include the rotation of the two masses as well, things get ridiculous. I don't want to use the old bounding-sphere method if I can help it. The physics code in my game engine requires the collision detection code to return the barycentric coordinates and face # of the impact point, as well as the normal to the face, in order to calculate the impulse on each mass and the size/position of the dent caused by the collision.
There IS a way to do volume fogging, but it isn't built into the API. The basic idea is that each back face of the fog volume adds fog from the view-point up to it, and each front face subtracts fog from it back to the eye point (so if the rear of a fog volume is 10 units from the view-point, and the front of it is 5 units from the view-point, everything behind it is seen through 5 units of fog). You can render the fogging info into the alpha channel (if you're planning on using the effect only for explosions, you can get away with only 8-bit precision). Of course, the main problem is interacting correctly with objects inside the fog volume.
I think there's a sample that comes with the DirectX SDK that implements that idea, and it somehow manages to steal 12 bits of precision by using the texture interpolator for the fog data, as far as I remember. It explains everything better than I did.
As for collisions, I'm no expert (haven't gone to college yet, so my math is very limited), but the general idea is to rule out as many polygons as possible with simple checks. For example, FS2 uses BSP data for collisions, but since you're doing (or so it seems) geometry modifications on the fly, that won't work, so try some alternative approach (octrees, maybe?). As I said, I'm no expert, and I'm sure you could find good articles on the subject (this is one of the most common problems in games), so don't take my word for granted.
The last subject would be line light-sources. First of all, there's nothing in the fixed-function pipeline to accommodate line light-sources. That leaves vertex shaders as the only possible solution.
Ideally, to compute a line light-source you'd integrate the contribution of every point on the line, but that's impossible to do in real time (well, I haven't checked, but I'm pretty sure). So what you should do is basically use the point that contributes the most brightness, and then scale it a bit or raise it to some low power. Again, I'm not sure of anything, but that should be about right (anisotropic lighting is simulated in a similar way, and smarter people than I came up with that technique).
-
Vertex shaders were made for things like tube lights; it's the perfect solution.
-
I've figured out the collisions problem. Solution: something I call 'normal maps'.
Each object has an array of distances and coordinate information. First subscript is xy plane bearing to point on bounding sphere, and the second subscript is yz plane bearing. Each element in this array is calculated as follows, and stored as part of the ship model file:
1. A line is drawn from the centre of the bounding sphere to the point defined by the subscripts.
2. The point at which this line intersects the model (furthest from the centre) is determined.
3. The face intersected by the line, plus the barycentric coordinates of the intersection, are determined.
4. This data is saved in the array.
The array will be a QR_NORMAL_MAP structure, with subscripts [20][11]. There are 20 lines of longitude around the sphere, and 5 lines of latitude above and below the equator. Prime meridian is defined as the xz plane.
This gives an apparent total of 220 points on the bounding sphere, which is adequate resolution. The equator is 5, south is 0, and north is 10 (C++ uses zero-based subscripts). In reality, the total is less than 220 because a second subscript of 10 is the north pole, regardless of the first subscript.
How the normal map is used:
When two bounding spheres are determined to have touched each other, a snapshot is taken of their positions at the moment of contact. Each vertex in each object is cycled through (some are dismissed due to face normals, etc). Distance and bearing from the centre of the other object is calculated, and used to look up the normal map for the closest bearing. The distance in the normal map tells the physics engine how close a vertex has to be to be inside the object. The vertex whose distance is closest to the normal map distance is assumed to have hit.
I could use the normal map to refine the bounding sphere test, too, using the bearings from one CoM to the other.
That's a horrendously complex solution, and drinks a huge amount of memory, but it should work OK. I won't bother updating it when dents occur, because they're only cosmetic and won't be deep enough to affect collisions too much.
-
Normal map out the window. Simpler solution: do a different game engine type and use spherical approximations.
Vertex shader is coming along nicely. All problems are now solved except for shadows, but I can do them with multipass rendering and stencil buffers.
In short, Project Vertigo is a major 'Go', so gimme a year and then expect some eye candy. Highest detail level: individual muzzle flashes will light the scene and cast shadows.
Niiiiiice...
-
Spherical collisions kinda cause problems with cylindrical ships, don't they?
Consider the Iwar2 method (here I go again): collision hulls. A collision hull is a simplified version of an object's geometry (call it LOD3.5). When you're only worrying about a set of 30 or 40 polys, rather than a 1200-poly ship, it makes the math a hell of a lot simpler.
-
Because the models in Project Vertigo are multiple-section meshes to enable animation, each submesh has its own bounding sphere. If this engine was to be used for FreeSpace 2, each ship would have multiple component parts.
So Beams really would cut up ships.
-
Originally posted by Descenterace
Because the models in Project Vertigo are multiple-section meshes to enable animation, each submesh has its own bounding sphere. If this engine was to be used for FreeSpace 2, each ship would have multiple component parts.
So Beams really would cut up ships.
OK, what would we be talking about if the coders tried to merge that into FS2?
The entire physics engine rewritten, I think?
-
Originally posted by Descenterace
Because the models in Project Vertigo are multiple-section meshes to enable animation, each submesh has its own bounding sphere. If this engine was to be used for FreeSpace 2, each ship would have multiple component parts.
So Beams really would cut up ships.
Right... so what I'm suggesting is that each part gets its own unique collision hull...
Spheres have problems because they assume a uniform distribution over volume. In the case of any solid that isn't nearly spherical, such as, say, an Orion, you'd be getting false positives (HITS!) in empty space.
-
Ummm, the collisions I'm talking about here are collisions between ships. Weapons fire is handled quite easily by using DirectX's D3DXIntersectMesh function which takes an LPD3DXMESH and two D3DXVECTOR3* arguments. The vectors are the initial point and direction of the ray, and the mesh is... the mesh. It returns the number of the face hit, the distance along the ray of the hit, and the barycentric coordinates of the hit.
Besides, I intend to use Project Vertigo initially for a game where ships have spherical shields...
And damn, do I love shaders. I've figured out perfect projection shadows using shaders of varying complexity. For example, if you're trying to obtain a stencil mask for doing an alpha-blend shadow, use a pixel shader that only outputs depth data and vertex shader which doesn't process lighting.
-
Originally posted by Descenterace
Ummm, the collisions I'm talking about here are collisions between ships. Weapons fire is handled quite easily by using DirectX's D3DXIntersectMesh function which takes an LPD3DXMESH and two D3DXVECTOR3* arguments. The vectors are the initial point and direction of the ray, and the mesh is... the mesh. It returns the number of the face hit, the distance along the ray of the hit, and the barycentric coordinates of the hit.
My point stands. Using spherical volumes, two Orions will collide when they are nowhere near each other. It doesn't matter if you're talking about weapon fire or hull collisions: spherical volumes are entirely too prone to errors unless your ships are spherical.
Even if your shields are spherical, what do you do when the shields are down? Do shields collide with shields?
As for your point about using a D3D function for mesh intersection:
A) It's not portable (which is really neither here nor there; it's just that DirectX is a lock-in and thus a pet peeve).
B) Using collision hulls instead of raw avatar geometry would speed up the intersection calculation immensely. It's a small optimisation in the case of a ship or two, but if you like having lots of ships on screen, or lots and lots of weapons fire, this will save you a huge number of clocks.
-
Sphere collision is very useful for object culling.
-
Originally posted by mikhael
Spherical collisions kinda cause problems with cylindrical ships, don't they?
Consider the Iwar2 method (here I go again): collision hulls. A collision hull is a simplified version of an object's geometry (call it LOD3.5). When you're only worrying about a set of 30 or 40 polys, rather than a 1200-poly ship, it makes the math a hell of a lot simpler.
Meh, FS2 is not Iwar2. In Iwar2 you go so fast it's hardly possible to get close enough to anything to collide (in a dogfight situation, not when pirating). In FS2, you get up close and personal with capships. I can imagine the frustration of a player wanting to fly through the arms of a Sathanas and bumping into a 1000-metre-wide invisible wall :/
Also, in FS2 you have subsystems, turrets, etc, that play a real role in gameplay. How are you supposed to destroy/disable them if they're inside a collision box you can't go through? mmh?
Finally, since none of the thousands of POFs available for FS2 have those collision meshes, who would volunteer to add them? :p
-
Originally posted by Venom
Meh, FS2 is not Iwar2. In Iwar2 you go so fast it's hardly possible to get close enough to anything to collide (in a dogfight situation, not when pirating). In FS2, you get up close and personal with capships. I can imagine the frustration of a player wanting to fly through the arms of a Sathanas and bumping into a 1000-metre-wide invisible wall :/
With a spherical collision model, Venom, you could never fly through the arms of a Sathanas. You'd collide with that invisible wall before you ever got there. You'll notice that you didn't collide with invisible walls in Iwar2. That's because collision hulls do follow the real contour of the mesh. A spherical collision model doesn't do that. It's a sphere. There's a lot of empty volume for false positives. If you'd like, I can show you pictures of what I mean.
Also, in FS2 you have subsystems, turrets, etc, that play a real role in gameplay. How are you supposed to destroy/disable them if they're inside a collision box you can't go through? mmh?
Finally, since none of the thousands of POFs available for FS2 have those collision meshes, who would volunteer to add them? :p
We're talking about this guy's personal engine, not the FS2 engine. Thus, your point is irrelevant.
However, let's just assume we ARE talking about FS2: I don't recommend switching FS2 to collision hulls.
However, let's play devil's advocate. :D Assuming someone liked the idea (since it would improve the speed of the overall engine for the sake of a little more memory and a slightly longer load), no one would have to go and make collision hulls for the existing models out there. In most cases LOD2 could be used just fine. For mods that don't have proper LOD2s, just default to the simplest LOD available. This would, of course, mean using LOD0 for some mods, but those should be rare (I would hope). Problem solved.
Next, there is no reason you wouldn't be able to hit subsystems or turrets. In Iwar2 you can destroy subsystems just fine, even though they are within the collision hull. Since you're targeting the subsystem, the game knows that you're attacking it. With turrets, the intersection calculation shows that your fire is hitting the collision hull of the turret (turrets are low-poly enough that they can be their own collision hulls). Thus turrets and subsystems are taken care of.
-
:wtf:
OK, there's apparently a lot of arguing about collision resolution here.
First thing that springs to mind is progressive meshes. You store the ship as one mesh, but can reduce it to a lower LOD at will (for distance detail reduction and for collisions). Simple solution. However, the game I'm writing doesn't normally need the complexity of hull-level collisions. Spheres are fine.
When shields are down... I will indeed have to use a more accurate representation of the ship. I'm not even sure that I'm going to include armour plating in this game, though. Zero shields might equal 'dead'.
BTW, how would you determine whether two collision hulls had collided? Perhaps extend a vertex normal from the closest vertex on one through the mesh of the other, and consider the direction of the normal of the first intersected face?
-
How many video cards these days support multielement textures? That's multielement textures, not multitexturing. I'm looking for a way of rendering light data to a surface while keeping the diffuse and specular components separate. At this rate, I'll have to turn off specular lighting for cards that don't support multielement textures, which includes my own Radeon 9700 Pro.
Pity you can't assign two separate textures as pixel shader output surfaces instead of having to use a single multielement texture. It'd make things soooooo simple.
I've done all my vertex shaders now. I need to test them, though, and I haven't written enough of the engine to render anything yet. So now I'm hoping that a friend of mine will buy the Radeon 9800 Pro I recommended, so I'll be able to test the engine on a capable platform.
Ah well, specular lighting isn't that important. Anyway, by the time the game's finished, every graphics card (or driver, at any rate) will have the features I need. Since the shaders are software shaders, they can use texture formats that are implemented by the DirectX software. So hardware support isn't critical.
-
Originally posted by Descenterace
:wtf:
OK, there's apparently a lot of arguing about collision resolution here.
Don't worry. That's what Venom and I do. You could say it's our reason to live. ;)
-
Originally posted by Descenterace
How many video cards these days support multielement textures? That's multielement textures, not multitexturing. I'm looking for a way of rendering light data to a surface while keeping the diffuse and specular components separate. At this rate, I'll have to turn off specular lighting for cards that don't support multielement textures, which includes my own Radeon 9700 Pro.
Pity you can't assign two separate textures as pixel shader output surfaces instead of having to use a single multielement texture. It'd make things soooooo simple.
I've done all my vertex shaders now. I need to test them, though, and I haven't written enough of the engine to render anything yet. So now I'm hoping that a friend of mine will buy the Radeon 9800 Pro I recommended, so I'll be able to test the engine on a capable platform.
Ah well, specular lighting isn't that important. Anyway, by the time the game's finished, every graphics card (or driver, at any rate) will have the features I need. Since the shaders are software shaders, they can use texture formats that are implemented by the DirectX software. So hardware support isn't critical.
Erm, the Radeon 9800 doesn't implement any new features. It's just faster (Not that I'm not enjoying mine :) )
-
Originally posted by mikhael
Originally posted by Descenterace
:wtf:
OK, there's apparently a lot of arguing about collision resolution here.
Don't worry. That's what Venom and I do. You could say it's our reason to live. ;)
Yeah :D The thing about collisions is that I like the way it is now: if I see a small trench in a hull, I can fly there; if I see a little bump on the hull, I'll have to avoid it; etc. The fact that hulls are, well, WYSIWYG :D
-
Right then. To stop people wondering whether collisions will have to be accurate in my game, I'll tell you the project's real name:
Descent 4: Ground Zero.
There. Done it. Now please, no one say that D4 is too big a project, because it's coming along just fine. Engine: no problem. It's just a case of getting a high pixel rate, good sound quality, and reliable input functions. The game content won't be started until next year, by which time I should know enough people at University to get some of them working on 3D models, sound effects, videos and levels.
And the concept drawings are mostly done, as is the storyline, so unfortunately there's nothing for anyone else to do until I finish the engine.
-
Another Descent 4 project?
Ah well.
Anyway, if it's an FPS, collision has to be accurate.
-
/me thinks of comments that pop up when FS3 is mentioned.
/me thinks you should rename that one.
-
Okay, to steer the topic back to the original subject, just to clarify what I meant about volumetric fog: assume every model representing a fog volume is closed (every ray that intersects the model must intersect it an even number of times, not counting cases where a ray is tangential to part of the model).
The sum of the Z values (or radial distances from the eye-point, if you want to compute it in a more accurate and computationally heavy PS2.0 pixel shader) of all the fog model's back faces acts as the far plane, and the sum of the Z values of all the front faces acts as the near plane.
Hope that explains the idea more clearly.
-
Yeah, I found a Volumetric fog sample in DX9 SDK. It looks easy enough to implement. I'm using PS_2_X and VS_2_X anyway, so this shouldn't present a problem.
The problem's gonna be providing alpha-blended explosions for people who use lower detail settings...
-
PS_2_X means you probably don't expect to have the game done for four years or so, since that seems a reasonable time-frame for serious market penetration of PS_2_X cards (even though chances are the next-gen parts will skip 2_x and go directly to 3_0; at least for vertex shaders that seems likely, since the only difference is the texture read in the vertex shader).