Hard Light Productions Forums
Modding, Mission Design, and Coding => FS2 Open Coding - The Source Code Project (SCP) => Topic started by: InvalidPointers on July 04, 2012, 07:55:06 pm
-
Howdy everyone, not-very-longtime listener, first-time caller. Since this is my first time here, some introductions! I'm IlmrynAkios over on the BGS forums and TRSGM on Doom3World.org; if you play either Oblivion or Doom 3, chances are you may have heard of some of the things I've had my hands in-- Sikkmod and OBGE/OVEP. While I've been out of the modding scene for some time, I recently ran across the FSSCP and had a pretty powerful itch to get back into things.
So, long introductory cutscene over, onto the meat. In rough estimated utility:
- I see you use tangent-space normal maps for your ships which seems very wasteful. The reason everyone in the professional sector loves these is because it works well with skinned meshes (meaning you're recalculating all that stuff anyway) and because you can, in theory, recycle the same texture for multiple objects. Since FreeSpace does neither, you can instead create object-space normal maps (more on that in a sec) and skip a 3x3 matrix multiply per pixel. While your normal encoding scheme would no longer work, you can probably get away with standard DXT5. In application land, you would just need to pre-rotate the light data by multiplying it with the model's rotation matrix. Pretty basic change for wide-reaching benefits.
- You can also try removing all directional lights from the traditional lighting shader outright and instead combine them into like 3rd or 4th-order spherical harmonics, though this idea doesn't get *really* cool until...
- You can experiment with projecting the skybox into spherical harmonic coefficients and actually have things like colorful nebulae, etc. light ships, probably combining the results with (or alternately removing the need for) directional lights/suns. The disadvantage of this is that specular becomes more difficult, though very much not impossible (http://developer.amd.com/documentation/presentations/legacy/Chapter01-Chen-Lighting_and_Material_of_Halo3.pdf).
- If you do reintroduce the third normal component, consider using the Toksvig specular antialiasing technique. It's only a handful of ALU ops and will largely eliminate specular shimmer. If you ever decide to add physically-based shading and/or HDR lights this will become especially important to address.
- Creating some shader permutations for different numbers of lights could also help speed things up a bit. While the GL preprocessor is terrible (I salute you for continuing to work with OpenGL, D3D has spoiled me immensely in this regard) you may be able to use some Python scripts or something to do the heavy lifting. This has the disadvantage of making sweeping lighting changes more difficult to maintain.
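To make the application-side change from the first point concrete, here's a minimal Python sketch (names like `model_rot` are made up for illustration; in the engine this would live wherever per-model light data is set up). Since the transpose of a pure rotation is its inverse, rotating each light direction into object space once per object replaces the per-pixel 3x3 multiply on sampled normals:

```python
import math

def transpose3(m):
    """Transpose of a 3x3 matrix (rows as tuples). For a pure rotation,
    the transpose is the inverse, so this maps world space -> object space."""
    return [tuple(m[r][c] for r in range(3)) for c in range(3)]

def mul_mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Hypothetical model orientation: 90 degrees about Z.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
model_rot = [(c, -s, 0.0), (s, c, 0.0), (0.0, 0.0, 1.0)]

# One matrix-vector multiply per light per object, instead of a 3x3
# multiply per pixel to move normals out of object space.
light_world = (1.0, 0.0, 0.0)
light_object = mul_mat_vec(transpose3(model_rot), light_world)

# In the shader, sampled object-space normals are then used directly:
# n_dot_l = max(0, dot(normal_os, light_object))
```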
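On the skybox point, the projection itself is straightforward; here's a hedged Python sketch of second-order (9-coefficient) SH projection via Monte Carlo sampling. A real implementation would integrate over the skybox cubemap faces rather than an analytic `env` function, and irradiance would additionally convolve the coefficients with the cosine lobe:

```python
import math, random

def sh_basis(x, y, z):
    """Real spherical harmonic basis up to l = 2, for a unit direction."""
    return [
        0.282095,                        # l=0
        0.488603 * y,                    # l=1, m=-1
        0.488603 * z,                    # l=1, m=0
        0.488603 * x,                    # l=1, m=1
        1.092548 * x * y,                # l=2, m=-2
        1.092548 * y * z,                # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),  # l=2, m=0
        1.092548 * x * z,                # l=2, m=1
        0.546274 * (x * x - y * y),      # l=2, m=2
    ]

def project_environment(env, n=10000, seed=1):
    """Monte Carlo projection of env(x, y, z) -> radiance onto 9 SH coeffs."""
    rng = random.Random(seed)
    coeffs = [0.0] * 9
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)            # uniform on the sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        x, y = r * math.cos(phi), r * math.sin(phi)
        e = env(x, y, z)
        for i, b in enumerate(sh_basis(x, y, z)):
            coeffs[i] += e * b
    w = 4.0 * math.pi / n                     # solid angle per sample
    return [c * w for c in coeffs]

def eval_sh(coeffs, x, y, z):
    return sum(c * b for c, b in zip(coeffs, sh_basis(x, y, z)))

# A toy "nebula": bright toward +x, dim everywhere else.
coeffs = project_environment(lambda x, y, z: 1.0 + max(0.0, x))
```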
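The Toksvig adjustment really is tiny. A sketch of the core math, assuming a Blinn-Phong specular exponent; the averaged normal length comes from sampling the mipmapped normal map without renormalizing, so a shortened normal signals high normal variance inside the pixel footprint:

```python
def toksvig_exponent(avg_normal_len, spec_power):
    """Damp the specular exponent where the mip-averaged normal is short
    (i.e. where the footprint contains many divergent normals)."""
    ft = avg_normal_len / (avg_normal_len + spec_power * (1.0 - avg_normal_len))
    return ft * spec_power

# Flat region: unit-length average, exponent unchanged.
flat = toksvig_exponent(1.0, 64.0)    # -> 64.0
# Bumpy region: shortened average, exponent collapses, killing the shimmer.
bumpy = toksvig_exponent(0.8, 64.0)   # -> ~3.76
```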
Will add some more as I think of them and as I poke around the source.
edit: Fixed the link -Chief
-
I'm not really familiar with anything you just said, but you obviously know your stuff. It's great to hear your thoughts on this. :):yes:
You mentioned you are poking around the source. Are you a graphics coder perchance, in addition to modder?
-
I'm not really familiar with anything you just said, but you obviously know your stuff. It's great to hear your thoughts on this. :):yes:
You mentioned you are poking around the source. Are you a graphics coder perchance, in addition to modder?
Mainly as a hobbyist, but yes. That said, about 90% of my experience is with Direct3D in its various incarnations, and in particular D3D11 with a smidgen of D3D9 as appropriate.
EDIT: Would like to go pro at some point, FWIW.
-
ATM we're probably not going to be considering any of the lighting features you mentioned until Valathil and I get the deferred rendering system finished. As much as I'd like to get spherical harmonics and global illumination into Freespace, we need to build the cart before putting the horse in front of it.
-
ATM we're probably not going to be considering any of the lighting features you mentioned until Valathil and I get the deferred rendering system finished. As much as I'd like to get spherical harmonics and global illumination into Freespace, we need to build the cart before putting the horse in front of it.
That's something I've been kicking around a bit too. The overall light distribution and scene complexity don't seem to justify the additional overhead G-buffer generation passes add. I mean, were FS2 an FPS I could probably see it, but considering that a large portion of the scene is empty space and that most active light sources would fully encompass a ship anyway, it seems like we're in pretty good shape with the system in place!
While I hesitate to pass final judgement without first getting the gory details of the new system, it seems like you're already at a net loss per light as well. With a forward setup, you can amortize texture reads across multiple lights, whereas with a basic volume-per-light deferred approach you're going to have to retrieve the various material properties from the G-buffers with every light. You can probably get around this via a tiled approach (more Uncharted 1 than Battlefield 3) but it still feels like fixing something that shouldn't need to be fixed to begin with :( Like I said, probably need to go through the renderer before I say for sure, lest I end up with a bad case of foot-in-mouth.
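To make the amortization argument concrete, a toy read-count model (numbers are illustrative, not measured from either renderer):

```python
def forward_reads(material_textures):
    # Forward: sample the material textures once, then loop over all
    # lights in the same pass -- the reads are amortized across lights.
    return material_textures

def deferred_reads(gbuffer_textures, lights_covering_pixel):
    # Naive volume-per-light deferred: every light volume covering a
    # pixel re-fetches that pixel's G-buffer entries in its own pass.
    return gbuffer_textures * lights_covering_pixel

# With, say, 4 material textures, 3 G-buffer targets and 8 overlapping
# lights, forward does 4 reads per pixel while naive deferred does 24.
```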
-
Do you happen to know anything about the MNG file format, by the way?
-
We're interested in deferred lighting because we simply want more lights, not for any performance reasons. There's been some dissatisfaction with how FS handles the limited number of lights in our forward shader, so Valathil and I decided that deferred shading was the most scalable way to dramatically increase the number of light sources.
-
Ooh! A new coder! (http://www.theabeforum.com/images/emoticons/New/bow.gif)
:welcome:
-
- I see you use tangent-space normal maps for your ships which seems very wasteful. The reason everyone in the professional sector loves these is because it works well with skinned meshes (meaning you're recalculating all that stuff anyway) and because you can, in theory, recycle the same texture for multiple objects. Since FreeSpace does neither, you can instead create object-space normal maps (more on that in a sec) and skip a 3x3 matrix multiply per pixel. While your normal encoding scheme would no longer work, you can probably get away with standard DXT5. In application land, you would just need to pre-rotate the light data by multiplying it with the model's rotation matrix. Pretty basic change for wide-reaching benefits.
Problem is that this would invalidate all the normal maps that have been created already. Unless there's a batchable way to convert those maps to the new format, introducing it would come at quite a cost.
All in all, I would like to add my greetings to the choir. If you have the time, please join the #scp channel on irc.esper.net, that's where some of us coders and other assorted people hang out.
-
- I see you use tangent-space normal maps for your ships which seems very wasteful. The reason everyone in the professional sector loves these is because it works well with skinned meshes (meaning you're recalculating all that stuff anyway) and because you can, in theory, recycle the same texture for multiple objects. Since FreeSpace does neither, [...]
You sure? I'm under the impression that it's very common to re-use uv's on the symmetrical (or otherwise very similar) parts of a ship (at least that's what we do in Fate of the Galaxy), and unless I'm missing something, that wouldn't work with object space normal maps (without some extra tricks, anyway).
-
If you have the time, please join the #scp channel on irc.esper.net, that's where some of us coders and other assorted people hang out.
This is where ALL THE COOL PEOPLE hang out. FTFY
-
recycle the same texture for multiple objects. Since FreeSpace does neither
FS is still, unfortunately, infested with tileraped ships. So, yeah, texture reuse is plentiful.
-
What zookeeper said above is also a factor.
-
Holy lots of thread replies, Batman! Good to know about the texture stuff, though. Guess it's not the silver bullet I thought it would be :|
Problem is that this would invalidate all the normal maps that have been created already. Unless there's a batchable way to convert those maps to the new format, introducing it would come at quite a cost.
It just so happens that xNormal (http://www.xnormal.net/1.aspx) can do that. And it's batchable AFAIK. Wonderful little toolkit, in the event you haven't seen it-- it supports Morten Mikkelsen's improved tangent space handling scheme, so there could be some benefit to re-processing art anyway so long as you make the application-side changes too.
We're interested in deferred lighting because we simply want more lights, not for any performance reasons. There's been some dissatisfaction with how FS handles the limited number of lights in our forward shader, so Valathil and I decided that deferred shading was the most scalable way to dramatically increase the number of light sources.
Alright, fair enough. Have you considered clustered shading at all? It's definitely not an option for older hardware, but that doesn't mean it's off the table. According to the Chalmers internal benchies, it beats out naive volume-based deferred shading pretty handily in almost all cases.
Do you happen to know anything about the MNG file format, by the way?
Unfortunately I do not :(
-
Do you think you could work with Valathil on implementing those improvements? It seems like you might be a miracle worker, like him. :) The features you're proposing sound very interesting, but keep in mind that FSO code isn't exactly clear or easy to modify (read: it's an abomination that wouldn't look out of place in The Warp :)).
-
to be fair it's not that bad. granted, it was in horrible condition when what would become the scp got ahold of it. it has come a long way i think. some of it is still atrocious and evil however.
-
Howdy everyone, not-very-longtime listener, first-time caller. Since this is my first time here, some introductions! I'm IlmrynAkios over on the BGS forums and TRSGM on Doom3World.org; if you play either Oblivion or Doom 3, chances are you may have heard of some of the things I've had my hands in-- Sikkmod and OBGE/OVEP. While I've been out of the modding scene for some time, I recently ran across the FSSCP and had a pretty powerful itch to get back into things.
Hello and welcome! I'm not a coder (though I can probably dig and claw my way through some simple code snippets and get a general idea of what they do), but I do know stuff about textures!
So, long introductory cutscene over, onto the meat. In rough estimated utility:
- I see you use tangent-space normal maps for your ships which seems very wasteful. The reason everyone in the professional sector loves these is because it works well with skinned meshes (meaning you're recalculating all that stuff anyway) and because you can, in theory, recycle the same texture for multiple objects. Since FreeSpace does neither, you can instead create object-space normal maps (more on that in a sec) and skip a 3x3 matrix multiply per pixel. While your normal encoding scheme would no longer work, you can probably get away with standard DXT5. In application land, you would just need to pre-rotate the light data by multiplying it with the model's rotation matrix. Pretty basic change for wide-reaching benefits.
Here are my comments on this, for what it's worth:
There are several ships that use tilemaps instead of UV mapped textures, and moreover there are several ships that share certain textures - usually tiles, but I'm reasonably sure there are ships that share UV textures, or use parts of the same texture.
To retain compatibility with these assets, tangent space normals would need to remain supported, and adding object space normal map support would require a new texture type, OR alternatively some identifier that changes how the map is read and how the normal map is applied.
Ignoring the questions of feasibility: On the topic of using standard DXT5 for object-space normal map, I must very much disagree. In DXT compression algorithms, the contents of RGB channels affect each other. For diffuse and specular maps this is good enough; for normal maps, it isn't. This is the reason why the DXT5nm format is used for tangent space normal maps. It stores one normal map channel in RGB and the other in Alpha; these remain distinct in DXT5 compression, so that the contents of vertical normals channel don't end up affecting the horizontal normals channel.
With different data in red, green and blue channels, you would end up with hopeless artefacts on the normal map.
Moreover, I was under the impression that object-space normals basically store 3D vectors, rather than 4-vectors (I am assuming that 3D graphics is not yet branching to general relativity). Surely, three channels would be sufficient to store three values; in this sense, if DXT compression were used, it would have to be DXT1c for maximum memory efficiency (and horrid quality). In practice this would most likely be untenable due to quality drop, so u888 would likely be the target format.
Now that I think of it, if the engine can identify whether the normal map has an alpha channel or not, it could use that to define whether it should apply tangent space normals or object space normals. I do not know if this is feasible to execute.
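To illustrate the two-channel scheme described above (byte-level encode/decode only, ignoring the actual block compression; Python purely for illustration):

```python
import math

def encode_dxt5nm(nx, ny):
    """DXT5nm-style packing: one component goes in alpha, one in green.
    The color endpoints compress independently of the alpha block, so the
    two channels can't bleed into each other. Maps [-1,1] to a byte."""
    to_byte = lambda v: round((v * 0.5 + 0.5) * 255.0)
    return to_byte(nx), to_byte(ny)   # (alpha, green)

def decode_dxt5nm(a, g):
    """Reconstruct the third component from unit length."""
    nx = a / 255.0 * 2.0 - 1.0
    ny = g / 255.0 * 2.0 - 1.0
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    return nx, ny, nz

n = decode_dxt5nm(*encode_dxt5nm(0.6, 0.0))   # ~(0.6, 0.0, 0.8)
```

The `sqrt` reconstruction is also why this layout can't carry object-space normals: it assumes the third component is non-negative, which holds in tangent space but not for a normal pointing at the back of an object.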
The other part of your post that interested me was the stuff about making the skybox affect the lighting more, but I really have no clue about OpenGL or 3D graphics coding in general.
-
Yeah, Herra is kind of our resident skybox expert. I think it safe to assume you can go to him if you need more info on that topic.
-
dxt color channels are severely resolution limited. seeing as it's a 5:6:5 colorspace (though i think it's upsampled to 8 bit before computing the interpolated color values for the other 2 colors in a cell), it's not well suited to store object space normal data. for the red and blue channels that is 11.25 degree increments of perturbation for each value (half that for the green value). that's barring upsampling and normalization though. granted it's a vector and not a system of rotations. you could probably represent it as rotations to apply to a base vector resulting in the final normal. that way green could represent a 360 degree angle, and for the other 2 angles you only need a 180 degree rotation. 180/32 == 360/64. so you have 5.625 degree angular increments, giving you a little bit better resolution than a normalized vector, though requiring a little more math, probably not a good thing. you could also make a 2 component rotational representation work as well, but it would probably need to be uncompressed to work right.
dxt5nm is passable for 2 component normal maps; rgtc2 (bc5) is better (color values are 8 bit, with a higher number of interpolated values, so you have 8 colors per cell as opposed to 4), but not as well supported by video hardware. neither of those formats works for a 3 component map though, so i'm with herra that a u888 format would be optimal here. you could do other things, like 3 rgtc1's (bc4) to help reduce the memory footprint of those textures (i think by 2:1). of course that adds complexity: you have 3 textures instead of one for your normal maps.
*edit*
attempted to make it sound less retarded, failed
-
dxt color channels are severely resolution limited. seeing as it's a 5:6:5 colorspace (though i think it's upsampled to 8 bit before computing the interpolated color values for the other 2 colors in a cell), it's not well suited to store object space normal data. for the red and blue channels that is 11.25 degree increments of perturbation for each value (half that for the green value). that's barring upsampling and normalization though. granted it's a vector and not a system of rotations. you could probably represent it as rotations to apply to a base vector resulting in the final normal. that way green could represent a 360 degree angle, and for the other 2 angles you only need a 180 degree rotation. 180/32 == 360/64. so you have 5.625 degree angular increments, giving you a little bit better resolution than a normalized vector, though requiring a little more math, probably not a good thing. you could also make a 2 component rotational representation work as well, but it would probably need to be uncompressed to work right.
dxt5nm is passable for 2 component normal maps; rgtc2 (bc5) is better (color values are 8 bit, with a higher number of interpolated values, so you have 8 colors per cell as opposed to 4), but not as well supported by video hardware. neither of those formats works for a 3 component map though, so i'm with herra that a u888 format would be optimal here. you could do other things, like 3 rgtc1's (bc4) to help reduce the memory footprint of those textures (i think by 2:1). of course that adds complexity: you have 3 textures instead of one for your normal maps.
*edit*
attempted to make it sound less retarded, failed
It's actually worse than that if you only store vectors that lie on the unit sphere (just as a heads-up, my use of the words 'could work' means just that-- having written DXT compressors I'm fully aware it's not ideal!), since you limit the possible values of the other channels. With that said, though, you're only placing restrictions on the endpoints of the approximation to surface orientation normal maps provide. With interpolation and normalization you get a lot of fidelity/gradation back, even if it isn't an accurate representation of the source mesh or whatevs that you generated said normal map with. Welcome to computer graphics, where If It Looks Right, It Is Right®
EDIT 2: As far as I am aware most GPUs decompress DXT5 to A8R8G8B8 in the texture cache for blending. There are also a significant fraction that leave DXT1 as-is in 5:6:5-- good for texturing performance, but it ends up looking like ****.
To retain compatibility with these assets, tangent space normals would need to remain supported, and adding object space normal map support would require a new texture type, OR alternatively some identifier that changes how the map is read and how the normal map is applied.
A model flag seems the best way (can we do that?) to accomplish this; you just pick a different lighting shader that skips doing the transforms. A deferred shading architecture would unfortunately render this useless, though, so I think it's a dead end.
Ignoring the questions of feasibility: On the topic of using standard DXT5 for object-space normal map, I must very much disagree. In DXT compression algorithms, the contents of RGB channels affect each other. For diffuse and specular maps this is good enough; for normal maps, it isn't. This is the reason why the DXT5nm format is used for tangent space normal maps. It stores one normal map channel in RGB and the other in Alpha; these remain distinct in DXT5 compression, so that the contents of vertical normals channel don't end up affecting the horizontal normals channel.
With different data in red, green and blue channels, you would end up with hopeless artefacts on the normal map.
As has been mentioned, 3Dc/ATI2/BC5 is really the better choice since you have 2 8-bit DXT channels. Is there a reason why we don't use that, btw? Really, CTX would be the best choice here, but that's only on Xenos. Kind of surprised that Microsoft didn't add that into D3D11, but I don't work there. Also, while I touched on this earlier, it bears mentioning that the error metric most DXT compressors use is designed solely for use with color data and is really, really bad for normals-- it's based on human color perception and not angular difference. If you change this out, the quality boost is pretty insane and Crytek lists this as one of the major reasons why all the surfaces in Crysis 2 look butter-smooth.
Moreover, I was under the impression that object-space normals basically store 3D vectors, rather than 4-vectors (I am assuming that 3D graphics is not yet branching to general relativity). Surely, three channels would be sufficient to store three values; in this sense, if DXT compression were used, it would have to be DXT1c for maximum memory efficiency (and horrid quality). In practice this would most likely be untenable due to quality drop, so u888 would likely be the target format.
Now that I think of it, if the engine can identify whether the normal map has an alpha channel or not, it could use that to define whether it should apply tangent space normals or object space normals. I do not know if this is feasible to execute.
It's a pretty trivial problem to solve, actually. I don't think it's a useful direction of research based on my understanding of the technology roadmap, however.
EDIT: You could encode the entire tangent space as a quaternion and pop that into a DXT5. Not super-useful outside of anisotropic lighting but it's one of those things where it just feels weirdly promising.
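A sketch of what that quaternion packing might look like (illustrative Python; `pack_quat`/`unpack_quat` are hypothetical names, and a real version would still have to fight the DXT5 block compressor on top of this quantization):

```python
import math

def quat_to_tbn(q):
    """Rotate the canonical frame by unit quaternion q = (x, y, z, w);
    returns (tangent, bitangent, normal) as the rotation matrix columns."""
    x, y, z, w = q
    t = (1 - 2*(y*y + z*z), 2*(x*y + w*z), 2*(x*z - w*y))
    b = (2*(x*y - w*z), 1 - 2*(x*x + z*z), 2*(y*z + w*x))
    n = (2*(x*z + w*y), 2*(y*z - w*x), 1 - 2*(x*x + y*y))
    return t, b, n

def pack_quat(q):
    """Map each component from [-1,1] to an 8-bit channel, forcing w >= 0
    first (q and -q encode the same rotation) so no sign bit is needed."""
    if q[3] < 0.0:
        q = tuple(-c for c in q)
    return tuple(round((c * 0.5 + 0.5) * 255.0) for c in q)

def unpack_quat(packed):
    q = [v / 255.0 * 2.0 - 1.0 for v in packed]
    norm = math.sqrt(sum(c * c for c in q))  # renormalize after quantization
    return tuple(c / norm for c in q)
```

The nice property is that one 4-channel texture carries the whole frame -- tangent, bitangent and normal -- which is what makes it interesting for anisotropic lighting.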
The other part of your post that interested me was the stuff about making the skybox affect the lighting more, but I really have no clue about OpenGL or 3D graphics coding in general.
tl;dr it's why Halo: Reach looked so awesome (http://www.youtube.com/watch?v=QJPqBt2KGpI). Area lights! Physically-plausible shading models! On a seven-year-old piece of hardware! Cut back only marginally for the shipping release!
Do you think you could work with Valathil on implementing those improvements? It seems like you might be a miracle worker, like him. :) The features you're proposing sound very interesting, but keep in mind that FSO code isn't exactly clear or easy to modify (read: it's an abomination that wouldn't look out of place in The Warp :)).
I'll see what I can do after I get done purging all the daemon filth ;)
-
Please always bear in mind that, whatever happens, we will never go back to a DX renderer. Maintaining more than two render paths (fixed function/programmable) is a bad idea for a team whose programmer retention rate is, let's face it, not all that great.
That said, you _really_ need to join the irc channel. It's where most of the magic happens.
-
Not to mention, obviously, that DX means only windows users get to have fun (or not) with it.
-
That said, you _really_ need to join the irc channel. It's where most of the magic happens.
Links can be found up top, under the HLP drop-down, FYI, if that makes it a bit more convenient for you. ;)
EDITed: so it didn't sound like I was saying he didn't know how to join an IRC channel *facepalm*