Hard Light Productions Forums
Modding, Mission Design, and Coding => FS2 Open Coding - The Source Code Project (SCP) => Topic started by: zookeeper on July 25, 2014, 02:30:37 pm
-
In r10944, I fixed some RTT bugs I found, both pertaining to camera-based full-scene rendering with mn.renderFrame():
1. Normals were inverted on all geometry, so you'd only see backfaces.
2. If post-processing was off, the output bitmap was mirrored on both axes, and a quarter of it on the left side was left black.
Oddly enough, I don't recall seeing anyone mention those before. Has RTT just been so broken for so long that no one's even tried to use it in the last two years or so? :ick:
If you have any RTT-based scripts, I'd suggest checking whether they now work where they previously didn't... or, if they somehow did work before (not that I see how that'd be possible), that they still do. There still seem to be some bugs left, such as the output being stretched vertically when post-processing is on, but I'll try to fix those too if possible.
-
I think it's been that way since I last messed with it, which was probably around the time Diaspora came out with all the bells and whistles I had prototyped with scripting several years prior. I just assumed it was the kind of thing that was better off handled engine-side rather than done with scripting, because there were just so many things that didn't look right due to not having full control over the GPU.
I think the best way to deal with it is to have a blank-slate HUD gauge that you can create in hud_gauges.tbl or whatever, and then have a hook into rendering where all the appropriate render contexts are set up for you, so you can just focus on the nitty-gritty of drawing stuff (including calls to mn.renderFrame()). This would work whether it's an RTT gauge or an on-screen HUD gauge: in the RTT case, render to the texture bounds; in the on-screen case, render within a clipping box. The reason is the simple problem that screen space is pretty much backwards from texture space (the screen starts at the top while textures start at the bottom), so mn.renderFrame() (and many other functions as well) probably needs to be sensitive to that context and configure itself correctly; see the sketch below. Shader support would be nice too.
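To make that coordinate mismatch concrete, here is a minimal fixed-function GL sketch; setup_2d_projection is a hypothetical helper for illustration, not actual engine code:
[code]
// Hypothetical helper: screen space has y = 0 at the top, texture space
// has y = 0 at the bottom, so the 2D projection differs per render context.
void setup_2d_projection(int width, int height, bool render_to_texture)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (render_to_texture)
        glOrtho(0.0, width, 0.0, height, -1.0, 1.0);  // y grows upward (texture)
    else
        glOrtho(0.0, width, height, 0.0, -1.0, 1.0);  // y grows downward (screen)
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
[/code]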
My cockpit scripts should still be on SVN if you need something to test against, though considering their age I'm not certain what other problems they may have. I know my method of turret control has long been deprecated, so those features don't work too well, but I was pretty sure the RTT panels were functional last I checked. My scripts all need a full rewrite with OOP rather than the functional code I used at the time.
-
Eh, actually I noticed that normals are still inverted when post-processing is on; I'll have to fix that.
EDIT: Fixed in r10945. That commit also fixes the way the rendered scene was stretched and offset wrong.
-
I'd also make sure the draw calls and things like getScreenCoords() work properly too. That's one of those things I had problems with: you would flip your UV space to get your screens oriented the right way, and then you'd draw a HUD on top of it and the targeting brackets wouldn't line up and the text would be upside down. I never got consistent operation out of any of it, so I would just pile workarounds on top of workarounds, which in all fairness may have been why things were the way they were.
-
Well, I'm not specifically going to go looking for possible bugs; I'll just keep fixing the ones I encounter myself and which get in my way... which in all likelihood is most of them anyway. Right now the only one I'm seeing is that suns get drawn in the wrong place.
-
The main ones to worry about are anything that operates on or returns screen coordinates. Fixing that probably just needs some 2D transforms applied to the inputs and outputs while in the context of an RTT panel; see the sketch below. Anything high-level like draw calls for text, bitmaps, or targeting brackets might also require extra scrutiny.
Actually, it wouldn't be too bad to have more control over rendering-related stuff when doing render-to-texture cameras. Say you want to do cockpit rear-view mirrors: then you actually want RTT to flip the x coordinate. That might be one of those cases where you can just flip your UV space, though, since you are not going to be drawing HUDs on top of it.
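A rough sketch of the kind of 2D transform meant here; screen_to_rtt and its parameters are made up for illustration:
[code]
// Made-up helper: remap a screen-space coordinate into RTT panel space.
// panel_w/panel_h are the target texture's dimensions.
void screen_to_rtt(float& x, float& y, int panel_w, int panel_h, bool mirror)
{
    y = panel_h - y;        // texture rows run bottom-up, screen rows top-down
    if (mirror)
        x = panel_w - x;    // rear-view mirror case: flip horizontally
}
[/code]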
-
After a few days of digging, I found out why mn.renderFrame() sometimes rendered to the screen instead of to the render target texture: the code for environment reflections was setting its own render targets without restoring the original one afterwards. I fixed that by disabling reflections when doing RTT, because I didn't find a simple way to save/restore the render target.
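For reference, a save/restore along those lines might look like this untested sketch (env_map_fbo is a made-up name; the actual fix just disables reflections instead):
[code]
// Untested sketch, not the committed fix: remember and restore the caller's
// framebuffer binding around the environment map pass.
GLint prev_fbo = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &prev_fbo);

glBindFramebuffer(GL_FRAMEBUFFER, env_map_fbo);  // env_map_fbo: hypothetical
// ... render the environment reflection faces ...

glBindFramebuffer(GL_FRAMEBUFFER, prev_fbo);     // restore, don't assume 0
[/code]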
-
That's... unfortunate. I assume you'll come back to it later, or at least leave some sort of TODO?
-
> That's... unfortunate. I assume you'll come back to it later, or at least leave some sort of TODO?
No, I don't really plan to revisit it. I'd leave a note about it somewhere (besides next to the code which does the deed, where I did) if there were a suitable place, but I'm not aware of one.
-
I understand a stack-based solution to render targets is in the works; expect it sometime after 3.7.2 goes final.
-
For the record, I have implemented a "solution" which is 100% untested: https://github.com/asarium/fs2open.github.com/compare/asarium:cmake...feature/nestedRenderTargets
I don't even know if it compiles :nervous:
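For anyone wondering what "stack-based" might mean in practice, here is a rough sketch of the general idea (not the code in that branch):
[code]
// Rough sketch of a stack-based render target manager; not the linked branch.
#include <stack>

class RenderTargetStack {
    std::stack<GLuint> m_targets;
public:
    void push(GLuint fbo) {
        GLint current = 0;
        glGetIntegerv(GL_FRAMEBUFFER_BINDING, &current);  // whatever is bound now
        m_targets.push(static_cast<GLuint>(current));
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    }
    void pop() {
        if (m_targets.empty()) return;  // nothing saved, leave the binding alone
        glBindFramebuffer(GL_FRAMEBUFFER, m_targets.top());
        m_targets.pop();
    }
};
[/code]
Nested passes can then push before drawing and pop afterwards without having to know who bound what.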
-
This (http://scp.indiegames.us/mantis/view.php?id=3082) is an RTT bug I can't seem to figure out, so hopefully someone OpenGL-competent will. :sigh:
-
My OpenGL knowledge is limited to v1.3, but I have a hunch: it might have something to do with the row order of textures vs. the screen. Y is backwards on textures, and as a result you need to re-init your projection matrix to render correctly. Of course, when you do this, all your normals are suddenly backwards when they get transformed into screen space, and this affects things like backface culling, lighting, depth testing, etc. There might also be some z-buffer issues going on as well. Calls that draw things need to know what is being drawn to, so as to feed the data correctly. For this one, glCullFace() can be used to change the backface cull mode to compensate for the flipped normals; just be sure to change the mode back to the previous mode when the call is done, so as not to mess something else up.
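Something like this untested, GL 1.x-era sketch of the compensation being described:
[code]
// Untested sketch: flip the cull mode while drawing into a texture,
// then put the previous mode back so nothing else gets messed up.
GLint prev_cull = GL_BACK;
glGetIntegerv(GL_CULL_FACE_MODE, &prev_cull);

glCullFace(prev_cull == GL_BACK ? GL_FRONT : GL_BACK);  // compensate for the y flip
// ... draw the RTT scene ...

glCullFace((GLenum)prev_cull);  // restore the previous mode
[/code]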
-
I did see a mention of glCullFace() on the internets, but I didn't try to work around the problem with that, since it seemed like it'd be a hacky workaround at best, not a proper fix of the underlying issue. I guess it's still worth checking out regardless.
-
The other option is to multiply all the normals by -1 before feeding them to OpenGL, though I don't think newer versions of GL work that way, what with VBOs/IBOs and such.
-
@Nuke: Only the positions of the verts matter for backface culling (vertex normals are for lighting only).
-
Still, when those verts are transformed into screen space, they are backwards on the y axis, so the vertex order is going to be backwards. You still need to change your cull mode to compensate; see the illustration below.
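To put concrete numbers on the winding point (glFrontFace() being the other knob for this besides glCullFace()):
[code]
// A triangle wound (0,0) -> (1,0) -> (0,1) is counter-clockwise with y up,
// but clockwise once y is flipped. So inside an RTT pass you could declare
// clockwise winding as front-facing instead of swapping the cull side:
glFrontFace(GL_CW);    // while rendering into the flipped texture
// ... draw ...
glFrontFace(GL_CCW);   // back to the GL default afterwards
[/code]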
-
Yeah, but what you said was "multiply all the normals by -1", and the only "normals" you have control over are the vertex normals, which have no bearing on backface culling.
-
Hard to say if the lighting is wrong, though, with the backface culling all wonky and the backgrounds being drawn wrong.