Hard Light Productions Forums
Modding, Mission Design, and Coding => FS2 Open Coding - The Source Code Project (SCP) => Topic started by: Bobboau on March 05, 2005, 12:13:43 am
-
http://66.70.170.53/Ryan/nrmphoto/nrmphoto.html
I think that normal map generation method just screams "do this in a 3d package"
-
coo
-
You'd like this too, if you want CG sources for your normal maps...:
http://members.shaw.ca/jimht03/normal.html
-
I've seen that article before, nice one eh?
I thought it was posted here originally, maybe not.
I'm not entirely fond of the method in the link omni posted; better to use ATI's normal map generation utility or nVidia's "Melody".
-
Kewl.
So when is Bump mapping due?
Next week yeah? :D :p
-
actually, I am right now working on some of the extremely early requirements for the system that will allow it. there are a number of features we need internally before the engine is ready for it, which means I need to reorganize how the high-level graphics interface does a few things; the whole way we draw ships, for example, is little more than one giant hack. I need to separate it out so that all the rendering passes are called from outside the API. in order to do this I have to make the external (to the APIs) code aware of the fact that there are multiple texture stages, then I have to build an external state block system, then implement a material system using the state block system, then make that into a table, then integrate Cg, then get a vertex shader system and add that to the aforementioned material system, then work out any incompatibilities between them, then add a pixel shader system, add it to the table, and work out conflicts between the three. then I have to start seriously writing shaders; the first step would probably be getting a per-pixel specular lighting model, then I can get started on more advanced features like bump mapping or parallax mapping or the two of them /*drools*/
and I have to coordinate all of this with someone working on the OGL side.
so it'll be the end of summer at the very earliest.
if you people want it faster, start paying me; if I see a steady supply of $250 a week you can bet your ass I'll be on it every moment I'm not doing homework, sleeping, or eating. actually, I'll probably still be doing it while I eat, I have been known to have dreams about coding, and I never do my homework anyway.
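just to give a rough idea of what I mean by the state block / material part, here's the kind of shape it might take (every name in here is made up for illustration, nothing like this exists in the code yet):

// rough sketch only -- hypothetical names, not actual FS2_Open code.
// a "state block" captures the API-independent render state for one pass,
// and a "material" is just an ordered table of passes.

#include <string>
#include <vector>

struct state_block {
	int         texture_stage;   // which texture unit this pass feeds
	std::string texture_name;    // base map, glow map, shine map, ...
	bool        alpha_blend;     // additive/blended pass or not
	bool        depth_write;
	std::string vertex_shader;   // Cg program names; empty = fixed function
	std::string pixel_shader;
};

struct material {
	std::string              name;    // what the table file would reference
	std::vector<state_block> passes;  // one entry per rendering pass
};

// the high-level ship drawing code would walk the passes itself instead of
// burying them inside the D3D/OGL code:
void render_with_material(const material &mat /*, mesh data... */)
{
	for (size_t i = 0; i < mat.passes.size(); i++) {
		// gr_set_state(mat.passes[i]);  // push the state block to whichever API is active
		// gr_render_buffer(...);        // draw the geometry for this pass
	}
}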
-
End of summer is much earlier than I could expect.
So I guess the current rendering engine is ****e? How exactly does it work ATM?
-
lol.
I only know what half of that meant, and I'm scared. Most of it would be pretty straightforward logical stuff.
But damn that's a truckload of coding.
Here I am fretting over a simple catalogue program.
How I hate the troubles of recursive methods... waa. :p
Mind you, a better understanding of how tree search and store design works would probably help me. Sigh.
Meh.
Good luck. Maybe I'll buy you some handwarmers or anti-sweat gloves. :) Maybe the whole SCP team. :p
-
This might be the Myrmidon bumpmap.
... I think I did everything according to the instructions...
Looks weird.
(http://img100.exs.cx/img100/3108/fighter2t05abump7gq.jpg)
-
errr, DaBrain, it only works on real objects, because you can't shine a flashlight on a texture map.
-
Hehe, why not?
I'll just have to redraw everything before I do so.
I just tested the bump map. It works pretty well for a cheap attempt.
This is rather interesting. I'll just redraw all higher-than-zero objects, like plates, and use a diffuse light source to bring in the shadows.
BTW the posted map has an error. I've just fixed it.
Edit:
Fixed.
(http://img92.exs.cx/img92/4297/bump23ek.jpg)
I had to downsize and heavily compress it. It got too big.
Edit2: See for yourself. I think it's more work-intensive than this, but I think it can be done.
(http://img100.exs.cx/img100/2371/bumptest9lh.jpg)
Edit: There is an easy way to see if your map works. Just test it, and if you can see the light moving around, it's working.
If it just switches between bright and dark, your map is not working.
This is the preview of the standard Myrmidon map. I can't see the light moving around the object...
(http://img123.exs.cx/img123/6269/myrmnobump1am.jpg)
-
wow! that's exactly what I'm waiting for ;)
-
Bob, is there any benefit to moving to DX9 during this process?
-
Originally posted by Admiral Nelson
Bob, is there any benefit to moving to DX9 during this process?
Yeah, we can boast that FS2 now has a DX9 capable engine ;)
-
yeah, we can drop support for a whole bunch of people still using Windows 9x.
-
They can still use OpenGL.
Will the OpenGL part keep up with the development?
Edit: Timewarp. Bob posted before me.
-
And people who like their Antialiasing. :p
-
Well once the DirectX 8 rendering is done for the SCP, separate the renderer into two like Half-Life 2 did.
Half-Life 2 has practically three separate entire engines for DirectX 7, 8 and 9 respectively, and autoconfigures itself for the highest available one supported by the end-user's video configuration.
Once the DirectX 8 FSOpen Engine is "reasonably done" for all those without DX9, make FSOpen have a setting for DirectX 8 or 9 in the Launcher, separate from the OpenGL/Direct3D setting. When the user selects DX9, it renders using a completely different code base (a la Half-Life 2) so that future changes to the SCP to get DX9 bumpmapping features and stuff won't affect DX8 users negatively.
This would prevent removal of support for the 9x users, and you'd still be able to make updates to the DirectX 8 code in FSOpen if more bugs in it are found, without affecting the new DirectX 9 stuff.
Of course, you'd have to be careful that global bugfixes like interface changes and stuff are either separately designed around each DirectX format, or globally designed to be compatible with both so that fixing bugs in global things like the HUD and interface wouldn't break one or the other DirectX mode...
...but in the end, doing what I'm suggesting (modeling how to handle the switch after what Valve did with Half-Life 2's engine) would be a happier ending for everybody.
-
Would I be right in thinking that by separating it into three separate rendering engines, you're creating three times as much work as you had before?
-
Originally posted by Gregster2k
...but in the end, doing what I'm suggesting (modeling how to handle the switch after what Valve did with Half-Life 2's engine) would be a happier ending for everybody.
Sounds icky to me. We'd have OGL, DX9, and DX8.
But then, I think we already did something like that in the early days of the SCP.
It mostly depends what it involves. Everything in FS2Open is abstracted below a line, so instead of calling "DrawPrimitive" or "glBegin(GL_LINES)/glVertex" you just call "gr_line" and that handles everything. So if there are significant changes between the DX9 and DX8 API, those wouldn't be much of a problem. But if we're talking adding new effects, that could make the parts of the code that use those new effects messier, because they'd have to check that the current mode supported it.
HL2 has the advantage that it only has one general API set to worry about, DirectX, so they can directly call all the functions and use different rendering paths by directly calling DX functions. They didn't have to worry about abstracting between two different APIs.
It'd be best to upgrade OGL rather than keep Dx8 around, IMHO. FS2 could dominate in the Linux games department. There's simply nothing that compares.
-
Moreover, all recent builds seem to do better on OGL performance-wise.
A good OGL overhaul (env-map transplantation, etc.) is a good idea IMHO.
-
I thought DirectX 8 did support bump mapping?
-
Originally posted by WMCoolmon
Sounds icky to me. We'd have OGL, DX9, and DX8.
But then, I think we already did something like that in the early days of the SCP.
It mostly depends what it involves. Everything in FS2Open is abstracted below a line, so instead of calling "DrawPrimitive" or "glBegin(GL_LINES)/glVertex" you just call "gr_line" and that handles everything. So if there are significant changes between the DX9 and DX8 API, those wouldn't be much of a problem. But if we're talking adding new effects, that could make the parts of the code that use those new effects messier, because they'd have to check that the current mode supported it.
HL2 has the advantage that it only has one general API set to worry about, DirectX, so they can directly call all the functions and use different rendering paths by directly calling DX functions. They didn't have to worry about abstracting between two different APIs.
It'd be best to upgrade OGL rather than keep Dx8 around, IMHO. FS2 could dominate in the Linux games department. There's simply nothing that compares.
Is there a reason for having separate DX- and OGL-based engines? Or is it simply because the old codebase (that was what, Glide, Software and DX5?) was divided up in a similar way? (NB: isn't Glide a subset or precursor of OpenGL?... I thought I read something along those lines, anyways)
-
Well, if you don't abstract the engines somewhere, all of the high-level graphics calls would look like
if (API == OGL)
{
    // OGL stuff
}
else if (API == DX8)
{
    // D3D stuff
}
It's a lot easier to simply say "draw a line" than to say "if you have a pencil then sharpen it and draw a line using a ruler, or if you have a pen, take a ruler with an ink-edge and draw a line".
At least, that's what my C++ book taught me about abstraction.
-
Does FS2 have a class diagram?
-
not really, but if it did it would look sort of like one of those pictures of the internet. FS2 was made in little more than plain old C; it doesn't really use classes except for a few places, though some of the newer code has been more OO.
-
I'm not 100% sure, but I think I can get away with not needing DX9 for the shader system I want. I'm going to use Cg, and it is API-independent and, I think, compatible with DX8.
-
Originally posted by kasperl
Well, if you don't abstract the engines somewhere, all of the high-level graphics calls would look like
if (API == OGL)
{
    // OGL stuff
}
else if (API == DX8)
{
    // D3D stuff
}
It's a lot easier to simply say "draw a line" than to say "if you have a pencil then sharpen it and draw a line using a ruler, or if you have a pen, take a ruler with an ink-edge and draw a line".
At least, that's what my C++ book taught me about abstraction.
The problem comes if you have one feature that can use another in some way, I guess. That's a general approach... The obvious problem is that you get a sudden mass of code jumping up, and at some point you need to go back and check you've got all the switches in the right place. And if you add another rendering method, you may need to add new switches all over again....
I think a better way is to use subclassing / interface implementation to hide the details altogether, but I'm not sure how amenable the current FS2 structure is to that.
(i.e. pass in some object(s) representing the draw methods; this object conforms to an interface defining render calls, but the specific implementation is hidden from the caller)
i.e. (pseudo-java)
public Renderer r;
public void setRenderer(Renderer r) {
    this.r = r;
}
public void draw(Stuff stuff) {
    r.draw(stuff);
    // where Renderer is an interface defining the method draw(stuff)
}
and you'd have something like
if (Api.equals(OPEN_GL)) {
    fs.setRenderer(new OpenGLRenderer());
}
else if (Api.equals(DIRECTX8)) {
    fs.setRenderer(new DX8Renderer());
}
else if (Api.equals(DIRECTX9)) {
    fs.setRenderer(new DX9Renderer());
}
...of course, you'd need your interface class to define the methods for all renderers, and probably provide a default abstract class that defines the 'null' behaviour, i.e. what to do if the Renderer object doesn't provide that functionality (the Renderer extending the abstract class and overriding the methods it does provide).
Don't think you can do this for FS2, though; IIRC it's pretty 'flat'.
-
FS has a structure with a bunch of function pointers; FE2, though, uses the interface method you mention.
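roughly like this; the names below are made up for illustration and don't match the actual source, but it's the general shape of the function-pointer approach:

// sketch of the function-pointer style of abstraction -- hypothetical names.

// one pointer per drawing operation; each API fills in its own implementations.
struct gr_functions {
	void (*gr_line)(int x1, int y1, int x2, int y2);
	void (*gr_bitmap)(int bitmap_id, int x, int y);
	// ...one entry per high-level drawing call...
};

// each backend provides its own versions of the calls.
static void d3d_line(int x1, int y1, int x2, int y2) { /* DrawPrimitive(...) */ }
static void ogl_line(int x1, int y1, int x2, int y2) { /* glBegin(GL_LINES)... */ }
static void d3d_bitmap(int bitmap_id, int x, int y) { /* ... */ }
static void ogl_bitmap(int bitmap_id, int x, int y) { /* ... */ }

static gr_functions gr;

// selected once at startup, based on the launcher setting.
void gr_init(bool use_opengl)
{
	if (use_opengl) {
		gr.gr_line   = ogl_line;
		gr.gr_bitmap = ogl_bitmap;
	} else {
		gr.gr_line   = d3d_line;
		gr.gr_bitmap = d3d_bitmap;
	}
}

// the rest of the game never checks which API is active, it just calls:
//   gr.gr_line(0, 0, 100, 100);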
-
I think we'll have to move on to DX9 anyway...
-
Originally posted by DaBrain
Hehe, why not?
because a computer application will look at the light and dark values of the heightmap, so when you use a 3D light source on a bump map, it'll see the brighter parts of the map as higher and the darker parts as lower. therefore, using that process on a texture map instead of on an actual object will do absolutely nothing but make it silly colors.
(http://img.photobucket.com/albums/v109/Carltheshivan/7143e994.jpg)
(http://img.photobucket.com/albums/v109/Carltheshivan/0bb4f8a0.jpg)
-
That's the reason I took the Myrmidon map.
It's my version of the map. I've added some nice shading to simulate higher and lower parts.
Of course it won't work this way.
You'll have to redraw high parts like I said before.
But the method I used will work for some tilemaps.
Tilemaps that are 'flat'. So the method is just used to add some structure to the model.
Edit: I just ran another test.
It works exactly like I guessed.
This is a part of the Myrmidon map, or well, the bumpmapped part of it. :)
(http://img186.exs.cx/img186/6104/works8em.jpg)
(http://img113.exs.cx/img113/144/works35kg.jpg)
I just redrew them in white, added some shading to the lower areas and used the same method as before (but this time a diffuse light!).
Then I just added the new layer with the shaded stuff on it as the blue channel in the normal map.
I think the shading is too smooth, but I can do better next time.
Doing this to all textures will be a lot of work. Much more work than the shinemap stuff...
But I think I can get some so-so bumpmaps out of my maps if I just place my 'lines' layer in the normal map's blue channel.
-
Get a render of those maps on the myrmidon model...
-
Which 3D render software allows the use of normal maps?
AFAIK 3DS supports only greyscale information.
-
Well, what I do is turn the bump map into a 3D model with about 1/2 million polies upwards, and then normals/shadow maps etc. can all be generated from that :)
-
You'd have to redo the UV mapping that way, right?
-
Nope, it's the actual UV map I convert.
Normal mapping assumes that the light is coming from 'above' no matter where on the model you are looking, so as far as I can see, it has to be done on the UV Map.
I use a landscape program which can convert grey images into height maps, and then export that height map as a DXF. What you end up with is an ultra-high-poly model of your UV bump map. Lightwave will let you generate the normals for that (or would, if I could figure out how to stop it crashing), as can 3DS Max.
It's also good for surface baking, as you can add 'fake' bumps simply by adding the colour UV map to the model and baking the whole lot into a final texture, so all your shadows are cast by physical bumps on the UV, not by a computed bump map ;) The downside to that is that shadows made that way don't move.
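If anyone wants to script that conversion step instead of going through a landscape program, the idea is roughly this (just a sketch; the image loader is a placeholder you'd have to supply, and the DXF/OBJ export is left out):

// minimal sketch of turning a greyscale heightmap into a dense mesh, the same
// idea as the landscape-program step described above.

#include <vector>

struct hvertex   { float x, y, z; };
struct htriangle { int a, b, c; };

// assumed to exist elsewhere: returns width*height greyscale samples in 0..255.
std::vector<unsigned char> load_greyscale(const char *filename, int &width, int &height);

void heightmap_to_mesh(const char *filename, float height_scale,
                       std::vector<hvertex> &verts, std::vector<htriangle> &tris)
{
	int w = 0, h = 0;
	std::vector<unsigned char> pixels = load_greyscale(filename, w, h);

	// one vertex per pixel; brightness becomes height.
	for (int y = 0; y < h; y++) {
		for (int x = 0; x < w; x++) {
			hvertex v;
			v.x = (float)x;
			v.y = (float)y;
			v.z = (pixels[y * w + x] / 255.0f) * height_scale;
			verts.push_back(v);
		}
	}

	// two triangles per grid cell.
	for (int y = 0; y < h - 1; y++) {
		for (int x = 0; x < w - 1; x++) {
			int i = y * w + x;
			htriangle t1 = { i, i + 1, i + w };
			htriangle t2 = { i + 1, i + w + 1, i + w };
			tris.push_back(t1);
			tris.push_back(t2);
		}
	}
	// from here you'd export to DXF/OBJ and let Max or Lightwave bake the normals.
}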
-
FYI Max does normal maps.
-
Ok, as I don't have the time to rework the whole map right now, here is a render of a box with the already-done part.
(Two light positions.)
I think that's enough to show it's working.
(http://img144.exs.cx/img144/7233/bumpt12wv.jpg)
(http://img144.exs.cx/img144/9328/bumpt24st.jpg)
-
Originally posted by Flipside
Well, what I do is turn the bump map into a 3d model with about 1/2 million polies upwards. and then normals/shadow maps etc can all be generated from that :)
is the bump map you use the original texture map? because if it is, then that's like printing out a Word document, then scanning it with a scanner, then saving the file as a .bmp file. It's completely unnecessary and won't do any good.
-
Regarding final implementation of bump mapping in FSO (a long way away, I know), will this be accompanied by a major reworking of the texture system, or will it just be tagged on to our current system as glow and shinemaps are? Now, if the second is closer to the truth, there might be a way to save a render pass. From what I've seen in this thread, bumpmaps are just greyscale, right? If that's true, then perhaps we can upgrade glowmaps from PCX to DDS and slip the bumpmap into the alpha channel. Granted, glowmaps don't really need the extra 16 bits of color depth, but when faced with either another render pass or a slightly larger glowmap, I think that most people would choose the latter. So, is this a good thought?
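For what it's worth, the packing step itself would be trivial; something along these lines, assuming the glowmap and bumpmap are already loaded and the same size (the names and the image struct are just made up for illustration):

// sketch of the "slip the bumpmap into the glowmap's alpha channel" idea.
// the DDS reading/writing itself would be handled by whatever tool or library
// is already in use; only the packing is shown.

#include <vector>

struct rgba_image {
	int width, height;
	std::vector<unsigned char> pixels; // 4 bytes per pixel: R, G, B, A
};

// glow:   RGB glowmap (alpha currently unused)
// height: greyscale bumpmap, one byte per pixel, same dimensions as the glowmap
void pack_bump_into_alpha(rgba_image &glow, const std::vector<unsigned char> &height)
{
	const int count = glow.width * glow.height;
	for (int i = 0; i < count; i++) {
		glow.pixels[i * 4 + 3] = height[i]; // overwrite alpha with the height value
	}
	// save 'glow' out as a DDS with an alpha-capable format (e.g. DXT5) afterwards.
}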
-
what you posted:
(http://img.photobucket.com/albums/v109/Carltheshivan/1c9f56c4.jpg)
It would work for a texture, but notice that the parts that are higher and lower are not all brighter and darker respectively.
what the bump map for the texture should look like:
(http://img.photobucket.com/albums/v109/Carltheshivan/2bfefa58.jpg)
notice the parts that are sticking up are bright, while slopes are gradients, and low parts are dark.
Result:
(http://img.photobucket.com/albums/v109/Carltheshivan/b753333b.jpg)
-
Carl is correct.
-
You are going to lose the structure of the map this way.
Though it's still visible, it won't be affected by the light position anymore...
-
Carl, what DaBrain posted was a normal map, not a bump map.
Your statement that it'll see "the brighter parts of the map as higher and the darker parts lower" is wrong. That's not how normal maps work.
-
On a normal map, the red, green, and blue values of each pixel together encode a unit-length vector. Each colour represents one component of that vector, expressed relative to the surface the map is applied to, and it acts as a modifier for the normal of each pixel: 1 is fully in the + direction, 0.5 is 0, and 0 is fully in the negative direction.
Blue is for the vector component sticking in and out of the surface, with +blue being out.
Red is for the horizontal vector component, with + being left.
Green is for the vertical component, with + being up.
I think the rule of 0.5 being zero may be different for blue, due to the fact that you can't have a pixel normal facing inwards.
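For anyone who wants to generate that kind of map from a greyscale heightmap, the conversion is roughly this (a minimal sketch using central differences; the red/green sign conventions vary between tools, so treat the orientation here as an assumption):

// sketch: build a tangent-space normal map from a greyscale heightmap.
// "strength" controls how steep the bumps look; the channel encoding follows
// the 0.5-means-zero rule described above.

#include <cmath>
#include <vector>

struct rgb_pixel { unsigned char r, g, b; };

std::vector<rgb_pixel> heightmap_to_normalmap(const std::vector<unsigned char> &height,
                                              int w, int h, float strength)
{
	std::vector<rgb_pixel> out(w * h);

	for (int y = 0; y < h; y++) {
		for (int x = 0; x < w; x++) {
			// neighbouring heights, clamped at the edges, scaled to 0..1
			float left  = height[y * w + (x > 0     ? x - 1 : x)] / 255.0f;
			float right = height[y * w + (x < w - 1 ? x + 1 : x)] / 255.0f;
			float up    = height[(y > 0     ? y - 1 : y) * w + x] / 255.0f;
			float down  = height[(y < h - 1 ? y + 1 : y) * w + x] / 255.0f;

			// slope in each direction gives the vector the normal leans along
			float dx = (left - right) * strength;
			float dy = (down - up) * strength;
			float dz = 1.0f;

			// normalise so it's a unit-length vector
			float len = std::sqrt(dx * dx + dy * dy + dz * dz);
			dx /= len; dy /= len; dz /= len;

			// map -1..1 into 0..255, so 0.5 (128) means "no tilt"
			rgb_pixel &p = out[y * w + x];
			p.r = (unsigned char)((dx * 0.5f + 0.5f) * 255.0f);
			p.g = (unsigned char)((dy * 0.5f + 0.5f) * 255.0f);
			p.b = (unsigned char)((dz * 0.5f + 0.5f) * 255.0f);
		}
	}
	return out;
}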
-
:: patiently awaits the fateful day that the SCP project DESTROYS Starshatter's coming DX9 implementation without using DX9 ::
-
What FireCrack said.
-
actually omni, after further investigation, it seems like we will still need to upgrade to D3D9 to get anything decent out of a Cg implementation. I can get started using D3D8, but before we call it done we will have to have D3D upgraded, and this may cause some major problems, especially with some of our developers.
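for the curious, the Cg side of it would look something along these lines; just a rough sketch of the D3D9 Cg runtime calls, with the shader file name and parameter handling as placeholders:

// rough sketch of loading one Cg vertex program through the D3D9 Cg runtime.
// the shader file name is made up; real integration into a material system
// would wrap all of this up properly.

#include <d3d9.h>
#include <Cg/cg.h>
#include <Cg/cgD3D9.h>

static CGcontext cg_context  = NULL;
static CGprogram vertex_prog = NULL;

bool init_cg(IDirect3DDevice9 *device)
{
	cg_context = cgCreateContext();
	cgD3D9SetDevice(device);

	// ask the runtime for the best vertex profile the card/driver supports
	CGprofile profile = cgD3D9GetLatestVertexProfile();

	// compile the shader source ("perpixel_light.cg" is a placeholder name)
	vertex_prog = cgCreateProgramFromFile(cg_context, CG_SOURCE,
	                                      "perpixel_light.cg", profile,
	                                      "main", NULL);
	if (vertex_prog == NULL) {
		// cgGetLastListing(cg_context) holds the compile errors at this point
		return false;
	}

	cgD3D9LoadProgram(vertex_prog, CG_FALSE, 0); // hand the compiled program to D3D
	return true;
}

void bind_for_drawing()
{
	// called before rendering a pass that uses the shader
	cgD3D9BindProgram(vertex_prog);
	// matrices, light position, etc. get set via cgGetNamedParameter()
	// plus cgD3D9SetUniform()/cgD3D9SetUniformMatrix()
}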
-
especially the VC.NET requirement :blah:
-
Originally posted by Bobboau
actually omni, after further investigation, it seems like we will still need to upgrade to D3D9 to get anything decent out of a Cg implementation. I can get started using D3D8, but before we call it done we will have to have D3D upgraded, and this may cause some major problems, especially with some of our developers.
Developers? As in complicate the modding process? If that is the case, I am more than willing to adjust to any modding nuance. If you mean other coders, well, I guess that would be a pain.
The BSG project is attracting a number of programmers, a language I'm not well versed in. Moon language. So far I've pointed a couple of them to HLP for them to get acquainted with the work you guys are currently doing. I hope, if they stay around, they can make useful contributions to your noble cause of gaming perfection.
I don't understand it. OpenGL has achieved much greatness without traversing 9 versions. Am I assuming incorrectly when I say that the achievements of Doom 3 are purely software uberizing vs. software/hardware-dependent progress? If I recall correctly, bump mapping has been around since the dawn of DX8 and HT&L.
If DX9 is a major challenge, would taking the OGL path be much less of a headache?
-
Incidentally, I think OpenGL has been through 6 versions (excluding precursors... I think the first version was publicly released in 1992, and it was developed off of another SGI API; I think Glide may have been a subset / branch based off OpenGL, but I'm not sure).
This (http://www.cprogramming.com/tutorial/openglvs.html) might be interesting; I don't know if it's up to date as there is no date on it :)
That seems to be a useful-looking site, actually.......
-
Omni: You can only compile a DX9 program on a machine with XP or 2k. Meaning that all devs not using either of those can no longer compile builds, or test their own code, until someone else compiles the code for them.
-
To expand on what kasperl said:
MS dropped support for the DX9 SDK with anything less than XP/2k3 in the most recent updates. As many of the programmers dislike XP or don't want to switch for some reason, i.e. cost (myself included), that means that they have to find and use the original SDK in order to compile Freespace 2. Or, we have to support three different rendering libraries, which just isn't going to happen.
Not to mention that I think DX support for Win98 was dropped with DX8, but I'm not 100% sure.
Going OpenGL only would solve compatibility problems, but Bobb prefers DX ;)
Edit: aldo, that link you posted is somewhat out of date...OpenGL does support Vertex buffers.
-
DX8 works even on Win95.
-
Originally posted by WMCoolmon
To expand on what kasperl said:
MS dropped support for the DX9 SDK with anything less than XP/2k3 in the most recent updates. As many of the programmers dislike XP or don't want to switch for some reason, i.e. cost (myself included), that means that they have to find and use the original SDK in order to compile Freespace 2. Or, we have to support three different rendering libraries, which just isn't going to happen.
Not to mention that I think DX support for Win98 was dropped with DX8, but I'm not 100% sure.
Going OpenGL only would solve compatibility problems, but Bobb prefers DX ;)
Hrmmph.... so is it still possible to use the stuff detailed in the 'Free compiler for the FSSCP: The answer' thread? Because I can't install service packs (and I'm on Win2000 anyways), because they (literally) break my machine.
Originally posted by WMCoolmon
Edit: aldo, that link you posted is somewhat out of date...OpenGL does support Vertex buffers.
Dagnabbit. Kind of figured it would be, in all honesty.......