Ugh... catching up...
How about this: while running, render the HUD to a dynamic texture with a fancy name like "HUD", with a transparent background, and keep each HUD control in its own specific area. The modeler would then work from a template of that "HUD" texture (the template would also be named "HUD") and UV-map the polygons to the correct HUD area. The game would then apply the dynamically built HUD texture wherever it sees a mesh asking for a texture called "HUD". For example, for shields: take a square polygon, assign it the "HUD" texture name, and UV-map it so that it maps just the shield status area. Of course you'd need a second squarish polygon behind it, but give that one whatever texture you want, something like dirty glass.
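To make that a bit more concrete, here's a rough C++-style sketch of the two pieces involved: a fixed layout that both the engine and the modeler's template agree on, and a texture lookup that hands the live render target to any mesh asking for "HUD". Every name, handle and rectangle here is invented for illustration, not actual engine code:

```cpp
// Hypothetical sketch of the "HUD" atlas idea: every gauge owns a fixed
// rectangle in one dynamic texture, and any mesh whose texture is literally
// named "HUD" gets handed the runtime render target instead of a file
// from disk. All names and values are made up for illustration.
#include <string>
#include <unordered_map>

struct UVRect { float u0, v0, u1, v1; };   // region in 0..1 texture space

// Fixed layout shared by the engine and the modeler's template image.
const std::unordered_map<std::string, UVRect> kHudLayout = {
    {"shield_status", {0.00f, 0.00f, 0.25f, 0.25f}},
    {"weapon_energy", {0.25f, 0.00f, 0.50f, 0.25f}},
    {"radar",         {0.00f, 0.25f, 0.50f, 0.75f}},
};

// Stand-ins for whatever the engine actually uses to identify textures.
using TextureHandle = int;
const TextureHandle kHudRenderTarget = 1;                     // the dynamic "HUD" texture
TextureHandle LoadTextureFromDisk(const std::string&) { return 100; }  // stub

// Texture resolution: meshes UV-mapped against "HUD" get the live target.
TextureHandle ResolveTexture(const std::string& name) {
    if (name == "HUD")
        return kHudRenderTarget;
    return LoadTextureFromDisk(name);
}
```

A nice side effect of a fixed layout table like this is that it doubles as the modeler's documentation: export it once as a labeled template image and any polygon UV-mapped against it lines up automatically.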
Hmm...I had a wild idea: how about going with an "all modeled" HUD?
So instead of a texture that gets re-rendered each frame, you'd have a bunch of flat/simple polygons in the appropriate position relative to either the cockpit model, a submodel of the cockpit model, or tied to the view point if it's a visor element.
You could map the current HUD functions to those polys by scripting their position, movement, etc. to the data that drives the HUD right now.
So you'd have the set of trigger stats that currently drive the HUD, and translating that to some kind of display device would be up to you. It could be a hell of a lot of work to make a HUD, but with a number of templates it could be really powerful and absolutely moddable.
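The "translating that to some kind of display device" part could, per gauge, be as thin as a function that turns a normalized trigger value into a submodel transform. A minimal sketch, with made-up names and ranges:

```cpp
// Rough sketch of driving an "all modeled" gauge from trigger data:
// a needle submodel rotates and a bar submodel scales based on the same
// stats that feed the 2D HUD today. All names here are hypothetical.
struct SubmodelTransform {
    float rotation_deg = 0.0f;   // rotation around the gauge's own axis
    float scale_y      = 1.0f;   // vertical scale of a bar polygon
};

// Map a 0..1 trigger value (say, shield strength) onto the modeled gauge.
SubmodelTransform DriveGauge(float trigger01,
                             float min_deg = -135.0f,
                             float max_deg =  135.0f) {
    SubmodelTransform t;
    if (trigger01 < 0.0f) trigger01 = 0.0f;
    if (trigger01 > 1.0f) trigger01 = 1.0f;
    t.rotation_deg = min_deg + (max_deg - min_deg) * trigger01;
    t.scale_y      = trigger01;              // bar shrinks as the value drops
    return t;
}
```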
Now even without in-depth knowledge of the code, I know some problems are sure to crop up:
-Sticking these "display/HUD" polygons to the cockpit: how are they generated? Are they modeled as part of the cockpit model, or are they separate objects?
-If they are separate objects, how is render order resolved?
-If they are subobjects, how are they flagged so the engine knows to treat them differently?
-In both cases, can the animation code handle re-creating useful (readable, visible) devices?
-For changing items that show labels, text, or any data that should be readable (i.e. requiring a string input): how does this system handle them? Are these objects generated/swapped on the fly, or is this a place where render-to-texture is the sanest thing to do? (One possible shape of an answer is sketched right after this list.)
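On the text/label question specifically, the "swapped on the fly" option could look roughly like this: rebuild a small list of pre-mapped glyph quads from a fixed font atlas whenever the string changes, instead of re-rendering a whole texture. Again just a sketch with hypothetical names:

```cpp
// One way to handle readable text without full render-to-texture:
// build the string out of glyph quads looked up in a fixed font atlas,
// regenerating the quad list whenever the value changes.
#include <string>
#include <vector>

struct GlyphQuad {
    float x, y, w, h;   // position/size in the display polygon's local space
    char  glyph;        // which character this quad shows (atlas lookup key)
};

std::vector<GlyphQuad> BuildReadout(const std::string& text,
                                    float glyph_w = 0.1f,
                                    float glyph_h = 0.15f) {
    std::vector<GlyphQuad> quads;
    float x = 0.0f;
    for (char c : text) {
        quads.push_back({x, 0.0f, glyph_w, glyph_h, c});
        x += glyph_w;   // simple fixed-width advance
    }
    return quads;
}
```

Whether that beats render-to-texture probably depends on how much free-form text the gauge needs; for a handful of digits the quad approach stays cheap, for anything paragraph-like a rendered texture seems saner.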
Finally, when one looks at all these problems, I'm not sure that the all-modeled approach is a good immediate goal.
Why bring it up then?
Because in the end some of those features could be very handy, and thinking about them ahead of time could make it easier to lay the groundwork for later development.
What both a modeled and a render-to-texture approach share is the question of how to handle on-the-fly generated content. The current method is hard-coded procedures that create the HUD elements from a discrete set of graphic elements and functions.
The new system will need a base set of those, but if moddability is desired, some sort of scripting and/or rendering interface will have to be implemented to generate those elements dynamically from the trigger data.
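That "base set plus scripting interface" could boil down to something like a registry of gauge objects that both built-in code and mod scripts feed from the trigger data. Purely a hypothetical shape, not a proposal for actual class names:

```cpp
// Very rough idea of what a moddable gauge interface could look like:
// the engine exposes the trigger data, and each gauge (built-in or
// mod-supplied) decides how to turn it into draw/transform commands.
// All names here are invented for illustration.
#include <memory>
#include <string>
#include <unordered_map>

struct TriggerData {                 // whatever currently drives the HUD
    float shield_strength = 1.0f;
    float hull_strength   = 1.0f;
    int   ammo            = 0;
};

class HudElement {                   // base type both code and scripts target
public:
    virtual ~HudElement() = default;
    virtual void Update(const TriggerData& data) = 0;   // read trigger data
    virtual void Draw() = 0;                            // emit quads/transforms
};

class HudRegistry {                  // mods register their own elements here
public:
    void Add(const std::string& name, std::unique_ptr<HudElement> element) {
        elements_[name] = std::move(element);
    }
    void Tick(const TriggerData& data) {
        for (auto& entry : elements_) {
            entry.second->Update(data);
            entry.second->Draw();
        }
    }
private:
    std::unordered_map<std::string, std::unique_ptr<HudElement>> elements_;
};
```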
Beside the "mere rendering" (Mere? Ha! That is already a complex problem of its own!) handling of the new cockpit what are the plans for this new "HUD" / "rendering" interface?