OpenGL ES2 and Linux and pi - now with patches!

Kromaatikse:
I noticed some previous discussion about OpenGL ES, centred around Android and iOS.  And yes, I saw the conclusions of those discussions: that the input methods normally available on those platforms are not suitable for a space fighter sim.

But what if I start talking about a proper Linux platform, where one can assume a properly sized screen and proper desktop input devices, yet where the graphics acceleration is provided by OpenGL ES (or ES2) instead of "classic" OpenGL?  It's less powerful than today's desktop PCs, of course, but it compares very well to what was available at retail FS2's release.

I am of course talking about Raspberry Pi.

So I made sure that I could compile and run FSO with retail VPs on a more conventional Linux machine - which was surprisingly easy - and then started looking at what would be needed to convert it to GLES2.

Oh boy.  The ancient and obsolete-in-GL-1.1-already glBegin/End paradigm is all over the place.  Where did you guys learn your GL coding skills?  Seriously.   :nono:

So that's the first stage, and it can be done without breaking compatibility with anything whatsoever, and probably improving performance a bit while we're at it.  I see that vertex arrays are already used in some places, but since GLES has completely dropped the Begin/End paradigm (and for good reason), they have to be used *everywhere*.  Yes, even for single points or fullscreen quads - those are the easiest to convert, in fact, and I already refreshed my skills on the MVE cutscene player (which has the virtue of being the very first thing on screen, so ridiculously easy to test).  Ideally we should also be using VBOs where practical, but that's not a necessity.
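To make the scale of that conversion concrete, here's a minimal sketch, in plain C, of turning a Begin/End fullscreen quad into a client-side vertex array.  The data layout is hypothetical, not FSO's actual code:

```c
#include <assert.h>
#include <stddef.h>

/* Immediate mode (the paradigm GLES drops entirely):
 *   glBegin(GL_TRIANGLE_STRIP);
 *   glTexCoord2f(0, 0); glVertex2f(-1, -1);
 *   ...one pair of calls per vertex...
 *   glEnd();
 *
 * Vertex-array replacement: build the data once, hand the GL a pointer.
 */

/* Interleaved x,y,u,v for a fullscreen triangle strip. */
const float fullscreen_quad[16] = {
    /*  x      y     u     v  */
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
};

/* Then, instead of Begin/End:
 *   glEnableClientState(GL_VERTEX_ARRAY);
 *   glEnableClientState(GL_TEXTURE_COORD_ARRAY);
 *   glVertexPointer(2, GL_FLOAT, 4 * sizeof(float), fullscreen_quad);
 *   glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(float), fullscreen_quad + 2);
 *   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
 */

/* Stride in bytes, as passed to the *Pointer calls above. */
size_t quad_stride(void) { return 4 * sizeof(float); }
```

The draw side shrinks from one GL call per vertex to a couple of pointer setups plus a single glDrawArrays(), which is also why the driver can batch it so much more efficiently.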

I wonder if we can get rid of some of the unnecessary and inefficient fixed-point arithmetic while we're at it.  This game never (intentionally, anyway) ran on anything less than a Pentium Classic, which had a pretty good FPU already.
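For illustration, here's roughly what that kind of fixed-point arithmetic looks like next to its float equivalent - a generic 16.16 sketch, not FSO's actual definitions:

```c
#include <assert.h>

/* 16.16 fixed point, retail-era style.  On anything with an FPU
 * (i.e. every platform the game ever supported), plain floats are
 * at least as fast and far less error-prone. */

typedef int fix;              /* 16.16 fixed point */
#define F1_0 (1 << 16)        /* 1.0 in fixed point */

float fix_to_float(fix a)   { return (float)a / (float)F1_0; }
fix   float_to_fix(float f) { return (fix)(f * (float)F1_0); }

/* Fixed-point multiply needs a 64-bit intermediate and a shift... */
fix fix_mul(fix a, fix b) { return (fix)(((long long)a * b) >> 16); }
/* ...whereas the float version is just a * b, one FPU instruction. */
```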

The next stage, assuming the target is GLES2, is to convert literally everything to shaders.  GLES1 doesn't have shaders at all, which is a bit limiting, and all worthwhile hardware supports GLES2, so that's the obvious target.  This is, however, the point where backwards compatibility with old desktop hardware (e.g. original Radeon) that was perfectly capable of running retail FS2 starts to become a problem.  Any thoughts on this would be interesting to hear.  It should be possible to leave the fixed-functionality code in parallel, but then you have two code paths, one of which is likely to decay over time unless it is actively tested.
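For anyone unfamiliar with what "converting everything to shaders" entails: the implicit fixed-function stages become small explicit programs.  A minimal GL2-style pair for an unlit, single-textured draw might look like this (names illustrative, not FSO's actual shader interface):

```glsl
// --- vertex shader ---
uniform mat4 u_modelview;
uniform mat4 u_projection;
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;

void main()
{
    // What the fixed-function transform stage did implicitly.
    gl_Position = u_projection * u_modelview * a_position;
    v_texcoord  = a_texcoord;
}

// --- fragment shader ---
uniform sampler2D u_texture;
varying vec2 v_texcoord;

void main()
{
    // What the fixed-function texturing stage did implicitly.
    gl_FragColor = texture2D(u_texture, v_texcoord);
}
```

Every fixed-function feature the game uses (lighting, fog, alpha test, and so on) needs an explicit equivalent like this somewhere.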

As part of converting everything to shaders, it's time to drop vertex arrays...  what, didn't we just finish putting those in five minutes ago?  Let me finish.  Drop vertex arrays, normal arrays, colour arrays, texcoord arrays, all of them - and specify them as vertex attribute arrays instead.  Vertex attributes are the correct way to do a fully shaded graphics pipeline, which avoids having to shoehorn everything into and out of the original fixed-function pipeline state.  You guessed it, as part of the major API cleanup, GLES2 dropped that state, so vertex attributes are all there is.  This requires a minor rewrite of all the shaders, BTW, but there at least we don't have to worry about backwards compatibility - desktop GL2 has vertex attributes, and anything not supporting GL2 doesn't run GL shaders at all.  Also fortunately, replacing vertex arrays et al with vertex attribute arrays is a like-for-like replacement job, so very easy.
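Here's a sketch of how like-for-like that replacement is, using a hypothetical interleaved vertex struct - the attribute indices are yours to define and bind to shader inputs with glBindAttribLocation() before linking:

```c
#include <assert.h>
#include <stddef.h>

/* Old, fixed-function arrays:
 *   glEnableClientState(GL_VERTEX_ARRAY);
 *   glVertexPointer(3, GL_FLOAT, sizeof(vertex), &v->pos);
 *   glTexCoordPointer(2, GL_FLOAT, sizeof(vertex), &v->uv);
 *
 * New, generic attributes (GL2 / GL3 core / GLES2):
 *   glEnableVertexAttribArray(ATTR_POSITION);
 *   glVertexAttribPointer(ATTR_POSITION, 3, GL_FLOAT, GL_FALSE,
 *                         sizeof(vertex), &v->pos);
 *   glEnableVertexAttribArray(ATTR_TEXCOORD);
 *   glVertexAttribPointer(ATTR_TEXCOORD, 2, GL_FLOAT, GL_FALSE,
 *                         sizeof(vertex), &v->uv);
 *
 * (With a VBO bound, the last argument becomes a byte offset,
 * e.g. (void *)offsetof(vertex, uv), instead of a real pointer.)
 */

typedef struct vertex {
    float pos[3];
    float uv[2];
} vertex;

enum { ATTR_POSITION = 0, ATTR_TEXCOORD = 1 };

size_t vertex_stride(void)   { return sizeof(vertex); }
size_t texcoord_offset(void) { return offsetof(vertex, uv); }
```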

All of the above can be done on desktop GL on any already-supported platform.  All of it is a good refactoring and cleanup operation *anyway*.  Some of it might directly result in performance improvements due to coming up to date and being cleaner.

Once all of that is done, it should be reasonably straightforward to do the actual port to GLES2.  It mostly involves hunting down uses of missing features and sorting out quirks (such as precision specifiers) of the smaller API.  It also means arranging for a new context creation and possibly input method, which might involve work on SDL rather than FSO itself.
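As an example of the precision-specifier quirk: GLSL ES requires a default float precision in fragment shaders, while old desktop GLSL doesn't know the qualifiers at all.  A common workaround (a sketch, not tested against FSO's shaders) is a shared prologue like this:

```glsl
#ifdef GL_ES
// Mandatory on GLES2: default precision for floats in fragment shaders.
precision mediump float;
#else
// Desktop GLSL 1.10/1.20: make the ES qualifiers vanish.
#define lowp
#define mediump
#define highp
#endif
```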

All in favour?   :yes:

All against?   :wtf:

chief1983:
 :yes:

As far as:


--- Quote ---Where did you guys learn your GL coding skills?  Seriously.
--- End quote ---

Mom's basement.  Dunno bout everybody else.

Eli2:

--- Quote from: Kromaatikse on March 06, 2012, 01:58:04 pm ---Where did you guys learn your GL coding skills?  Seriously.   :nono:

--- End quote ---

I have no GL coding skills, but I will learn them to review your patch!


The E:
wat.

I'm sorry if we've offended your sensibilities regarding the OpenGL code, but bear in mind the following:

1. FSO runs on a lot of hardware, with a lot of people still using OpenGL 2-level stuff (looking at you, intelgrated people)
2. A move toward a forward-compatible OpenGL3 implementation is planned, but won't be started until after 3.6.14 makes it out the door
3. Until quite recently, we did not have people on the team with the necessary skills to make such an effort
4. OpenGL 3, not ES, is our focus. While "portable" installations would be a nice thing to have, most of our users are on desktop or laptop PCs.
5. While we plan to remove the shader support for GL2-level hardware, the original fixed-function render path should stay intact (see above re: Intel users). We do not want to alienate people without pressing need.

Kromaatikse:
Well, now that I've got your attention...   ;)

Nearly all of what I posted above also applies to converting to GL3 Core Profile.  You still need to eliminate Begin/End, convert everything to shaders and vertex attributes, and eliminate all other references to obsolete functionality.  Only the final step, of downconverting to the GLES2 feature set, is eliminated.

Porting from GL3 Core to GLES2 should be relatively easy, since essentially the same design decisions and API culling occurred in each design - not surprising since they were done by the same group of people.  IIRC, Khronos took over OpenGL responsibility once GLES2 was already established and becoming popular, and that's when GL3 began to take shape.  If the GL3 conversion is done carefully, it should even be possible for GL3 and GLES2 support to coexist in the same code path without too many #ifdefs.
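One sketch of what that coexistence could look like: isolate the difference in a single compile-time switch that prepends a per-API prologue to every shader source before compiling it.  (HAVE_GLES2 and the prologue strings here are assumptions for illustration, not existing FSO symbols.)

```c
#include <stdlib.h>
#include <string.h>

#ifdef HAVE_GLES2
/* GLSL ES 1.00, plus the mandatory fragment-float precision. */
#define SHADER_PROLOGUE "#version 100\nprecision mediump float;\n"
#else
/* Desktop GLSL for a GL3-era context. */
#define SHADER_PROLOGUE "#version 130\n"
#endif

/* Returns a malloc'd string: prologue + body.  Caller frees. */
char *shader_with_prologue(const char *body)
{
    size_t len = strlen(SHADER_PROLOGUE) + strlen(body) + 1;
    char *src = malloc(len);
    if (!src)
        return NULL;
    strcpy(src, SHADER_PROLOGUE);
    strcat(src, body);
    return src;
}
```

Everything downstream of this helper - compile, link, attribute binding, uniform updates - can then be one shared code path.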

Not to mention that GL2.x supports almost everything that GL3 Core does.  The problem with, ah, "Intelgrated" graphics is mostly due to performance, bugs and a few relatively minor missing features which are mandatory in GL3.

Maintaining separate code paths for old (FF) hardware and modern (fully shader based) hardware is fine by me, if that's what you are planning anyway.  The main concern is avoiding bitrot in one or the other.  I suggest doing the conversion away from Begin/End before splitting, though, since that loses you nothing and is particularly important on slower hardware.

And, in fact, I have a fairly good idea where the ubiquity of Begin/End comes from: NeHe.  Once I learned enough to understand how graphics hardware really works, and looked back to see when vertex arrays were introduced - hint, it was long before PC 3D accelerators arrived - I seriously wondered why anyone bothered to teach Begin/End any more.  Yet all the tutorials I could find started with that, and introduced vertex arrays as an "advanced technique" for performance optimisation.  Only when GLES arrived and made vertex arrays mandatory did tutorials - and only those specifically for GLES - start to use them from the start.

And yet, if your head isn't already cluttered up with the Begin/End way of doing things, it's much easier to understand the GL as a whole and how vertex arrays fit into it.  This is the beauty of GLES2, as it reveals the inherent conceptual simplicity of a modern graphics pipeline.  Once you realise that models usually have a fixed number of vertices and thus memory management is a lot easier than you first thought, and you write a few helper functions to take care of some of the nastier boilerplate, and you write a few simple shaders to cover your basic rendering needs, you forget Begin/End ... until you run across it in someone else's code, and promptly start :banghead:.

It's like BASIC with GOSUB all over again, twenty or so years later.  The only way to pass data into or out of a GOSUB routine is via global variables.  I'm sure you all know what a crapshoot that is.

The truth is probably that the tutorial writers learned Begin/End first, for whatever reason, and didn't feel confident enough with C memory management to recommend vertex arrays as a starting point for anyone else.  The fact is, Begin/End forces the GL to do the conversion and the memory management for you, only much less efficiently, because the hardware only works with arrays and the driver can't make any optimisations based on data probably remaining the same between frames.  Specifying one vertex at a time only ever made sense with software rendering.

Finally, I agree that starting a major upheaval like this is best done after a release, rather than just before it.  Perhaps some preliminary work can be done in a branch, as a proof of concept if nothing else.
