Maybe look into full-screen supersampling? I do realize that, with the new shadow and deferred lighting code, FSO is getting pretty taxing on the GPU, so this might not be possible.
Problem is that typical supersampling anti-aliasing methods do not work with deferred shading, rendering, or whatever you want to call it. In most cases you're restricted to anti-aliasing done in shaders, especially if you want something that works on both NVIDIA and AMD hardware, and preferably on Intel too once they actually get something decent out.
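For reference, the shader-based techniques (FXAA, SMAA, and friends) all boil down to the same pattern: one full-screen post-process pass over the finished color buffer, which is why they don't care whether the renderer underneath is forward or deferred. A minimal sketch in OpenGL terms, assuming a current GL context, and with `aaProgram`, `sceneColorTex`, and `fullscreenVAO` as hypothetical names for objects created elsewhere:

```cpp
// Sketch of the post-process pattern shared by FXAA/SMAA-style techniques.
// Assumes a current GL context; aaProgram is a hypothetical compiled shader
// program implementing the edge-detect/blend logic.
#include <glad/glad.h>

void runShaderAAPass(GLuint aaProgram, GLuint sceneColorTex,
                     GLuint fullscreenVAO, int width, int height)
{
    // The scene is already fully lit at this point; the AA shader only ever
    // sees the final color buffer, which is what makes this approach
    // renderer-agnostic and vendor-agnostic.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // output to the backbuffer
    glViewport(0, 0, width, height);
    glDisable(GL_DEPTH_TEST);

    glUseProgram(aaProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sceneColorTex);

    // One fullscreen triangle; the shader does all the per-pixel work.
    glBindVertexArray(fullscreenVAO);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```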
But it's not like clever people outside of NVIDIA and AMD haven't come up with solutions, as evidenced by http://jcgt.org/published/0002/01/01/
Actually, supersampling is the only AA technique that will always work, for the simple reason that it is the most brute-force method available: render the whole frame at a higher resolution, then average it down.
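For concreteness, this is roughly all there is to it in OpenGL terms. A minimal sketch, assuming a current GL context and that `ssaaFBO` (hypothetical name) holds a color attachment `scale` times the window size, set up elsewhere:

```cpp
// Minimal SSAA resolve sketch; ssaaFBO and its oversized color attachment
// are assumed to exist, and the scene has already been rendered into it.
#include <glad/glad.h>

void resolveSSAA(GLuint ssaaFBO, int winW, int winH, int scale)
{
    // Brute force: the scene was rendered at scale x resolution; a
    // linear-filtered blit averages the extra samples down to window size.
    // Nothing in the renderer itself has to change, which is why SSAA
    // "always works" -- you just pay scale^2 in fill rate and bandwidth.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, ssaaFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  // default framebuffer
    glBlitFramebuffer(0, 0, winW * scale, winH * scale,
                      0, 0, winW, winH,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);
}
```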
By that same token, however, SSAA is also the most inefficient technique, as it requires an enormous amount of memory bandwidth (especially in a deferred setup, where every G-buffer target has to be oversized as well). What you meant was Multisample AA (MSAA), which is an optimization of SSAA that does not require oversized render targets and downsampling: the hardware stores several coverage and depth samples per pixel but still shades each fragment only once, averaging the samples at resolve time. That single-shading shortcut is also exactly why MSAA breaks under deferred shading: lighting runs after the G-buffer is written, and naively resolving (averaging) normals and depth before lighting gives wrong results.
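To make the contrast concrete, here is a minimal MSAA sketch in OpenGL terms (assumed names, context creation omitted), including the resolve step that is precisely what a deferred renderer cannot do naively:

```cpp
// MSAA setup and resolve sketch; GL context assumed, error checks omitted.
#include <glad/glad.h>

GLuint createMSAAFramebuffer(int w, int h, int samples, GLuint* colorRbo)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // The render target stays at native size; the hardware keeps `samples`
    // coverage/color samples per pixel but shades each fragment only once.
    // That is the entire optimization over SSAA.
    glGenRenderbuffers(1, colorRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, *colorRbo);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, w, h);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, *colorRbo);
    return fbo;
}

void resolveMSAA(GLuint msaaFBO, int w, int h)
{
    // In a forward renderer this resolve is the final step. In a deferred
    // renderer the equivalent resolve would average G-buffer contents
    // (normals, depth) before lighting, which is exactly why MSAA does not
    // "just work" there.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```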
In essence, SSAA is easy to implement but really, really inefficient. My preferred solution would be something like 1.5x or 1.7x supersampling combined with an FXAA pass, which lets us capture detail that is otherwise only a subpixel artefact and gets murdered by FXAA alone; alternatively, a solution similar to NVIDIA's proprietary TXAA. The latter is a temporal technique that takes data from several preceding frames into account in order to get subpixel detail right.
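For illustration, one plausible ordering of that hybrid idea, with hypothetical pass functions standing in for whatever the engine actually calls. The point of running FXAA before the downsample is that at 1.5x, features that would be subpixel at native resolution are still at least a pixel wide when the FXAA shader sees them:

```cpp
// Sketch of a fractional-supersampling + FXAA hybrid; all pass functions
// below are hypothetical stand-ins, declared only to make ordering concrete.
#include <cmath>

void renderScene(int w, int h);                                  // deferred pipeline
void runFXAAPass(int w, int h);                                  // post-process AA
void downsampleToBackbuffer(int srcW, int srcH, int dstW, int dstH);

struct AAConfig {
    float scale = 1.5f;  // 1.5x-1.7x supersampling as suggested above
};

void renderFrame(const AAConfig& cfg, int winW, int winH)
{
    // Oversized intermediate target; ceil covers odd window sizes.
    int ssW = static_cast<int>(std::ceil(winW * cfg.scale));
    int ssH = static_cast<int>(std::ceil(winH * cfg.scale));

    renderScene(ssW, ssH);     // full pipeline at the fractional scale
    runFXAAPass(ssW, ssH);     // FXAA while subpixel detail still exists
    downsampleToBackbuffer(ssW, ssH, winW, winH);  // linear-filtered blit
}
```

A fractional scale like 1.5x keeps the bandwidth cost at roughly 2.25x rather than the 4x of classic 2x2 SSAA, which is the whole appeal of the compromise.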