About transformation and lighting, and why it's better done in hardware.
First of all, transformation is the process of transforming all the vertices according to the current viewpoint. What does that mean? If you move forward, what actually happens is that every vertex is moved backwards by the amount you moved forward, which has the same result. If you turn to the right, in truth, every vertex is rotated to the left by the amount you turned right. Basically, transformation is the process of rotating and moving all the vertices in a scene (moving the vertices is called translation). Lighting is, of course, finding the amount of light hitting each vertex according to the positions of the light sources. These calculations are essential to every 3D application.
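To make that concrete, here is a minimal sketch (not FreeSpace 2's actual code) of a software view transform and a simple per-vertex diffuse light calculation; the structure, camera layout, and function names are illustrative assumptions.

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical camera: a position plus three axis vectors (its rotation).
struct Camera {
    Vec3 pos;
    Vec3 right, up, forward;
};

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Transformation: move every vertex opposite to the camera, then rotate it
// into the camera's frame -- the "move the world, not the viewer" idea above.
Vec3 transform_to_view(const Vec3& v, const Camera& cam) {
    Vec3 rel = { v.x - cam.pos.x, v.y - cam.pos.y, v.z - cam.pos.z };
    return { dot(rel, cam.right), dot(rel, cam.up), dot(rel, cam.forward) };
}

// Lighting: simple diffuse term for one light. light_dir points from the
// vertex toward the light source.
float diffuse_light(const Vec3& normal, const Vec3& light_dir) {
    float d = dot(normal, light_dir);
    return d > 0.0f ? d : 0.0f;   // surfaces facing away get no light
}
```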
Before there were consumer-level graphics cards, and before those cards had T&L capabilities, all those calculations were done with custom code, different for every game and tailored to meet its specific demands. When the first graphics cards came to the consumer market, they were simply raster devices: you gave them the screen coordinates of vertices (vertices that had already been transformed and projected from 3D coordinates into 2D screen space), along with Z values for the Z-buffer, texture coordinates, lighting values, fog values, etc., and they drew the 2D polygons.
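The projection step mentioned there is, in principle, just a divide by depth and a scale into the window. A rough, illustrative version (reusing the Vec3 type from the sketch above; the focal-length and screen-size parameters are assumptions, not FreeSpace 2's code) might look like this:

```cpp
struct ScreenVertex {
    float sx, sy;   // 2D screen coordinates handed to the rasterizer
    float z;        // depth kept for the Z-buffer
};

// Project a view-space point (camera at the origin, looking down +Z)
// into screen space using a simple perspective divide.
ScreenVertex project(const Vec3& v, float focal_len,
                     float screen_w, float screen_h) {
    ScreenVertex out;
    out.sx = screen_w * 0.5f + (v.x * focal_len) / v.z;
    out.sy = screen_h * 0.5f - (v.y * focal_len) / v.z;  // screen Y grows downward
    out.z  = v.z;
    return out;
}
```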
The programmer could use the transformation and projection functions built into the APIs, but they were simply slower than doing the math in your own code. FreeSpace 2 was coded that way.
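For comparison, using the API's built-in pipeline means handing it matrices and untransformed vertices instead of doing the math yourself. In OpenGL's fixed-function pipeline, for example, that looks roughly like the sketch below; this is a generic illustration, not how FreeSpace 2 actually submits its geometry.

```cpp
#include <GL/gl.h>

// Hand the transform and lighting work to the API; if the card has hardware
// T&L the driver runs it there, otherwise it falls back to the CPU.
void draw_with_api_transform(const float model_view[16], const float projection[16]) {
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(projection);

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(model_view);

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);          // fixed-function per-vertex lighting

    glBegin(GL_TRIANGLES);
    // Vertices are submitted untransformed; the application does no manual
    // transformation, lighting, or projection.
    glNormal3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, -5.0f);
    glVertex3f( 1.0f, -1.0f, -5.0f);
    glVertex3f( 0.0f,  1.0f, -5.0f);
    glEnd();
}
```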
The problem is, to use T&L you HAVE to use the built-in API functions, since they automatically use whatever is available (if you have T&L-capable hardware they use the hardware, otherwise the CPU), and, while it may not sound like much from this description, it is a huge amount of work that requires massive restructuring of the code.