Okay, let's start with hardware T&L (transform and lighting).
Basically, a 3D scene is made up of many vertices (3D points) and lists of polygons that reference those points (as in, polygon A is made of points 1, 2, 3; polygon B is made of points 3, 4, 5; and so on).
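To make that concrete, here's a minimal C sketch of that indexed layout (the struct and field names are purely illustrative, not from any particular API):

```c
#include <stdio.h>

/* A shared pool of vertices, plus triangles that refer to them by index. */
typedef struct { float x, y, z; } Vertex;
typedef struct { int a, b, c; } Triangle;   /* indices into the vertex list */

int main(void) {
    Vertex verts[] = {
        { 0.0f, 0.0f, 0.0f },   /* point 0 */
        { 1.0f, 0.0f, 0.0f },   /* point 1 */
        { 1.0f, 1.0f, 0.0f },   /* point 2 */
        { 0.0f, 1.0f, 0.0f },   /* point 3 */
    };
    /* Two triangles sharing an edge: vertices 0 and 2 are reused,
       which is exactly why the indexed layout saves memory. */
    Triangle tris[] = { { 0, 1, 2 }, { 0, 2, 3 } };

    for (int i = 0; i < 2; i++) {
        Vertex v = verts[tris[i].a];
        printf("triangle %d starts at (%.1f, %.1f, %.1f)\n", i, v.x, v.y, v.z);
    }
    return 0;
}
```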
To simulate movement, when the viewpoint (generally, the player) moves X units forward, the whole world is actually translated (moved) X units backwards. If the viewpoint turns Y degrees to the right, the world is rotated Y degrees to the left.
Rotation and translation (and scaling and shearing, for that matter) are grouped together under the name 'transformations'.
Up until the GeForce 256, transformations were generally performed by the CPU with specialized code. The advantage of having the graphics card do those calculations is twofold: first, the CPU is offloaded, freeing its power for things like AI and physics; second, and more importantly, general-purpose circuitry (the CPU's floating-point units, in this case) is usually an order of magnitude slower than circuitry built for one specific task.
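As a rough illustration of the transform step, here's a small C sketch applying the 'move the camera = move the world the opposite way' trick to a single point. The axis and sign conventions here are arbitrary choices of mine, and real pipelines do this with 4x4 matrices over whole vertex lists, but the idea is the same:

```c
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* "Move the camera forward by dist": translate the world backwards
   by dist (here along the z axis, purely by convention). */
static Vec3 camera_move_forward(Vec3 v, float dist) {
    v.z -= dist;   /* the world slides back, so the camera appears to advance */
    return v;
}

/* "Turn the camera right by angle": rotate the world the opposite way
   around the vertical (y) axis. */
static Vec3 camera_turn_right(Vec3 v, float angle_rad) {
    float c = cosf(angle_rad), s = sinf(angle_rad);
    Vec3 r = { c * v.x - s * v.z, v.y, s * v.x + c * v.z };
    return r;
}

int main(void) {
    Vec3 p = { 1.0f, 0.0f, 5.0f };
    p = camera_move_forward(p, 2.0f);
    p = camera_turn_right(p, 3.14159265f / 2.0f);  /* 90 degrees */
    printf("transformed point: (%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
    return 0;
}
```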
The second part of T&L, lighting, is just what it sounds like. What the graphics processor generally does is compute how brightly lit each vertex is, according to the basic Lambertian reflection model, which says that the intensity of light at a point equals the light's strength multiplied by the cosine of the angle between the light vector and the point's normal vector. Vertex normals are usually stored with the mesh, and both they and the light vectors are generally kept normalized (stored as vectors of length 1). Since the dot product of two vectors, u·v, is defined as |u|*|v|*cos θ (|u| being the length of vector u, |v| the length of vector v, and θ the angle between them), and here |u|*|v| equals one, the dot product gives exactly what the lighting equation requires. Sorry if it got too mathematical (even though it's only basic vector math), but that's about it regarding T&L.
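To make the math concrete, here's a small C sketch of that per-vertex Lambertian computation. The function names and the clamp-to-zero for light arriving from behind the surface are my additions, but the dot-product-as-cosine trick is exactly the one described above:

```c
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 u, Vec3 v) {
    return u.x * v.x + u.y * v.y + u.z * v.z;
}

/* Lambertian intensity at a vertex: light strength times the cosine of
   the angle between the unit normal and the unit light direction.
   Since both vectors have length 1, the dot product *is* that cosine.
   A negative value means the light is behind the surface, so clamp to 0. */
static float lambert(Vec3 normal, Vec3 to_light, float light_strength) {
    float cos_theta = dot(normal, to_light);
    if (cos_theta < 0.0f) cos_theta = 0.0f;
    return light_strength * cos_theta;
}

int main(void) {
    Vec3 n = { 0.0f, 1.0f, 0.0f };        /* vertex normal, unit length */
    Vec3 l = { 0.0f, 0.7071f, 0.7071f };  /* light direction, unit length */
    printf("intensity = %.4f\n", lambert(n, l, 1.0f));  /* cos 45° ≈ 0.7071 */
    return 0;
}
```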
Anti-aliasing is a technique for reducing image artifacts called aliasing (makes you wonder how they came up with the name 'anti-aliasing', eh?), a general name for artifacts caused by two inherent limitations of the computer monitor. First, pixels are inherently square, so only square shapes can be displayed without artifacts. Second, the resolution of monitors isn't high enough (at least not yet) to fool the eye into not noticing them. The clearest case of aliasing is the 'stair-step' effect: when a polygon edge is nearly horizontal or vertical and moves slowly across the screen, you can see the pixels along the edge crawling.
All anti-aliasing algorithms generally either draw the image at sub-pixel precision and then filter it down to fit the monitor (called 'supersampling', since you draw more pixels than you can see, i.e. render at a higher resolution, and then sample them back down), or take several samples per pixel and blend them (called 'multisampling'). The result of all of them, to some degree, is to eliminate those jaggies by giving edges smooth gradients instead of hard steps.
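As a rough sketch of the supersampling idea, here's a C example of just the final downsampling step: the 'image' has already been rendered at twice the resolution, and each output pixel averages the four sub-pixels under it (a simple box filter; real hardware may weight samples differently, and this uses grayscale values for brevity):

```c
#include <stdio.h>

#define W 4   /* output width  */
#define H 4   /* output height */

/* Average each 2x2 block of the high-res image into one output pixel. */
void downsample_2x2(float hi[2 * H][2 * W], float lo[H][W]) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            lo[y][x] = (hi[2 * y][2 * x]     + hi[2 * y][2 * x + 1] +
                        hi[2 * y + 1][2 * x] + hi[2 * y + 1][2 * x + 1]) / 4.0f;
}

int main(void) {
    float hi[2 * H][2 * W];
    /* A hard vertical edge in the high-res image: black on the left,
       white on the right, placed so it cuts through the middle of an
       output pixel rather than landing on a pixel boundary. */
    for (int y = 0; y < 2 * H; y++)
        for (int x = 0; x < 2 * W; x++)
            hi[y][x] = (x >= W - 1) ? 1.0f : 0.0f;

    float lo[H][W];
    downsample_2x2(hi, lo);
    for (int x = 0; x < W; x++) printf("%.2f ", lo[0][x]);
    printf("\n");   /* prints: 0.00 0.50 1.00 1.00 */
    return 0;
}
```

The pixel the edge passes through averages out to gray (0.50), which is exactly the smooth gradient that replaces the hard stair-step.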