This is a somewhat thorny issue. I agree that for final recordings, such high bit depths and sample rates are most likely wasted. Physiologically, humans simply aren't equipped to tell a higher-resolution recording from a lower-resolution one if the recordings are otherwise identical.
But.
What about editing? Digital data has a tendency to degrade when you edit it, because every operation rounds to the closest available value. This is, incidentally, the reason professional image editing is usually done at more than 8 bits per channel. The final image usually isn't delivered at that bit depth (though there are exceptions where it is useful, such as the height maps distributed by NASA, which are 16-bit greyscale RAW images).
Similarly, in audio editing the errors will at some point start to accumulate, and using a higher bit depth definitely reduces the error introduced by each individual operation.
Whether or not you would be able to hear the difference depends almost entirely on how much editing is done to the track, but the fact remains that higher-resolution audio can be edited further before the degradation becomes noticeable. Obviously, for the final result, 16 bits per sample is more than sufficient for the majority of listening purposes.
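To make the rounding argument concrete, here is a minimal NumPy sketch (purely illustrative, not how any particular editor works): it runs a 1 kHz sine through a chain of gain edits, once rounding back to 16-bit steps after every edit and once keeping the intermediates in floating point, then compares the two results.

```python
# Chain of gain edits on a 1 kHz sine: one path is rounded to 16-bit steps
# after every edit, the other stays in 64-bit float and is quantized once
# at the end. (Illustrative sketch only; the gain values are arbitrary.)
import numpy as np

rate = 44100
t = np.arange(rate) / rate
signal = 0.5 * np.sin(2 * np.pi * 1000 * t)

def quantize16(x):
    """Round to the nearest 16-bit step, as a 16-bit project would."""
    return np.round(x * 32767) / 32767

gains = [0.7, 1.3, 0.9, 1.1, 0.8, 1.25, 0.95, 1.05]   # arbitrary edit chain

path16 = signal.copy()
pathf = signal.copy()
for g in gains:
    path16 = quantize16(path16 * g)   # rounding error added at every step
    pathf = pathf * g                 # error stays at float precision

pathf = quantize16(pathf)             # final 16-bit "master", quantized once

print("worst-case divergence between the paths:",
      np.max(np.abs(path16 - pathf)))
```

The divergence is tiny after a handful of edits, but it tends to grow with every additional 16-bit round trip, which is the whole point of working at a higher bit depth in between.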
Also, in games, things like spatial effects can be calculated at a higher sample rate and bit depth. Technically this means somewhat better quality in the output audio: even if the original samples are in the standard 44.1 kHz/16-bit stereo format, mixing them at higher precision can lead to better results.
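The same idea applies to a mix bus. Here is a small hypothetical sketch along the same lines as the one above (again mine, not from any actual game engine): it sums 32 quiet tracks either with every track quantized to 16 bits before it hits the bus, or accumulating in float and quantizing only the final output.

```python
# Mix 32 quiet tracks two ways: quantize each track to 16 bits before summing,
# or sum in float and quantize only the result. (Hypothetical illustration;
# real game mixers differ in the details.)
import numpy as np

rng = np.random.default_rng(0)
n = 48000
tracks = [0.02 * rng.standard_normal(n) for _ in range(32)]   # quiet sources

def quantize16(x):
    return np.round(x * 32767) / 32767

reference = sum(tracks)                        # ideal, unquantized mix

bus16 = sum(quantize16(s) for s in tracks)     # per-track rounding piles up
busf = quantize16(sum(tracks))                 # a single rounding at the end

print("16-bit bus error:", np.max(np.abs(bus16 - reference)))
print("float bus error: ", np.max(np.abs(busf - reference)))
```

The per-track rounding errors add up across the bus, while the float bus only pays for one rounding at the very end.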
Then there's the issue of sample rate. A higher sample rate captures overtones more accurately, and that kind of thing is quite important in certain types of music, especially electronic music (which uses a lot of non-natural waveforms such as square waves) and orchestral music. Of course, in any kind of editing or post-processing, the higher rate also helps keep the intermediate waveform precise before it is downsampled to the final release format. But more crucially, there's the issue of downsampling itself, which is probably the single most important reason people want to get their hands on recordings in their original quality.
If you reduce a 192 kHz recording to 44.1 kHz, there will likely be some trouble somewhere; whether it is audible depends on the content of the audio and on whether you have the original source to compare against. Let's illustrate what happens in such a case.
I made a 12000 Hz square wave at a 192000 Hz sample rate in Audacity. It actually sounds almost identical to a sine wave (the square wave's overtones, at 36 kHz and above, are beyond hearing anyway). Since 12000 Hz divides evenly into 192000 Hz, every cycle gets exactly 16 samples and the peaks and bottoms of the wave are evenly distributed, so all is fine and dandy. I saved it as a 192 kHz FLAC.
Then I set the project sample rate first to 96000 Hz, then 48000 Hz and finally 44100 Hz, exporting at each rate.
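If you want to reproduce this without Audacity, here is a rough sketch of the same experiment (it assumes NumPy, SciPy and the soundfile package; Audacity's own resampler will not produce exactly the same artifacts, so treat it only as an approximation):

```python
# Generate a 12 kHz square wave at 192 kHz, then downsample it to the three
# lower rates and write each result to FLAC. (Illustrative only; the exact
# resampling filter differs from Audacity's.)
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

src_rate, freq = 192000, 12000
t = np.arange(src_rate) / src_rate                      # one second of audio
# Half-sample phase offset so no sample lands exactly on a zero crossing.
square = 0.5 * np.sign(np.sin(2 * np.pi * freq * (t + 0.5 / src_rate)))
sf.write("square_192000.flac", square, src_rate)

for target in (96000, 48000, 44100):
    g = np.gcd(target, src_rate)                        # integer up/down ratio
    down = resample_poly(square, target // g, src_rate // g)
    sf.write(f"square_{target}.flac", down, target)
```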
Here is a visualization of the waveform degradation due to downsampling:

As you can see, there is actually a fair bit of reduction in detail if you compare the waveforms directly to each other. Still, it turns out this is not really a problem down to 48000 Hz, because the waveform stays regular and symmetric: 12000 Hz divides evenly into all three rates (4×12 = 48, 8×12 = 96 and 16×12 = 192).
But then the problems start.
12000 Hz does not divide evenly into 44100 Hz; each cycle gets 44100/12000 = 3.675 samples, so the samples are misaligned from cycle to cycle. In the process of downsampling to 44.1 kHz, the audio signal you actually hear will change; it will no longer sound like a single clean tone. Here's a zoomed-out image that shows the problem:

As you can see, the misalignment of samples adds sub-tones to the signal: oscillations that you actually hear as multiple tones being played at the same time.
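If you'd rather check for those extra tones numerically than by ear, here is a small follow-up to the sketch above (same assumed NumPy/SciPy setup; how much leftover signal shows up, and where, depends entirely on the resampler's filter, so the numbers will not match Audacity's output exactly):

```python
# Measure how much signal the 44.1 kHz version contains away from 12 kHz.
# (Assumed setup: NumPy + SciPy; the result depends on the resampler used.)
import numpy as np
from scipy.signal import resample_poly

src_rate, freq, target = 192000, 12000, 44100
t = np.arange(src_rate) / src_rate
square = 0.5 * np.sign(np.sin(2 * np.pi * freq * (t + 0.5 / src_rate)))

g = np.gcd(target, src_rate)                 # 300, so the ratio is 147:640
down = resample_poly(square, target // g, src_rate // g)

spectrum = np.abs(np.fft.rfft(down * np.hanning(len(down))))
freqs = np.fft.rfftfreq(len(down), d=1 / target)

tone_band = np.abs(freqs - freq) < 50        # bins belonging to the 12 kHz tone
rest = spectrum[~tone_band]
level_db = 20 * np.log10(rest.max() / spectrum.max())
print(f"strongest leftover component: {level_db:.1f} dB relative to the tone, "
      f"at {freqs[~tone_band][np.argmax(rest)]:.0f} Hz")
```

With a good resampler the leftovers can be very quiet; with a poor one they are not, which is exactly why the choice of downsampling scheme matters.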
Here are all four signals in FLAC, so you can try it out yourself.
12000Hz-Sqare-Wave-Downsampling.7z
You'll notice a pretty clear and distinct change in the 44.1 kHz version compared to the other three.
And now, I hope, the need to have recordings in their original quality makes a little bit more sense. CD Audio is still the industry standard for most audio that is put on the market, and CD Audio is 44.1 kHz by definition, while recordings are usually made at some other sampling rate.
At the very least, it would be a welcome change for most music to be marketed at a sample rate that divides evenly into the original recording rate; 48 kHz would be good. 44.1 kHz is, as demonstrated, technically problematic and mostly kept around for legacy reasons.
So, is the 44.1 kHz sampling rate good or bad? It depends. It is certainly sufficient for delivering good audio, especially if the sound is converted directly from an analog signal to a 44.1 kHz digital signal (bit depth notwithstanding). Problems start when you mess around with digital signals and downsample them from one rate to another, especially if the resulting sample rate does not divide evenly into the original.
So, is 44.1 kHz audio better or worse than the original audio?
It's never better in quality, and it can be worse. But it depends so much on the signal content and the downsampling algorithm that it's very hard to give a definite answer. Personally, I think that if the original recording is made in a digital format, then for goodness' sake at least use a sensible downsampling scheme that doesn't result in atrocious bit crushing. If the original recording is analog, then a 44.1 kHz sample rate is probably very much sufficient.
EDIT: The article addresses many of the issues mentioned here, and in particular points out that higher bit depths and sample rates are used in editing and post-processing for exactly these reasons. That teaches me to read an article instead of just skimming over it...
