Take any algorithm you know for generating pseudo-random numbers, and suppose you have a computer program that uses it to generate random numbers from a Gaussian distribution, taking a mean and a standard deviation as input. The program generates 1,000,000 numbers, and a run fails if more than the expected fraction of them (for a Gaussian, roughly 31.73% of values lie outside one standard deviation of the mean) fall outside one standard deviation of the given mean. You have a script that runs that program 1,000,000 times. What is the expected number of failures? If you run that entire test 1,000,000 times, what is the expected number of times you don't get the expected number of failures? And if you repeat that whole process 1,000,000 times, what is the expected number of times that expectation, in turn, is wrong?
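
Below is a minimal sketch of the test harness described above, assuming Python with NumPy. The sample and run counts are scaled down from 1,000,000 so it finishes quickly, and the names (`gaussian_run_fails`, `count_failures`, `EXPECTED_OUTSIDE_FRACTION`) are illustrative choices, not taken from the question itself.

```python
import numpy as np

# For a Gaussian, about 68.27% of samples fall within one standard deviation
# of the mean, so roughly 31.73% are expected to fall outside it.
EXPECTED_OUTSIDE_FRACTION = 0.3173


def gaussian_run_fails(mean, stdev, n_samples, rng):
    """Generate n_samples Gaussian numbers and report whether the run 'fails',
    i.e. whether more than the expected fraction lie outside one stdev of the mean."""
    samples = rng.normal(loc=mean, scale=stdev, size=n_samples)
    outside_fraction = (np.abs(samples - mean) > stdev).mean()
    return outside_fraction > EXPECTED_OUTSIDE_FRACTION


def count_failures(mean, stdev, n_samples, n_runs, seed=0):
    """Run the generator n_runs times and count how many runs fail."""
    rng = np.random.default_rng(seed)
    return sum(gaussian_run_fails(mean, stdev, n_samples, rng) for _ in range(n_runs))


if __name__ == "__main__":
    # Scaled-down stand-in for "1,000,000 numbers, run 1,000,000 times".
    failures = count_failures(mean=0.0, stdev=1.0, n_samples=10_000, n_runs=1_000)
    print(f"failures: {failures} out of 1,000 runs")
```

To approximate the nested questions, you would wrap `count_failures` in further loops and count how often the observed failure count deviates from the expected one at each level; the sketch above only implements the innermost test.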