****************************** Solution F ******************************

prime, lab:     0.75 s
gauss, lab:     1.73 s
prime, grendel: 2.55 s
gauss, grendel: 4.04 s
prime, ozark:   4.69 s
gauss, ozark:   3.07 s

prime grendel/lab: 3.40
prime ozark/lab:   6.25
gauss grendel/lab: 2.34
gauss ozark/lab:   1.77

These measurements come from the C program below. The program reports the
total CPU time (user plus system) spent applying the primality algorithm to
the 100,000 integers starting from one billion, and then the total CPU time
spent applying the Gaussian integral algorithm with n = 50,000,000. It
measures and reports each of these times five separate times. I compiled the
program with "gcc -O assn1.c -lm" and executed it with "./a.out". As the raw
output below shows, the five measurements for each benchmark were always
within 0.02 second of each other. I report the minimum, since the larger
measurements may reflect errors in measurement.

These results indicate that the lab computers are fastest on both benchmarks,
by a factor of at least 1.75. (The gap is even wider for the primality
benchmark.) Comparing ozark and grendel is more interesting: on the primality
benchmark, grendel is about 80% faster than ozark, while ozark is about 30%
faster than grendel on the Gaussian integral benchmark.
On aragorn (a lab computer), the output was as follows:

 0: 0.75 1.74
 1: 0.75 1.73
 2: 0.75 1.73
 3: 0.75 1.73
 4: 0.75 1.73

On grendel:

 0: 2.55 4.05
 1: 2.55 4.04
 2: 2.55 4.05
 3: 2.55 4.05
 4: 2.55 4.04

On ozark:

 0: 4.69 3.08
 1: 4.71 3.09
 2: 4.69 3.09
 3: 4.69 3.08
 4: 4.69 3.07

#include <stdio.h>
#include <math.h>
#include <time.h>

/* Trial-division primality test: returns 1 if n is prime, 0 otherwise. */
int prime(int n) {
    int i;

    for (i = 2; i * i <= n; i++) {
        if (n % i == 0) {
            return 0;
        }
    }
    return 1;
}

/* Riemann-sum approximation of the integral of exp(-x*x / 2) over [-2, 2].
   (Return type corrected from int to double so the result is not truncated.) */
double gauss(int n) {
    double sum;
    double delta;
    double x;
    int i;

    sum = exp(-2) + exp(-2);
    delta = 4.0 / n;
    for (i = 1; i < n; i++) {
        x = -2.0 + i * delta;
        sum += 3 * exp(-(x * x) / 2.0);
    }
    return delta * sum / 3.0;
}

int main() {
    int trial;
    int i;
    clock_t t0;
    double tprime;
    double tgauss;

    for (trial = 0; trial < 5; trial++) {
        t0 = clock();
        for (i = 0; i < 100000; i++) {
            prime(1000000000 + i);
        }
        /* Convert clock ticks to seconds portably (CLOCKS_PER_SEC is
           1000000 on the machines used here). */
        tprime = (double)(clock() - t0) / CLOCKS_PER_SEC;

        t0 = clock();
        gauss(50000000);
        tgauss = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("%2d: %4.2f %4.2f\n", trial, tprime, tgauss);
    }
    return 0;
}