****************************** Solution H ******************************

I compared the performance of three computers: a Linux lab computer, grendel, and ozark. To compare the processors, I coded the prime and Gaussian tests in Python. I measured in microseconds so that the difference between the integer computation (prime) and the floating-point computation (Gaussian) would be visible. Below are the ten numbers I calculated according to the outline (the raw times are in microseconds; the last four are unitless ratios):

1. 3 microseconds      | raw time for Linux lab prime execution
2. 22214 microseconds  | raw time for Linux lab Gaussian execution
3. 7 microseconds      | raw time for grendel prime execution
4. 72850 microseconds  | raw time for grendel Gaussian execution
5. 8 microseconds      | raw time for ozark prime execution
6. 110377 microseconds | raw time for ozark Gaussian execution
7. ~2.333              | prime grendel / prime lab
8. ~2.667              | prime ozark / prime lab
9. ~3.28               | Gaussian grendel / Gaussian lab
10. ~4.97              | Gaussian ozark / Gaussian lab

To get these results, I wrote the code that is in the appendix. I initially approached the problem in Scala, but wrote my final code in Python. I wrote four functions to perform the tests above. I wrote the prime and Gaussian code as provided in the outline, making little to no change. I then wrote a func function that computes the integrand f(x) for the Gaussian integral, also provided in the outline. Finally, I wrote a timer function, which is the backbone of the code: it holds a list of n values on which to test both the prime and the Gaussian code. I ran each n value through both functions to gather comparable data, and I included both large and small n values to exercise the code fully (this is why the Gaussian integral times are much higher than the prime times). As one can see in the appendix, I simply call timer at the bottom of the file so the testing begins automatically.
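The four ratios (numbers 7-10) are just each machine's raw time divided by the corresponding Linux lab time. A minimal sketch of that arithmetic, using the raw microsecond values listed above (floats are used so the division is exact under any Python version):

```python
# Raw average times in microseconds, as measured above.
lab_prime, lab_gauss = 3.0, 22214.0
grendel_prime, grendel_gauss = 7.0, 72850.0
ozark_prime, ozark_gauss = 8.0, 110377.0

# Dividing by the lab time gives a unitless slowdown factor per machine.
print(round(grendel_prime / lab_prime, 3))   # ~2.333
print(round(ozark_prime / lab_prime, 3))     # ~2.667
print(round(grendel_gauss / lab_gauss, 2))   # ~3.28
print(round(ozark_gauss / lab_gauss, 2))     # ~4.97
```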
Now that I have explained the code, I will explain how I obtained the numbers by running it in the terminal. First, I moved into my Desktop folder with the command "cd Desktop". Since my testing file was on my Desktop, I could then simply run "python testing.py". The way my code is written, nothing more is asked of the user, and the results for the lab computer appear, labeled appropriately. Next, I tested the same code on grendel. I connected with "ssh grendel", entered my password, and followed the same path as on the lab computer: "cd Desktop" to enter my Desktop folder, then "python testing.py" to get my results for the grendel server. Last, I tested the code on ozark. This was slightly different in that I had to copy the test file into the www subdirectory. I used "cd www" to move into the www folder and then ran my last terminal command, "python testing.py", to gather the final data for ozark. While performing these tasks, I wrote down each result, and then computed the final four numbers above (7-10) from the raw times returned by the code.
Appendix:

import math
from datetime import datetime

# Trial-division primality test from the outline.
# (The loop condition must be d*d <= n; with d*d < n, perfect squares
# such as 4 and 9 would be wrongly reported as prime.)
def prime(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d = d + 1
    return True

# Integrand f(x) = e^(-x^2 / 2) for the Gaussian integral.
def func(n):
    return math.exp(-(n * n) / 2.0)

# Approximate the integral of f over [-2, 2] with n subintervals,
# as provided in the outline.
def gaussian(n):
    total = func(2) + func(-2)
    delta = 4.0 / n      # 4.0 so the division is not truncated under Python 2
    for i in range(1, n - 1):
        total = total + (3 * func(-2 + i * delta))
    return delta * total / 3

# Run both tests over a list of small and large n values and
# report the average time per test in microseconds.
def timer():
    totalp = 0
    totalg = 0
    lst = [19, 36, 57, 123, 139, 777, 1234, 14567, 103479, 2345678]
    for ls in lst:
        start = datetime.now()
        prime(ls)
        end = datetime.now()
        # total_seconds() captures the full elapsed time;
        # timedelta.microseconds alone holds only the sub-second part.
        totalp += (end - start).total_seconds() * 1e6
        start1 = datetime.now()
        gaussian(ls)
        end1 = datetime.now()
        totalg += (end1 - start1).total_seconds() * 1e6
    print("Prime time for 10 tests: " + str(totalp / 10) + " microseconds")
    print("Gaussian time for 10 tests: " + str(totalg / 10) + " microseconds")

timer()
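One caveat with datetime-based timing is that subtracting two datetime values yields a timedelta, whose microseconds field holds only the sub-second component of the interval. For interval measurement, Python 3's standard library also provides time.perf_counter, a high-resolution monotonic clock intended for exactly this kind of benchmark. A minimal sketch of timing one call with it (the workload function here is a stand-in; any of the appendix functions could be timed the same way):

```python
import time

# Stand-in workload: a simple integer loop.
def workload():
    total = 0
    for i in range(100000):
        total += i * i
    return total

start = time.perf_counter()                       # monotonic, high resolution
workload()
elapsed_us = (time.perf_counter() - start) * 1e6  # convert seconds to microseconds
print("elapsed: %.1f microseconds" % elapsed_us)
```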