Statistics: Before the 12-point curve...
mean     85.000 / 120
stddev   20.195
median   80.000 / 120
midrange 76.250-102.500

Average score per question (out of 10):
 1: 8.46    2: 8.15    3: 8.54    4: 6.23
 5: 4.23    6: 7.31    7: 6.31    8: 8.54
 9: 5.77   10: 8.00   11: 8.85   12: 4.62
The CPU has an instruction for issuing a software interrupt, which simultaneously jumps the CPU into trusted operating system code and switches it into privileged (kernel) mode, where the usual protection restrictions are lifted.
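For concreteness, here is a minimal sketch (assuming Linux on x86-64; not part of the original answer) that enters the kernel through exactly this mechanism, using the generic syscall(2) wrapper:

#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello from a software interrupt\n";
    /* syscall() executes the CPU's trap instruction; the hardware
     * switches to privileged mode and jumps to the kernel's
     * system-call entry point, which dispatches to write(). */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}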
There are many possible answers. The most popular ones were:
Be as explicit as possible, naming specific semaphores you would use, and saying exactly where you would do the DOWN and UP operations.
Define a single semaphore, clients_left, whose initial value is MAX_CLIENTS. Before spawning each new client thread, execute down(clients_left); each time a client finishes, execute up(clients_left).
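A minimal sketch of this answer using POSIX semaphores, where sem_wait and sem_post play the roles of down and up (MAX_CLIENTS and the serveClient skeleton are taken from the problem; the networking is elided):

#include <stddef.h>
#include <semaphore.h>
#include <pthread.h>

#define MAX_CLIENTS 10

sem_t clients_left;

void* serveClient(void *arg_p) {
    // receive request, send response to client
    sem_post(&clients_left);   /* up(): one more slot available */
    return NULL;
}

int main(int argc, char **argv) {
    sem_init(&clients_left, 0, MAX_CLIENTS);
    // start the server listening
    while (1) {
        // accept connection from client
        sem_wait(&clients_left);  /* down(): block if MAX_CLIENTS busy */
        pthread_t t;
        pthread_create(&t, NULL, serveClient, NULL);
        pthread_detach(t);
    }
}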
int count = 0;
pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t thread_avail = PTHREAD_COND_INITIALIZER;
void* serveClient(void *arg_p) {
    // receive request
    // send response to client
    // before we finish, tell ``main'' it
    // can go (if indeed it was waiting)
    --count;
    pthread_cond_signal(&thread_avail);
}

int main(int argc, char **argv) {
    // start the server listening
    while(true) {
        // accept connection from client
        // wait until # client threads is low
        pthread_mutex_lock(&count_lock);
        ++count;
        while(count > MAX_CLIENTS) {
            pthread_cond_wait(&thread_avail, &count_lock);
        }
        pthread_mutex_unlock(&count_lock);
        // start thread to service client
    }
}
This code has a bug: the ``main'' thread could sometimes end up waiting when in fact there are fewer than MAX_CLIENTS clients. Explain how this could happen, and explain how to fix the bug.
Suppose the main thread tests whether count is greater than MAX_CLIENTS and, before it executes pthread_cond_wait(), execution transfers to a client thread. Suppose further that the client thread executes --count; and pthread_cond_signal() and finishes. When control transfers back to the main thread, it executes its next instruction, the call to pthread_cond_wait(), and begins waiting, even though a client has just finished.
The solution is to put the --count; and pthread_cond_signal() calls in serveClient between a call to pthread_mutex_lock(&count_lock) and a call to pthread_mutex_unlock(&count_lock).
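A sketch of the fix applied to serveClient above (the rest of the code is unchanged):

void* serveClient(void *arg_p) {
    // receive request
    // send response to client
    pthread_mutex_lock(&count_lock);
    --count;
    pthread_cond_signal(&thread_avail);
    pthread_mutex_unlock(&count_lock);
    return NULL;
}

This works because ``main'' holds count_lock from its test of count until pthread_cond_wait() atomically releases it, so the client's decrement-and-signal can no longer slip into that window.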
Modern CPUs include the concept of a segment register. When the OS loads the program into memory, it simply sets the segment register to the first address where the program is stored. Each time the user process accesses a memory address, the CPU automatically adds the contents of the segment register to the requested address to get the actual memory address.
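As a toy illustration (not part of the original answer), the hardware's relocation amounts to a single addition:

#include <stdint.h>

uint32_t segment_base;  /* set by the OS when the program is loaded */

uint32_t translate(uint32_t requested_addr) {
    /* Hardware adds the segment base to every address the process
     * uses; a real CPU would also check the address against a limit
     * register before allowing the access (omitted here). */
    return segment_base + requested_addr;
}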
First, disk seek time has not improved much over the years, especially compared with CPU speed, so the disk is becoming more and more of a bottleneck. Second, memory capacity has grown dramatically, which has led to larger disk caches; cache misses have become less frequent, and a larger fraction of the disk accesses that remain are writes (reads are handled from the cache when possible).
Log-structured filesystems optimize for the write case: by continually writing at the end of the log, the disk never has to seek for writes. Reads become more cumbersome, since they can involve large jumps all over the disk (files become very fragmented), but reads are less important than they once were.
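A toy sketch of the idea (the flat in-memory "disk", BLOCK_SIZE, and NBLOCKS are stand-ins, and log cleaning/garbage collection is omitted): every write appends at the log head, and an in-memory map records where each logical block now lives.

#include <string.h>
#include <stdint.h>

#define BLOCK_SIZE 4096
#define NBLOCKS    1024

static uint8_t  disk[NBLOCKS][BLOCK_SIZE]; /* stand-in for the device */
static uint32_t log_head = 0;              /* next free block in the log */
static uint32_t block_map[NBLOCKS];        /* logical block -> log position */

void lfs_write(uint32_t logical_block, const uint8_t *data) {
    /* Sequential append: no seek, no matter which block is written. */
    memcpy(disk[log_head], data, BLOCK_SIZE);
    block_map[logical_block] = log_head++;
}

const uint8_t *lfs_read(uint32_t logical_block) {
    /* Reads chase the map and may land anywhere in the log
     * (this is the fragmentation the answer refers to). */
    return disk[block_map[logical_block]];
}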
write()
Defining and implementing protocols for communication between computers is tedious and error-prone. RPC hides the process of defining a protocol from the programmer by abstracting each request for a remote computer to do something as a simple procedure call.
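A toy, in-process sketch of the idea (everything here is hypothetical, not a real RPC library's API): the caller invokes what looks like an ordinary procedure; a client stub marshals the arguments, "sends" them to the server, and returns the reply. A real system would ship the marshalled bytes over a socket instead.

#include <stdio.h>
#include <stdint.h>

/* Server-side implementation of the remote procedure. */
static int32_t server_add(int32_t a, int32_t b) { return a + b; }

/* Stand-in for the network: dispatch a marshalled request to the server. */
static int32_t transport(const int32_t request[2]) {
    return server_add(request[0], request[1]);
}

/* Client stub: to the caller, this is just a procedure call. */
static int32_t remote_add(int32_t a, int32_t b) {
    int32_t request[2] = { a, b };   /* marshal the arguments */
    return transport(request);       /* "send" and await the reply */
}

int main(void) {
    printf("%d\n", remote_add(2, 3)); /* prints 5 */
    return 0;
}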