Test 2 Review A: Questions

R2a.1.

Identify two reasons why shared-memory concurrency with locks/semaphores (as implemented by Java, for instance) is problematic, leading some to suggest that message-passing concurrency should be preferred.

R2a.2.

In a message-passing system, threads must often be able to select between different message categories: For instance, our Web server has a cache-management thread that sometimes receives query messages and at other times receives update messages. In the D programming language, how is a thread written to process such different categories in different ways?

R2a.3.

Why does D require that any parameter to send have a type that is marked as immutable?

R2a.4.

A typical concurrency setup is to have a fixed set of threads, each able to process the same varieties of jobs; one submits a job, and it is executed by the next available thread. In this way, several job instances can be worked on simultaneously.

For example, perhaps we have 10 different threads each capable of determining whether an integer is prime; when one submits an integer, the next available of these 10 threads determines its primality.

In English, describe how message-passing concurrency can be used to provide this functionality. That is, describe what threads you would need and how messages would be passed between them. Feel free to describe it in the particular context of primality testing.

R2a.5.

Complete the following D function so that any thread can send a message containing its own tid. Usually, there is no response to such a message, but when the number of waiting threads reaches threshold, the function should send a message containing true back to each of the accumulated tids.

To do this, you will want to maintain a list of waiting tids. You can use an array of type Tid[]. To add a new item to the list, you can use the ~= operator: “waiting ~= new_tid”.

void barrier(int threshold) {

Test 2 Review A: Solutions

R2a.1.
  • Shared-memory solutions lead to race conditions and deadlock, which are difficult to find.

  • With many processors/cores, giving each core efficient access to memory is quite challenging, involving complex issues such as cache coherency; it becomes impractical beyond, say, 16 cores. (One even gets into relaxed memory models, where one thread's code modifies one memory location and then another, yet another thread may observe the second location updated before the first.)

R2a.2.

The receive function takes a variable number of functions as parameters, each parameter function corresponding to a different message category. Each parameter function takes a different combination of parameter types. The receive function blocks until another thread sends a message, whereupon receive goes through all the parameter functions to see which one takes parameters matching the message contents; receive then invokes that function.

(In the Web server example, the cache-management thread's call to receive includes two functions as parameters: one corresponding to query messages, the other to update messages. The functions are distinguished by their parameter types: the first takes a thread ID and a string (representing the query filename), while the second takes two strings (the filename to update and the new value to associate with it).)
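
As a rough sketch of this (the names and the exact message layout here are assumptions for illustration, not the actual server code), the cache-management thread's loop might look like the following.

import std.concurrency;

void cacheThread() {
    string[string] cache;   // filename -> cached contents

    while (true) {
        receive(
            // query message: the requester's Tid plus the filename to look up
            (Tid requester, string filename) {
                send(requester, cache.get(filename, ""));
            },
            // update message: the filename plus the new value to associate with it
            (string filename, string newValue) {
                cache[filename] = newValue;
            }
        );
    }
}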

R2a.3.

If we could send non-immutable values, then we would in fact have a language in which programmers could accidentally develop shared-memory systems. In particular, one thread could pass a memory pointer to another thread, and the sending thread could then modify the contents of that memory; the change would be visible to the other thread.
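
As a minimal sketch of what the rule prevents (the names here are made up; the key point is that a plain int[] carries a reference to memory the sender could keep modifying), D rejects the first send below but accepts the second:

import std.concurrency;

void reader() {
    // Receives an immutable view of the data; neither thread can change it.
    auto data = receiveOnly!(immutable(int)[])();
}

void sender(Tid readerTid) {
    int[] mutableData = [1, 2, 3];
    // send(readerTid, mutableData);   // rejected: the receiver would hold a
    //                                 // pointer into memory this thread can
    //                                 // still modify -- shared memory by accident

    immutable(int)[] frozenData = [1, 2, 3];
    send(readerTid, frozenData);       // allowed: the data can never change
}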

R2a.4.

We would have 11 threads. The first is the “dispatcher thread,” to which any job requests are submitted, while the other 10 are the “worker threads.” The dispatcher thread maintains a list of ready worker threads and a list of pending job requests. (A code sketch follows the list below.)

  • When the dispatcher receives a job request, it checks the list of ready threads. If that list isn't empty, it removes one worker thread from the list and forwards the job request to that worker thread. If the list is empty, it adds the request to its list of pending job requests.
  • When a worker thread receives a job request, it executes the job, sends the result to the requesting thread, and sends a message to the dispatcher notifying it that it is ready for another job.
  • When the dispatcher receives notification that a worker thread is ready, it checks its list of pending job requests. If that list isn't empty, it removes one of the pending requests and sends it to the worker thread. If the list is empty, it adds the worker thread to its list of ready threads.
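
Here is a minimal sketch of that design in the primality-testing context (the message shapes and the names dispatcher and worker are assumptions for illustration):

import std.concurrency;
import std.typecons;

bool isPrime(int n) {
    if (n < 2) return false;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

void worker(Tid dispatcherTid) {
    while (true) {
        // A job: the requester's Tid and the integer to test.
        auto job = receiveOnly!(Tid, int)();
        send(job[0], job[1], isPrime(job[1]));  // result to the requester
        send(dispatcherTid, thisTid);           // tell the dispatcher we're ready
    }
}

void dispatcher(int numWorkers) {
    Tid[] ready;
    Tuple!(Tid, int)[] pending;

    foreach (i; 0 .. numWorkers)
        ready ~= spawn(&worker, thisTid);

    while (true) {
        receive(
            // a job request from some client thread
            (Tid requester, int n) {
                if (ready.length > 0) {
                    Tid w = ready[$ - 1];
                    ready = ready[0 .. $ - 1];
                    send(w, requester, n);
                } else {
                    pending ~= tuple(requester, n);
                }
            },
            // a worker announcing it is ready for another job
            (Tid workerTid) {
                if (pending.length > 0) {
                    auto job = pending[0];
                    pending = pending[1 .. $];
                    send(workerTid, job[0], job[1]);
                } else {
                    ready ~= workerTid;
                }
            }
        );
    }
}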
R2a.5.
import std.concurrency;

void barrier(int threshold) {
    Tid[] waiting = [];

    while (true) {
        // Each arriving thread sends its own Tid.
        Tid new_tid = receiveOnly!Tid();
        waiting ~= new_tid;
        // Once enough threads have arrived, release them all.
        if (waiting.length >= threshold) {
            foreach (Tid old_tid; waiting) {
                send(old_tid, true);
            }
            waiting = [];
        }
    }
}
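
For completeness, a thread participating in this barrier would do something like the following (barrierTid here is assumed to be the Tid of the thread running barrier):

    send(barrierTid, thisTid);   // register this thread at the barrier
    receiveOnly!bool();          // block until threshold threads have arrived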