Test 1
Problem X1.1.
[12 pts]
List all possible outputs for the below program,
and explain how each could occur.
public class ThreadExample extends Thread {
    private int x = 0;
    private int y = 1;
    private int z = 2;

    public void run() {
        y = 4;
        x += z;
    }

    public void doMain() {
        z = 8;
        x += y;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadExample sub = new ThreadExample();
        sub.start();  // kick off 'run'
        sub.doMain(); // simultaneously do 'doMain'
        sub.join();   // 'doMain' is complete; wait for 'run' to complete
        System.out.println(sub.x);
    }
}
In fact, I executed this program 10,000 times and got the
following outputs:

output | # runs |
9      | 9,692  |
6      | 297    |
8      | 6      |
12     | 4      |
4      | 1      |
To understand these, observe that run and doMain will each compile
to essentially five instructions.

   | run            | doMain         |
1. | store 4 at y   | store 8 at z   |
2. | load x into R0 | load x into R0 |
3. | load z into R1 | load y into R1 |
4. | add R1 into R0 | add R1 into R0 |
5. | store R0 at x  | store R0 at x  |
To get 9: Do all of doMain, which adds 1 to x, then do all of run,
which adds 8 more to x.

To get 6: Do all of run, which adds 2 to x, then do all of doMain,
which adds 4 more to x.
To get 8: Do steps 1 to 3 of doMain so R0 and R1 hold 0 and 1;
then do steps 1 to 3 of run so R0 and R1 hold 0 and 8;
then complete doMain, storing 1 into x;
then complete run, storing 8 into x.
To get 12: Do step 1 of run and of doMain; then complete run,
adding 8 to x; then complete doMain, adding 4 more to x.
To get 4: Do steps 1 to 3 of run so R0 and R1 hold 0 and 2;
then do steps 1 to 3 of doMain so R0 and R1 hold 0 and 4;
then complete run, storing 2 into x;
then complete doMain, storing 4 into x.
There are three less obvious outputs that are possible,
though they did not happen to occur on the testing computer.
To get 1: Do steps 1 to 3 of run so R0 and R1 hold 0 and 2,
but run's write to y is not seen by doMain, either because that
thread is working from another cache with an old y value, or
because the compiler decides to reorder instructions so that the
store to y comes last (since nothing in the program says that the
compiler is required to store to y before going on).
Then do steps 1 to 3 of doMain so R0 and R1 hold 0 and 1;
then complete run, storing 2 into x;
then complete doMain, storing 1 into x.
To get 2: Do steps 1 to 3 of doMain so R0 and R1 hold 0 and 1,
but doMain's write to z is not seen by run, either because that
thread is working from another cache with an old z value, or
because the compiler decides to reorder instructions so that the
store to z comes last (since nothing in the program says that the
compiler is required to store to z before going on).
Then do steps 1 to 3 of run so R0 and R1 hold 0 and 2;
then complete doMain, storing 1 into x;
then complete run, storing 2 into x.
To get 3: Suppose the compiler reorders instructions in run so
that they are 3–2–4–5–1. (Nothing in the program requires the
compiler to store to y first, and similarly the compiler can
decide to load z before it loads x.)
Start with step 3 of run so R1 holds 2;
then complete doMain, adding 1 to x;
then complete run, adding 2 to x.
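The tally can be reproduced with a small harness (a sketch, not part of the exam; the class name RaceTally and the tally method are ours), which inlines ThreadExample and re-runs the race. The mix of outputs is machine- and JVM-dependent.

```java
import java.util.Map;
import java.util.TreeMap;

public class RaceTally {
    // Inlined copy of the exam's ThreadExample.
    static class ThreadExample extends Thread {
        int x = 0, y = 1, z = 2;
        public void run() { y = 4; x += z; }
        void doMain() { z = 8; x += y; }
    }

    // Run the race n times and count how often each final x occurs.
    static Map<Integer, Integer> tally(int n) throws InterruptedException {
        Map<Integer, Integer> counts = new TreeMap<>();
        for (int i = 0; i < n; i++) {
            ThreadExample sub = new ThreadExample();
            sub.start();
            sub.doMain();
            sub.join();
            counts.merge(sub.x, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(tally(10000)); // exact counts vary between runs
    }
}
```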
Problem X1.2.
[6 pts]
Define the term race condition as it applies to
multithreaded applications.
A race occurs when the behavior of a program depends on the
scheduling of concurrently running processes or threads.
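A classic illustration (not from the exam; the class name RaceCounter is ours): two threads increment a shared counter without synchronization, and because each increment is a separate read, add, and write, interleavings can lose updates, so the final value depends on scheduling.

```java
public class RaceCounter {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100000; i++) {
                counter++; // read, add, write: not atomic
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // Often prints less than 200000; the exact value varies per run.
        System.out.println(counter);
    }
}
```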
Problem X1.3.
[16 pts]
“Event listeners” are a particularly common pattern found in
object-oriented programming. The below interface and class set up a
simple example, which features an integer value wrapped in the
MonitoredInteger class, where other objects can listen for changes
to the wrapped integer by implementing the MonitoredListener
interface and registering themselves with the MonitoredInteger.
public interface MonitoredListener {
    public void valueChanged(MonitoredInteger source, int oldVal, int newVal);
}

import java.util.ArrayList;

public class MonitoredInteger {
    private int value = 0;
    private ArrayList<MonitoredListener> listeners = new ArrayList<MonitoredListener>();
    private Object listenersLock = new Object();

    public void addListener(MonitoredListener listener) {
        synchronized (listenersLock) {
            listeners.add(listener);
        }
    }

    public void set(int newValue) {
        synchronized (listenersLock) {
            int oldValue = value;
            value = newValue;
            for (MonitoredListener listener : listeners) {
                listener.valueChanged(this, oldValue, newValue);
            }
        }
    }
}
The programmer has carefully wrapped accesses to the listeners
array, so that a call to addListener by one thread won't interfere
with the iteration through listeners by another thread calling set.
Unfortunately, this simple implementation can lead to deadlock if a
class's valueChanged method requests a different lock that might
already be held by another thread that wants listenersLock.
a. Without using locks on anything but myLock, complete the below
class so that it can lead to deadlock when one thread calls runA at
virtually the same time that another thread calls runB.

public class MyClass implements MonitoredListener {
    private MonitoredInteger myInt = new MonitoredInteger();
    private Object myLock = new Object();

    public MyClass() {
        myInt.addListener(this);
    }

    public void runA() { // called by thread A
    }

    public void runB() { // called by thread B
    }

    public void valueChanged(MonitoredInteger source, int oldVal, int newVal) {
    }
}
b. Explain how your above example class could lead to deadlock.

c. Show how to repair MonitoredInteger's use of listenersLock so
that this deadlock would not occur. (One way to repair it is to use
a ConcurrentLinkedQueue in place of ArrayList, but your answer
should preserve the ArrayList and simply be smarter about lock
acquisition.)
a.

public class MyClass implements MonitoredListener {
    private MonitoredInteger myInt = new MonitoredInteger();
    private Object myLock = new Object();

    public MyClass() {
        myInt.addListener(this);
    }

    public void runA() { // called by thread A
        myInt.set(4);
    }

    public void runB() { // called by thread B
        synchronized (myLock) {
            System.out.println("changing to 5");
            myInt.set(5);
        }
    }

    public void valueChanged(MonitoredInteger source, int oldVal, int newVal) {
        synchronized (myLock) {
            System.out.println("changed to " + newVal);
        }
    }
}
b. A thread enters runA, which enters set and acquires
listenersLock before a context switch to another thread, which
enters runB, acquires myLock, and then enters set. The runB thread
cannot proceed through set, since listenersLock is unavailable, so
we proceed back to the runA thread, which calls valueChanged and
thus requests myLock, which is held by the runB thread, and so the
runA thread must stall as well.
c. We can solve the problem by modifying set so that it creates a
copy of the listeners array while in the synchronized block, and
then it iterates through the copy for notifications outside the
synchronized block.

public void set(int newValue) {
    int oldValue;
    ArrayList<MonitoredListener> listenersCopy;
    synchronized (listenersLock) {
        oldValue = value;
        value = newValue;
        listenersCopy = new ArrayList<MonitoredListener>(listeners);
    }
    for (MonitoredListener listener : listenersCopy) {
        listener.valueChanged(this, oldValue, newValue);
    }
}
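For comparison (beyond what the problem asks for, since it drops the ArrayList): java.util.concurrent's CopyOnWriteArrayList takes this snapshot internally on every mutation, so iteration never holds a lock and listener callbacks are free to take their own locks. A hedged sketch, with the listener interface restated and the class renamed SafeMonitoredInteger so it stands alone:

```java
import java.util.concurrent.CopyOnWriteArrayList;

public class SafeMonitoredInteger {
    // Listener interface, restated here so the sketch is self-contained.
    public interface Listener {
        void valueChanged(SafeMonitoredInteger source, int oldVal, int newVal);
    }

    private int value = 0;
    private final CopyOnWriteArrayList<Listener> listeners =
            new CopyOnWriteArrayList<>();

    public void addListener(Listener listener) {
        listeners.add(listener); // copies the backing array internally
    }

    public void set(int newValue) {
        int oldValue;
        synchronized (this) { // protects only the value, not the callbacks
            oldValue = value;
            value = newValue;
        }
        // Iterates over a snapshot; no lock is held during the callbacks,
        // so a listener that takes its own lock cannot deadlock with set.
        for (Listener listener : listeners) {
            listener.valueChanged(this, oldValue, newValue);
        }
    }
}
```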
Problem X1.4.
[14 pts]
Using wait and notifyAll, and without any busy-waiting, complete
the requestStep and run methods below so that any thread's call to
requestStep will lead to run (being executed continuously by a
different thread) entering doStep; requestStep should return
without waiting for doStep to start.
public class Stepper implements Runnable {
    public void requestStep() {
    }

    public void run() {
        while (true) {
            // wait for next step request
            // next step request received - do it
            this.doStep();
        }
    }

    private void doStep() {
        // complex computation here - details unimportant to the problem
    }
}
public class Stepper implements Runnable {
    private boolean requested = false;           // ADDED
    private Object requestedLock = new Object(); // ADDED

    public void requestStep() {
        synchronized (requestedLock) {           // ADDED start
            requested = true;
            requestedLock.notifyAll();
        }                                        // ADDED end
    }

    public void run() {
        while (true) {
            // wait for next step request
            synchronized (requestedLock) {       // ADDED start
                while (!requested) {
                    try { requestedLock.wait(); } catch (InterruptedException e) { }
                }
                requested = false;
            }                                    // ADDED end
            // next step request received - do it
            this.doStep();
        }
    }

    private void doStep() {
        // complex computation here - details unimportant to the problem
    }
}
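A hypothetical driver for this pattern (the name StepperDemo and the step counter are ours; doStep is replaced by a counter so the effect is observable) shows that requestStep returns immediately while the stepper thread performs the step asynchronously:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class StepperDemo implements Runnable {
    private boolean requested = false;
    private final Object requestedLock = new Object();
    final AtomicInteger steps = new AtomicInteger(); // exposed for the demo

    public void requestStep() {
        synchronized (requestedLock) {
            requested = true;
            requestedLock.notifyAll();
        }
    }

    public void run() {
        while (true) {
            synchronized (requestedLock) {
                while (!requested) {
                    // In this demo, interruption just ends the stepper thread.
                    try { requestedLock.wait(); } catch (InterruptedException e) { return; }
                }
                requested = false;
            }
            doStep();
        }
    }

    private void doStep() { steps.incrementAndGet(); }

    public static void main(String[] args) throws InterruptedException {
        StepperDemo s = new StepperDemo();
        Thread t = new Thread(s);
        t.setDaemon(true); // so the demo can exit while run() loops forever
        t.start();
        s.requestStep();   // returns at once; the step happens asynchronously
        long deadline = System.currentTimeMillis() + 2000;
        while (s.steps.get() < 1 && System.currentTimeMillis() < deadline) {
            Thread.sleep(1); // demo-only polling to observe the step
        }
        System.out.println("steps taken: " + s.steps.get());
    }
}
```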
Problem X1.5.
[10 pts]
A baboon troop has only one food source, which is reached
from atop a precarious ladder that can support only one baboon at a time.
Describe an algorithm for the baboons to decide who
should get to use the ladder next when multiple baboons are hungry.
Explain how your algorithm avoids starvation.
Whichever baboon has been waiting the longest to use the
ladder should be the next to get it. As long as each baboon
who goes up the ladder eventually comes down, this will ensure that any
baboon will eventually get to use the ladder: For any hungry
baboon X, there will be some particular number of baboons who
started waiting for the ladder before X;
once all those have used the ladder, X will get to
use it.
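In Java, this policy can be sketched with a fair ReentrantLock, which grants the lock to the longest-waiting thread, exactly the longest-waiting-baboon-next rule (an illustration under assumed names, not part of the exam):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Ladder {
    // true = fair: waiting threads acquire the lock in FIFO order.
    private final ReentrantLock ladder = new ReentrantLock(true);

    public void climbAndEat(Runnable eat) {
        ladder.lock();       // wait, longest-waiting first, for the ladder
        try {
            eat.run();       // only one baboon on the ladder at a time
        } finally {
            ladder.unlock(); // come back down, letting the next baboon up
        }
    }
}
```

Fairness rules out starvation for the same reason given above: each hungry baboon has a fixed number of baboons ahead of it in the queue.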
Problem X1.6.
[12 pts]
Java's built-in Semaphore class implements semaphores, including a
constructor taking an int parameter giving the initial value, a
no-argument acquire method equivalent to the DOWN operation, and a
no-argument release method equivalent to the UP operation.

Using no other concurrency tools except the Semaphore class,
complete the following Monitor class so that any thread calling
await will stall in the method until some other thread calls
signal. Note that if multiple threads are stuck in await, then all
should awaken when another thread calls signal.
public class Monitor {
    public void await() {
    }

    public void signal() {
    }
}
import java.util.concurrent.Semaphore;

public class Monitor {
    private int numWaiting = 0;
    private Semaphore numWaitingLock = new Semaphore(1);
    private Semaphore waitingSemaphore = new Semaphore(0);

    public void await() throws InterruptedException {
        numWaitingLock.acquire();
        numWaiting++;
        numWaitingLock.release();
        waitingSemaphore.acquire();
    }

    public void signal() throws InterruptedException {
        numWaitingLock.acquire();
        for (int i = 0; i < numWaiting; i++) {
            waitingSemaphore.release();
        }
        numWaiting = 0;
        numWaitingLock.release();
    }
}
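A quick demonstration of this semaphore-based construction (a sketch; the Monitor is inlined with `throws InterruptedException` added since Semaphore.acquire throws it, and the demo's sleep durations are arbitrary): three threads block in await, and a single signal releases all of them.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class MonitorDemo {
    static class Monitor {
        private int numWaiting = 0;
        private final Semaphore numWaitingLock = new Semaphore(1);
        private final Semaphore waitingSemaphore = new Semaphore(0);

        public void await() throws InterruptedException {
            numWaitingLock.acquire();
            numWaiting++;
            numWaitingLock.release();
            waitingSemaphore.acquire();
        }

        public void signal() throws InterruptedException {
            numWaitingLock.acquire();
            for (int i = 0; i < numWaiting; i++) waitingSemaphore.release();
            numWaiting = 0;
            numWaitingLock.release();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Monitor m = new Monitor();
        AtomicInteger released = new AtomicInteger();
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    m.await();               // blocks until someone signals
                    released.incrementAndGet();
                } catch (InterruptedException e) { }
            }).start();
        }
        Thread.sleep(200); // demo-only: give the waiters time to block
        m.signal();        // one signal should awaken all three
        Thread.sleep(200); // demo-only: let the awakened threads finish
        System.out.println("released: " + released.get());
    }
}
```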
Problem X1.7.
[12 pts]
a. Explain a reason why user-level threads can potentially display
better performance than kernel-level threads.
b. Explain a reason why kernel-level threads can potentially display
better performance than user-level threads.
a. A context switch between user-level threads does not require
switching the CPU into kernel mode, whereas a context switch
between kernel-level threads incurs this expensive operation.
b. On a multicore or multiprocessor system, a kernel
can schedule kernel-level threads simultaneously on different
cores/processors; a user-level library does not have the
opportunity to perform such scheduling.
An alternative answer: With user-level threads, the OS may block
the process (and hence all user-level threads) while responding
to an I/O request from a thread, whereas the OS can still allow one
kernel-level thread to continue while another is blocked on an
I/O request.
Problem X1.8.
[10 pts]
We have four jobs arriving on the following schedule.
(All units are in hours.)

job | arrival | length |
A   | 0       | 10     |
B   | 4       | 3      |
C   | 8       | 6      |
D   | 11      | 1      |
a. List the FIFO schedule for these jobs.
b. List the SJF schedule for these jobs.
a. FIFO

time  | job |
0–10  | A   |
10–13 | B   |
13–19 | C   |
19–20 | D   |

b. SJF (preemptive)

time  | job |
0–4   | A   |
4–7   | B   |
7–11  | A   |
11–12 | D   |
12–14 | A   |
14–20 | C   |
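The preemptive SJF schedule (shortest remaining time first) can be checked with a small hour-by-hour simulation (our sketch, not part of the exam; names and method are ours). It emits the job that runs in each hour, with ties broken by job order:

```java
import java.util.ArrayList;
import java.util.List;

public class SrtfSim {
    // Returns one entry per hour naming the job that runs in that hour.
    public static List<String> schedule(String[] names, int[] arrival, int[] length) {
        int n = names.length;
        int[] remaining = length.clone();
        List<String> slots = new ArrayList<>();
        int time = 0, done = 0;
        while (done < n) {
            int pick = -1;
            for (int i = 0; i < n; i++) {
                // Among jobs that have arrived and are unfinished,
                // pick the one with the least remaining work.
                if (arrival[i] <= time && remaining[i] > 0
                        && (pick < 0 || remaining[i] < remaining[pick])) {
                    pick = i;
                }
            }
            if (pick < 0) { slots.add("-"); time++; continue; } // idle hour
            slots.add(names[pick]);
            if (--remaining[pick] == 0) done++;
            time++;
        }
        return slots;
    }

    public static void main(String[] args) {
        // The four jobs from the problem statement.
        System.out.println(schedule(new String[] {"A", "B", "C", "D"},
                                    new int[] {0, 4, 8, 11},
                                    new int[] {10, 3, 6, 1}));
    }
}
```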
Problem X1.9.
[8 pts]
In a system where some processes are I/O-bound while others are
compute-bound, how can we improve our scheduling over a simple
Round Robin approach to improve the overall system
performance?
We should prioritize the I/O-bound processes, so that whenever one
is ready for the CPU, we spend the small amount of processor time
it needs before it is again waiting on I/O. This keeps the I/O
devices more heavily used while having a negligible effect on CPU
usage.