Reading 22: Queues and Message-Passing
Software in 6.031
Safe from bugs | Easy to understand | Ready for change |
---|---|---|
Correct today and correct in the unknown future. | Communicating clearly with future programmers, including future you. | Designed to accommodate change without rewriting. |
Objectives
After reading the notes and examining the code for this class, you should be able to use message passing (with synchronous queues) instead of shared memory for communication between threads.
Two models for concurrency
In our introduction to concurrency, we saw two models for concurrent programming: shared memory and message passing.
- In the shared memory model, concurrent modules interact by reading and writing shared mutable objects in memory. Creating multiple threads inside a single Java process is our primary example of shared-memory concurrency.
- In the message passing model, concurrent modules interact by sending immutable messages to one another over a communication channel. That communication channel might connect different computers over a network, as in some of our initial examples: web browsing, instant messaging, etc.
The message passing model has several advantages over the shared memory model, which boil down to greater safety from bugs. In message-passing, concurrent modules interact explicitly, by passing messages through the communication channel, rather than implicitly through mutation of shared data. The implicit interaction of shared memory can too easily lead to inadvertent interaction, sharing and manipulating data in parts of the program that don’t know they’re concurrent and aren’t cooperating properly in the thread safety strategy. Message passing also shares only immutable objects (the messages) between modules, whereas shared memory requires sharing mutable objects, which we have already seen can be a source of bugs.
We’ll discuss in this reading how to implement message passing within a single process. We’ll use blocking queues (an existing threadsafe type) to implement message passing between threads within a process. Some of the operations of a blocking queue are blocking in the sense that calling the operation blocks the progress of the thread until the operation can return a result. Blocking makes writing code easier, but it also means we must continue to contend with bugs that cause deadlock.
In the next concurrency reading we’ll see how to implement message passing between client/server processes over the network.
Message passing with threads
We saw in Locks and Synchronization that a thread blocks trying to acquire a lock until the lock has been released by its current owner. Blocking means that a thread waits (without doing further work) until an event occurs. We can use this term to describe methods and method calls: if a method is a blocking method, then a call to that method can block, waiting until some event occurs before it returns to the caller.
We can use a queue with blocking operations for message passing between threads — and the buffered network communication channel in client/server message passing will work the same way.
Java provides the BlockingQueue interface for queues with blocking operations.
In an ordinary Queue:
- add(e) adds element e to the end of the queue.
- remove() removes and returns the element at the head of the queue, or throws an exception if the queue is empty.
A BlockingQueue extends this interface: it additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.
- put(e) blocks until it can add element e to the end of the queue (if the queue does not have a size bound, put will not block).
- take() blocks until it can remove and return the element at the head of the queue, waiting until the queue is non-empty.
When you are using a BlockingQueue for message passing between threads, make sure to use the put() and take() operations, not add() and remove().
We will implement the producer-consumer design pattern for message passing between threads. Producer threads and consumer threads share a synchronized queue. Producers put data or requests onto the queue, and consumers remove and process them. One or more producers and one or more consumers might all be adding and removing items from the same queue. This queue must be safe for concurrency.
Java provides two implementations of BlockingQueue:
- ArrayBlockingQueue is a fixed-size queue that uses an array representation. Putting a new item on the queue will block if the queue is full.
- LinkedBlockingQueue is a growable queue using a linked-list representation. If no maximum capacity is specified, the queue will never fill up, so put will never block.
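As a small illustration of these operations and of the producer-consumer pattern described below, here is a sketch of our own (not code from this reading) in which one thread puts messages on a shared queue while another thread takes them off:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
/** Minimal sketch of producer and consumer threads sharing a blocking queue. */
public class QueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // a growable queue: put() never blocks
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        // a bounded alternative, not used further here: put() blocks when 10 items are waiting
        BlockingQueue<String> bounded = new ArrayBlockingQueue<>(10);
        // consumer thread: take() blocks until a message is available
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String message = queue.take();
                    System.out.println("handled " + message);
                }
            } catch (InterruptedException ie) {
                // interrupted: stop consuming
            }
        });
        consumer.setDaemon(true); // let the program exit when main is done
        consumer.start();
        // producer: the main thread puts messages on the queue
        for (int i = 0; i < 5; i++) {
            queue.put("message " + i);
        }
        Thread.sleep(500); // crude: give the consumer time to drain the queue
    }
}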
Like other collections classes in Java, these synchronized queues can hold objects of an arbitrary type. We must choose or design a type for messages in the queue: we will choose an immutable type because our goal in using message passing is to avoid the problems of shared memory. Producers and consumers will communicate only by sending and receiving messages, and there will be no opportunity for (mis)communication by mutating an aliased message object.
And just as we designed the operations on a threadsafe ADT to prevent race conditions and enable clients to perform the atomic operations they need, we will design our message objects with those same requirements.
Bank account example
Our first example of message passing was the bank account example.
Each cash machine and each account is its own module, and modules interact by sending messages to one another. Incoming messages arrive on a queue.
We designed messages for get-balance and withdraw, and said that each cash machine checks the account balance before withdrawing to prevent overdrafts:
get-balance
if balance >= 1 then withdraw 1
But it is still possible to interleave messages from two cash machines so they are both fooled into thinking they can safely withdraw the last dollar from an account with only $1 in it.
We need to choose a better atomic operation: withdraw-if-sufficient-funds would be a better operation than just withdraw.
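As a sketch of what such a message might look like (the class name and fields below are our own invention, not part of the original design), the request could be an immutable object carrying the amount, so the account can check the balance and withdraw in one atomic step while handling it:
/** Hypothetical immutable message: withdraw the given amount only if the balance is sufficient. */
public class WithdrawIfSufficientFunds {
    private final int amount;
    public WithdrawIfSufficientFunds(int amount) {
        this.amount = amount;
    }
    /** @return the amount to withdraw, if the account balance is at least that large */
    public int amount() {
        return amount;
    }
}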
Implementing message passing with queues
You can see all the code for this example on GitHub. All the relevant parts are excerpted below.
Here’s a message passing module for squaring integers:
/** Squares integers. */
public class Squarer {
private final BlockingQueue<Integer> in;
private final BlockingQueue<SquareResult> out;
// Rep invariant: in, out != null
/** Make a new squarer.
* @param requests queue to receive requests from
* @param replies queue to send replies to */
public Squarer(BlockingQueue<Integer> requests,
BlockingQueue<SquareResult> replies) {
this.in = requests;
this.out = replies;
}
/** Start handling squaring requests. */
public void start() {
new Thread(new Runnable() {
public void run() {
while (true) {
// TODO: we may want a way to stop the thread
try {
// block until a request arrives
int x = in.take();
// compute the answer and send it back
int y = x * x;
out.put(new SquareResult(x, y));
} catch (InterruptedException ie) {
ie.printStackTrace();
}
}
}
}).start();
}
}
Incoming messages to the Squarer are integers; the squarer knows that its job is to square those numbers, so no further details are required.
Outgoing messages are instances of SquareResult:
/** An immutable squaring result message. */
public class SquareResult {
private final int input;
private final int output;
/** Make a new result message.
* @param input input number
* @param output square of input */
public SquareResult(int input, int output) {
this.input = input;
this.output = output;
}
@Override public String toString() {
return input + "^2 = " + output;
}
}
We would probably add additional observers to SquareResult so clients can retrieve the input number and output result.
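For example, those observers might be simple getters like these (a sketch; the method names are our own choice):
// these methods would go inside the SquareResult class shown above
/** @return the number that was squared */
public int input() {
    return input;
}
/** @return the square of the input number */
public int output() {
    return output;
}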
Finally, here’s a main method that uses the squarer:
public static void main(String[] args) {
BlockingQueue<Integer> requests = new LinkedBlockingQueue<>();
BlockingQueue<SquareResult> replies = new LinkedBlockingQueue<>();
Squarer squarer = new Squarer(requests, replies);
squarer.start();
try {
// make a request
requests.put(42);
// ... maybe do something concurrently ...
// read the reply
System.out.println(replies.take());
} catch (InterruptedException ie) {
ie.printStackTrace();
}
}
reading exercises
Write the rep invariant of SquareResult, as an expression that could be used in checkRep() below.
Use the minimum number of characters you can without any method calls in your answer.
private void checkRep() {
assert REP_INVARIANT;
}
Evaluate the following code review comments on the code above:
“The Squarer constructor shouldn’t be putting references to the two queues directly into its rep; it should make defensive copies.”
“Squarer.start() has an infinite loop in it, so the thread will never stop until the whole process is stopped.”
“Squarer can have only one client using it, because if multiple clients put requests in its input queue, their results will get mixed up in the result queue.”
Stopping
What if we want to shut down the Squarer so it is no longer waiting for new inputs?
One strategy is a poison pill: a special message on the queue that signals the consumer of that message to end its work.
To shut down the squarer, since its input messages are merely integers, we would have to choose a magic poison integer (everyone knows the square of 0 is 0 right? surely they’ll never ask for the square of 0? don’t use magic numbers) or use null (don’t use null). Instead, we might change the type of elements on the requests queue to an ADT:
SquareRequest = IntegerRequest + StopRequest
with operations:
input : SquareRequest → int
shouldStop : SquareRequest → boolean
and when we want to stop the squarer, we enqueue a SquareRequest where shouldStop returns true.
For example, in Squarer.start():
public void run() {
while (true) {
try {
// block until a request arrives
SquareRequest req = in.take();
// see if we should stop
if (▶▶A◀◀) { ▶▶B◀◀; }
// compute the answer and send it back
int x = ▶▶C◀◀;
int y = x * x;
out.put(new SquareResult(x, y));
} catch (InterruptedException ie) {
ie.printStackTrace();
}
}
}
reading exercises
If the operations above are implemented as instance methods, fill in the blanks in the run method…
Using the data type definition above:
SquareRequest = IntegerRequest + StopRequest
For each option below: is the snippet of code a correct outline for how you would implement this in Java, taking maximum advantage of static checking?
interface SquareRequest { ... }
class IntegerRequest implements SquareRequest { ... }
class StopRequest implements SquareRequest { ... }
class SquareRequest { ... }
class IntegerRequest { ... }
class StopRequest { ... }
class SquareRequest {
private final String requestType;
public static final String INTEGER_REQUEST = "integer";
public static final String STOP_REQUEST = "stop";
...
}
It is also possible to interrupt a thread by calling its interrupt() method. If the thread is blocked waiting, the method it’s blocked in will throw an InterruptedException (that’s why we have to try-catch that exception almost any time we call a blocking method). If the thread was not blocked, an interrupted flag will be set. The thread must both handle any InterruptedExceptions and check for the interrupted flag to see whether it should stop working.
For example:
public void run() {
// handle requests until we are interrupted
while ( ! Thread.interrupted()) {
try {
// block until a request arrives
int x = in.take();
// compute the answer and send it back
int y = x * x;
out.put(new SquareResult(x, y));
} catch (InterruptedException ie) {
// stop
break;
}
}
}
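Something must also call interrupt() on the squarer’s thread. One way to arrange that, as a sketch (the worker field and stop() method here are our own additions, not part of the reading’s code), is to keep a reference to the thread when it is started:
import java.util.concurrent.BlockingQueue;
/** Sketch of a squarer that can be stopped by interrupting its worker thread. */
public class StoppableSquarer {
    private final BlockingQueue<Integer> in;
    private final BlockingQueue<SquareResult> out;
    private Thread worker; // the request-handling thread, kept so we can interrupt it later
    public StoppableSquarer(BlockingQueue<Integer> requests, BlockingQueue<SquareResult> replies) {
        this.in = requests;
        this.out = replies;
    }
    /** Start handling squaring requests until interrupted. */
    public void start() {
        worker = new Thread(() -> {
            while ( ! Thread.interrupted()) {
                try {
                    int x = in.take();                   // block until a request arrives
                    out.put(new SquareResult(x, x * x)); // compute the answer and send it back
                } catch (InterruptedException ie) {
                    break; // interrupted while blocked: stop handling requests
                }
            }
        });
        worker.start();
    }
    /** Ask the squarer to stop handling requests. */
    public void stop() {
        worker.interrupt();
    }
}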
Thread safety arguments with message passing
A thread safety argument with message passing might rely on:
- Existing threadsafe data types for the synchronized queue. This queue is definitely shared and definitely mutable, so we must ensure it is safe for concurrency.
- Immutability of messages or data that might be accessible to multiple threads at the same time.
- Confinement of data to individual producer/consumer threads. Local variables used by one producer or consumer are not visible to other threads, which only communicate with one another using messages in the queue.
- Confinement of mutable messages or data that are sent over the queue but will only be accessible to one thread at a time. This argument must be carefully articulated and implemented. Suppose one thread has some mutable data to send to another thread. If the first thread drops all references to the data like a hot potato as soon as it puts them onto a queue for delivery to the other thread, then only one thread will have access to those data at a time, precluding concurrent access (see the sketch after this list).
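Here is a small self-contained sketch of that hand-off (the names are hypothetical, not from the reading): the producer builds a mutable list, puts it on the queue, and immediately drops its own reference.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
/** Sketch of the "hot potato" confinement argument for a mutable message. */
public class HandOffSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<List<String>> queue = new LinkedBlockingQueue<>();
        // producer side: build a mutable batch and hand it off
        List<String> batch = new ArrayList<>();
        batch.add("event-1");
        batch.add("event-2");
        queue.put(batch); // transfer the mutable list to the consumer
        batch = null;     // drop the hot potato: this thread never touches the list again
        // consumer side (here the same thread, just to keep the sketch short)
        List<String> received = queue.take();
        System.out.println(received);
    }
}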
In comparison to synchronization, message passing can make it easier for each module in a concurrent system to maintain its own thread safety invariants. We don’t have to reason about multiple threads accessing shared data if the data are instead transferred between modules using a threadsafe communication channel.
reading exercises
Leif Noad just started a new job working for a stock trading company:
public interface Trade {
public int numShares();
public String stockName();
}
public class TradeWorker implements Runnable {
private final Queue<Trade> tradesQueue;
public TradeWorker(Queue<Trade> tradesQueue) {
this.tradesQueue = tradesQueue;
}
public void run() {
while (true) {
Trade trade = tradesQueue.poll();
TradeProcessor.handleTrade(trade.numShares(), trade.stockName());
}
}
}
public class TradeProcessor {
public static void handleTrade(int numShares, String stockName) {
/* ... process the trade ... takes a while ... */
}
}
What are TradeWorkers?
Suppose we have several TradeWorkers processing trades off the same shared queue. Notice that we are not using BlockingQueue! Workers call poll to retrieve items from the queue.
Javadoc for Queue.poll()
Which of the following can happen?
Deadlock
The blocking behavior of blocking queues is very convenient for programming, but blocking also introduces the possibility of deadlock. In a deadlock, two (or more) concurrent modules are both blocked waiting for each other to do something. Since they’re blocked, no module will be able to make anything happen, and none of them will break the deadlock.
In general, in a system of multiple concurrent modules communicating with each other, we can imagine drawing a graph in which the nodes of the graph are modules, and there’s an edge from A to B if module A is blocked waiting for module B to do something. The system is deadlocked if at some point in time, there is a cycle in this graph. The simplest case is the two-node deadlock, A → B and B → A, but more complex systems can encounter larger deadlocks.
Deadlock is much more common with locks than with message-passing — but when the message-passing queues have limited capacity, and become filled up to that capacity with messages, then even a message-passing system can experience deadlock. A message passing system in deadlock appears to simply get stuck.
Let’s see an example of message-passing deadlock, using the same Squarer we’ve been using so far. This time, instead of using LinkedBlockingQueues that can grow arbitrarily (limited only by the size of memory), we will use the ArrayBlockingQueue implementation that has a fixed capacity:
private static final int QUEUE_SIZE = 100;
...
// make request and reply queues big enough to hold QUEUE_SIZE messages each
BlockingQueue<Integer> requests = new ArrayBlockingQueue<>(QUEUE_SIZE);
BlockingQueue<SquareResult> replies = new ArrayBlockingQueue<>(QUEUE_SIZE);
Many message-passing systems use fixed-capacity queues for performance reasons, so this is a common situation.
Finally, to create the conditions needed for deadlock, the client code will make N requests, to get the squares of the numbers from 1 to N, before checking for any of Squarer’s replies.
Here is the full code:
private static final int QUEUE_SIZE = 100;
private static final int N = 100;
/** Use a Squarer to square all the integers from 1 to N. */
public static void main(String[] args) throws IOException {
// make request and reply queues big enough to hold QUEUE_SIZE messages each
BlockingQueue<Integer> requests = new ArrayBlockingQueue<>(QUEUE_SIZE);
BlockingQueue<SquareResult> replies = new ArrayBlockingQueue<>(QUEUE_SIZE);
Squarer squarer = new Squarer(requests, replies);
squarer.start();
try {
// send the requests to square 1...N
for (int x = 1; x <= N; ++x) {
requests.put(x);
System.out.println(x + "^2 = ?");
}
// collect the replies
for (int x = 1; x <= N; ++x) {
System.out.println(replies.take());
}
} catch (InterruptedException ie) {
ie.printStackTrace();
}
}
It turns out that with N = 100 and QUEUE_SIZE = 100, the code above works and doesn’t reach a deadlock.
But as N grows larger and larger, our client is making many requests without reading any replies. If N is larger than QUEUE_SIZE, the replies queue fills up with unread replies. Then Squarer blocks trying to put one more reply into that queue, and it stops calling take on the requests queue. The client can continue putting more requests into the requests queue, but only up to the size of that queue. If there are more requests than can fit in that queue – i.e., when N is greater than 2*QUEUE_SIZE – then the client’s call to requests.put() will block too.
And now we have our deadly embrace. Squarer is waiting for the client to read some replies and free up space on the replies queue, but the client is waiting for Squarer to accept some requests and free up space on the requests queue. Deadlock.
Final suggestions for preventing deadlock
One solution to deadlock is to design the system so that there is no possibility of a cycle — so that if A is waiting for B, it cannot be that B was already (or will start) waiting for A.
Another approach to deadlock is timeouts. If a module has been blocked for too long (maybe 100 milliseconds? or 10 seconds? how to decide?), then you stop blocking and throw an exception. Now the problem becomes: what do you do when that exception is thrown?
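BlockingQueue already provides timed variants of its blocking operations, offer(e, timeout, unit) and poll(timeout, unit), which give up and return a failure value instead of blocking forever. Here is a sketch of how a client might use them (what to do on failure is our own choice here):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
/** Sketch of using timed queue operations instead of unbounded blocking. */
public class TimeoutSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> requests = new ArrayBlockingQueue<>(1);
        requests.put(1); // fill the queue so the next offer cannot succeed
        // offer() waits up to the timeout, then returns false instead of blocking forever
        if ( ! requests.offer(2, 100, TimeUnit.MILLISECONDS)) {
            System.out.println("gave up: request queue stayed full for 100 ms");
        }
        // poll() waits up to the timeout, then returns null if nothing arrived in time
        Integer item = requests.poll(100, TimeUnit.MILLISECONDS);
        System.out.println("took: " + item);
    }
}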
Summary
Rather than synchronize with locks, message passing systems synchronize on a shared communication channel, e.g. a stream or a queue.
Using blocking queues for communication between threads is a useful pattern for message passing within a single process.