6.033--Computer System Engineering

Suggestions for classroom discussion


Topic: Andrew D. Birrell. An introduction to programming with threads. Digital Equipment Corporation Systems Research Center Technical Report #35, January, 1989. 33 pages. (Appears also as chapter 4 of Greg Nelson, editor, Systems Programming with Modula-3, Prentice-Hall, 1991, pages 88-118.)

By J. H. Saltzer, February 19, 1996, from notes of a 1994 recitation.


This paper is actually a tutorial, rather than a research paper, so it is written in the style of a textbook. But it is a tutorial for people who have a lot more experience with systems programming than the average 6.033 student.

The paper presents the semantics of a particular thread package and discusses how to program useful things using those semantics, but it says nothing about how a thread package might be implemented. To tie the semantics to the lecture that describes thread implementation, and also to understand concretely what the semantics really mean, it is helpful to sketch an implementation in the course of discussion. The discussion suggestions below follow this implementation-centered theme. Note also that they really only explore the first ten pages of the paper, which is about as far into this paper as one can get in an undergraduate class in one hour.

  • Who is the author? (Birrell is at DEC Systems Research Center, a place where there seem to be a lot of smart systems people. We note that there is another paper with his name on it coming up next week, and that Lampson, the author of the Hints paper, was connected with DEC SRC.)

  • Whom does he acknowledge? (Schroeder, another author in our list, Lampson.)

  • Whom does he cite? (Guttag [MIT], Clark [MIT], Lampson, Saltzer [1966 paper!]. He seems to know where to find good stuff.)

  • What language is Birrell using? (Modula 2+. This is a problem, because most 6.033 students have never seen the particular syntax, and worse, only those who have taken 6.170 have seen a language with semantics like this. And even those who have may be confused by the syntax.)

  • So what does this mean?

                             VAR  x: INTEGER;
                              |   |     |
       I am defining a variable   |   This is its type
                                  |
                     This is its name
    

  • OK, that was too easy. How about this one:
                      PROCEDURE Fork (proc:Forkee; arg:REFANY): Thread;
                            |      |     |    |      |    |        |
    I am defining a procedure      |     |    |      |    |     This is the
                                   |     |    |      |    |     type of the
                    This is its name     |    |       \  /      value that
                                         |    |         |       the procedure
    This is the name of its first argument    |         |       returns
                                              |         |
         This is the type of its first argument      second argument
    

  • What's TYPE REFANY? (It is a built-in type, a pointer to anything.)

  • How about TYPE Forkee? (This is a type defined by the program we are looking at. The definition is
              TYPE Forkee = PROCEDURE(REFANY):REFANY
    That is, if we see something that is declared to be of type Forkee, we know that it will be a procedure that takes as an argument one pointer of unspecified type and returns as its value a pointer of unspecified type.)

  • Can you find anything that is declared to be of type Forkee? (Yes, it is buried in the PROCEDURE Fork definition. The first argument of the Fork procedure is a variable, confusingly named "proc", that has type Forkee.)

  • In those terms, what's this?
                 VAR t: Thread;
                     t:=  Fork(a,x);
    (Fork is a PROCEDURE. It takes a Forkee as its first argument and a REFANY as its second argument, and it returns a Thread as its value.)
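
    If the Modula 2+ declarations themselves are the obstacle, it can help to write a rough C equivalent on the board. This is only an analogy (the names below are invented, not Birrell's):

      /* TYPE Forkee = PROCEDURE(REFANY): REFANY                               */
      typedef void *(*Forkee)(void *);   /* pointer to a procedure that takes
                                            and returns an untyped pointer     */

      /* PROCEDURE Fork (proc: Forkee; arg: REFANY): Thread                    */
      typedef struct thread *Thread;     /* an opaque handle to a thread       */
      Thread Fork(Forkee proc, void *arg);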

  • Yes, but what does it do? (Lots of stuff. It helps here to draw a picture on the board of a thread table containing an entry for the current thread, and as you go through the ensuing steps, add to the table an entry for the newly forked thread; a rough C sketch of the same steps appears after this list.
    1. Create a parallel thread, consisting of
      • An entry in the thread table, including an array of registers
      • A stack, setting the Stack Pointer register in the new thread entry to point to it
      • Fabricate what looks like a call to procedure "a" (that was the first argument to Fork)
        1. Set the Program Counter register in the thread entry to the entry point of "a"
        2. Fabricate a frame at the beginning of the stack so that it looks like "a" was called from the thread manager and will return to the place in the thread manager where it handles the "thread done" case.
        3. Add to the stack frame so that it looks like the call to "a" was made with the argument x (that was the second argument to Fork)
    2. Make the returned value of Fork a thread object, probably implemented as a pointer to the newly created thread table entry.
    3. Set up the currently active registers to show that we are about to return to our caller, but don't actually return. Instead,
    4. Dump the registers into the thread table entry for the current thread and mark that thread "Ready".
    5. Invoke the thread scheduler to decide which thread to run.)
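
    A very rough sketch of these steps in C may make them concrete. It uses the POSIX ucontext calls (getcontext, makecontext, swapcontext) to stand in for the register manipulation; the names thread_table, toy_fork, and so on are invented for this discussion and are not Birrell's implementation:

      #include <ucontext.h>
      #include <stdlib.h>

      enum state { FREE, READY, RUNNING, BLOCKED, DONE };

      struct thread_entry {              /* one row of the thread table         */
          ucontext_t regs;               /* saved registers, including SP, PC   */
          char      *stack;
          enum state state;
          void     *(*proc)(void *);     /* the Forkee                          */
          void      *arg;
          void      *result;             /* filled in when proc returns         */
          void      *waiting_for;        /* mutex/condition blocked on, if any  */
      };

      #define MAXTHREADS 16
      #define STACKSIZE  (64 * 1024)
      struct thread_entry thread_table[MAXTHREADS];
      int cur;                           /* index of the running thread; a toy_init
                                            (not shown) marks entry 0 RUNNING for
                                            the initial thread                   */

      void scheduler(void);              /* sketched a little further on         */
      static void trampoline(int i);     /* sketched next: runs proc(arg)        */

      int toy_fork(void *(*proc)(void *), void *arg) {
          int i = 0;
          while (thread_table[i].state != FREE) i++;   /* new thread-table entry
                                                          (no error handling)    */
          thread_table[i].proc  = proc;
          thread_table[i].arg   = arg;
          thread_table[i].stack = malloc(STACKSIZE);   /* a stack of its own     */
          getcontext(&thread_table[i].regs);
          thread_table[i].regs.uc_stack.ss_sp   = thread_table[i].stack;
          thread_table[i].regs.uc_stack.ss_size = STACKSIZE;
          thread_table[i].regs.uc_link          = NULL;
          /* fabricate what looks like a call to proc(arg) on the new stack */
          makecontext(&thread_table[i].regs, (void (*)(void))trampoline, 1, i);
          thread_table[i].state   = READY;
          /* mark ourselves Ready, then let the scheduler decide who runs next */
          thread_table[cur].state = READY;
          scheduler();
          return i;                      /* the "Thread" handle: a table index   */
      }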

  • What happens when procedure "a" of the forked thread finally returns to its caller?
    1. Place returned value from a(x) in the "result" part of this thread entry
    2. deallocate the stack that this thread was using
    3. Mark this thread entry as "Done"
    4. Run the scheduler to find another thread to execute
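
    In the ucontext sketch above, these steps land in the trampoline that the fabricated stack frame "returns" into; again, the names are invented:

      /* trampoline(i) is where the fabricated call to proc(arg) begins and ends  */
      static void trampoline(int i) {
          thread_table[i].result = thread_table[i].proc(thread_table[i].arg);
          thread_table[i].state  = DONE;   /* mark this thread entry "Done"       */
          /* deallocating the stack must really be deferred: we are still running
             on it; a real package lets some other thread free it later           */
          scheduler();                     /* find another thread to execute      */
      }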

  • How does the result part get back to anyone who cares to see it? (If the original forker cares to see the result, it can call q := Join(t), where "t" is the handle of type Thread that Fork returned. If thread "t" is marked "Done", the thread manager returns the value in "result". If thread "t" isn't done, the thread manager blocks the current thread until the other thread moves to the "Done" state.)
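
    With real POSIX threads the whole Fork/Join round trip looks like this; pthread_create stands in for Fork and pthread_join for Join (a sketch for comparison, not Birrell's interface):

      #include <pthread.h>
      #include <stdio.h>

      static void *a(void *x) {              /* the Forkee: takes and returns a pointer */
          int *n = x;
          *n *= 2;
          return n;
      }

      int main(void) {
          int x = 21;
          void *q;
          pthread_t t;
          pthread_create(&t, NULL, a, &x);   /* t := Fork(a, x)                         */
          pthread_join(t, &q);               /* q := Join(t): blocks until a is Done    */
          printf("%d\n", *(int *)q);         /* prints 42                               */
          return 0;
      }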

  • The last step of the fork procedure implies that when you call Fork, you lose control of the processor, sort of like doing a yield. Why? (Actually, the paper isn't explicit on this point, and there is an alternative implementation choice, starting with step 3: Return to the caller. This approach is simpler, but it means that the newly-created thread won't have a chance to run until someone else chances to pass through the scheduler, no matter how important the new thread is. The implementation we chose gives the scheduler an immediate opportunity to decide whether to run the new thread, continue the old one, or in the light of this new thread's existence, move the processor's attention to something completely different.)

  • How does the scheduler work? (Almost nothing to it:
    1. Search the thread table for a thread that is Ready.
    2. Mark it Running
    3. Load registers from the thread table [since they include the Stack Pointer and the Program Counter, at the end of the register load we are running the new thread])
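
    Continuing the toy ucontext sketch, the scheduler really is just a table scan plus a register swap; swapcontext does both the "dump registers" and "load registers" halves in one call:

      void scheduler(void) {
          int old = cur, i;
          for (i = 1; i <= MAXTHREADS; i++) {
              int n = (old + i) % MAXTHREADS;        /* 1. round-robin scan for a    */
              if (thread_table[n].state != READY)    /*    thread that is Ready      */
                  continue;
              cur = n;
              thread_table[n].state = RUNNING;       /* 2. mark it Running           */
              /* 3. save our registers into our entry and load the chosen thread's;
                    when swapcontext returns, that thread is the one running         */
              swapcontext(&thread_table[old].regs, &thread_table[n].regs);
              return;
          }
          /* nothing Ready: a real package would sit in an idle loop here */
      }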

  • What are the semantics of mutual exclusion in Birrell's thread package?
    (
             VAR keepout:  Mutex;
             LOCK keepout  DO
                           ...       only 1 thread at a time will ever
                           ...       be in this region
                           END;
    
    Note that in the Modula 2+ language, case is significant: "Mutex" is the name of a library-defined type, and Birrell is using "mutex" as the name of an object that is of type "Mutex". Pedagogically, the combination of case-sensitivity and near-identical names for types and variables makes things very hard to follow. You can help relieve some of this confusion if, on the board, you resist the temptation to name a variable of type "Foo" with the name "foo".)
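
    The pthreads spelling of the same region may be easier to read, though C has no DO/END block that releases the lock for you on every exit path (the names are, again, just examples):

      #include <pthread.h>

      pthread_mutex_t keepout = PTHREAD_MUTEX_INITIALIZER;   /* VAR keepout: Mutex  */

      void critical_region(void) {
          pthread_mutex_lock(&keepout);     /* LOCK keepout DO                      */
          /* ... only one thread at a time will ever be in this region ...          */
          pthread_mutex_unlock(&keepout);   /* END -- in C you must remember this   */
      }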

  • So how does LOCK relate to the thread package? What does the implementation look like? (The LOCK keepout statement translates into a Test-and-set on the variable named "keepout". If the test-and-set succeeds, the program continues into the DO/END block. If it fails, the thread package proceeds as follows...
    1. Mark this thread Blocked.
    2. Put pointer to keepout in this thread's thread table entry
    3. Set the PC to point to the LOCK statement (PC-losering)
    4. Run the scheduler.
    Someday, someone--we haven't figured out who yet--must notice that keepout is no longer set and move this Blocked thread back to the Ready state. When the thread next starts running it will try to set the lock again. However, by the time it gets there, the lock may be locked again by someone else.)
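
    In the toy package, the acquire half might look like the sketch below, with a C11 atomic_flag standing in for the test-and-set instruction. (Birrell does not give this code; the names are invented, and a real preemptive implementation would also have to close the race between failing the test-and-set and going Blocked.)

      #include <stdatomic.h>

      struct mutex { atomic_flag set; };    /* initialize with ATOMIC_FLAG_INIT      */

      void toy_acquire(struct mutex *m) {
          while (atomic_flag_test_and_set(&m->set)) {  /* the LOCK's test-and-set     */
              thread_table[cur].state = BLOCKED;       /* 1. mark this thread Blocked */
              thread_table[cur].waiting_for = m;       /* 2. remember which mutex     */
              scheduler();                             /* 4. run the scheduler        */
              /* the while loop plays the role of step 3: when we run again we simply
                 retry the test-and-set, and it may well fail again                   */
          }
          thread_table[cur].waiting_for = NULL;        /* we hold the lock            */
      }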

  • But the lock never got unlocked. (Ah, yes, that is supposed to happen as we exit from the END of the DO/END block. So the right way to think of "LOCK m" is that it surrounds the DO/END block, with one call to the thread package at the beginning and another call to the thread package at the end. Note that the unlock could be done just by storing a zero in variable keepout, but this is actually the right time to scan the thread table to look for threads that are blocked on keepout and change them from Blocked to Ready. So this is the missing someone of the previous discussion. It might also be polite to invoke the scheduler--that is, yield--on the chance that releasing the lock has made it possible for a more important thread to run now.)
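
    And the matching release, which supplies the "missing someone" of the previous item; again an invented sketch:

      void toy_release(struct mutex *m) {
          int i;
          atomic_flag_clear(&m->set);                  /* store the zero in keepout    */
          for (i = 0; i < MAXTHREADS; i++)             /* scan the thread table for    */
              if (thread_table[i].state == BLOCKED &&  /* threads blocked on keepout   */
                  thread_table[i].waiting_for == m)
                  thread_table[i].state = READY;       /* ...and make them Ready       */
          thread_table[cur].state = READY;             /* the polite yield: give the   */
          scheduler();                                 /* scheduler a chance to switch */
      }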

  • So what is a CONDITION variable all about? (I want to wait for something. LOCK allows me to wait till a critical section has no other thread in it. We need a generalization of that idea.)

  • Example? (A data consumer is supposed to wait till some data structure is filled in. The thread that is supposed to fill it in hasn't even gotten around to thinking about it, much less LOCKing the structure. In the following example of the use of a condition variable, the producing thread is expected to set the variable "datum" to non-NIL and then perform Signal(oktogo) when the structure is ready.
           VAR dslock:  Mutex;
           VAR oktogo:  Condition;
           LOCK dslock DO
                       WHILE datum = NIL DO Wait(dslock, oktogo) END
                       ..
                       ..  use the structure
                       ..
                        datum := NIL;          (* mark structure empty again *)
                        END)
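
    The same consumer written against POSIX condition variables reads almost line for line the same; note that pthread_cond_wait takes the condition first and the mutex second (a comparison sketch, not Birrell's code):

      #include <pthread.h>

      pthread_mutex_t dslock = PTHREAD_MUTEX_INITIALIZER;   /* VAR dslock: Mutex      */
      pthread_cond_t  oktogo = PTHREAD_COND_INITIALIZER;    /* VAR oktogo: Condition  */
      void *datum = NULL;                                   /* the shared structure   */

      void consumer(void) {
          pthread_mutex_lock(&dslock);              /* LOCK dslock DO                 */
          while (datum == NULL)                     /* WHILE datum = NIL DO           */
              pthread_cond_wait(&oktogo, &dslock);  /*     Wait(dslock, oktogo)       */
          /* ... use the structure ... */
          datum = NULL;                             /* mark structure empty again     */
          pthread_mutex_unlock(&dslock);            /* END                            */
      }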

  • Exactly how does Wait work?
              Thread.Wait:  Save PC
                            set Thread.State to Blocked
                            put pointer to oktogo in thread table entry
                            unlock dslock
                            run scheduler
    when thread continues:  LOCK dslock
                            return to caller
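
    In the toy package, Wait might be sketched like this; a "Condition" here is represented by nothing more than a unique address, matching the idea of putting a pointer to oktogo in the thread table entry (invented names, with the refinements a real package needs omitted):

      void toy_wait(struct mutex *m, void *c) {   /* c: e.g. the address of oktogo    */
          thread_table[cur].waiting_for = c;      /* put pointer to oktogo in entry   */
          thread_table[cur].state = BLOCKED;      /* set this thread's state Blocked  */
          atomic_flag_clear(&m->set);             /* unlock dslock (a fuller sketch
                                                     would also wake m's waiters)     */
          scheduler();                            /* give up the processor            */
          /* when we run again, someone has signalled c; re-acquire dslock, possibly
             blocking yet again, before returning to the caller                       */
          toy_acquire(m);
      }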

  • Why did it unlock dslock? (To allow another thread to get at the data to fill it in while this thread is blocked.)

  • What if the scheduler later decides that oktogo has been signalled, and returns control to this thread, but meanwhile someone else has locked dslock? How can this thread continue? (It can't. There is a possibility that we will have to go blocked a second time, using the lock protocol we explored a little earlier. General observation about thread coordination: being awakened is only a hint that it may be possible to proceed, not a guarantee that the path is clear.)

  • And how about the producer side, which invokes SIGNAL?
            Producer:  LOCK dslock DO
                                   fill in data
                                   Signal(oktogo)
                                   END
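
    The pthreads spelling of the producer, continuing the consumer sketch above:

      void producer(void *newdata) {
          pthread_mutex_lock(&dslock);            /* LOCK dslock DO   */
          datum = newdata;                        /* fill in data     */
          pthread_cond_signal(&oktogo);           /* Signal(oktogo)   */
          pthread_mutex_unlock(&dslock);          /* END              */
      }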

  • Sketch an implementation of Thread.Signal.
            Thread.Signal:  Search thread table for pointer to oktogo
                            set that thread's state to Ready
                            return
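
    And the corresponding toy-package sketch, one more table scan:

      void toy_signal(void *c) {
          int i;
          for (i = 0; i < MAXTHREADS; i++)             /* search the thread table      */
              if (thread_table[i].state == BLOCKED &&  /* for a thread waiting on c    */
                  thread_table[i].waiting_for == c) {
                  thread_table[i].state = READY;       /* set that thread's state      */
                  break;           /* wake just one; a Broadcast would omit the break   */
              }
      }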

  • What if there are several consumers, and one of the others has already set datum back to NIL? (That is why the consumer is in a WHILE loop--when the Wait returns, the consumer must test the condition to see whether there really is data available to consume. Again, a signal is only a hint that indicates that your condition might be satisfied now. You must always check to verify that it actually is. Note that the consumer checks the condition under the protection of dslock, so that if there is now data available, no other consumer thread will grab it out from under this thread.)

Comments and suggestions: Saltzer@mit.edu