6.033: Computer System Engineering - Spring 2004

Quiz 1 Review Notes


  1. Classes of problems
  2. Sources
  3. Coping
  4. Symptoms

Q: The primary way that hierarchy reduces complexity is that it reduces the size of the individual modules. TRUE / FALSE

A: FALSE: Hierarchy has no direct bearing on module size. Instead, it provides structure for the interconnections between modules. Its primary impact on complexity comes from constructing stable subassemblies and thus reducing the number of modules that interact at each assembly point.

Q: The primary way that modularity reduces complexity is that it eliminates incommensurate scaling. TRUE / FALSE

A: FALSE: Modularity has no effect on incommensurate scaling. Its impact on complexity comes from reducing the number of interconnections between modules.

We've discussed 4 principles in dealing with complexity: modularity, abstraction, hierarchy, and layering. For each system organization below, choose the most important principle it uses or implements that makes it effective and explain briefly why the system organization is an example of that principle.

  1. Protocol Stack: Layering. Levels transform one set of functionalities into another. Levels also interact only with adjacent levels. Protocol stacks are organized in exactly this manner.
  2. Client/Server: Modularity. The client and server run in separate, isolated address spaces, so they do not share fate.
  3. Virtual Memory: Abstraction. Hides the management of disk, page table, etc from the user giving the appearance of a single address space. Also, Modularity. Virtual memory isolates processes from one another by giving them separate and protected address spaces.
  4. RPC: Abstraction. RPC makes client/server computing look like local computing by hiding it beneath a procedure call, concealing the communication mechanism from the user.
  5. Microkernel: Modularity. Microkernel partitions an OS's functionality into isolated user-level modules (like the file service, etc) thereby minimizing and making well-defined the interactions between the partitions.

Virtual Memory

Virtual Memory Manager (VMM)
What if we want more than one address space? How do we manage address spaces? We must be careful when transferring control to the kernel. Perform these three steps atomically:
  1. User mode -> Kernel mode
  2. Save PMAR & reload with address of kernel page table
  3. Save PC & load with kernel entry point
(In review, covered parts of the solution to 2002 quiz 1 question on virtual memory. [Questions 4-10.] Solution can be obtained from the 6.033 2002 site.)


Threads

Purpose: enforce modularity for the CPU.
  1. When do threads switch?
  2. Relation of threads to address spaces
  3. User versus kernel threads
  4. Coordination and communication among threads: polling vs. interrupts, coordinating access to variables, deadlock

Resource allocation

To deal with memory-access bottlenecks, caches are often used: they reduce lookup latency by exploiting locality of reference.

There is an inverse relationship between memory access speed and capacity (for instance, RAM is expensive and fast, while disks are much cheaper but slower).

If you can't fit all of your data into one storage layer (i.e., your RAM), a multi-level cache hierarchy is often used (such as using a chunk of memory to store parts of the disk for quick access).
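As an illustration, here is a minimal sketch of one cache layer in front of a slower store. The names are made up for this example; `slow_read` stands in for a disk access, and FIFO eviction is used only to keep the sketch short.

```python
# Sketch of a single-level cache in front of a slow store (hypothetical
# names; slow_read stands in for a disk access; FIFO eviction for brevity).
def make_cached_reader(slow_read, capacity=4):
    cache = {}            # block number -> data
    order = []            # insertion order, used for FIFO eviction
    def read(block):
        if block in cache:            # cache hit: no slow access needed
            return cache[block]
        data = slow_read(block)       # cache miss: go to the slow layer
        if len(cache) >= capacity:    # full: evict the oldest entry
            cache.pop(order.pop(0))
        cache[block] = data
        order.append(block)
        return data
    return read
```

Repeated reads of the same block hit the in-memory dictionary instead of going back to the slow layer.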

To predict access time with caches, we can use the following equation:

  If the cache hit rate is h:
  Average lookup latency = h*(cache access time) + (1-h)*(cache miss time)
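The equation above can be computed directly; the numbers here (10 ns cache access, 100 ns miss time) are made-up examples, not figures from the notes.

```python
# Average lookup latency for hit rate h (hypothetical timings:
# 10 ns cache access, 100 ns cache miss).
def avg_latency(h, cache_time=10.0, miss_time=100.0):
    return h * cache_time + (1 - h) * miss_time

# A 90% hit rate gives roughly 0.9*10 + 0.1*100 = 19 ns on average.
```

Note how even a modest miss rate dominates the average when the miss time is much larger than the hit time.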
Dealing with read latency on disks
There are three factors in read time:
  1. Seek latency (the head must find the track)
  2. Rotational latency (the sector must come around under the head; on average, halfway around the track)
  3. Transfer time (how long it takes to read the bits off the sector, set by the read throughput)
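Summing the three factors gives an estimate of total read time. The drive parameters below are hypothetical, chosen only to make the sketch concrete.

```python
# Estimated time (in ms) to read one sector, combining the three factors
# above. All drive parameters are hypothetical example values.
def disk_read_time(seek_ms, rpm, sector_bytes, throughput_bytes_per_s):
    rotation_ms = 60_000 / rpm              # time for one full rotation
    rotational_ms = rotation_ms / 2         # on average, half a rotation
    transfer_ms = sector_bytes / throughput_bytes_per_s * 1000
    return seek_ms + rotational_ms + transfer_ms
```

For a 7200 RPM disk, a full rotation takes about 8.3 ms, so rotational latency alone averages roughly 4.2 ms.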

Page replacement algorithms: There are a bunch of algorithms for determining which objects to keep in the cache. If you have perfect knowledge of the order of future accesses, you can pick the optimal replacement: evict the page whose next use is furthest in the future.

Since in most instances you can't predict the future, there are a few approximation algorithms:

Often, people use the "clock" algorithm to approximate LRU.
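The clock algorithm can be sketched as follows. This is a minimal illustration (the class name and interface are invented here): each frame has a use bit that is set on access, and the clock hand sweeps around clearing bits until it finds a frame whose bit is already 0, which is the one evicted.

```python
# Minimal sketch of the clock algorithm (second-chance approximation of
# LRU). Class name and interface are hypothetical, for illustration only.
class ClockCache:
    def __init__(self, nframes):
        self.frames = [None] * nframes   # page held in each frame
        self.use = [0] * nframes         # reference ("use") bits
        self.hand = 0                    # clock hand position

    def access(self, page):
        """Touch a page; return True on hit, False on miss."""
        if page in self.frames:          # hit: set the use bit
            self.use[self.frames.index(page)] = 1
            return True
        while True:                      # miss: sweep until a use bit is 0
            if self.use[self.hand] == 0:
                self.frames[self.hand] = page   # evict and replace
                self.use[self.hand] = 1
                self.hand = (self.hand + 1) % len(self.frames)
                return False
            self.use[self.hand] = 0      # clear bit: give a second chance
            self.hand = (self.hand + 1) % len(self.frames)
```

Pages that are touched again before the hand comes back around survive, which is how the algorithm approximates "least recently used" without timestamps.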

Another scheduling problem is the CPU. Given a preemptive processor, one must decide which process to run next and for how long.
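As a toy illustration of preemptive time-slicing, here is a round-robin sketch. The workload representation (remaining time units per process) and function name are invented for this example.

```python
# Round-robin CPU scheduling sketch: each process runs for at most one
# quantum, then goes to the back of the ready queue. The workload is a
# hypothetical mapping of process name -> total time units needed.
from collections import deque

def round_robin(burst_times, quantum):
    ready = deque(burst_times.items())   # (name, remaining time) pairs
    schedule = []                        # order in which processes run
    while ready:
        name, remaining = ready.popleft()
        schedule.append(name)            # run this process for one quantum
        remaining -= quantum
        if remaining > 0:                # not finished: requeue at the back
            ready.append((name, remaining))
    return schedule
```

For example, two processes needing 2 and 1 units with a quantum of 1 would run in the order A, B, A.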

Various topics dealing with CPU schedulers:

Also, there are various disk schedulers that determine the order in which pending requests are serviced:


See the slides.
6.033 home // Questions or comments: 6.033-help // Last updated $Date: 2004/03/05 00:14:34 $ by $Author: ctl $