6.033: Computer System Engineering - Spring 2004
Quiz 1 Review Notes
Complexity
Problems (classes of)
- Emergent properties
- Propagation of effects
- Incommensurate scaling
Sources
- New requirements
- New performance targets
Coping
- Modularity
- Abstraction
- Hierarchy
- Layering
Symptoms
- # of components
- # of interconnects
- irregularities, exceptions
- no concise description
- > 1 person
Questions
Q: The primary way that hierarchy reduces complexity is that it
reduces the size of the individual modules. TRUE / FALSE
A: FALSE: Hierarchy has no direct bearing on module size. Instead, it
provides structure for the interconnections between modules. Its
primary impact on complexity comes from constructing stable
subassemblies and thus reducing the number of modules that interact at
each assembly point.
Q: The primary way that modularity reduces complexity is that it eliminates incommensurate scaling. TRUE / FALSE
A: FALSE: Modularity has no direct effect on incommensurate scaling.
Its impact on complexity comes from reducing the number of
interconnections between modules.
We've discussed four principles for dealing with complexity:
modularity, abstraction, hierarchy, and layering. For each system
organization below, choose the most important principle it uses or
implements that makes it effective, and explain briefly why the system
organization is an example of that principle.
- Protocol Stack: Layering. Each level transforms one set of
functionalities into another, and levels interact only with adjacent
levels. Protocol stacks are organized in exactly this manner.
- Client/Server: Modularity. Isolates the client and the server in
separate address spaces, so they don't share fate.
- Virtual Memory: Abstraction. Hides the management of the disk, page
table, etc. from the user, giving the appearance of a single address
space. Also, Modularity: virtual memory isolates processes from one
another by giving them separate and protected address spaces.
- RPC: Abstraction. RPC makes client/server computing look like local
computing by hiding it beneath a procedure call, concealing the
communication mechanism from the user.
- Microkernel: Modularity. A microkernel partitions an OS's
functionality into isolated user-level modules (like the file
service), thereby minimizing and making well-defined the interactions
between the partitions.
Virtual Memory
- Need it to enforce modularity.
- A program can reference any memory location, so we need a way to keep it from touching memory that isn't its own.
- Idea: control which physical memory locations are accessible.
- Interpose a mapping from VAs to PAs.
Virtual Memory Manager (VMM)
- Located in between the processor and physical memory.
- Recording one entry for every VA is too expensive: the map would need an entry per address, making it comparable in size to the memory it describes.
- Typically, we map fixed-size chunks called pages, which map to blocks of physical memory. [see figure 2-11 in the notes]
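As a concrete illustration, here is a minimal sketch in C of
page-based translation. The page size, map contents, and names are
invented for illustration; a real VMM would also fault on an
out-of-range or unmapped page.

    #include <stdio.h>

    #define PAGE_SIZE 4096   /* assumed 4 KB pages */
    #define NPAGES    16     /* toy address space: 16 virtual pages */

    /* page_map[v] holds the physical block backing virtual page v */
    unsigned page_map[NPAGES] = { 7, 3, 12 };   /* rest default to 0 */

    unsigned translate(unsigned va) {
        unsigned vpage  = va / PAGE_SIZE;   /* which virtual page */
        unsigned offset = va % PAGE_SIZE;   /* offset within that page */
        return page_map[vpage] * PAGE_SIZE + offset;
    }

    int main(void) {
        /* VA 5000 is page 1, offset 904 -> PA 3*4096 + 904 = 13192 */
        printf("VA %u -> PA %u\n", 5000u, translate(5000u));
        return 0;
    }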
What if we want more than 1 Address Space?
- Augment the processor with a page-map address register (PMAR). [see figure 2-13 in the notes]
How do we Manage Address Spaces?
- Use a trusted intermediary: the kernel.
- The kernel has its own address space.
- Only the kernel can access the page maps and write to the PMAR.
We need to be careful about transferring control to the kernel.
Perform these three steps atomically:
- User mode -> Kernel mode
- Save PMAR & reload with address of kernel page table
- Save PC & load with kernel entry point
(In review, covered parts of the solution to 2002 quiz 1 question on
virtual memory. [Questions 4-10.] Solution can be obtained from the
6.033 2002 site.)
Threads
Threads' purpose: enforce modularity for the CPU
- compare to the VMM's purpose: enforce modularity for memory
- thread represents state of running program
- threads can be stopped and resumed
- kernel can use time sharing to simulate many virtual CPUs with only one physical CPU.
When do threads switch?
- preemptive vs non-preemptive
- in preemptive system: when timeslices run out
- in either: when thread is blocking on an event
Relation of threads to address spaces:
- multiple threads in a single AS share memory, hence share
variables, hence share fate
- if each AS corresponds to a single thread and vice versa (processes), they do not share fate
User versus kernel threads:
- Key difference is where the scheduler is implemented: by the user program or by the kernel
- Threading systems can be layered/nested
Coordination and communication among threads:
- problem: race conditions
- wait/notify (see the sketch after this list)
- event counts
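A minimal wait/notify sketch using POSIX threads; the shared flag and
thread names are made up for illustration. Note the while loop: the
waiter re-checks the condition after waking, which guards against
races.

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    int ready = 0;                      /* the shared condition */

    void *waiter(void *arg) {
        pthread_mutex_lock(&m);
        while (!ready)                  /* re-check after every wakeup */
            pthread_cond_wait(&c, &m);  /* atomically release m and sleep */
        pthread_mutex_unlock(&m);
        printf("event observed\n");
        return NULL;
    }

    void *notifier(void *arg) {
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_cond_signal(&c);        /* the "notify" */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, waiter, NULL);
        pthread_create(&t2, NULL, notifier, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }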
Polling vs interrupts
- reason to choose one over the other: frequency of events vs time it takes to process an event
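For example (with invented numbers): if handling an interrupt costs
about 2 microseconds of overhead and events arrive every microsecond,
interrupts can't keep up and polling in a tight loop is cheaper; if
events arrive once a second, almost every poll is wasted and
interrupts win.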
Coordinating access to variables
- mutual exclusion
- locks: acquire, release
Deadlock
- simplest scenario: thread 1 does acquire(L1), acquire(L2), and
thread 2 does acquire(L2), acquire(L1). Can deadlock if one thread is
preempted in between acquiring locks.
- one way to avoid: number all locks, and always acquire them in ascending order
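A sketch of the ordering rule using POSIX mutexes, with L1 and L2 as
in the scenario above (the function name is hypothetical):

    #include <pthread.h>

    pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;   /* lock #1 */
    pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;   /* lock #2 */

    /* Every thread acquires in ascending lock number, so the cycle
       (thread 1 holds L1 waiting on L2 while thread 2 holds L2
       waiting on L1) can never form. */
    void update_shared_state(void) {
        pthread_mutex_lock(&L1);    /* always L1 first... */
        pthread_mutex_lock(&L2);    /* ...then L2, in every thread */
        /* ... touch the shared state ... */
        pthread_mutex_unlock(&L2);
        pthread_mutex_unlock(&L1);
    }

    int main(void) { update_shared_state(); return 0; }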
Resource allocation
To deal with memory access bottlenecks, caches are often used: they
exploit locality of reference to reduce average lookup latency.
There is an inverse relationship between memory access speed and
capacity (for instance, RAM is expensive and fast, while disks are
much cheaper but slower).
If you can't fit all of your data into one storage layer (i.e., your
RAM), a multi-level hierarchy is often used, such as dedicating a
chunk of memory to hold parts of the disk for quick access.
To predict access time with caches, we can use the following equation:
If the cache hit rate is h:
Average lookup latency = h*(cache access time) + (1-h)*(cache miss time)
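For example (made-up numbers): with h = 0.9, a 10 ns cache access, and
a 100 ns miss, the average lookup latency is 0.9*10 + 0.1*100 = 19 ns.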
Dealing with read latency on disks
There are three factors in read time:
- seek latency (the head must move to the right track)
- rotational latency (the sector must rotate under the head: on average, half way around the track)
- transfer time (how long it takes to read the bits off the sector, set by the read throughput)
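A worked example with assumed numbers: a 7200 RPM disk completes one
rotation in 60/7200 s, or about 8.3 ms, so the average rotational
latency (half a rotation) is about 4.2 ms; adding, say, an 8 ms
average seek and 0.1 ms to transfer the sector gives roughly 12.3 ms
per random read.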
Page replacement algorithms:
There are a number of algorithms for determining which objects to
keep in the cache. If you have perfect knowledge of the order of
future accesses, you can pick the optimal replacement ordering: always
evict the object whose next use is furthest in the future. Since in
most instances you can't predict the future, a few approximation
algorithms are used instead:
- LRU (least recently used)
- MRU (most recently used)
- FIFO (first in first out)
Often, people use the "clock" algorithm to approximate LRU.
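Here is a minimal sketch of the clock algorithm in C. The frame count
is invented, and in a real VMM the use bits would be set by the
hardware on each reference rather than by hand as in main():

    #include <stdio.h>

    #define NFRAMES 8

    int used[NFRAMES];      /* "use" bit: set when the page is referenced */
    int hand = 0;           /* the clock hand sweeps over the frames */

    /* Pick a frame to evict, giving recently used pages a second chance. */
    int clock_evict(void) {
        for (;;) {
            if (!used[hand]) {          /* not referenced since last sweep */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            used[hand] = 0;             /* clear the bit: second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        used[0] = used[1] = 1;          /* frames 0 and 1 recently used */
        printf("evict frame %d\n", clock_evict());  /* prints: evict frame 2 */
        return 0;
    }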
Another scheduling problem is the CPU: given a preemptive processor,
the scheduler must figure out which processes to run and how long to
run them for.
Various topics dealing with CPU schedulers:
- Round Robin (see the sketch below)
- Priority Inversion
- Shortest Job First
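A round-robin sketch in C, cycling over a fixed set of threads with a
fixed time slice; the thread workloads and the quantum are invented:

    #include <stdio.h>

    #define NTHREADS 3
    #define QUANTUM  10   /* ms; the assumed time slice */

    int remaining[NTHREADS] = { 25, 10, 40 };   /* ms of work left per thread */

    int main(void) {
        int done = 0, cur = 0;
        while (done < NTHREADS) {
            if (remaining[cur] > 0) {
                int run = remaining[cur] < QUANTUM ? remaining[cur] : QUANTUM;
                remaining[cur] -= run;              /* run for one slice */
                printf("thread %d runs %d ms\n", cur, run);
                if (remaining[cur] == 0) done++;
            }
            cur = (cur + 1) % NTHREADS;             /* preempt: next thread */
        }
        return 0;
    }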
Also, there are various disk schedulers that determine the order in
which to service pending requests:
- FIFO
- Elevator (see the sketch below)
- Shortest Seek First
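A sketch of the elevator policy in C: service pending requests in
track order in the current direction of travel, then reverse. The
head position and request list are invented:

    #include <stdio.h>
    #include <stdlib.h>

    /* Compare two track numbers, for qsort. */
    int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int head  = 50;                          /* current head position */
        int req[] = { 95, 20, 60, 10, 80 };      /* pending track requests */
        int n = sizeof req / sizeof req[0];

        qsort(req, n, sizeof req[0], cmp);
        /* Sweep upward from the head position... */
        for (int i = 0; i < n; i++)
            if (req[i] >= head) printf("service track %d\n", req[i]);
        /* ...then reverse direction and sweep back down. */
        for (int i = n - 1; i >= 0; i--)
            if (req[i] < head) printf("service track %d\n", req[i]);
        return 0;
    }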
Networks
See the slides.