For each paper, by midnight the evening before lecture:
List (and explain in one or two sentences each) three items from the two assigned papers that were not considered in the original design of the Internet, but which became important concerns later.
(1) Security considerations (many possible examples) (2) Mobility (3) BGP / scalability
1. The congestion avoidance method in this paper increases the congestion window (cwnd) by 1/cwnd on each arriving ACK. Explain why this method increases cwnd linearly in time.
2. This paper describes a congestion control scheme where packet losses are the only explicit signal that congestion has occurred. When a queue is full and a packet arrives at a router ("gateway"), suppose the router can drop either the arriving packet, or a packet at the very front of the queue. Which approach would you choose, and why?
3. If a router never drops packets, what happens to the delays experienced by packets?
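For question 1, a small illustrative simulation (not from the paper) shows why per-ACK increments of 1/cwnd amount to roughly one packet of growth per RTT, i.e., linear growth in time:

```python
# Illustrative sketch (not from the paper): additive increase where every
# arriving ACK grows cwnd by 1/cwnd. With roughly cwnd ACKs arriving per
# RTT, cwnd grows by about 1 packet per RTT -- linearly in time.
def simulate(rtts, cwnd=10.0):
    history = [cwnd]
    for _ in range(rtts):
        for _ in range(int(cwnd)):   # ~one ACK per in-flight packet
            cwnd += 1.0 / cwnd
        history.append(cwnd)
    return history

h = simulate(50)
print(h[1] - h[0], h[-1] - h[-2])    # both close to 1 packet per RTT
```

The per-RTT growth stays near 1 packet whether cwnd is 10 or 60, which is exactly the linear-in-time behavior the question asks about.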
1. What happens to the drop probability in PIE if the queueing delay is larger than (say, twice) the target delay (QDELAY_REF) for a very long time?
2. If you have a collection of TCP connections from different sources concurrently sharing a bottleneck link whose router runs PIE, would each connection get the same throughput?
3. In your opinion, what was the most compelling idea in the XCP paper?
1. The drop probability will keep increasing until it saturates at 1.
2. Recall the TCP throughput formula. Assuming the same RTTs for the connections, the connections will have the same throughput (on average) because the probability of packet drop for each flow is the same.
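Answer 1 can be seen directly from PIE's periodic drop-probability update. The sketch below is simplified and the parameter values are illustrative, not the defaults from the paper:

```python
# Simplified sketch of PIE's update rule:
#   p += alpha*(qdelay - QDELAY_REF) + beta*(qdelay - qdelay_old),
# clamped to [0, 1]. If qdelay sits at 2x the target forever, the alpha
# term is positive on every update, so p ratchets up until it hits 1.
QDELAY_REF = 0.015            # target queueing delay in seconds (assumed)
ALPHA, BETA = 0.125, 1.25     # illustrative gains, not PIE's defaults

def update(p, qdelay, qdelay_old):
    p += ALPHA * (qdelay - QDELAY_REF) + BETA * (qdelay - qdelay_old)
    return min(max(p, 0.0), 1.0)

p, qdelay = 0.0, 2 * QDELAY_REF
for _ in range(1000):         # delay stuck at twice the target
    p = update(p, qdelay, qdelay)
print(p)                      # 1.0 -- the drop probability has saturated
```

Because the delay never returns to the target, the proportional term never changes sign, so the clamp at 1 is the only thing that stops the growth.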
1. Suppose a buffer of total length 100 packets is organized as 4 queues that are serviced by Fair Queueing. Assume the buffer is full, the service rate is 10 packets/sec, and that no new packets arrive at the buffer. What is the maximum queueing delay for any packet? How does this change if we use Weighted Fair Queueing and the queues have weights 1, 2, 3, 4 respectively?
2. Would you deploy fair queueing if you knew that all traffic consists of TCP connections running end-to-end congestion control?
1. The maximum packet delay is 100 packets / (10 packets/s) = 10 seconds for both FQ and WFQ. In fact, this holds for any work-conserving scheduling algorithm. The reason is that work-conserving scheduling algorithms only affect the order in which packets are transmitted; irrespective of the transmission order, the last of the 100 packets will be sent after 10 seconds.
2. TCP congestion control will converge to fair bandwidth shares over several RTTs, so FQ is not necessary if a roughly fair bandwidth allocation between TCP connections is all that is required. However, fair queueing allows for strict traffic isolation and differentiated service/QoS. It also allows fairness at different granularities (e.g., per user, per application, etc.).
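The work-conserving argument in answer 1 can be checked with a small sketch. A weighted round-robin scheduler stands in for (W)FQ here; the queue sizes and rate are the ones from the question:

```python
# Sketch: drain a full 100-packet buffer (4 queues of 25) at 10 packets/s
# under a weighted round-robin stand-in for (W)FQ. Because the scheduler
# is work-conserving, the last of the 100 packets always departs at
# 100 / 10 = 10 s, no matter what the weights are.
def last_departure(weights, per_queue=25, rate=10.0):
    queues = [per_queue] * len(weights)
    t = 0.0
    while any(queues):
        for i, w in enumerate(weights):
            for _ in range(w):           # serve queue i up to w packets
                if queues[i] > 0:
                    queues[i] -= 1
                    t += 1.0 / rate      # one packet transmission time
    return t

print(round(last_departure([1, 1, 1, 1]), 6))   # FQ:  10.0
print(round(last_departure([1, 2, 3, 4]), 6))   # WFQ: 10.0
```

The weights only change *which* packet departs last, not *when* the last departure happens.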
In the interdomain topology shown below, each node is an autonomous system (AS). Denote the set of IP addresses inside an AS X by IPx. Assume standard customer-peer-provider route filtering and ranking rules as discussed in the assigned paper. There are no other autonomous systems of interest in the following questions.
(Notice that directed edges indicate a customer-provider relationship, and plain edges indicate a peering relationship.)
Consider AS F. From each of its neighboring autonomous systems, F receives some set of routes using BGP.
From each AS given below, for which IP addresses (in IPx notation) does F receive advertisements?
After F processes the BGP advertisements received from each neighbor, F installs routing table entries whose next hops are routers in a neighboring AS. For each neighboring AS given below, list the IP addresses (in IPx notation) for which that AS's router is F's next hop.
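The topology figure is not reproduced here, but as a reminder of the export rule the questions depend on, here is a minimal sketch of the standard customer/peer/provider filtering (function and label names are my own, not from the paper):

```python
# Sketch of the standard export rule: routes learned from a customer are
# advertised to every neighbor, while routes learned from a peer or a
# provider are advertised only to customers (no transit for peers or
# providers).
def exports_to(learned_from, neighbor_kind):
    if learned_from == "customer":
        return True                       # customer routes go everywhere
    return neighbor_kind == "customer"    # others: to customers only

assert exports_to("customer", "peer")
assert exports_to("peer", "customer")
assert not exports_to("peer", "provider")
assert not exports_to("provider", "peer")
```

Combined with the ranking rule (prefer customer routes over peer routes over provider routes), this determines both which advertisements F receives and which next hops it installs.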
1. Unlike traditional TCP congestion control schemes, Sprout does not use packet loss as a congestion signal. Why not?
2. Why is Sprout not suitable for deployment on arbitrary Internet paths?
1. Cellular networks rarely drop packets; delay is more indicative of congestion.
2. Sprout assumes isolation between users in the network, which is appropriate for cellular networks. Also, the bandwidth-evolution model Sprout assumes for cellular links would have to change for the wider Internet.
What factors does a node in Roofnet consider when deciding what route to use for packets to a given destination?
Would Mobile IP work if both endpoints are mobile hosts? Explain your answer briefly.
1. Which dataset taught the authors the most about the active probes?
2. Where were most of the probes coming from?
1. The Sybil dataset. Although data collection lasted barely a day, they learned a lot about leaked shared state in the packet headers of the probes. Ultimately, that helped them to conclude that the GFW likely operates only a few physical systems to do the active probing.
2. Almost all the probes were coming from three popular Chinese autonomous systems.
1. Explain briefly how VL2 leverages TCP congestion control to achieve its goals of uniform high capacity and performance isolation. What can go wrong for non-TCP traffic?
2. VL2 uses two types of IP addresses in its architecture: location-specific (LA) and application-specific (AA) addresses. By way of analogy, which addresses in mobile IP play the same role as LA and AA addresses in VL2?
1. TCP is needed to enforce hose model constraints on traffic. Without TCP, multiple senders can transmit to the same receiver at a faster rate than the receiver’s access link (i.e. network-interface card) speed, thus violating the hose model.
2. AA addresses are analogous to home addresses. They identify the node regardless of its location. LA addresses are analogous to the care-of address. They identify the current location of the node, to which traffic is tunneled.
P4 assumes parallel execution of primitives within an action function, while statements within the control program (4.6) are executed sequentially. Why would a language mix sequential and parallel execution this way? Think in terms of the underlying hardware (the RMT architecture) that P4 is designed to program.
Answer: P4 was designed to support high-performance programmable switching chips. These chips are structured as a pipeline of match-action stages. As a packet progresses through the pipeline, actions on that packet execute sequentially: actions on a packet within a pipeline stage precede actions in subsequent pipeline stages. However, within a given pipeline stage, all header fields in a packet are manipulated in parallel using RMT's VLIW instruction set. P4's semantics mirror RMT's functionality: the sequential semantics in the control flow capture dependencies between pipeline stages, while the parallel semantics within an action capture parallelism within a stage.
List three network problems that Header Space Analysis can detect, and three that it cannot detect.
1. In Chord, what data structure at each node must be correctly maintained in order for the protocol to perform lookups correctly?
2. What data structure in Chord achieves good lookup performance?
1. successor pointer
2. finger table entries
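To make the two answers concrete, here is a small sketch of how finger-table entries relate to successors. The ring and node IDs are made up for illustration, not taken from the course materials:

```python
# Sketch of Chord's finger table in an m-bit identifier space: entry i of
# node n points at successor((n + 2**i) mod 2**m). Correct successor
# pointers make lookups correct; finger tables make them fast (O(log N)).
def successor(ident, nodes, m):
    # first node at or after ident, going clockwise around the ring
    return min(nodes, key=lambda x: (x - ident) % 2 ** m)

def finger_table(n, nodes, m):
    return [successor((n + 2 ** i) % 2 ** m, nodes, m) for i in range(m)]

nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]   # 6-bit ring (m = 6)
print(finger_table(8, nodes, 6))   # [14, 14, 14, 21, 32, 42]
```

Each finger halves the remaining identifier-space distance to the target, which is where the logarithmic lookup cost comes from.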