6.033 Reading Report IV
Chia (Janet) Wu
Schindler/Ward
Contrast Ethernet's scheme for packet routing
on a non-switched Ethernet with the packet routing scheme presented
in the "High Speed Switch Scheduling for LAN" paper.
The major respect in which Ethernet differs from other
communication systems for carrying digital data packets, and in
particular from the high-speed switch scheduling method, is its
scheme for packet routing. Each offers advantages over the other,
though the high-speed switch network proves decisive in shaping
the future of local-area networking technology because it creates
a new class of distributed networks that can be more closely
coupled than ever before.
Ethernet's topology is based on a single broadcast
multi-access channel. Since in 1976 it was not yet cost-effective
to implement a reliable intermediate 'router', which would require
redundant and dynamic connections, Ethernet assures reliability
by decentralizing control. A sending station with control of the
'ether' is responsible for preventing and recovering from problems
such as packet contention and physical errors. A station recovers
from a detected collision by abandoning the attempt and retransmitting
the packet after some dynamically chosen interval. A degree of
cooperation among the stations is required to share the ether
fairly; otherwise a single station could reduce efficiency
to virtually zero by failing to lengthen its retransmission interval
as traffic increases, or by sending very large packets. In this
way the failure of a single active element does not propagate to
the communications of other stations. Ethernet's generality allows
for convenient gradual extension without the side effect of
incommensurate scaling. Despite all this modular simplicity, however,
Ethernet has an ultimate constraint on performance: a single
transmission requires the entire channel.
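The "dynamically chosen interval" above is, in practice, a binary exponential backoff: each consecutive collision doubles the range from which the random retry delay is drawn. A minimal sketch of the idea (the slot time and the cap on the exponent are illustrative values, not figures from the paper):

```python
import random

SLOT = 51.2e-6  # illustrative slot time in seconds, not from the paper

def backoff_slots(collisions, max_exp=10):
    """Binary exponential backoff: after the n-th consecutive collision,
    wait a uniformly random number of slots in [0, 2^min(n, max_exp) - 1].
    The range grows with contention, so stations spread out their retries."""
    return random.randint(0, 2 ** min(collisions, max_exp) - 1)

def retry_delay(collisions):
    """Delay in seconds before the next transmission attempt."""
    return backoff_slots(collisions) * SLOT
```

This is exactly the cooperative behavior the essay describes: a station that refused to widen this interval under load would monopolize the ether and drive efficiency toward zero.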
High-speed switch scheduling, on the other hand,
builds on the recent advent of CMOS and FPGA technologies,
which make possible the implementation of a dynamic router such
as a banyan switch. Packets placed into a banyan network are
automatically delivered to the correct output based solely on the
information in the cell header. A Batcher sorting network used in
combination with a banyan network guarantees that a packet can
be sent from any input to any output, provided no two packets are
destined for the same output. When more than one packet is destined
for the same output, the Batcher schedules one of them to be passed
to the banyan and places the rest in a queue until the banyan
completes the current transmission. Scheduling and queuing are
therefore mandatory issues. Internal buffering is preferred over
retransmission because it allows the sender to transmit to another
destination while its previous packet is still in the buffer; as
a result, a later packet may arrive earlier. To summarize, the
high-speed switching network offers lower latency by shortening
path lengths, and it eliminates the need to acquire control over
the entire network before transmitting. The high availability of
channels and the existence of multiple paths between hosts are the
results of an asynchronous router tightly organized to maximize
performance.
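The self-routing property, and the reason a Batcher sorter must sit in front of the banyan, can be illustrated with a toy model. This is a sketch under simplifying assumptions, not the paper's design: a butterfly-style banyan in which stage i fixes one bit of the destination address, with the packet at input i bound for dests[i]:

```python
def route_banyan(dests, n_bits=3):
    """Toy model of a self-routing banyan (butterfly) network.
    Packet p starts at input p; each stage rewrites one bit of its
    current position (most significant bit first) to match its
    destination, exactly as a 2x2 switching element would.
    Returns True iff no two packets ever need the same internal link."""
    positions = list(range(len(dests)))
    for stage in range(n_bits):
        mask = 1 << (n_bits - 1 - stage)    # fix the MSB first
        positions = [(pos & ~mask) | (d & mask)
                     for pos, d in zip(positions, dests)]
        if len(set(positions)) < len(positions):
            return False                    # internal collision: blocked
    return True

# Sorted, distinct destinations (what a Batcher sorter would present
# to the banyan) route without conflict; an unsorted set may block
# on an internal link even though all the final outputs are distinct.
route_banyan([1, 3, 5, 6])  # sorted → True
route_banyan([4, 0, 5, 1])  # unsorted → False
```

The demonstration matches the claim above: the banyan alone only guarantees delivery when the traffic it receives is already arranged so that no internal link is contended, which is the sorting job the Batcher network performs.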
Overall, Ethernet and high-speed switching are two
distinct forms of distributed system. Even though the packet delay
of both systems grows without bound as utilization increases, the
rate of increase is much higher for Ethernet. The parallelism
and flexibility of high-speed switching are the deciding factors
that allow the switch to support a large number of hosts. Ethernet,
however, offers similar performance at less cost as long as the
number of hosts is moderate. In the near future, the majority of
small networks will likely continue to use Ethernet.