IEEE 1394 and RFC 2734: a viable HSI for hypercubes.

 


Kurt L. Keville

OPNSRC

kurt@gnu.org

 

Robert Tompkin

Clarkson University

tompkinr@clarkson.edu


 

 


Abstract

 

IEEE 1394 (FireWire) compliant cards and cables make good, cheap cluster High Speed Interconnects (HSIs) within some strict confines. Because much of the traditional network gear (switches, routers, etc.) is not available for IP-over-FireWire, the standard lends itself best to directly connected hypercubes and similar constructs. That restriction, however, permits network configuration refinements that ameliorate some of the implementation's shortcomings. Many operating systems tuned for clustering already support an implementation of IP-over-FireWire, and most are coming into compliance with the applicable standards, including RFC 2734, IPv4 over IEEE 1394.

 

What is FireWire?

 

FireWire (also called i.LINK) is a peer-to-peer, high-speed serial bus standard. It was developed jointly by Apple and Sony and has gained acceptance among PC manufacturers since standardization. Like USB, it is a peripheral-bus specification, but it is capable of higher speeds (up to 400Mbps in the original design). As with the SCSI standards, peripherals can be daisy-chained without regard to sequence; unlike SCSI, devices can be hot-swapped, and up to 63 of them can share a bus. The promise of FireWire was to interconnect various peripherals that could not be interconnected with earlier protocols, over the same cables, with less worry about electrical shorts and similar concerns. Today you can buy external FireWire drives and other peripherals, including scanners. Since the original 1394 standard was ratified in 1995, newer versions have envisioned faster speeds (using fiber cables) and a number of printer adaptations such as the p1394 series, the Imaging Protocol Standards.

 

Various manufacturers have implemented 1394 in hardware. Texas Instruments has its PCILynx family of chipsets, and many vendors follow the OHCI (Open Host Controller Interface) specification.

 

Why is FireWire appropriate for clustering?

 

We looked at several trends in PC clustering technologies before initiating experiments with FireWire: the movement toward multiple NICs and therefore hypercubes (or at least flat networks), the trend toward integrated NICs, the proliferation of PCI slots on current motherboards, and the increasing speeds (often achieved through overclocking options in contemporary BIOS settings) of 32-bit PCI buses. The bottleneck in hypercube clustering performance has long been thought to be the network: you can get fast High Speed Interconnects between nodes, or cheap ones, but rarely both in the same network topology. FireWire promises to deliver at least one 400Mbps network connection per card. That represents good bang for the buck, considering A) the low cost of PCI-based FireWire cards and B) the gap between inexpensive network topologies and fast (or at least low-latency) ones; proprietary interconnects have traditionally filled that gap between Fast Ethernet (802.3u) and Gigabit Ethernet (802.3z). Given that roughly a gigabit per second of theoretical bandwidth is available from the PCI bus (most ATX motherboards have 33 MHz / 32-bit PCI buses, and all OHCI cards are 32-bit; see the arithmetic below), it would seem that there should be some way to get more throughput for not much more money. Enter FireWire cards… cheap, ubiquitous, and with plenty of promise. Compared to earlier attempts at running IP over other serial standards, FireWire benefits not only from faster throughput but also from a greater number of devices supported on a given bus, a larger array of devices, ease of interconnection, and reuse of multipurpose equipment.
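
The PCI figure quoted above follows directly from the bus parameters. This is a back-of-the-envelope bound; sustained throughput is lower once bus arbitration and protocol overhead are counted:

$$ 33\,\mathrm{MHz} \times 32\,\mathrm{bits/cycle} = 1.056 \times 10^{9}\ \mathrm{bits/s} \approx 1\,\mathrm{Gbps} \;(= 132\ \mathrm{MB/s}). $$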

 

 

FireWire network infrastructure: theory.

 

Many modern ATX motherboards have 5 or more PCI slots. Assuming you fill all of them with FireWire cards, as you are likely to do for a hypercube, and use 2 ports per card, you would exhaust the bus's theoretical bandwidth even with today's drivers (see the arithmetic below). In fact, it is more likely you would run out of IRQs before you maximized your bandwidth usage. There are FireWire hubs, but not much other networking gear: as of this writing, there are no FireWire switches, routers, or other traditional packet-switching equipment. That constraint was part of the case for building a FireWire hypercube.
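
To make the saturation claim concrete (nominal figures; the two ports on one card sit on the same 400Mbps bus, so extra ports add connectivity rather than bandwidth):

$$ 5\ \text{cards} \times 400\,\mathrm{Mbps} = 2\,\mathrm{Gbps} \;>\; 1.056\,\mathrm{Gbps}\ \text{(shared PCI bus)}. $$

Already at three active cards (3 x 400Mbps = 1.2Gbps), the FireWire links can offer more than the PCI bus can carry, so the bus, not the links, is the limiting factor.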

 

FireWire network infrastructure: practice.

 

Operating Systems

 

Most of the operating systems associated with PC clustering offer some level of FireWire and IP-over-FireWire support. Microsoft operating systems, for instance, have had IP-over-FireWire available for some time: Unibrain (www.unibrain.com), in particular, has supplied users of modern Microsoft operating systems (from Windows 98 on) with drivers for most IEEE 1394-compatible cards. With the release of Windows Me, Microsoft has embraced networking over FireWire and will ship it stock with subsequent operating systems.

Unibrain also offers a port of their FireNet Station driver to support the current generation of Apple computers, which have FireWire connections built in. The Unibrain driver does not follow the RFC 2734 standard and therefore will not fit seamlessly into a hybrid or heterogeneous network, but as long as you are connecting units into a Unibrain FireNet, you shouldn't have intercommunication problems. Unibrain also offers a FireNet server product for the server editions of the popular operating systems.

 

FireWire and IP-over-1394 under Linux.

 

The Linux implementation, while not as seasoned as the Windows variety, subscribes more fully to the RFC and will probably catch up in performance measures soon. Currently, you can get a solid 100Mbps using the eth1394 driver and 120Mbps to 130Mbps using ip1394 for certain packet sizes. Configuration is fairly predictable once the module is loaded, and here again the OS supports multiple cards and multiple ports per card. A configuration sketch and our test setup and methodology follow.
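
As a configuration sketch: once eth1394 registers a network interface, it can be configured like any Ethernet device. The fragment below sets the 2030-byte MTU we used (see Test methodology) through the standard Linux SIOCSIFMTU ioctl. The interface name eth1 is an assumption that will vary with your setup, and the same effect is normally achieved from the command line with ifconfig.

    /* setmtu.c -- set the MTU on an eth1394 network interface.
     * Sketch only: "eth1" is an assumed interface name; on most systems
     * you would simply run the equivalent ifconfig command instead. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    int main(void)
    {
        struct ifreq ifr;
        int s = socket(AF_INET, SOCK_DGRAM, 0); /* any socket works for the ioctl */

        if (s < 0) {
            perror("socket");
            return 1;
        }
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth1", IFNAMSIZ - 1); /* assumed 1394 interface */
        ifr.ifr_mtu = 2030;                          /* MTU from our tests */

        if (ioctl(s, SIOCSIFMTU, &ifr) < 0) {        /* set the interface MTU */
            perror("SIOCSIFMTU");
            close(s);
            return 1;
        }
        close(s);
        return 0;
    }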

 

Test configuration (hardware)

 

We performed throughput and latency tests on the following motherboards, usually in pairs. Our tests showed that neither the number and arrangement of boards nor the chipset noticeably affected the results.

 

A) Abit BP6, dual Celeron 500 MHz

B) Abit KT7, Duron 700 MHz

C) Asus A7A266, Athlon 1.33 GHz

 

As you can see, we tested a number of different configurations to see whether FSB speed (or the other variables associated with the various motherboards) had an impact on performance. It did not. We used at least 128 MB of RAM in these configurations (PC100, PC133, or PC2100, respectively). Performance neither suffered nor improved on the SMP boxes. For FireWire cards, we used generic OHCI-compliant cards.

 

Test configuration (software)

 

The software needed to use FireWire under Linux is available via the Linux1394 Project (linux1394.sourceforge.net) and associated links. Internode communication tests were performed on machines running the Linux 2.4.6 kernel with the ip1394 and eth1394 device drivers (available from the aforementioned site).

 

Test methodology

 

We used Guido Fiala's softnet ports of the eth1394 and ip1394 modules from earlier this year and recreated his test scenario (www.s.netic.de/gfiala) with better machines so that we would not hit the CPU roadblocks he hit. While we tested many different configurations, we took his advice and set the MTU to 2030 in our maximum-performance tests. We have a proprietary in-house tool that opens a TCP/IP connection between cards and then calls write() 4096 times with the same 4096-byte buffer. It ran stably and returned results similar to Netpipe's. Netpipe (www.scl.ameslab.gov/netpipe), which ran unstably with the ip1394 module, is what we use to present our data. Netpipe does have an NT port, but the FireNet drivers currently require Unibrain cards, so we were not able to perform that test.
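
For reference, the sketch below shows the general shape of such a write()-loop throughput probe. It is not our in-house tool: the port number, addressing, and error handling are illustrative, and the peer node must run a matching sink that accepts the connection and reads until EOF.

    /* tput.c -- minimal TCP throughput probe in the spirit of the tool
     * described above: open a TCP connection to a peer, write() the same
     * 4096-byte buffer 4096 times, and report the achieved rate. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define PORT  5001          /* arbitrary, unprivileged test port */
    #define BUFSZ 4096          /* bytes per write() */
    #define COUNT 4096          /* number of write() calls */

    int main(int argc, char **argv)
    {
        static char buf[BUFSZ];
        struct sockaddr_in peer;
        struct timeval t0, t1;
        double secs, mbps;
        int sock, i;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <peer-ip>\n", argv[0]);
            return 1;
        }
        memset(buf, 0xAA, sizeof(buf));     /* payload contents don't matter */

        sock = socket(AF_INET, SOCK_STREAM, 0);
        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(PORT);
        peer.sin_addr.s_addr = inet_addr(argv[1]);
        if (sock < 0 || connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("connect");
            return 1;
        }

        gettimeofday(&t0, NULL);            /* time the write loop only */
        for (i = 0; i < COUNT; i++) {
            if (write(sock, buf, BUFSZ) != BUFSZ) {
                perror("write");
                return 1;
            }
        }
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        mbps = (double)COUNT * BUFSZ * 8.0 / secs / 1e6;
        printf("%d bytes in %.3f s = %.1f Mbps\n", COUNT * BUFSZ, secs, mbps);
        close(sock);
        return 0;
    }

The corresponding sink is a few lines of accept() and read(); timing at the receiver instead of the sender avoids counting data still buffered in the sender's socket when the loop ends, which is one reason a harness like Netpipe is preferable for publishable numbers.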

 

Test results

 

Under Linux we had 100Mbps running under eth1394 consistently and stably. We could get up to 130Mbps in certain configurations under ip1394, but these would not hold for all scenarios (large packet sizes) and would eventually crash the kernel. We could use ping and telnet with ip1394, but ftp and most other TCP/IP applications would not work. Netpipe results for the range of packet sizes from 1000 bytes to 10000 bytes using ip1394 are plotted on a logarithmic scale. Here are the two throughput vs. block-size graphs for the two modules.

[Figure 1: Throughput vs. block size for the ip1394 module]

[Figure 2: Throughput vs. block size for the eth1394 module]

As you can see from the ip1394 graph, we hit a maximum throughput (135.5 Mbps) at a packet size just under 25000 bytes.

 

You will notice that the Netpipe test for the eth1394 module ran through to completion but leveled off at its bandwidth cap (100Mbps) at 10000 bytes.

 

Both graphs show an irregularity at the 2000-3000 byte point, suggesting that the packet-fragmenting algorithm works best with packets larger than the MTU size.
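
One plausible accounting for the dip (a sketch, assuming packets fragment into MTU-sized link frames): an s-byte packet occupies $\lceil s/m \rceil$ frames of MTU m, so link efficiency is roughly

$$ \eta(s) \;\approx\; \frac{s}{\left\lceil s/m \right\rceil\, m}, \qquad m = 2030\ \text{bytes}. $$

Efficiency is worst just above s = m (a 2031-byte packet needs two frames, so $\eta \approx 0.5$) and recovers toward 1 as s grows, which matches an irregularity in the 2000-3000 byte range and improving throughput at larger packet sizes.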

 

Conclusions

 

IP-over-1394 is ready for prime time, within some restrictions. If you are running a Microsoft operating system, such as Windows 2000 Advanced Server or Datacenter Server, then a hypercube may satisfy your intercommunication needs. The bandwidth will certainly be greater than that of a similarly constructed Fast Ethernet cube. A popular driver for this architecture, FireNet, will be RFC 2734 compliant in version 3.0.

If you are using Linux as your OS, then you can likewise use your 1394 cards as NICs. Driver development is continuing and promises to make IP-over-FireWire a viable alternative network choice for directly connected clusters. It is unlikely we will ever see a FireWire switch, since clustering is about the only application that would benefit from one.

The future of this use of the IEEE 1394 standard may depend on the price and performance advances of competing technologies and on commodity hardware market trends, but research continues at a breakneck pace. Some new standards have also come into being recently and are receiving close scrutiny from the FireWire development community: RFC 2855, DHCP for IEEE 1394, and the investigation of IPv6 over IEEE 1394. We will post updates to the Linux drivers as they become available at extreme-linux.com.