Proposal for a Boston Area gigaPoP

Project Summary

 

This proposal is in response to: "NSF 96-64 -- Connections to the Internet." Specifically it addresses the requirement "C. High Performance Connections for Research and Education Institutions and Facilities."

The objective of this proposal is to create a Boston Area "gigaPoP" to provide Next Generation Internet service to Research and Educational Institutions in the Boston Area and ultimately throughout New England.

The "gigaPoP" is an Internet network that connects institutions at speeds well in excess of those available today from commodity Internet Service Providers. The gigaPoP network will also deploy advanced networking technology to support specialized types of service, such as low delay transmission. It will use leading edge technologies such as Internet Protocol version 6 (IPv6) and the Resource Reservation Protocol (RSVP).

The gigaPoP will be a joint project of MIT, Harvard and Boston University. Boston University has already received a grant under this program. Harvard and MIT are submitting separate proposals in order to present their differing meritorious educational and research applications separately.

In addition to MIT, Harvard and Boston University, the following institutions have expressed interest in becoming part of the Boston gigaPoP: Northeastern University, Yale University, the University of New Hampshire, UMass Amherst, the University of Maine, and Dartmouth College. We envision that the management structure of the gigaPoP will evolve along the lines of the earlier NEARnet network, as this model has worked well before.

The basic structure and architecture of the gigaPoP is common to both proposals and the three schools are collaborating in its construction.

Project Description

Introduction

Boston University, Harvard University and the Massachusetts Institute of Technology provide leadership networking in the greater Boston area and throughout New England. In 1989 we founded the New England Academic and Research Network (NEARnet), which is now part of the BBN Planet network infrastructure. NEARnet served as the foundation of the New England regional Internet, which has evolved into a commodity Internet service.

It is now time for us to take the next step in network connectivity amongst the institutions of higher learning in the Boston Area. This proposal is being initiated by MIT; however, it is being written in collaboration with Harvard University, which is submitting a parallel proposal. We are also working closely with Boston University, which has already received a vBNS interconnection grant.

The purpose of our proposals is to establish a Boston Area "gigaPoP," a high speed data network. Initially it will provide connectivity to Harvard, BU, and MIT with the intention of connecting to other educational and research institutions in the Boston Area and the rest of New England as rapidly as possible.

By high speed network we mean an OC-3 (155Mb/sec) network, which we expect to grow as technology becomes both available and affordable.

Boston Area gigaPoP

We propose to construct a Boston Area gigaPoP connecting institutions in the greater Boston area at OC-3 speeds. The primary infrastructure will be an ATM network "cloud" provided by NYNEX. NYNEX has offered a reasonable connection rate, independent of the distance covered, for connecting to the NYNEX ATM network.

In addition to this network, we may have opportunities to acquire cost effective connectivity between some institutions. Although nothing is committed yet, Continental Cablevision is in discussions with us which may lead to a dark fiber connection between Harvard University and MIT.

We propose that NSF sponsor (possibly with cooperation of DARPA and the CAIRN project) an OC-3 link between the vBNS and our proposed Boston Area gigaPoP. We would prefer that this connection terminate in Boston at one of Harvard University, Boston University or MIT. These three schools have a tradition of working together on networking projects, the most important of which was the founding of the New England Academic and Research Network (NEARnet) in 1989.

We are aware of Boston University's proposal and grant in this area and we are working in coordination with John Porter at Boston University. However, we are also aware that MCI may charge a substantial per-institution "port fee" for each connecting institution, regardless of whether an additional port is required on the vBNS ATM network. We therefore include that cost within this proposal.

Mission of the Boston Area gigaPoP

The Boston Area gigaPoP will be part of the Next Generation Internet, a network providing high speed connectivity using advanced networking technologies. Applications which require high speed connectivity often place strong requirements on the type of service that can meet their needs. For example, video applications require low delay delivery and, more importantly, low variability in that delay in order to provide a smooth, non-"jittery" image. Similarly, real-time measurement and control of remote laboratory experiments also require low delay, low jitter communications.

Another potentially important use of the Boston Area gigaPoP is to provide high speed connectivity to the homes of faculty and staff of the connected institutions. Boston is one of the markets in which Continental Cablevision will soon provide high speed Internet access via cable data modems (speeds in the 3Mb/sec range are projected, which can be considered high speed for delivery to a residential user). The gigaPoP can provide the transit path between this cable television based network and the connected institutions. By using the gigaPoP we can better assure maximum performance by avoiding the vagaries and congestion often found on the commodity Internet.

To accommodate these various users of the gigaPoP and yet provide the services required, the gigaPoP will make use of IPv6 [IPv6] and the Resource Reservation Protocol (RSVP) [RSVP] now being defined by the Internet Engineering Task Force (IETF).

The gigaPoP will also provide direct ATM connectivity between member institutions and to the MCI provided vBNS service (and possibly the CAIRN network).

We also expect to provide connections to commodity Internet Service Providers from the gigaPoP. This will simplify the routing technology and expertise required of member institutions. They will be able to simply route all of their non-local Internet traffic to the gigaPoP, which will route traffic appropriately, including traffic bound for commodity service providers.

In Summary:

"The Mission of the Boston Area gigaPoP is to provide high speed advanced technology Internet service to Research and Educational Institutions in the Greater Boston Area initially and the New England Area ultimately."

Architecture of the Boston gigaPoP

Figure 1 shows a schematic diagram of the proposed gigaPoP.

Figure 1

The NYNEX ATM network interconnects routers at each institution. A cisco 7507 router will be located at the institution which hosts the vBNS backhaul connection. The router will have one ATM interface connecting to the vBNS backhaul circuit and another ATM interface connecting to the NYNEX ATM network. It will also have an interface for a direct connection to the host's campus network, most likely a cisco HSSI interface connected via a point to point circuit to a co-located, institutionally owned router.

This router will have its own Autonomous System number and will be the embodiment of the gigaPoP itself. It will communicate via BGP4 with the vBNS routing infrastructure (or another protocol as appropriate). It will also use BGP4 to exchange routing information with Harvard, BU, MIT and our commodity Internet service providers (BBN Planet in this case).

We will also be able to support additional campuses that do not wish to run a complex routing protocol such as BGP4, provided that they are "leaf nodes" in the Internet routing architecture (i.e., they do not provide transit service to other institutions that wish to access the gigaPoP). We can accommodate such institutions via static and default routing.
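As a rough sketch, the routing arrangement described above might look like the following fragment of cisco router configuration. All AS numbers and addresses here are illustrative placeholders, not assigned values:

```
! Hypothetical gigaPoP router configuration sketch (placeholder values only).
router bgp 65000
 ! BGP4 peering with the vBNS routing infrastructure
 neighbor 192.0.2.1 remote-as 64600
 ! BGP4 peering with a member campus router that runs BGP4
 neighbor 192.0.2.10 remote-as 64601
!
! A "leaf node" campus, by contrast, runs no BGP4 at all: it points a
! default route at the gigaPoP, and the gigaPoP carries a static route
! back toward the campus's address space.
ip route 0.0.0.0 0.0.0.0 192.0.2.254
```

On the gigaPoP side, such static routes toward leaf campuses would then be announced to the other peers on their behalf.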

The primary bearer service for the gigaPoP will be IP and we anticipate that most applications will use either IPv4 or IPv6. However we also anticipate that some institutions may wish to experiment with Asynchronous Transfer Mode (ATM). It is therefore likely that institutions will also make use of the NYNEX provided ATM service to provide native ATM capability between their campuses.

MIT will also host a link to the Collaborative Advanced Internet Research Network (CAIRN) and we hope to make this network available as appropriate from the gigaPoP. MIT also has a connection to the Department of Energy's Energy Sciences Network (ESnet) which would be a candidate to connect to the gigaPoP (assuming approval from the Department of Energy).

gigaPoP Management

At the Internet 2 meeting on January 22-23, 1997, a group representing Boston Area institutions met to discuss plans for the Boston Area gigaPoP. A planning meeting has been scheduled for February 19th to discuss the organizational structure of the gigaPoP. We expect that a management model similar to NEARnet will evolve.

Detailed Budget for the gigaPoP

The spreadsheet below itemizes the expenses we expect to cover with the $350,000 grant from NSF. Note that we include neither staffing costs nor the cost of the gigaPoP ATM switch; we expect to finance these items ourselves under our cost sharing arrangement.

All of the values provided here are based on today's best proposed solution. However the communications industry is in a state of flux and we fully expect to use the most efficient and capable technologies available when we actually install the gigaPoP itself.

Our current estimate for the cost of a backhaul circuit connection to the vBNS is $120,000 per year. Assuming that we divide this cost among the three institutions, MIT's share is $40,000. The vBNS port fee reflects our understanding of the current MCI per-institution port fee. This may change as negotiations continue with MCI.

Finally we assume that the cost of the gigaPoP router will be shared three ways.
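The cost sharing arithmetic can be checked with a short calculation; all figures below are taken directly from the budget spreadsheet in this proposal:

```python
# MIT's share of the Boston Area gigaPoP budget, per the spreadsheet.

nynex_atm_per_site = 54000   # MIT pays its own NYNEX ATM site connection
backhaul_vbns = 120000       # backhaul to vBNS, split three ways
mci_port_fee = 129600        # MCI per-institution port fee, not divisible

# Annual recurring share for MIT
mit_recurring = nynex_atm_per_site + backhaul_vbns // 3 + mci_port_fee
print(mit_recurring)         # 223600

# One-time router cost (cisco 7507), split three ways
router_cost = 90000
mit_one_time = router_cost // 3

# Total project cost to MIT over the two year grant period
total_two_years = 2 * mit_recurring + mit_one_time
print(total_two_years)       # 477200
```

The result matches the $477,200 two-year total shown in the budget table.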

We will finance expenses in excess of the $350,000 grant through our cost sharing.

 

 

 

Boston Area gigaPoP Proposal

Recurring Costs (annually)

  Description                        Cost        MIT Share
  NYNEX ATM Connection (per site)    $54,000     $54,000
  Backhaul circuit to vBNS           $120,000    $40,000
  MCI vBNS Port Fee                  $129,600    $129,600
  Totals (recurring)                             $223,600

One Time Costs (for two years)

  Description                        Cost        MIT Share
  Router for gigaPoP (cisco 7507)    $90,000     $30,000
  Totals (one time)                  $90,000     $30,000

Total Project Cost (2 years)                     $477,200

 

MITnet: The MIT Campus Internet

The MIT Campus Internet, referred to as "MITnet," provides high speed Internet access to all academic buildings as well as on-campus dormitories. Off-campus living groups are connected to MITnet via 56Kb/s frame relay circuits. The sections that follow outline the current MITnet topology and discuss near term plans.

MITnet provides a congestion free backbone network that interconnects the campus. Today most congestion seen on inter-institutional network connections is the result of the collection of commodity Internet Service Providers used throughout the United States and the world to connect institutions. Rarely is perceived congestion the result of congestion on the MIT campus, and it is never the result of congestion on the backbone network itself (unless a hardware failure in a backbone router has artificially limited its bandwidth).

Strategic planning for MITnet is performed by the Network Operations Group (described below). Today's immediate goal is to ensure continued congestion free service to the entire campus. A companion goal is to provide advanced services such as RSVP where appropriate.

MITnet Operation

MITnet Operations is a small group of dedicated networking professionals with an average of over 10 years of experience operating IP networks. The group's leader, Jeffrey Schiller (c.v. included later), has been involved with MITnet since its inception in 1984 and plays a leadership role in the Internet Engineering Task Force.

Jeffrey Schiller is one of the authors of MIT's Kerberos Authentication System. He is an expert in networking technologies with an emphasis on network security. He is the Area Director for Security on the Internet Engineering Steering Group (IESG), where he oversees the formulation of Internet security standards. His recent work involves building a Public Key Infrastructure, including work to integrate Kerberos with X.509 public key technology. His security expertise will be a plus when it comes to developing and deploying real world systems that make use of QoS and RSVP technology. Because RSVP is about reserving a scarce resource, security will play an important role in ensuring that the resources are available to authorized users when needed.

The first MIT-wide network was built in 1984. It consisted of a fiber backbone running ProNET-10 at 10Mb/sec. In 1991 the network was upgraded to FDDI in a dual attach configuration running at 100Mb/sec. Prior to 1991, network routing was performed by dedicated MicroVAX computers running routing software written and maintained at MIT. The installation of the FDDI network also included the installation of commercial (cisco) routers.

The MIT Network January 1997

Today MITnet consists of a dual attach FDDI ring which connects approximately 10 cisco routers serving approximately 20,000 host systems. At the time of this writing we are in the process of upgrading the routers from AGS+'s to recently purchased 7513's. This FDDI ring serves as the backbone network of MITnet.

Connected to this network of cisco routers are approximately 200 local area Ethernet networks. Every academic building, all dormitories, and most officially recognized off campus living groups are connected via these local area networks (LANs). Figure 2 depicts this topology graphically. Interconnections between building LANs and the backbone routers are made via multi-mode fiber optic cable, which runs to all academic buildings from the three central points on campus where the routers are located.

Figure 2

Off campus living groups are connected at 56Kb/s via a NYNEX provided frame relay network. This private frame relay network is connected to MITnet via a T1 line (1.5Mb/sec) and to each of 32 off campus living groups at 56Kb/s.
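As a back-of-the-envelope check (assuming the nominal 1.544Mb/sec T1 line rate, which the proposal rounds to 1.5Mb/sec), the 32 access circuits together slightly oversubscribe the T1 uplink; this is a conventional frame relay design that relies on statistical multiplexing of bursty traffic:

```python
# Aggregate access bandwidth of the off-campus frame relay network
# versus its T1 uplink. The T1 figure is the nominal line rate; the
# usable payload is somewhat lower.

circuits = 32        # off-campus living groups
access_kb = 56       # Kb/s per frame relay circuit
t1_kb = 1544         # Kb/s nominal T1 line rate

aggregate_kb = circuits * access_kb
print(aggregate_kb)            # 1792 Kb/s of total access bandwidth
print(aggregate_kb > t1_kb)    # True: the uplink is slightly oversubscribed
```

Since the circuits are rarely all busy at once, the modest oversubscription is acceptable in practice.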

External connectivity from MITnet is provided by a cisco 7507 router which is connected to the MITnet backbone FDDI network and to an "external" FDDI network to which routers from BBN Planet, Alternet, and (until recently) MCI are connected. The bandwidth available between MIT and the rest of the Internet is limited to 100Mb/sec by the external FDDI LAN. In practice this limitation is not an issue, because the bandwidth provided by BBN Planet and Alternet ultimately determines the net amount of outflow. The "external" FDDI network also serves as a transit network for traffic from BBN Planet customers to MCI and Alternet (as well as to each other in some cases). Figure 3 depicts this connection.

Figure 3

The MITnet backbone network is operated by MIT's Information Systems (IS) department as a resource for the entire MIT Community.

Each academic building has one or more Ethernet networks, depending upon each department's requirements. Some of these networks are operated by Information Systems while others are operated by individual departments. IS operated Ethernets consist of a building-wide Ethernet backbone serving SNMP manageable Ethernet repeaters located in phone closets. These repeaters then provide 10BASE-T connectivity to individual network drops in each office or, in many cases, at each desktop. The exact technology in use in each location depends upon the age of the equipment at that location.

As better technology becomes available we make use of it in new installations and upgrades. IS managed networks are upgraded on a three year cycle.

Departmental networks contain a range of technologies as decided by each individual department. Many of these networks are legacy networks that date from before MIT had a central network infrastructure (circa 1984). However, many are in place to provide specialized service to researchers whose requirements differ from those of the traditional network user. MIT IS works in cooperation with departments to provide the services they require. Several of these networks make use of Ethernet switching, as well as FDDI technology, to provide high speed intra-departmental communications.

Near Term MITnet Plans

We are in the process of upgrading the MITnet backbone network as well as preparing to provide 100Mb connectivity between building local area networks (including departmental networks) and the MIT backbone. As an interim step we have purchased a Cabletron NMAC+ FDDI switch to increase the aggregate bandwidth available on the MITnet backbone. We also continue to replace our cisco AGS+ routers with the newly purchased model 7513 routers. The new backbone network will consist of FDDI links connecting the 7513 routers to the NMAC+ central hub. Figure 4 depicts this configuration.

Figure 4

We are also planning to upgrade our 56Kb frame relay network used to connect off campus living groups to T1 speeds.

Meritorious Applications at MIT for a high speed network

We present below abstracts of current research programs and educational projects which can immediately benefit from the proposed connectivity award. They are examples of the broad range of network dependent activities on the campus.

Development of the MIT Quantitative Microscopy and Imaging Network

MIT Principal Investigators: Prof. Elazer R. Edelman, Prof. Ian W. Hunter, Prof. C. F. Dewey, Prof. Douglas A. Lauffenburger

Microscopy is ubiquitous among measurement methods in the biological sciences and related engineering disciplines. Virtually all laboratories at MIT utilize some form of microscopic technology, yet these images are commonly obtained, stored, and analyzed on individual machines using commercially-available systems and standardized processing software. Interpretations often remain qualitative, and storage capabilities available to individual users limit data transfer speed and the number of raw and processed images that can be maintained. Today, however, computerized networks can greatly enhance and extend image acquisition, processing, analysis, storage, and interpretation. Accordingly, this project aims to develop a Quantitative Microscopy & Image Processing Network (QMIPN) that will enable access to facilities capable of acquiring, transferring, storing, and recalling images rapidly and efficiently. Advanced methodologies for quantitative image interpretation will be available from both central and remote sites. QMIPN will also serve as a platform for developing new imaging modalities, further enhancing hardware and software image processing modules, and establishing automated and remote forms of image acquisition and analysis. Three phases are planned:

PHASE I -- The Principal Investigators will use relevant research projects as prototypes to define network capabilities. Light, fluorescent, confocal, atomic-force, and transmission electron microscopes will be connected to a central server/router. These microscopes will be adapted to provide common interface, data storage, and analysis paradigms, along with automated remote access and image manipulation. Local computer terminals will control each microscope and sample, communicate with the central processor, and perform basic image filtering. Acquired images will be transported via high-speed ATM connections and downloaded to large capacity systems so that more complex, computer-intensive processing methods can be employed.

PHASE II -- This network will be broadened to involve an increased number of investigators and projects, employing either the core microscope facilities or their own individual laboratory microscope facilities. Inexpensive new CD-ROM writing capabilities will be used to create a large database of stored images. By developing indexing and retrieval methods, these images can be a world-wide source of information.

PHASE III -- The network will be extended to an even wider group of investigators both on and off campus. Accessibility of our network through the Internet will provide a powerful teaching tool that can be tapped by other less-advantaged institutions with a relatively small investment. This would raise the possibility of developing innovative outreach programs to enrich curricula, e.g., at smaller minority schools that do not have strong research programs. Three major benefits will grow as the network develops: 1. Increased research efficiency and innovation, resulting from multi-user facilities involving investigators with diverse interests and skills, in terms of both applications and techniques; 2. Increased research productivity, resulting from wide dissemination and application of state-of-the-art image acquisition and processing methods; 3. Increased cooperation among scientists and engineers on and off campus in terms of both research and teaching, resulting from the ease of access to visual image display and analysis via electronic information networks. Substantial matching funds have already been raised from a variety of institutional and other non-federal sources, standing ready to leverage this grant.

Networked Multimedia Information Services (NMIS)

MIT Principal Investigators: Prof. Steven Lerman, Prof. Richard Larson

Massachusetts Institute of Technology (MIT), Dartmouth College and Medical School, and Carnegie Mellon University (CMU) are jointly developing the Networked Multimedia Information Services (NMIS) system.

NMIS will bring together creative, teaching, network and system experts from participating academic and corporate organizations to explore the creation and distribution of multimedia content on the national information infrastructure and to develop actual operational prototypes of programming and services. NMIS produces digital, network accessible multimedia programming and services for scientists and engineers, K-12 educators and students, and health care professionals. It will assess the uses and benefits of this content, examine Next Generation Internet (NGI) requirements for multimedia delivery, analyze local access technologies for high bandwidth NGI attachment, and conduct related research on multimedia and NGI standards and policy. To support this effort, industry will provide multimedia servers, video editing systems and post production equipment without charge. This equipment will be located at MIT, Dartmouth and within the NGI core network infrastructure at gigaPoPs or switching centers. These resources will be accessible nationwide via the Internet and emerging NGI. In addition, the project will consider issues relating to the creation of new programming and services; the delivery of these to previously underserved groups; and the maximization of the economic, social and political benefits of the NGI.

Deliverables will include programming, services and technologies for nationwide use, and papers, presentations and seminars to disseminate project findings. To assess project results, we will hold workshops at the end of six months, one year and two years, with participation from university, industry and government. The purpose of the first two is to refine the research design and evaluation methods. The third will be an initial evaluation of NMIS programming and services effectiveness. To disseminate research results beyond MIT, Dartmouth and CMU, we will include broad representation at the workshops from the scientific and engineering, K-12 education, health care, and government communities. In addition, lead user groups will provide continuous feedback to NMIS researchers. Finally, the MIT Research Program on Communications Policy and Carnegie Mellon University's Information Networking Institute will analyze interface issues affecting policy, hardware development and service provision, and will disseminate these analyses to the public.

Advanced X-Ray Astronomical Facility

MIT's Center for Space Research
MIT Principal Investigator: Prof. Claude Canizares

AXAF, the Advanced X-ray Astrophysics Facility, is the U.S. follow-on to the Einstein Observatory. Originally three instruments and a high-resolution mirror carried in one spacecraft, the project was reworked in 1992 and 1993. The AXAF spacecraft will carry a high resolution mirror, two imaging detectors, and two sets of transmission gratings. Important AXAF features are: an order of magnitude improvement in spatial resolution, good sensitivity from 0.1 to 10 keV, and the capability for high spectral resolution observations over most of this range.

The AXAF Science Center is a collaborative project between MIT, Harvard, and TRW. Software development and testing also takes place at "outposts" located at the University of Hawaii, the University of Chicago and Stanford University.

The spacecraft is scheduled for launch in August of 1998. Once in orbit, the on-board experiments will be remotely controllable from a facility at MIT as well as from other collaborating institutions. This will require a high speed data network with guaranteed bounds on delay and bandwidth.

Spacecraft calibration - which is in progress now - generates large amounts of data. These data are transferred from Huntsville, AL, to Harvard and archived there. High speed access between Harvard and MIT would facilitate real time manipulation of the calibration data.

Advanced Visualization Projects

MIT faculty members are engaged in designing new ways to use computers and networks to improve the quality of teaching. Included here is a sampling of projects related to this effort:

DEPARTMENT OF EARTH, ATMOSPHERIC AND PLANETARY SCIENCES

Professor Rick Binzel

Professor Binzel is involved in a project to bring planetary exploration into the classroom. In this project students learn that digital images are more than just pictures; they can explore and interact with them. Binzel and postdoctoral assistant Stephen Slivan are developing a set of exercises that allow students to access the latest spacecraft images of other planetary worlds. Through these exercises, students from the K-12 through college levels may be exposed to the excitement of discovery and trained in the patience required for the careful measurements necessary to decipher detailed physical models of the processes acting on planetary surfaces. Binzel and colleagues are currently building the software on the Web. From there students will have access to images from existing NASA CDs and databases.

DEPARTMENT OF MATHEMATICS

Professor Alan Edelman

Since 1990 Edelman has arranged for students to access large parallel computers -- including what was then one of the fastest computers in the world -- so they could have the power of such a machine in their own hands to run computer programs. (He believes this is the first use of such a computer in an undergraduate mathematics setting anywhere in the world.) Since then, his interest in advanced technology in teaching has grown considerably.

This year, Professor Edelman began team teaching a class with Jim Demmel at UC Berkeley using a video link, computers, the Internet and the Web. The course is an advanced interdisciplinary introduction to modern scientific computing on parallel supercomputers, with emphasis on the myths and realities of what is possible on the world's fastest machines. He hopes to develop information for the Web in the future (see the last bullet below). Specifically, he proposes the need for "the classroom of the future":

* The ability to video link to multiple places for team teaching

* Computer equipment with large screen display for remote classroom projection or reception

* "Smart" microphone equipment at student desks (that could filter out coughing, but recognize voice)

Cost Sharing at MIT

The MIT portion of the proposed gigaPoP budget comes to $477,200 over the two year life of the project. The portion that remains after the NSF appropriation of $350,000 will be paid for by MIT as part of our cost sharing arrangement.

Today the weakest link in the MIT network is the local area Ethernets used to deliver network service to individual computers. MIT will use its cost sharing funds to improve service for users of meritorious applications. Our goal will be to provide at least 100Mb/sec service to those systems hosting applications that require such bandwidth. We will also provide direct ATM connections at 155Mb/sec where called for.

Total cost sharing funds, exceeding $350,000, will come from MIT general operating funds administered by the Information Systems organization, which is the submitter of this proposal.

MIT formally makes a cost sharing commitment equal to the funds requested from NSF ($350,000). This commitment will be used to ensure that the gigaPoP itself has the funds necessary to fulfill its mission. We will also use cost sharing funds to ensure that network connections from on campus researchers to the gigaPoP meet or exceed the data handling capabilities of the gigaPoP itself. In other words, it is our intention that our campus network, both backbone and in-building infrastructure, not be a bottleneck to traffic requiring a high level of service and performance.

MIT's Commitment after year 2

As we did in the past with NEARnet, MIT, in cooperation with its partners and other institutions in New England, plans to evolve the gigaPoP into a member-supported organization. This will not be without its challenges, but we believe that by creative use of technology and the potential commercial leverage of a healthy consortium, we will be able to provide a high speed, self supporting infrastructure.

References Cited

In lieu of NSF Form 1361

[IPv6] S. Deering, R. Hinden, "Internet Protocol, Version 6 (IPv6) Specification", RFC 1883, December 1995.

[RSVP] R. Braden (Ed.), L. Zhang, S. Berson, S. Herzog, S. Jamin, "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification", Internet Draft draft-ietf-rsvp-spec-14.txt, November 1996.

 

Facilities, Equipment and Other Resources

The network in this proposal will benefit from the installed MITnet infrastructure (described in the Project Description). This multi-million dollar infrastructure provides at least 10Mb/s Internet service to over 20,000 computers located on the MIT campus.