a. The research question

What does it take to share geographic information? That question back in Chapter 1 was motivated by the observation that it’s rare for planners or public managers to share their working information with each other across organizational boundaries. It’s rare even in environmental planning and policy, where it would seem especially beneficial due to key ecological relationships across jurisdictions, industries, and government hierarchies. It’s rare despite the nature of geographic information, whose high cost and potential for widespread re-use should discourage duplication of efforts; and even though the benefits of combining multiple geographic datasets for a given location should discourage isolated, parochial approaches. The difficulty seemed to be adjusting both technological and organizational systems to accommodate shared geographic information. One likely solution was an infrastructure for sharing geographic information—that is, a package of organizational and technological choices geared towards helping organizations make use of each other’s information resources in their work. Learning how to design and grow such infrastructures seemed key to effectively sharing geographic information for environmental planning and policy. The resulting research effort had two primary thrusts: case studies and a prototype data service.
b. Findings from the case studies
The case studies examined three real-life examples of geographic information sharing infrastructures linking environmental agencies, and compared their technological and organizational characteristics and their growth patterns over time. The cases highlighted the importance, in launching geographic information infrastructures, of a convergence between shared norms, resources, and people to articulate these norms and leverage the resources. Once launched, the cases showed, infrastructures risked getting stuck at an intermediate "scaffolding" stage of development, with few tangible impacts on planning and policy. At these and other choice points, they needed someone to integrate many views of infrastructures and information sharing, so as to grow the organizational and technological complexity needed to affect real decisions and to be sustained over the long term. Indeed, given a complex and rapidly changing context, the choice of a growth path and the unfolding of other decisions over time seemed more important in the cases than a set of initial factors or an a priori blueprint. Nonetheless, a laissez-faire approach was inadequate: some evolving standard (a geographic reference system, or functional standards such as metadata or queries) was important to build convergence among participants. Finally, deeper-than-expected organizational changes seemed necessary to capitalize on a new "data services" model of information sharing, in which data management and communications were merged, and interdependence and teamwork governed a complex "ecosystem" of government agencies. Yet because organizational and technological changes were interdependent, both kinds of change were more likely incremental than radical.
c. Findings from the orthophoto prototype
The second thrust of the research put into practice some ideas about data services for geographic information sharing. Built from simple, freely-available software components, the orthophoto browser provided an efficient, customized online service for geographic information, through "just-in-time" extraction and compression of image snippets, along with header files for integration with local mapping software. This opened up use of orthophoto data to a much wider audience, in a way that encouraged convergence of geographic data among different sources. This service suggests an expanded conceptualization of the National Spatial Data Infrastructure (NSDI), as a collection of networked services and not just static datasets. It also foretells a need to shift the NSDI’s standards focus, and that of other similar information infrastructures, from traditional data standards to newer functional standards that govern the interaction between information systems. The experience of building the browser also highlighted the many key design choices involved, and their ephemeral nature given the pace of technological change and the loose coupling between clients and servers. In particular, it seemed that designers of geographic data services would often face the following challenges: providing useful features vs. reaching a wide audience; building for a widely diverse set of users and uses; and tuning the service for current hardware and widely variable networking and client software. The resulting prototype also provided a tangible view of the organizational changes implied for the three cases and other similar contexts, as agencies using data services redistribute responsibilities for information collection and management.
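The header files mentioned here can be made concrete. Desktop mapping packages commonly georeference a plain image through a six-line "world file" giving the pixel size and the ground coordinates of the upper-left pixel; a minimal sketch of generating one for an extracted snippet (in Python, with invented parameter values, assuming square pixels and no rotation) might look like this:

    def world_file(ulx, uly, pixel_size, col, row):
        """Six-line "world file" georeferencing a snippet cropped from a
        larger orthophoto. (ulx, uly) is the ground coordinate of the
        center of the full image's upper-left pixel; (col, row) is the
        snippet's pixel offset within the full image. Assumes square
        pixels and no rotation."""
        snip_x = ulx + col * pixel_size   # x of snippet's upper-left pixel
        snip_y = uly - row * pixel_size   # y decreases as rows go south
        return "\n".join(str(v) for v in [
            pixel_size,    # x pixel size
            0.0, 0.0,      # rotation terms (none assumed)
            -pixel_size,   # y pixel size (negative: rows run north-to-south)
            snip_x, snip_y])

    # e.g. a snippet starting at pixel (2000, 3000) of a half-meter
    # orthophoto whose origin is at (230000.25, 901999.75):
    print(world_file(230000.25, 901999.75, 0.5, 2000, 3000))

A client’s mapping software reading such a header can overlay the snippet directly on local data layers, which is what let the extracted images be used without further processing.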
d. Synthesis of the two studies
Together, these two research efforts—case studies and orthophoto prototype—provided insights that were both grounded in real-world organizations, and sensitive to coming technological changes, to help public agencies reap the promise of shared geographic information for ecosystem planning and policy. This chapter draws on both of these research efforts to articulate strategic choices of technology, organizations, and policy for geographic information infrastructures.
Section 2 below shows that in building geographic information infrastructures, standards are strategic choices that define the nature, scope, and effectiveness of the infrastructure as a whole. Particularly in the geographical arena, choosing what to standardize is a complex, multidimensional decision. On the organizational front, Section 3 suggests that new structures and relationships are both necessary and likely, and that agencies face the dual challenge of redistributing information responsibilities and balancing incremental vs. radical change. Section 4 shows several ways in which government policy can foster open, flexible infrastructures; and several ways that these infrastructures may enhance environmental policy. The last section sketches important questions for further research.
2. Technology implications: choosing strategic standards
In reflecting on the case studies and the prototype, one significant theme emerges: the importance, and the challenge, of setting standards for sharing geographic information. In an inter-organizational context, a laissez-faire approach is attractive: let everyone do as they please and translate things as needed, with no change required of infrastructure participants. However, this approach has its limitations. In the Great Lakes and Gulf of Maine cases, for instance, the infrastructure imposed no standards other than World Wide Web connectivity; yet the Great Lakes Information Network had trouble tapping into the region’s geographic datasets in any systematic way, and the Gulf of Maine EDIMS never did spawn the additional data sources that had been hoped for. In contrast, in the Pacific Northwest, the choice of a standard geographic reference for the Northwest Environmental Database, and standard criteria for river assessment, was essential to bringing together the states’ river information for mutual benefit.
These examples suggest that although setting standards may require some adjustments on the part of an infrastructure’s participants, a little standardizing can go a long way: the benefit of collaboration based on shared information may far outweigh the cost of well-chosen standards. Furthermore, standardization may be less onerous than translation for data that are complex or require approximation and interpretation—for instance, when rasterizing a vector map, conflating a network to a different map scale, or reconciling land-use codes. Therefore, counting on translators to minimize the need for standards isn’t always practical, especially for geographic or scientific data, which are always imperfect representations of a physical reality. Instead, it’s more important to choose standards strategically in light of the goals and constituents of the infrastructure. In this sense, to employ structuration lingo, a standard is not just a rule restricting the kinds of data, interfaces, or languages that the infrastructure will support, but also a resource that enables certain kinds of joint work through information sharing. (Admittedly, this isn’t a terribly new concept: Dertouzos (1997b) quotes Simon (1981) to suggest that the makers of interconnected computer systems need to define the "laws" that will make them useful and encourage their orderly growth.)
The question becomes, then, not whether to standardize, but what part of the information sharing to standardize: what standards are likely to give the biggest payoff (in terms of collaborative work based on shared information) in relation to their cost (restructuring databases, translating keywords, etc.) for what a certain set of people want to accomplish. Depending on the purpose of the infrastructure and its intended participants, it may be useful to standardize data structures, for instance, or units of measure, human or machine languages, or data collection methods. Data services such as Chapter 8’s orthophoto browser may shift the focus of standards to metadata elements, query languages, object and method specifications, or shared basemaps for geographic reference. An infrastructure may employ several of these standards at once, in a layered fashion that supports multiple uses and allows standards to be changed independently of each other (Solomon and Rutkowski, 1992). For instance, regardless of whether the Pacific Northwest River Reach Files are in Arc/Info or Intergraph format (or distributed by ftp, http, or diskettes), they have a known, consistent topology and scale, and their attribute data in NED has consistent quality categories.
The standards chosen affect the infrastructure’s constituency (who can take part in it) and its performance (what they can do with it). The chief goal in choosing among various standards is to find a balance between the breadth of the audience served and the depth of functions delivered. It’s relatively easy to make the standards choices needed to provide high-level information functions to a well-known, homogeneous, stable audience, or simple functions to a wide, uncertain, changing audience; balancing a large audience against useful functions, however, is a difficult strategic choice for each infrastructure. The choices must also evolve with changing technology, as new standards become commonplace.
The next three sections present some examples of standards choices for sharing geographic information, and the infrastructures that result. This is followed by some comments on evolving a standard for a changing context.
a. National Geospatial Data Clearinghouse
Since the early 1990s, the Federal Geographic Data Committee has shepherded the development of a distributed National Spatial Data Infrastructure (NSDI), with standards applied to building a National Geospatial Data Clearinghouse (cf. Chapter 8).
The current NSDI Clearinghouse, with its focus on metadata, follows a "reference library" metaphor. Users look up the basic descriptors of an item of spatial information in the Clearinghouse: whether it exists, whether a particular item will suit their needs, and where to obtain it. They then request the dataset by telephone, e-mail, or fax from the appropriate librarian, and obtain a copy of it to read or use, often with some paper trail and an agreement about acceptable use. By standardizing metadata elements (the FGDC metadata standard) and the query protocol (Z39.50), the Clearinghouse facilitates searching for data items by allowing a single client to search many different sources at once. Putting Clearinghouse search forms on the Web makes them accessible to a wide set of users with little or no adjustment required on users’ part (though some changes were needed to adapt Z39.50 to geographical searching). By (thus far) omitting a standard for retrieving the data items themselves, the Clearinghouse maintains a hands-off policy (for now) on data access and distribution: custodians of data may store it in whatever form is most convenient, and may choose to release it for a fee, license it, etc. Not standardizing data access methods brings into the Clearinghouse many data providers who would otherwise be hesitant or ill-equipped to release their data; but it excludes casual, occasional users, who are less willing to make a formal request or to wait for delivery. Consider the prototype reported in Chapter 8: although the orthophotos were available on CDs from MassGIS, several municipal agencies chose to use the more immediately available orthophoto service we built, rather than make a telephone call, fill out order forms, and then have to worry about processing the data for use. A future Clearinghouse with direct, standard "access paths" to information resources (and access fees or user restrictions) may thus serve a much wider audience, as a reference desk for data services and not just for static data.
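The leverage of these two standards is easy to illustrate. In the minimal sketch below (in Python, with invented records and a simplified handful of FGDC-style elements, searched locally rather than over Z39.50), a single client-side query can search every source precisely because each source describes its holdings with the same elements:

    # Minimal sketch: three sources' records share a few standardized
    # elements (loosely modeled on FGDC fields; not the full standard).
    records = [
        {"title": "Orthophotos, Boston area", "theme": "imagery",
         "bbox": (-71.3, 42.2, -70.9, 42.5), "source": "MassGIS"},
        {"title": "River reaches, 1:100,000", "theme": "hydrology",
         "bbox": (-124.8, 41.9, -110.0, 49.0), "source": "StreamNet"},
        {"title": "Air emissions inventory", "theme": "air quality",
         "bbox": (-92.0, 41.0, -76.0, 49.0), "source": "GLIN"},
    ]

    def overlaps(b1, b2):
        """True if two (west, south, east, north) boxes intersect."""
        w1, s1, e1, n1 = b1
        w2, s2, e2, n2 = b2
        return w1 <= e2 and w2 <= e1 and s1 <= n2 and s2 <= n1

    def search(all_records, bbox):
        # One query serves many sources, because every record
        # exposes the same standardized elements.
        return [r for r in all_records if overlaps(r["bbox"], bbox)]

    for r in search(records, (-71.2, 42.3, -71.0, 42.4)):
        print(r["source"], "-", r["title"])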
b. Open protocols: the OpenGIS Specification

Shifting to online geographic data services involves more than just retrieving entire datasets: it implies querying and otherwise using remote geographic data from client software via some query protocol. Where client and server are independently designed, scalable solutions require standards to define the interaction between machines. To this end, the Open Geodata Interoperability Specification, or OpenGIS, defines standard data models and geographic processing services using an object-oriented model. The primarily industrial OpenGIS Consortium has thus far released an Abstract Specification and is currently (in 1997) applying it to data catalogs and image data, with semantic translators to follow (OGC, 1996). This standard promises to unify different vendor-specific client and server software tools and to allow users from many different organizations and industries to share a global "data space" (Buehler and McKee, 1996). OpenGIS-style interoperability would clearly make geographic information sharing more immediate and available in a wide range of inter-organizational environments. In particular, this sort of interoperability would make Chapter 8’s orthophoto prototype accessible from a wide variety of software systems instead of merely a Web browser.
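The interoperability idea can be rendered as a toy sketch. The interfaces below are invented for illustration, not the actual OpenGIS class hierarchy; the point is only that client code is written once, against an abstract interface, while each vendor’s system supplies its own implementation behind it:

    # Illustrative only: invented interfaces in the spirit of OpenGIS,
    # not the actual specification.
    from abc import ABC, abstractmethod

    class FeatureSource(ABC):
        """What any conforming server exposes, whatever its native format."""
        @abstractmethod
        def features_within(self, bbox):
            """Return features intersecting (west, south, east, north)."""

    class ArcInfoSource(FeatureSource):
        """Wraps one vendor's data store behind the common interface."""
        def __init__(self, coverage):
            self.coverage = coverage
        def features_within(self, bbox):
            # ...translate the request into Arc/Info-specific access...
            return []  # stubbed for the sketch

    class IntergraphSource(FeatureSource):
        """Wraps another vendor's store behind the same interface."""
        def __init__(self, design_file):
            self.design_file = design_file
        def features_within(self, bbox):
            # ...translate into Intergraph-specific access...
            return []  # stubbed for the sketch

    def map_layer(source, bbox):
        # Client code cares only about the interface, not the vendor:
        return list(source.features_within(bbox))

    print(map_layer(ArcInfoSource("rivers"), (-124.8, 41.9, -110.0, 49.0)))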
The chief drawback of OpenGIS is that all its benefits are expressed in the future tense. Indeed, defining a universal set of geographic data types and services is a lengthy undertaking, which has taken several years already with only minor impact on software systems as yet. One important reason for this is the newness and complexity of geographic information systems: the generic geoprocessing services at the heart of OpenGIS are still being defined, so codifying them is difficult and may even be premature. Nonetheless, ongoing coordination between the OpenGIS consortium and NSDI efforts will be important in preparing the NSDI for a networked data services environment, for instance by providing the next level of data access for the NSDI Clearinghouse.
c. Proprietary protocols: GIS-network integration
Third, the case studies and prototype highlight an emerging need to integrate GIS with network protocols. This includes both networked data access built onto GIS software, and geographic data services built into network access such as the Web. In 1996-1997, several GIS vendors began devising a variety of client-server "layers" to build networked data access into their software. These proprietary client-server tools include the Environmental Systems Research Institute’s Internet Map Server (ESRI, 1997), MapInfo’s ProServer (MapInfo Corp., 1997), and Intergraph’s GeoMedia (Intergraph, 1997). The query protocols underlying these systems are not standards in the sense of the NSDI or OpenGIS: they are less openly published or documented, and their design is decided and controlled by one vendor rather than through industry consensus. Their advantages are those of any turnkey system: quicker start-up, more predictable deployment, and clear support by the vendor. Typically, these are "closed" systems of fixed components linking client and server: they require a particular GIS server, with its special networking interface, a unique protocol, a client networking interface, and so on. In exchange, however, a much greater set of query and analysis functions is supported. As long as all partners in an infrastructure use the same set of software, the proprietary approach may work: for instance, in the Pacific Northwest case in Chapter 6, ESRI’s Arc/Info is well established among the infrastructure participants, so the ESRI client-server suite might be promising. The case studies do show an instance of proprietary networked GIS solutions in use, in the Great Lakes: the RAPIDS air-pollution inventory team has experimented with both MapInfo’s ProServer and ESRI’s ArcView Internet Map Server. The vendor-specific approach may fare well in sharing information among the RAPIDS team, which is small and has a very specific focus on air pollution information, sharpened further by the three years its members spent developing RAPIDS software under a joint grant from EPA. It may not be as viable a solution for the more diverse, loosely connected structure of GLIN as a whole, let alone the Gulf of Maine EDIMS.

Indeed, the case studies suggest that such "bridging" between independent information systems is frequently needed for inter-organizational information sharing, given that different agencies are unlikely to converge on a single set of software or data conventions. Also, Chapter 8’s orthophoto prototype shows the advantages of mixing various client and server tools to use the best component for each function. Thus, vendor-specific software suites may enable rapid, predictable installation within corporate "Intranets," and may speed the deployment of existing GIS information resources over networks. Yet their promise for inter-organizational information sharing may be limited: they don’t allow much diversity in data systems, and specific pieces can’t be swapped in and out as needed. One shift in recent years has been a blurring of the line between standard and proprietary solutions, whereby a vendor may publish the details of a communications protocol or application program interface.
The preceding examples consist of adding networking functions to a Geographic Information System. In contrast, several firms are tucking their GIS behind increasingly sophisticated World-Wide Web sites with a mapping component. These sites, accessible to any user with a Web browser, perform geographic services such as interactive mapping and trip routing. Early pioneers in this area were the Xerox map server (mapweb.parc.xerox.com) and the US Census Bureau’s TIGER Mapping Service (tiger.census.gov). Commercial examples now include www.MapQuest.com, by GeoSystems; www.MapBlast.com, by Vicinity; and www.MapsOnUs.com, by Lucent Technologies; related services include client-server address-matching (www.Geocode.com, by Etak). Some of these services may have only a limited role as infrastructure components in their current form, given that their only mode of use is an interactive HTML forms interface, using ad hoc query parameters: they cannot perform batch processing or function as callable objects embedded in other software systems. But this is not far off: several sites already provide maps that can easily be embedded into other Web pages (e.g., MapBlast), or batch services driven by special client software (e.g., Geocode.com). All of these sites perform their specific services reliably, and have become quite popular among a wide cross-section of users, even though they support little interaction with client software. They also provide unique prototyping opportunities: their specific services (routing, geocoding, etc.), once fully developed, could be made accessible through more standardized access paths such as OpenGIS. The lessons learned from these sites may prove useful in building unbundled, distributed GIS services.
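The interaction style these sites support is easy to sketch: a map request is just an HTTP GET whose query parameters select the view. In the hypothetical sketch below (the host name and parameter names are invented; each 1997-era service defined its own ad hoc set), the same URL could drive an interactive form, be embedded in another page as an image source, or be generated in bulk by a batch client:

    # Hypothetical sketch of a Web map request; parameter names invented.
    from urllib.parse import urlencode

    def map_url(host, lat, lon, zoom, width=400, height=300):
        params = urlencode({"lat": lat, "lon": lon, "zoom": zoom,
                            "wid": width, "ht": height})
        return "http://%s/map?%s" % (host, params)

    # A point in Boston Harbor, at an arbitrary zoom level:
    print(map_url("maps.example.org", 42.36, -71.06, 12))
    # -> http://maps.example.org/map?lat=42.36&lon=-71.06&zoom=12&wid=400&ht=300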
Both the Great Lakes and the Pacific Northwest cases now feature "home built" interactive Web-based mapping tools: StreamNet’s Map Builder (http://www.streamnet.org), CIESIN’s Great Lakes Map Server (http://epaserver.ciesin.org/arc/map-home.html), and the EPA’s Maps On Demand (http://www.epa.gov/enviro/html/mod). As firms like GeoSystems, Vicinity, and others turn mapping-on-the-Web into more of a commodity, these agencies, and others like them, may be able to shift their development efforts beyond simple map interfaces and towards more focused problems of data and software interoperability between client and server machines.
d. Evolution of a standard
One important challenge, highlighted by the Pacific Northwest case study in particular, is to keep readjusting the choice of strategic standards as new technologies, conventions, or other standards become widespread. As Pacific Northwest states began to do much more work with GIS, their expectations rose sharply and they demanded standards at a much more detailed scale than the 1:250,000 Reach Files. Bonneville did anticipate this several years in advance and began building more detailed Reach Files at 1:100,000; but if they had been able to standardize on something else entirely, such as dynamically segmented events or even just coordinate pairs, the outcome might have been quite different. Generally, a package of strategically chosen standards is not only difficult to assemble, context-specific, and rich in consequences; it can also be made obsolete quickly by changes in the technological context. There are at least three ways to approach this challenge.
One approach involves setting a standard that is well ahead of current technology, to avoid its being obsolete at the time of implementation. (Though it’s not a perfect analogy, this might be called the "Intel approach," after the leading chipmaker’s strategy of funding not one, but two development teams to keep it two chip generations ahead of the market (Peters, 1992).) For instance, the Information Resource Dictionary System (IRDS) (Law, 1988), defined for data repositories in the mid-to-late 1980s, employed concepts that only became common parlance five or ten years later with the growth of data warehousing. Not surprisingly, IRDS designers at the National Institute of Standards and Technology had difficulty getting users and implementers to comprehend the standard, much less comply with it (Newman, 1992); several years later, only a few repository systems even claimed to be IRDS-compliant. The OpenGIS Specification resembles this example in its complexity and its long time horizon; however, unlike the IRDS, OpenGIS is being built by an industry consortium rather than a government laboratory: this may ease its adoption by developers, but it could also succumb to competitive forces.
Another approach to evolving a standard is to limit its use to only a small community, so as to prototype it without getting locked into one version by a large base of users. One might call this the "X9 approach," after the X Window System, whose designers had the luxury of building nine different versions of it before they released it to the public (Scheifler and Gettys, 1992). (These versions were built for only a handful of research groups: this gave X developers room to define and work out a vendor-neutral, asynchronous protocol and server, without yet having to support a wide variety of uses. When X10 was publicly released, it was widely adopted; and a prior choice to keep X freely available allowed X11 to become an industry standard, maintained by the X Consortium.) This approach might provide a tangible, well-defined alternative to the comprehensive and abstract OpenGIS standard, which would speed the development and adoption of standard interfaces for sharing geographic information.
A third approach to evolving a standard is an extensible, "layered" approach, in which several specific functional standards define how components work together in an open system. As Solomon and Rutkowski (1992) describe for the case of telecommunications, "layered" standards can accommodate new features as new needs or technologies arise (new datatypes, longer addresses, wireless media, etc.)—in contrast with monolithic standards that define all communication levels at once and rely on specific pieces of hardware or software. Two prime examples of layered standards are the Internet’s TCP/IP and the Open Systems Interconnection (OSI), which define independent protocols for physical transport, network connectivity, and applications. Another example is the X Window System, mentioned above: despite user pressure to broaden X’s scope, its developers kept it tightly focused on the protocol and server design, deliberately leaving "details" of user interfaces or inter-client communication to individual vendors or to other standards.
In the case-study contexts of previous chapters, this "layered" approach may hold the most promise for a set of standards that can be devised and implemented quickly, and extended as new technologies become available and new standards begin to take hold. These agencies are not information-systems researchers or data suppliers: few can afford the lengthy experimentation that led to the X standard, or commit significant staff time to defining standards far in advance of current technology. Yet it’s those closest to the information and its use who are best placed to decide which elements of the data sharing process to agree on, and to define a strategic set of standards that balances the costs of standardizing vs. the benefits of collaboration, and breadth of audience vs. depth of functions served. Indeed, as mentioned earlier, while a little standardization can go a long way, the crucial decision is not which standard to choose, but what elements of the information sharing to standardize for a particular set of participants and goals—and how to keep adjusting that choice in response to a changing context.
3. Organizational implications: towards dynamic interdependence
The case studies and prototype emphasize the value of bringing together information on different topics, from different sources, describing a given location. In so doing, the study presages a not-so-distant future in which organizations will be linked in myriad complex ways, and suggests a number of strategic choices for those who will shape organizational structures and relationships for this coming context. Nonetheless, distributed responsibilities for the collection, maintenance, and analysis of geographic information may be difficult to initiate and to sustain without some attention to the incentives, costs, and benefits of geographic information sharing to participants.
a. Inter-agency teamwork and collaboration
Chapter 7 emphasized the importance of organizational forces and choices in building inter-agency infrastructures, and showed the role of shared information in inter-organizational learning and collaboration. Chapter 8 showed some flexible ways to distribute responsibilities for information maintenance and analysis, and pointed out the dichotomy between distributed data services and conventional organizational structures. These findings have several implications for organizational choices in an increasingly inter-related and "informated" environment: in particular, new structures are likely to emerge that may extend today’s partnerships model to a more dynamic, chaotic "marketplace" of interconnected players.
ii. From partnership to marketplace: dynamic, chaotic organizational relationships

By extrapolating from the case studies and prototype a few years into the future, it seems likely that links between organizations (partnerships) and between technologies (interfaces) will grow in number and complexity, well beyond what any one plan or architecture can account for. Indeed, as organizations begin to use interoperable information services, today’s partnerships, usually fewer than a dozen entities with long-term, well-defined governance, may give way to something like Dertouzos’ (1997) "information marketplace" or Hock’s (1995) "chaordic" (i.e. self-organizing, nearly chaotic) systems. In this view, hundreds or even thousands of organizational entities might form ephemeral alliances as needed for myriad joint projects, with informal governance and unpredictable influences from the broader economic or political context. Moore’s (1996) related "ecosystem" metaphor, discussed briefly in Chapter 7, seems particularly apt in the environmental management world of the cases. Interestingly, this ecological metaphor also appears in the context of layered standards and open systems (Solomon and Rutkowski, 1992), and fittingly describes today’s rapidly-shifting networked technologies and the ever-larger set of "gateways" between them. Blundon (1997) provides an early-1997 glimpse of this "ecology," the frequent redefinitions it requires, and the difficulty of predicting future outcomes.
b. Redistributing responsibilities, costs, and benefits
An important consequence of building and using information infrastructures is a redistribution of responsibilities for collecting, maintaining, and analyzing data. This raises the question: what makes such redistribution worthwhile? And the follow-up question: when information sharing is worthwhile to a region, but not to any one agency, what mechanisms are appropriate to get the infrastructure started and to keep it going?
i. When is geographic information sharing worthwhile?

An EPA manager in the Great Lakes expressed the simplest way in which information infrastructures may seem worthwhile to their users: "I get information for free." To his disappointment, however, mere enthusiasm for what could happen in a world of shared information failed to motivate many in his agency to put their information online. In general, the costs of building an information infrastructure (particularly one for geographic information) are immediate and large: lengthy learning, giving up some privacy and control, investing in new software or hardware, and people to keep these running. In contrast, the benefits of such infrastructures are more distant and uncertain: changed decisions may not occur until the infrastructure has become part of everyone’s work, which may take several years; the benefits of such changes may not accrue to any of the participants themselves; and the risk of making the wrong technical and organizational choices early on is high.
Benefits of such infrastructures to participants may be more obvious where there is a clear trans-boundary issue: for instance, the nature of anadromous fisheries in the Pacific Northwest made it very clear that working together and sharing information was preferable to going it alone. The Great Lakes case presents similar ecological flows between its participants; these interconnections have been "amplified" by several inter-state and international agreements over several decades. Chisholm (1989, Ch. 4) discusses this kind of interdependence in depth, based on overlapping bus and subway networks. The Gulf of Maine case, however, is focused on coastal habitat and shellfish, which stay put and thus present less obvious interconnectedness. There, regional and federal bodies are more interested in the regional view; states and provinces have little interest in, say, trading off their habitat against that of their neighbors.
The economic view, assessing costs and benefits to participants and to their broader society, is an important consideration. Nonetheless, the decision to share information, particularly in the public sector and in the environmental arena, is also a political and social one, subject to argument, persuasion, and informal relationships. Given that decisions are made by people, not by equations, they may be affected just as strongly by a clear articulation of the need for information sharing as part of a greater "stewardship" norm (or, conversely, by a call to maintain the status quo as part of a tradition or mandate), or by interpersonal affinities (or lack thereof), as by any economic analysis.
ii. Incremental vs. radical change

Given the complexity and uncertainty of organizational and technological changes, and the significant investments required, many organizations favor an incremental approach to development rather than radical changes to tasks, processes, or organizational relations. In the cases, this incrementalism was further encouraged by the intertwined nature of technological and organizational choices. Indeed, organizational change often requires technological change, and vice versa; one rarely gets very far ahead of the other. In this view, to borrow an example from political science, "it is better to work for the possible rather than to make plans for the impossible" (Reese, 1986). Without more radical changes, resulting from a creative integration of several goals and several technologies into changed organizational structures, the incremental approach may risk building only a "scaffolding" with efficiency gains such as cost savings, rather than a structure that would allow greater effectiveness through new kinds of environmental policy decisions, or new processes for reaching these decisions. Even though more radical technological and organizational change holds promise for improved planning and policy, it’s unlikely to come about easily, and may require outside pressures (and often outside resources as well). Examples of these pressures are congressional acts (this was perhaps the Northwest Power Act’s most important consequence), litigation (applying the Endangered Species Act to spotted-owl habitat in Oregon and Washington forced the Forest Service to change its modus operandi; applying it to salmon may do the same for many others), or inter-agency competition (as seen in the Great Lakes between CIESIN, EPA, GLIN, and others).
iii. Maintaining infrastructures for the long term

Once these complex infrastructures are launched, and organizational change is underway, are outside pressures and resources needed to sustain the shared infrastructure? How valid is the judgment of one NED participant, "No money, no conductor"? In each of the three cases, the "hub" organization had hoped to assume less and less of the infrastructure’s cost after getting it started and proving its utility to participants. Yet GLIN has continued to depend on grants from Ameritech and the Environmental Protection Agency; Bonneville funds will support most of StreamNet; and the Gulf of Maine EDIMS remains limited to minimal-cost maintenance activities. When Bonneville partially divested itself of NED, it saw each state’s component of the regional infrastructure diverge (though an increasingly obsolete shared standard was partly to blame). It may be some time before these infrastructures become worthwhile to their participants, but it’s a feasible scenario: as new technology allows participants to provide their information to the infrastructure more easily, and as they take a more ecosystem-based and data-centered view of their work, the costs and benefits of contributing may line up much more favorably. This does depend on whether the infrastructure’s choice of information and distribution channels matches that of its participants, which becomes an important design consideration for long-term sustainability.
4. Policy implications: role of government, impacts on government
Public policies on information and research will be key to facilitating flexible, interoperating infrastructure components. The prototype and especially the case studies in preceding chapters offer glimpses of likely changes in environmental policymaking as a result of shared information infrastructures: wider participation, improved consensus, and detailed scientific analysis both before and after policy decisions are made.
a. Government policy in infrastructure development
One set of policy implications concerns what public agencies could do to foster more effective infrastructures for sharing geographic information. The next paragraphs highlight the role of government agencies in providing base information to fuel myriad applications, and in setting standards to facilitate cooperative work before the market cements a de facto standard.
i. Information to fuel applications; rethinking ownership

The need for a common reference system when sharing information is unique to geographic information, and suggests an important role for government agencies in promoting geographic information infrastructures. In NSDI parlance, a Framework of digital geographic data, such as roads, hydrology, terrain, or orthophotos, will encourage the creation of other geographic data that is geographically consistent, and thus increase the value of all derivative works. This government role was clearly seen in the production of the Topologically Integrated Geographic Encoding and Referencing system (TIGER), a systematic map of streets throughout the US, with topology and address ranges. Although initially devised for the 1990 Census, it has seen use and derivative works in all sectors of public and private management, and in commercial mapping products. Government could play a key role in coordinating mapping efforts (both private and public) by location, so as to produce the Framework’s base data layers, and to ensure that these layers are made as widely available as possible. One key mechanism for this would be seed money and cost-sharing. Seed money in particular would assist with developing initial Framework data in an area; once some "critical mass" of Framework data became available, other data providers in the area would voluntarily choose to follow Framework specifications themselves, because of the added value of combining their information with that already available. Cost sharing would be one mechanism for ensuring that data are freely distributed once developed.
As information networking and data services become more widespread, producers of information will need to find new ways to define their ownership of information. This is especially true for geographic information, which is being created in large volumes, mostly by public agencies. Based on experiences in the United States, Lopez (1995) classifies current forms of public information dissemination into three groups: cost recovery, private distributors, and open access. He argues that only open networked access satisfies two peculiar requirements of government information: it encourages demand, and it complies with the public’s "right to know." In this view, networked access is key to open access in that it limits dissemination costs and makes government information immediately available to the public. Networked open-access information may require many government agencies to rethink their views on ownership of the information they’ve produced, including copyrights, licensing, usage, and redistribution. A move from information products to information services may require further redefinitions.
ii. Setting de jure standards

Standards-setting is another important role for public agencies in fostering geographic information infrastructures. If a de jure standard, set by an open, public standards process, can be defined before a de facto standard becomes established ("gelled") through market forces, then easy interchange of data, or interoperability among services, becomes possible without giving up the competition among vendors that keeps quality high. Although standards-setting by industry consortia has had successful results as well (e.g. the X Window System), the public standards process is more open to input from a broad set of potential users (Melton, 1996). This is an important role to be played by the public sector in fostering inter-organizational information infrastructures.
b. Infrastructures in government policy
The other set of policy implications concerns the potential of these infrastructures to improve the nature and process of environmental policy making. In particular, making detailed information more widely available promises to broaden participation in decisions, and to make the decision process more conducive to mutual understanding and consensus.
i. Broader participation and more effective consensus

One first-order impact of any information infrastructure would be the chance to involve many otherwise excluded participants in the decision-making process, by providing them with detailed information, opportunities to contact key decision makers, and forums for deliberating different viewpoints. In addition to greater accessibility, current networked communication tools (electronic mailing lists, newsgroups, online chat meetings, and the like) would allow larger numbers of participants (hundreds or thousands) to take part in debates, discussions, and joint decisions. This impact was seen to varying degrees in all three cases, and was an important goal for the Environmental Protection Agency in particular.
But geographic information infrastructures would allow for much more than just efficient verbal debates. First, as proposed in Chapter 7, maps are promising "boundary objects" (Boland and Tenkasi, 1995) for articulating and reconciling different world-views. Furthermore, thanks to networked data services of the kind suggested in Chapter 8, participants might interact with dynamic models to sketch the outcomes of various assumptions or scenarios. This would also offer a natural way to break up complex physical phenomena or lines of reasoning into more easily understandable pieces. Online interactive models added to map displays may thus prove to be powerful facilitators of consensus in complex decisions, improving both the process and the outcome of policies.
ii. Improved scientific analysis before and after policies are made

A second-order benefit of geographic information-sharing infrastructures is particularly relevant to environmental policy and planning, although it would apply to any complex, science-intensive decision-making. Science-intensive policy decisions depend on obtaining and interpreting trusted, high-quality, up-to-date information. Traditionally, this would be done by copying all relevant data sets to a single location, or by replicating fragments of the data sets onto myriad individual machines. In either case, the data may be misused by generalists, and is unlikely to undergo regular updates. Most commonly, however, detailed "context" data is just too difficult to obtain, and so simpler models are used, which make simplifying assumptions about existing conditions. Information-sharing infrastructures would allow detailed information to be left in the care of those who collect it, maintain it, and know it best, maintaining its accuracy and proper use while allowing users to obtain the information directly from the source when needed. Detailed information for modeling would be helpful not only prior to policy decisions, but afterwards as well, so as to evaluate the effect of policies and thus use policy measures as a means to learn about an ecosystem’s complex behavior. This is a key emphasis of so-called adaptive management, as put forth by Lee (1993), a goal expressed particularly in the Pacific Northwest, given that region’s long struggle to manage a still poorly understood ecosystem.
5. Hypotheses for further research
Many of the implications sketched in this chapter rest on predictions about the future; they leave several questions unanswered about technology, organizations and policy related to geographic information infrastructures. Thus, this study suggests a number of fruitful areas for future research. The following is a list of some of the most crucial hypotheses to be tested, culled from the likely scenarios sketched in various parts of this chapter and the previous ones.
Because the organizational and technological sides of a geographic information infrastructure are interdependent, parallel, incremental change in both seems to be the most likely pattern. Is this necessarily the case? What opportunities might there be for "leapfrogging" over several development stages, to acquire the needed complexity much more quickly?
Organizations that can undergo "deeper" changes in structure, in favor of interdependence with other agencies based on geographic information, stand to benefit more from geographic information infrastructures. What are these deep changes, and what impacts should be expected?
Geographic information infrastructures are leading from today’s partnerships to more ephemeral relationships among large groups of organizations that share a particular region or location. Are there emerging examples of this already? What kinds of norms, rules, incentives, and standards would keep these infrastructures together? How dynamic and chaotic can such a geographic information infrastructure get, while still allowing its participants to benefit from each other’s geographic information?
Given that a comprehensive functional standard such as OpenGIS is such a difficult undertaking, partial functional specifications would seem important in the short- to medium-term in facilitating the sharing of geographic information between independent systems. What are the most crucial elements of such a partial specification?
What long-term institutional structures are appropriate for geographic information infrastructures? Can they ever be self-sustaining, or will they always require significant outside support?

These and other questions would enlighten the future development of geographic information sharing infrastructures, and their role in devising improved public policy and planning based on the best available information.