Information Systems and

Organizational Success: A

Quantitative Modeling Approach

September 1994 TDQM-94-08

Rajat Chakraborty



Total Data Quality Management (TDQM) Research Program

Room E53-320, Sloan School of Management

Massachusetts Institute of Technology

Cambridge, MA 02139 USA

Tel: 617-253-2656

Fax: 617-253-3321





© 1994 Rajat Chakraborty

Acknowledgments: Work reported herein has been supported, in part, by MIT's Total Data Quality Management (TDQM) Research Program, MIT's International Financial Services Research Center (IFSRC), Fujitsu Personal Systems, Inc., Bull-HN, Advanced Research Projects Agency and USAF/Rome Laboratory under USAF Contract, F30602-93-C-0160, and the Naval Command, Control and Ocean Surveillance Center under the Tactical Image Exploitation (TIX) and TActical Decision Making Under Stress (TADMUS) research programs.

Glossary of Acronyms

AOI Area of Interest

CSF Critical Success Factor

CT Coordinator Unit, the vessel in charge of the Battle Group

FOTC Field Over-the-horizon Tracking Coordinator

FOTC Operator The person who operates the FOTC

FOTC Unit The vessel containing the FOTC and Operator

OTC Officer in Tactical Command

OTCIXS Officer in Tactical Command Information Exchange Subsystem

OTHT Over The Horizon Targeting

PT Participant Unit, a vessel under control of the CT

TADIXS Tactical Data Information Exchange Subsystem

TDP Tactical Data Processor

Chapter 1: Introduction

1.1 General Problem Statement

In today's world, accurate, timely, and relevant information is a fundamental requirement for most organizations. This "high-quality" information is critical in conducting daily operations and maintaining competitiveness, even viability, in the modern marketplace. Key decision makers within these organizations rely on information systems to provide them with the capability to acquire, communicate, and analyze huge volumes of data. Thus, effective information technology research has come to the forefront in the realm of critical business issues.

Recent years have seen tremendous advances in technology and communications. More data can be processed and transmitted faster than ever before. However, many users complain that information systems are still not capable of fulfilling their specific needs. Users such as Captain William Rogers of the Navy Aegis cruiser Vincennes consistently emphasize the role of information systems in accomplishing their missions [1]. During battle, such a commander requires rapidly acquired, high-quality data that is tailored to his needs on the battlefield. In the military, clear, concise, and timely information can mean the difference between life and death. Similarly, survival in today's business world requires the ability to react rapidly to dynamic information about complex and changing environments. Consistent delivery of high-quality information proves to be a challenging task. The problem is further exacerbated when multiple users with differing information needs depend on the same resource-limited system to provide relevant and timely information.

The common solution to this problem has been to supply a greater quantity of information to the users in the hope that the required data will get through [2]. This has resulted in the development and construction of massive communications networks throughout the world. Even small companies with limited information needs are now capable of accessing tremendous quantities of information with relative ease. Unfortunately, this solution has created an even greater problem. First, the associated hardware costs are tremendous. Second, the users are encountering data overload. They do not want to wade through large quantities of potentially useless data. Instead, they require a consistent stream of relevant data that will facilitate their jobs. In other words, "data quality" has a much higher priority than "data quantity."

The challenge, therefore, lies in providing quality data that will empower organizational leaders to accomplish their goals using the limited information technology resources available to them [2]. To accomplish this task, an approach is needed that quantifies organizational goals and links them directly to the performance of the underlying technology. This research was undertaken to develop just such a quantitative approach--one that ties information technology to organizational success.

1.2 Nature of Solution

The problem is a complex one indeed. How can we, first, quantify such concepts as goals, success, data quality, communications, and hardware; and, second, link all these diverse aspects into a comprehensive, coherent modeling scheme? The task seems daunting at first glance, but there is a logical way to make sense of this convoluted jumble.

The solution is derived using a top-down approach to systems analysis. The problem is broken down into several layers, each of which provides insight into a broad area of concern within the organization [1]. These four layers and their implications are fully explained in Section 2.2. They are:

Critical Success Factors (CSFs)

Business Processes developed to support the CSFs

Data Flow within the organization to facilitate the Business Processes

Underlying Infrastructure (i.e. IS Platforms) that supports the Data Flow

After the layers are identified, they are then modeled in a computerized environment such that they create a coherent representation of activity in their particular arena. Once these layers are specified, interrelationships among components of the various layers may be identified. These inter-layer relationships are added to the model to create links between the four distinct layer representations. This concept of layer connectivity is expanded in Section 2.3.
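The hook mechanism just described can be sketched in code. The following is a minimal illustration, not the paper's actual model: the layer names follow Section 2.2, but the component names, values, and the averaging rule used to propagate performance upward are invented for demonstration.

```python
# Minimal sketch of the four abstraction layers and the "hooks" that link
# adjacent layers. All component names and values are illustrative.

class Component:
    """A quantifiable element of one abstraction layer."""
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value
        self.hooks = []   # components in the layer below that drive this one

    def update(self):
        # A component's value is driven by the components it hooks into
        # (here, simply averaged; a real model would use richer relationships).
        if self.hooks:
            self.value = sum(c.value for c in self.hooks) / len(self.hooks)

# The four layers, top to bottom.
layers = {
    "CSF":             [Component("destroy_enemy")],
    "BusinessProcess": [Component("surveillance"), Component("correlation")],
    "DataFlow":        [Component("contact_reports")],
    "Infrastructure":  [Component("comms_bandwidth", value=0.8)],
}

# Drop hooks from each layer into the layer beneath it.
layers["DataFlow"][0].hooks = layers["Infrastructure"]
for proc in layers["BusinessProcess"]:
    proc.hooks = layers["DataFlow"]
layers["CSF"][0].hooks = layers["BusinessProcess"]

# Update bottom-up: infrastructure performance propagates to the CSFs.
for name in ["DataFlow", "BusinessProcess", "CSF"]:
    for comp in layers[name]:
        comp.update()

print(layers["CSF"][0].value)   # 0.8, propagated up through the hooks
```

The point of the sketch is only the topology: a change at the Infrastructure level ripples upward through every hook, which is exactly the kind of cross-layer repercussion the model is meant to expose.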

After the model is complete, several important insights can be derived from it. This type of analysis is documented in Section 4.4. Any aspect of any layer can be adjusted, and the repercussions in other layers can be quantitatively modeled and predicted. For example, using this modeling approach, one can accurately predict how a slight change in the telecommunications system (Infrastructure) of an organization will affect sales (Critical Success Factors) in six months. By the same token, an adjustment of sales and marketing strategies in a corporation (CSFs) can trigger the need for increased data processing or communications capability (IT) three months into the future. Using this modeling approach, the corporation can simulate, understand, and affect the technological enhancements that will be required before their current systems are overloaded and begin failing.

This approach will also provide a concrete basis for continued enhancement or total reengineering. An organization will be empowered with the ability to rigorously analyze its business strategy and develop a suitable information system to support that strategy. The model can then be subjected to a variety of scenarios to see if the marriage of strategy and systems is robust enough for the company's purposes. The result of implementing this approach will be a carefully planned and thoroughly integrated organization that can function optimally and anticipate and react to crisis situations with very few surprises.

This project will draw on a variety of business analysis theories and methods to provide an insight into the capabilities of this modeling approach. The goal is to create a methodology that will facilitate the planning, implementation, or enhancement of information systems such that they are intimately tied to organizational processes. The layered organizational model will be capable of predicting the shortcomings of the corporation so that they may be rectified before they become a significant issue, a crystal ball of sorts.

1.3 Document Organization

This project is documented in five chapters. Chapter 2 gives a broad overview of the research and its background. It also outlines the general motivation behind the development of this modeling approach. A theoretical framework for modeling and simulating an organization is presented in Chapter 3. In Chapter 4, we give a demonstration of the approach as applied to a U.S. Navy battle group and its Over the Horizon Targeting (OTHT) System. Chapter 5 discusses the usefulness of this research and provides a direction for future development. Finally, the Appendices contain potential future directions as well as a graphical representation of the methodology used to represent the OTHT System.

Chapter 2: Background

2.1 Information Technology in Business

Corporations rely on information to survive. Today's dynamic business environment has created an unprecedented dependency on information technology. Organizations must constantly acquire and process electronic data to assist in day-to-day activities and future planning. Optimally, the company should have a unified vision of the future, establish processes to realize this vision, and bring in an information technology infrastructure to support these operations. Unfortunately, companies often do not have an integrated system that will optimally support the various processes that make up the operations of the organization [3]. This may be the result of a variety of scenarios.

One such scenario is a lack of structured information system planning. Usually, an organization is made up of several departments. As these departments have grown and matured, they have developed their own style or flair for doing business. The same is often true for their information systems. The departments have incorporated the data sources and tools that they need to accomplish their specific goals without regard for the corporate system as a whole [4]. At some point in time, high level management comes to the conclusion that the system should be integrated as a cost-cutting or productivity-increasing measure. This can be a recipe for disaster.

Sometimes, we see organizations implement systems that actually decrease productivity or profitability. This results from insufficient or incorrect application of technology to accomplish the mission. Poor planning and incomplete understanding of business processes, people, and office politics can often lead to these grave errors [5].

This research is geared toward giving an integrated picture of an organization so that these drawbacks can be minimized or eliminated altogether. From high level Critical Success Factors to the underlying Infrastructure, a coherent picture of the factors that drive the company can be derived and then modeled to create the optimum use of technology to support business goals.

2.2 Abstraction Layers

2.2.1 Critical Success Factors

This layer sets the tone for all of the others that follow. Developed by Rockart [6] at Sloan, the CSF method provides a good starting point for analysis and development of a cohesive information system. The Critical Success Factors for an organization are "the limited number of areas in which results, if they are satisfactory, will ensure successful competitive performance for the organization. They are the few key areas (three to eight) where things must go right for the business to flourish." Developing a high level list of CSFs helps to focus efforts on solution of the entire problem, rather than small portions of a much larger problem [7].

2.2.2 Business Process Support of CSFs

In order to successfully achieve the general CSFs, organizations set up specific Business Processes [7]. They are the life blood of the organization. These processes provide the members with a step-by-step methodology that can be carried out on a day-to-day basis. Business Processes can consist of either repetitive tasks or decision making. In either case, these tasks either generate or are facilitated by information [8]. In order for the processes to be carried out efficiently, this information must be of high quality. That is, the data must provide the best support possible for the specific process that uses it. Data Quality parameters include: timeliness, consistency, relevance, accuracy, and confidence levels. They provide a general framework for evaluating the quality of data flow within an information system. Armed with an understanding of these parameters and how they relate to information demand, we can address the issue of data flow through the organization.
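As a rough illustration of how these data quality parameters might be quantified in a model, the sketch below combines per-parameter ratings into a single score. The equal default weights and the 0-to-1 rating scale are assumptions for demonstration, not part of the framework described here.

```python
# Hypothetical sketch: scoring one piece of data against the five quality
# parameters named above. Weights and scales are invented assumptions.

QUALITY_PARAMS = ["timeliness", "consistency", "relevance",
                  "accuracy", "confidence"]

def quality_score(ratings, weights=None):
    """Combine per-parameter ratings (each 0.0 to 1.0) into one score."""
    if weights is None:
        weights = {p: 1.0 for p in QUALITY_PARAMS}   # equal weights by default
    total = sum(weights[p] for p in QUALITY_PARAMS)
    return sum(weights[p] * ratings[p] for p in QUALITY_PARAMS) / total

# A contact report that is fresh and consistent but of middling confidence.
report = {"timeliness": 0.9, "consistency": 1.0, "relevance": 0.8,
          "accuracy": 0.7, "confidence": 0.6}
print(round(quality_score(report), 2))   # 0.8
```

A process-specific weighting (e.g., emphasizing timeliness for targeting, accuracy for correlation) would let each Business Process judge the same data stream by its own needs.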

2.2.3 Data Flow Through Organization

As the Business Processes are analyzed, an ideal Data Flow pattern through the organization begins to become apparent. This pattern draws data from both external sources and internal activity, does the necessary processing, and brings the final product to those who will use it. Ideally, the hardware, software, and communications networks of the information system will enable us to effectively establish this data flow within the data quality parameters described above.

2.2.4 Underlying Infrastructure

The Infrastructure implemented to conduct the data flow through the organization is critical. The components of the system must not only enable the proper people to receive high-quality data, but must also support the organizational spirit and processes in a way that adds value. All too often, a system is implemented that goes against the grain of the organization. This results in confusion and apprehension in the users and eventually leads to resentment of the technology itself.

An information system consists of several parts. They include: data sources, communications networks, processing nodes, user interfaces, and data storage. Each component has its own quirks and must be implemented such that it supports both the data flow requirements of the organization and the organizational essence.

2.3 Layer Connectivity

Even though we have abstracted the above layers from the organization, the reality is that the layer components are intertwined through complex relationships. The CSFs are intimately related to the Business Processes just as the Data Flow is related to the Infrastructure. Picture the layers in this hierarchical structure:

Figure 2.1: Hierarchical System Diagram

Each layer is driven by the one below it, and directly influences the layer above. This is represented by dropping hooks from components of a layer into critical points in the adjacent layers. Such a methodology allows us to quantitatively model the effects of one layer on each of the others.

Analysis of the abstraction layers within the organization can be a long and convoluted process. Individuals within an organization tend to think of it as a complex network of goals, processes, and technology [9]. The layers of the organization are usually not apparent to them. The analyst's task then becomes one of establishing a framework for the modeling process. In Figure 2.1, we show a simplified representation of the system. In reality, the model is much more complex. The individual layers are driven by information supply and demand. Incorporated into each layer are essential parameters that not only drive the layer itself but also create a relationship with the other layers and the outside world. They are:

Control Variables within the layer

Environmental Variables external to the model

Layer Requirements on the other portions of the system

The control variables within the layer are modeling parameters that supply the essential intrinsic characteristics of the components. These variables can include everything from processing speed or communication channel bandwidth to the decision capabilities and rates of a high level manager. Environmental variables represent values used by the model that are beyond the control of the organization itself. Examples range from interest rates to the weather. The third parameter, layer requirements, is more complicated. Layer requirements are those aspects of the particular layer that allow it to optimally perform its duties. Layer requirements are the "needs" of the layer.
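The three parameter groups just described can be made concrete with a small sketch. The Layer class, the bandwidth and interference figures, and the throughput formula below are all illustrative assumptions rather than elements of the actual model.

```python
# Sketch of the three parameter groups that drive a layer (Section 2.3).
# Names and formulas are illustrative assumptions, not the paper's model.

class Layer:
    def __init__(self, name, control, environment):
        self.name = name
        self.control = control          # intrinsic, tunable characteristics
        self.environment = environment  # external values beyond the org's control
        self.requirements = {}          # "needs" placed on adjacent layers

# Infrastructure layer: channel bandwidth is a control variable,
# atmospheric interference an environmental one.
infra = Layer("Infrastructure",
              control={"bandwidth_kbps": 2.4},
              environment={"interference": 0.1})

# Assumed relationship: effective throughput degrades with interference.
throughput = infra.control["bandwidth_kbps"] * (1 - infra.environment["interference"])

# The Data Flow layer states its needs as a layer requirement, then
# checks them against what the infrastructure can actually supply.
data_flow = Layer("DataFlow", control={}, environment={})
data_flow.requirements["throughput_kbps"] = 2.0
print(throughput >= data_flow.requirements["throughput_kbps"])   # True
```

The "layer requirements" dictionary is the interface through which one layer's hooks would query another, matching the role described for it in the next paragraph of the text.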

Thus, we develop a more complex concept of the individual layer that includes the three parameters that drive it:

Figure 2.2: Detailed Layer Diagram

The "layer requirements" parameter within each abstraction layer is an interface that conveys the needs of the individual layer to the rest of the system model. Therefore, it is this portion of each layer that drops hooks into the functional area of adjacent layers to retrieve values for the necessary variables.

2.4 Rapid Simulation

Simulation of the entire organization is the primary goal of this methodology. Once the model has been constructed, an adjustment can be made to any component of any layer, and its effect on all the other layers can be simulated. These adjustments can be made using the control variables. If there is a need to simulate external influences on the system, the environmental variables will handle that task.

This empowers the organization to predict future needs or results based on policy or technology changes in the present. It also gives the organization the capability to bring their current goals, processes, and systems into a coherent structure enabling them to strategically plan for the future.

Chapter 3: Theory

3.1 Business Strategy Analysis

Business strategy consists of a variety of components. These components can be categorized into two groups. They are Critical Success Factors and Business Processes. These two groups work hand-in-hand to achieve the goals of the organization. In this section, we will discuss the analysis and quantification of these entities and develop a theoretical basis for modeling them.

3.1.1 Critical Success Factors

The CSFs of an organization can be determined through interviewing its key decision makers. Normally, these executives will state corporate goals or philosophies as CSFs. Once these goals are quantified, and the most important ones are filtered out to the top, we are in a position to begin modeling them as Critical Success Factors. This model layer will then be able to drop hooks into the critical points in the Business Process that drive it.

3.1.2 Business Process Support of CSFs

Once the CSFs are identified, organizations develop Business Processes to accomplish them. These processes are usually a logical sequence of steps that involve a variety of tasks. Each step is modeled to represent the specific way in which the tasks are accomplished. The successful completion of these individual tasks depends intimately on the quality of the data that is supplied. Therefore, the "data quality parameters" described in Subsection 2.2.2 support the tasks and are modeled within this layer. This model layer is now able to drop hooks into the Data Flow and Critical Success Factors in order to assess the successful completion of each task.

3.2 Information System Analysis

The information system also comprises two main layers: Data Flow and Infrastructure. These layers represent inherently quantitative aspects of the organization that will drive the business strategy layers. In this section, we will show how the technology and data flow can be modeled, and we will develop an understanding of how the limitations of this underlying infrastructure affect the organization as a whole.

3.2.1 Data Flow Through Organization

The availability of data throughout the organization will determine its ability to perform successfully. Therefore, this layer models the various aspects of data flow in the system. Data paths are clearly defined and system phenomena such as loss and feedback are analyzed. The major components of the data flow structure include: sources (both stable and volatile), communication networks, data received, data lost, data used, and discarded data.

The sources are the beginning of the data flow and are of two types: stable and volatile. Stable sources produce a constant or predictable stream of information regardless of time or situation. Volatile sources generate information that varies depending on circumstances. The communications networks are the means for conveying the data to the users or processors. These networks usually have a predetermined capacity, and can become "clogged" if unusual demands are placed on them. Data users pull in data from the networks. The users attempt to pull in all the data that is relevant to their missions and use them to conduct business processes. If the data input capabilities of the users are not of sufficient capacity, information is oftentimes lost. This loss is modeled in the data loss structure.

Finally, the user then takes the data and begins to incorporate it. As this process occurs, some of the incoming information is used, and the rest is discarded. The goal of the system is to maximize the data used (high quality data), and minimize the discarded data. Data that is discarded by the system (low quality data) unnecessarily taxes the entire flow path. If this component is minimized, the efficiency of the system is increased, and the data loss component is also minimized. The components in the data flow layer are direct products of the structure of the underlying Infrastructure, and are modeled by dropping hooks into that layer as well as the Business Process layer.
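A toy time-step simulation of these data flow components might look as follows. All rates and capacities are invented, and the fixed relevance fraction is a deliberate simplification of the data quality parameters; the sketch only demonstrates how offered, carried, ingested, used, discarded, and lost data relate to one another.

```python
# Toy simulation of the data flow just described: stable and volatile
# sources, a capacity-limited network, and the split of received data into
# used, discarded, and lost. All numbers are invented for illustration.

import random
random.seed(1)   # fixed seed so runs are repeatable

NETWORK_CAPACITY = 100   # messages per step the network can carry
USER_CAPACITY = 80       # messages per step the user can ingest
RELEVANCE = 0.7          # fraction of ingested data the user actually uses

stable_rate = 60         # constant output of the stable sources
totals = {"received": 0.0, "lost": 0.0, "used": 0.0, "discarded": 0.0}

for step in range(10):
    volatile = random.randint(0, 80)           # situation-dependent source
    offered = stable_rate + volatile
    carried = min(offered, NETWORK_CAPACITY)   # network "clogs" above capacity
    ingested = min(carried, USER_CAPACITY)     # user input capacity limit
    totals["lost"] += offered - ingested       # dropped by network or user
    totals["received"] += ingested
    totals["used"] += ingested * RELEVANCE
    totals["discarded"] += ingested * (1 - RELEVANCE)

print(totals)
```

Raising RELEVANCE (i.e., supplying less low-quality data) shrinks the discarded component without any hardware change, which is the efficiency argument made in the paragraph above.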

3.2.2 Underlying Infrastructure

Modeling this layer is perhaps the least difficult task in this methodology. The infrastructure consists of several different types of hardware and software. The modeling is accomplished through analysis of the hardware and of the data routing performed by the system software. The communication, processing, and storage of the data are relatively simple to model and provide the true driving force for all of the other layers. This layer also incorporates feedback from the Data Flow layer.

3.3 Modeling Methodology

This section will give a general overview of modeling an organization using our approach. It will explore the various aspects of the analysis process and provide guidelines for conducting modeling and simulation of the system.

3.3.1 System Description

The first step in developing a coherent model is to develop a solid understanding of the organization and all of its layers. Several of the key aspects of each layer may already be strongly defined, but the analysis will often involve research and interviewing in order to get the complete picture. The duty of the analyst is to create a quantized representation of each layer. This can be achieved through a combination of causal loop diagrams and flow patterns. This representation should incorporate the components of the layer in a way that demonstrates the stand-alone integrity of that layer. Hooks are then added to the layers to drive the components with the performance of their underlying layers. This quantized diagram can then be entered into a computerized modeling tool that will simulate the intricacies of the system.

3.3.2 Computerized Modeling

In order to derive any kind of tangible benefit from our analysis, we must model the system representations on a computer. This allows us to run simulations at a rapid pace and effectively predict the future with a degree of certainty. In our example, we use a Macintosh Quadra 700 computer running the decision support tool ithink, which enables us to quantitatively model both flow patterns and causal loop diagrams in order to simulate the system.

Layer-by-layer, the system diagrams are now entered into the modeling tool. The layers will each consist of the components given in Figure 2.2. The layer functions will consist of the actual components of the layer. The variables will drive the layer, while the layer requirements provide connectivity for the underlying layers. Debugging and documentation are also inherent aspects of this step.

3.3.3 Analysis of System Response to Inputs

Once a quantitative computerized model is developed, the system must be tested for performance under a variety of situations. These situations are normally represented by defining and modifying the environmental variables. In this manner, the model can be used to predict the performance of the organization under a variety of external scenarios. The scenarios may include such things as: increased activity by the enemy in a military operation, declining interest rates in a financial organization, or changing weather conditions for NASA. The simulations would provide guidelines for making adjustments to the actual system to optimize performance under these varying external circumstances.

Another type of analysis could be done by adjusting the control variables. This method would illustrate the impact of changing the intrinsic traits of the model's components. In this manner, the model can be used to predict internal changes within an organization that will improve performance. The control variables may include adjustments like: increased bandwidth on a communication trunk in the infrastructure layer, addition of personnel to conduct tasks in the Business Process layer, or a change in relative importance of a CSF.

Analysis of the system under these various scenarios and adjustments is the critical step in this modeling approach. First, the model must hold up to the changes in the variables in any possible configuration. Once the viability of the model is established, the simulations take on a different role. Now, we will use the variable adjustment methods to represent actual scenarios. After many iterations adjusting the control and environmental variables in this manner, an optimum structure for the organization will begin to emerge.
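Such a sweep over control and environmental variables can be sketched as follows. The performance function here is a stand-in for the full layered model: it simply assumes data demand grows with enemy activity and data supply with channel bandwidth, both invented relationships for illustration.

```python
# Sketch of scenario analysis: sweep a control variable (bandwidth) across
# several environmental scenarios (enemy activity) and record a simple
# performance metric. The response function is an invented stand-in.

def mission_performance(bandwidth_kbps, enemy_activity):
    """Toy response: performance rises with bandwidth, falls with activity."""
    demand = 1.0 + 2.0 * enemy_activity   # data demand grows with activity
    supply = bandwidth_kbps / 2.4         # normalized to a baseline channel
    return min(1.0, supply / demand)      # capped at fully satisfied demand

# Environmental scenarios (activity level) crossed with control settings.
scenarios = {"quiet": 0.1, "normal": 0.5, "surge": 1.0}
for bw in (2.4, 4.8, 9.6):
    row = {name: round(mission_performance(bw, act), 2)
           for name, act in scenarios.items()}
    print(bw, row)
```

Reading the output as a table shows the kind of result the simulations would yield: the baseline channel saturates under surge conditions, while the largest upgrade buys no additional performance in quiet scenarios, guiding where an investment actually pays off.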

3.4 Performance Intuitions

There is, for all practical purposes, an infinite number of configurations for the representation of an organization. At this point, the model of the system is so complex that it would be impossible for the human mind to grasp its full functionality in one sweep. Frankly, that is the reason for creating a computerized model. We can, however, develop an intuition about small portions of the model by observing their performance. Certain traits of the model will become apparent with repeated analysis. Sections of the model will begin to fall into patterns, which may include: feedback loops, accumulative storage, delays, and filtering. Intuitions about these patterns and their combinations can lead to a more thorough understanding of the model and may save time in optimizing organizational performance.
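One of these patterns, the delay, can be illustrated with the kind of stock-and-flow structure that ithink models are built from. The sketch below uses invented parameter values and simple Euler integration; it is a generic first-order delay, not a fragment of the OTHT model.

```python
# A first-order delay rendered as a small stock-and-flow sketch.
# Parameter values are invented for illustration.

DT = 0.25            # simulation time step
DELAY_TIME = 4.0     # average time material spends in the stock

stock = 0.0
inflow = 10.0        # constant inflow rate
for _ in range(int(40 / DT)):        # simulate 40 time units
    outflow = stock / DELAY_TIME     # first-order delay: outflow tracks the stock
    stock += DT * (inflow - outflow) # Euler integration of the stock

# The stock settles toward inflow * DELAY_TIME = 40 at equilibrium.
print(round(stock, 1))   # 40.0
```

Recognizing such a structure inside a larger model immediately tells the analyst both its equilibrium level and how quickly it responds to changes, which is the kind of intuition this section advocates.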

Chapter 4: Application of Theory-Navy OTHT System

4.1 Business Strategy Analysis

The business strategy of a Naval Battle Group is very different from that of a traditional corporation. However, we can apply the methodology described in the previous chapters to this system as easily as any other type of organization. As a matter of fact, the analysis of a military organization can often be much easier than a corporation. This is due to the military's highly structured organization and thoroughly documented methodologies. In this section, we will analyze the CSFs at a superficial level. Due to the scope of this project, the real meat of the analysis is contained in Section 4.1.2, the Business Process analysis, and Section 4.2.1, the Data Flow analysis.

4.1.1 Critical Success Factors

The CSFs of the Battle Group are clearly defined by the Navy. In order to be considered successful, the Group and its OTHT System must accomplish the following:

Destroy enemy threats or targets

Protect allied forces

Avoid neutral forces and resources

These are straightforward CSFs and do not require much explanation. If these three tasks are not performed successfully, the Group's mission is in serious jeopardy.

The layer requirements for this CSF layer are quite simple as well. In order to destroy the enemy, the commander requires the following from the underlying layer:

Location and identification of the enemy

Sufficient force and ability to engage enemy

Confirmation of the enemy kill and battle damage assessment

To protect allied forces, the commander requires the following:

Location and identification of the enemy

Location and identification of friendly forces

Sufficient force and ability to engage enemy threats to friendly forces

Confirmation of enemy kill and battle damage assessment

To avoid neutral forces and resources, the commander requires:

Location and identification of neutral forces

As is readily evident, these layer requirements can be boiled down to several key components. These components will be the focal point for the discussion of the business process layer that follows. They will drop hooks into the business process layer in order to drive the CSF layer.

The other driving factors for this layer are its control and environmental variables. These variables can include a variety of internal and external values that can be tweaked to fine-tune the CSF layer model. These variables are explicitly detailed in the model itself and are, therefore, not explained separately here.

4.1.2 Business Process Support of CSFs

The business processes required to achieve the CSFs are also explicitly described by the Navy [10]. The processes are:

Planning

Surveillance

Search and Localization

Correlation

Tactical Assessment

Assignment to Warfare Commander

Engagement Planning and Execution

Tactical Feedback

These business processes, when successfully accomplished, enable the organization to optimally achieve the CSFs. We must design this layer while keeping the layer requirements from the CSF layer in mind.

The layer requirements for this layer are slightly more complicated than those of the CSFs. If each of the processes is taken separately, we see that each of these is influenced directly by the quality of the data flowing in the underlying layer. Therefore, the data qualities described earlier are precisely the components of the layer requirements.

We will attack the Business Process analysis by first describing how each of the processes is carried out. We will then progress to a detailed analysis of the global Information System that empowers these processes. Figure 4.1 gives a graphical representation of the eight processes that comprise this cycle.

Figure 4.1: Business Process Cycle

The planning phase is a high level process. In this phase, various strategies are set to facilitate the conduct of the entire mission. This section of the cycle is constantly reevaluated as the commanders receive tactical feedback from the forces.

The surveillance phase is a direct result of planning. In this phase, forces are tasked to monitor an area of interest for enemy, neutral, or friendly units. Based upon surveillance requirements, primary and backup sensors are deployed within the area. This allocation of forces determines not only the coverage within the area but also the accuracy and reliability of the data gathered. Figure 4.2 illustrates two possible deployment scenarios. The gray circles show the range of the sensors indicated by the black dots.

Figure 4.2: Sensor Deployment Scenarios

Both scenarios in Figure 4.2 provide coverage of the area of interest (AOI) in very different ways. They are simplified to illustrate a point. The data derived from the first scenario will produce isolated detections of units in the AOI. This will result in a reduced quantity of data transfer. The final effect is an improvement in communication bandwidth usage, but degraded reliability of the detection. Scenario 2 produces a huge number of detections from a variety of sensors. This is due to many of the units being detected by more than one sensor. This scenario, of course, increases reliability while tremendously taxing the communications bandwidths.
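The coverage-versus-bandwidth trade-off between the two scenarios can be made concrete with a back-of-envelope calculation. The contact counts, per-sensor detection probability, and the assumption that sensors detect independently are all invented for illustration.

```python
# Rough comparison of the two deployment scenarios: overlapping coverage
# multiplies detection reports (and bandwidth use) but raises the chance
# that at least one sensor sees each contact. All figures are invented.

def scenario_load(contacts, sensors_per_contact, p_detect):
    """Expected report traffic and per-contact detection probability,
    assuming sensors detect independently of one another."""
    expected_reports = contacts * sensors_per_contact * p_detect
    p_any = 1 - (1 - p_detect) ** sensors_per_contact
    return expected_reports, p_any

# Scenario 1: sparse, non-overlapping sensors; Scenario 2: heavy overlap.
msgs1, rel1 = scenario_load(contacts=20, sensors_per_contact=1, p_detect=0.6)
msgs2, rel2 = scenario_load(contacts=20, sensors_per_contact=3, p_detect=0.6)
print(msgs1, round(rel1, 3))   # 12.0 0.6
print(msgs2, round(rel2, 3))   # 36.0 0.936
```

Under these assumed numbers, tripling the overlap triples the report traffic on the communications channels while raising detection reliability from 60% to roughly 94%, which is precisely the tension between the two scenarios described above.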

The surveillance process results in contact detection. When a unit, whether friendly or hostile, enters into the AOI, it is detected by the sensors. At this point, it is simply a blip on the screen. The Battle Group has no way of determining what it is. Figure 4.3a gives a representation of a detection. At this point, the contact could be anything from background noise to an attack aircraft [1].

Once a contact detection occurs, other units and sensors are cued to verify the report and locate the contact. This search and localization process provides the position of the contact and often generates other relevant information. This is shown in Figure 4.3b.

The localized contact is then given to the correlator to be correlated into a track with all other available information. This process usually generates a clearer picture of the target, and the track provides information about a contact's movement and intent. With this information, shown in Figure 4.3c, the Battle Group moves into the engagement process.

Figure 4.3: a. Surveillance; b. Search & Localization; c. Correlation

The engagement process involves several steps as well. The first, tactical assessment, consists of deciding whether the contact is friendly or hostile. The correlation process should provide the officers in command with sufficient information to make the decision. However, if the information is not clear enough to make a good decision, the contact is correlated further. If the contact is friendly, it is avoided; if it is hostile, it is assigned to a warfare commander to be destroyed.

A warfare commander is selected because he has sufficient resources under his control to neutralize the particular target. The commander then plans and executes the engagement in an attempt to destroy the target. After the engagement is complete, tactical feedback is provided to the commanders to assist in future planning.

Now that we have developed an idea about the nature of the Business Processes, we can begin to analyze them in further detail. For the sake of discussion, we will somewhat simplify the situation. Both allied and neutral forces will be designated Friendly, and enemies will be designated as Hostile. Figure 4.4 sets up the basis for our analysis.

Figure 4.4: Basic Template for Business Process Analysis

There are only two types of units that the Battle Group can happen upon, friendly and hostile. These units undergo the Business Processes of the Group. At the conclusion of this tactical cycle, four distinct situations can result:

The target was Friendly, and it was successfully avoided.

The target was Friendly, and it was accidentally destroyed.

The target was Hostile, and it was successfully destroyed.

The target was Hostile, and it was accidentally avoided.

To arrive at these scenarios, we must carefully plot the course of events that lead to each one. These events will then provide critical points where we will be able to tie them in with the underlying Data Flow layer.

The first process that the incoming units undergo is surveillance. The surveillance process simply produces initial contact detections. In this case, the contact is either detected, or it is not. Figure 4.5 demonstrates the outcomes.

Figure 4.5: Business Process Analysis: Surveillance

This diagram simply illustrates that if a friendly or hostile target is not detected, it is automatically avoided (FD̄ and HD̄ indicate Friendly and Hostile Not Detected, respectively). If, on the other hand, it is detected, the contact is passed on to the other business processes (FD and HD indicate Friendly and Hostile Detected, respectively).

The next process that the contacts encounter is search and localization. This process generates a more reliable picture of the target. It results in either a localization or disposal.

Figure 4.6: Business Process Analysis: Localization

Figure 4.6 illustrates that if a friendly or hostile target is not localized, it is automatically avoided (FL̄ and HL̄ indicate Friendly and Hostile Not Localized, respectively). If, on the other hand, it is localized, the contact is passed on to more business processes (FL and HL indicate Friendly and Hostile Localized, respectively).

The next two processes in the chain are correlation and tactical assessment. Correlation creates a track for the contact with all available information. This track is then used to determine a variety of traits: identity, intent, position, direction, and speed of the contact. The key element of the tactical assessment process is determining the intent of the contact. Therefore, there are four possible outcomes. Together, the correlation and tactical assessment processes can:

Successfully determine a friendly to be a friendly

Accidentally determine a friendly to be a hostile

Successfully determine a hostile to be a hostile

Accidentally determine a hostile to be a friendly

This process is illustrated in Figure 4.7.

Figure 4.7: Business Process Analysis: Correlation and Tactical Assessment

Figure 4.7 shows that if a friendly or hostile target is assessed as friendly, it is automatically avoided (FF and HF indicate Friendly and Hostile Assessed Friendly, respectively). If, on the other hand, it is assessed hostile, the contact is passed on to more business processes (FH and HH indicate Friendly and Hostile Assessed Hostile, respectively).

The final processes dealing with the contacts are assignment to a warfare commander and engagement. These two processes handle destruction of targets that have been determined to be hostile. When an attempt is made to destroy a hostile target, it will take evasive action. Therefore, not all hostile targets will be successfully destroyed. However, friendly contacts that are determined to be hostile will not be expecting an attack from their own forces. Therefore, they will not enjoy the luxury of defensive action and will be destroyed.

Figure 4.8: Business Process Analysis: Assignment, Engagement, Feedback

Thus, we arrive at the complete model of the Business Process layer. The next task is to determine the relationships of this layer to the Data Flow layer directly below.
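The branching analysis of Figures 4.5 through 4.8 can be sketched as a simple simulation. The probabilities below are purely hypothetical placeholders (the real values would be derived from the adjacent layers), but the control flow mirrors the four final states described above:

```python
import random

# Hypothetical probabilities -- illustrative only; actual values would
# come from the Infrastructure and Data Flow layers.
P_DETECT, P_LOCALIZE = 0.9, 0.8
P_CORRECT_ASSESS = 0.95     # probability that intent is assessed correctly
P_EVADE = 0.3               # a true hostile may evade the engagement

def tactical_cycle(is_hostile, rng=random.random):
    """Run one contact through the Business Process chain and return
    its final state: 'avoided' or 'destroyed'."""
    if rng() > P_DETECT:          # surveillance: not detected -> avoided
        return "avoided"
    if rng() > P_LOCALIZE:        # localization failed -> avoided
        return "avoided"
    # correlation + tactical assessment: decide apparent intent
    assessed_hostile = is_hostile if rng() < P_CORRECT_ASSESS else not is_hostile
    if not assessed_hostile:      # assessed friendly -> avoided
        return "avoided"
    # assignment + engagement: a real hostile may take evasive action,
    # while a misidentified friendly cannot and is destroyed
    if is_hostile and rng() < P_EVADE:
        return "avoided"
    return "destroyed"
```

Running many contacts through this function and tallying the four outcomes yields the distribution of end states that, as noted above, constitute the layer requirements of the CSF layer.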

4.2 Information System Analysis

Within this section we will begin a more rigorous analysis of the Navy system from a technology perspective. The critical aspects of Data Flow and its interrelationships with the Business Processes will be explored in detail. We will also attempt to provide a thorough description of the underlying Infrastructure. However, this discussion will be limited due to the sophistication of the system hardware and its classified status.

4.2.1 Data Flow Through Naval Organization

The data flow through the Navy's worldwide OTHT System can be brought into perspective by focusing on a few key issues. Within a specific battle group, we must decipher exactly how information must be transmitted in order for the Business Processes to occur optimally. In our model, we represent this flow as follows:

Figure 4.9: High Level View of Data Flow

This representation simply shows the sequence of data flow in a Battle Group. The data flows from worldwide sources to the FOTC unit where it is correlated. After correlation, it is presented to all units within the group. The Officer in Tactical Command (OTC) observes the FOTC broadcasts and decides when there's a possible threat. Using close range sensors, the OTC gets a more reliable picture of the pending threat. After completing this assessment, he assigns the target to a Warfare Commander who is capable of engaging it. The commander uses his local resources, along with information from the FOTC and the OTC, to engage and destroy the enemy target. Once the mission is complete, the Warfare Commander reports back to the OTC with confirmation and damage assessment. From Figure 4.9, the components of this layer are:

Communications Networks

Worldwide Data Sources

Field OTH Track Coordinator (FOTC)

Local Data Sources

Officer in Tactical Command (OTC)

Warfare Commander

Of these components, the most interesting are the FOTC and the Communications Networks it uses. The FOTC or CT unit is highly automated and has very sophisticated on-board systems. It is routinely subjected to a tremendous variety of conditions and inputs, and it must operate optimally in each case. This is the perfect resource to be modeled within our framework. By applying our methodology to the FOTC, we will provide a pathway for enhancements to the system that will result in true optimal performance. The requirements of this layer, particularly the FOTC, are as follows:

Network Bandwidth

Data Processing Capability

These requirements will drop hooks into the adjacent Infrastructure and Business Process layers to derive essential parameters that will drive the Data Flow layer.

The following paragraphs provide detailed analysis of the data flow through the FOTC hardware and communications as they relate to the Battle Group [10]. We will model other members of the Group only inasmuch as they interact with the FOTC. Starting with a description of the global system, we will gradually home in on the FOTC unit. Figure 4.10 is a general diagram of the entire system.

Figure 4.10: General System Diagram

The units afloat consist of the Correlation Node Afloat, Commanders, Battle Group, and Submarines. In Figure 4.11, we take a closer look at them and their communication links.

Figure 4.11: Data Flow to Nodes Afloat

This diagram shows the flow of data through the nodes afloat. The FOTC receives data from Sensors or Correlation Nodes Ashore and Dedicated or Organic Sensors. It correlates this data and provides a broadcast to the units within its Battle Group as well as to higher commands.

FOTC units are found in a variety of configurations of hardware and software. A common version is the AN/USQ-119(V)16. Upon analysis of this version, we found the system to be in the configuration shown in Figure 4.12.

Figure 4.12: Configuration of AN/USQ-119(V)16 in FOTC mode

In this figure, we note the major communication channels which feed information to the FOTC unit. This information is then correlated within the master TDP with supplementary input from the NTCS-A Workstations [11]. Once the data has been correlated, it is broadcast to the Battle Group via the OTCIXS and HIT networks. Our analysis of the Master TDP itself shows how the incoming information is actually correlated into track files. This correlation process is graphically represented in Figure 4.13.

Figure 4.13: Data Processing within the TDP

This diagram provides a quantitative representation of the exact method by which the data is correlated. The TDP receives inputs at several points during the correlation process. Some inputs are initially filtered out by the system according to relevancy and timeliness. Then, the process routes the data to the appropriate track file. When the Platform Track File is being built, the data is brought in from the inputs to a correlation queue. If this data is capable of being automatically correlated, it is entered directly into the Platform Track File. If it is not, then it enters an Ambiguity Queue, where it awaits manual correlation.

From these figures, we can develop a reliable computerized model of the entire process. Once the TDP is modeled, it can be attached to the models of the outside world using the communication network models. In this manner, we slowly begin to capture a functional quantitative model of the system in ithink.
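The routing logic of Figure 4.13 might be sketched as follows; the report fields and predicates are hypothetical stand-ins for the TDP's actual relevancy filter and auto-correlation test:

```python
from collections import deque

def process_inputs(reports, can_auto_correlate, is_relevant):
    """Sketch of TDP data routing: filter incoming reports, enter what
    the system can auto-correlate into the Platform Track File, and
    queue the rest for manual correlation."""
    platform_track_file = []
    ambiguity_queue = deque()
    for report in reports:
        if not is_relevant(report):       # relevancy/timeliness filter
            continue
        if can_auto_correlate(report):
            platform_track_file.append(report)   # direct entry
        else:
            ambiguity_queue.append(report)       # awaits manual correlation
    return platform_track_file, ambiguity_queue
```

The depth of the ambiguity queue over time is one natural measure of the manual-correlation workload placed on the FOTC Operator.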

The final component of this analysis focuses on the communication networks that provide data for the FOTC unit. This system is a complex global network of sensors, land links, satellite channels, and processing nodes [12]. The primary networks used by the FOTC unit include the OTCIXS and TADIXS-A. The two acronyms stand for Officer in Tactical Command Information Exchange Subsystem and Tactical Data Information Exchange Subsystem. The TADIXS-A network delivers processed non-organic sensor data from shore data managers to FOTC and OTHT platforms afloat. It is important to note that the TADIXS-A network is primarily a one way computer to computer data link from OTHT nodes ashore to ships at sea. This fact is important in our analysis in Section 4.4. The OTCIXS network is a two-way communication link among the units afloat and shore stations. The two systems are very similar in construction, and are represented in Figure 4.14.

Figure 4.14: OTCIXS and TADIXS Network Components

The satellite clusters cover the entire face of the globe, each with its distinct footprint on the earth. They, in concert with the ground stations, are capable of communicating information from any source around the globe. The individual clusters convey data between those units and ground stations within their footprint. The ground stations are also connected to each other and other shore stations via land links. The global connectivity of these individual systems is achieved through ground stations and satellite clusters around the world.

4.2.2 Underlying Infrastructure

As was stated previously, our discussion of the actual technology has been kept to a minimum in our analysis. This layer of our model will simply provide specific information to drive the Data Flow layer. This information includes parameters such as communication bandwidths, processing speeds, and buffering capabilities of the components in the overlying layer. Due to the nature of this system, much of this information is classified. In areas where the actual numbers were not available, we have used hypothetical values within the appropriate ranges to complete the model. Further investigation will lead to more accurate values in this portion of the model.

4.3 Modeling Methodology

In this section, we will discuss the modeling process for the US Navy's OTHT System. Our goal is to demonstrate the application of the quantitative modeling approach on a real world system. We have covered all four of the model's abstraction layers and have chosen to concentrate on the Business Process and Data Flow layers for detailed study. These two layers and their interrelationships best demonstrate the intimate ties between Business Strategies and Information Systems. This example will provide a solid basis for the development of a complete model of the entire system. Using the raw information in the previous sections, we will now proceed to develop a computerized model of these layers.

4.3.1 System Description

In order to develop a clear basis for our computerized model, we will use a top-down approach in our analysis. Beginning with the CSFs, we will traverse the layers one at a time, finally arriving at the integrated system. The text highlights the major aspects of the system description. For a complete discussion, please refer to Appendix A.

In this scenario, the CSFs are fairly straightforward. As shown in Figure 4.15, the success of the Battle Group is dependent upon destroying hostile targets and avoiding friendly forces. This is, of course, a gross generalization, but it serves the purpose of our model.

Figure 4.15: Simplified Diagram of CSF layer

In this document, we are not concerned with a detailed analysis of the CSFs. Therefore, we have omitted the analysis of the environmental and control variables. From a simplified perspective, these CSFs are a function of the four layer requirements in Figure 4.16.

Figure 4.16: Detailed Diagram of CSF layer

The Business Process layer includes a complex series of steps that were described in Section 4.1.2. The complete diagram is shown here as a point of reference.

Figure 4.17: Simplified Diagram of Business Process Layer

Figure 4.17 demonstrates a specific flow of two types of units through the business processes. The vertical bars represent the specific processes. The units eventually arrive at the final states which are shown on the right hand column in the diagram. It is important to note that these final states are precisely the layer requirements of the CSF layer.

Proceeding with the description of the Business Process layer, we analyze its requirements. We will determine the requirements of each process in turn. The processes are:

A. Surveillance

B. Localization

C. Correlation and Tactical Assessment

D. Assignment to Warfare Commander

E. Engagement and Tactical Feedback

Figure 4.18: Detailed Diagram of Business Process Layer

The processes are ongoing activities that generate information. In order to perform their functions, the processes must have a working capability to accomplish them. Therefore, each process is dependent on capabilities that will be derived from adjacent layers.

The generated information must then be communicated through the Battle Group using a communication channel. In order for transmission of information to successfully occur, there must be sufficient bandwidth available on the network. In this model, we have chosen to handle this transmission on one of two networks. The surveillance process uses the TADIXS-A network. This network produces a one way broadcast to the units afloat (Figure 4.14), and it is the most likely path to be used for global sensor coverage. All other communications are local. Therefore, they must use the OTCIXS network. This leads into our analysis of the Data Flow layer.

The Data Flow layer is the most complex portion of this analysis. Due to this fact, the presentation here is an overview of the intricacies of the layer.

Figure 4.19: Simplified Diagram of Data Flow Layer

Figure 4.19 reiterates the high level data flow diagram with the addition of specific communication networks. In our model, we will represent each component of the Data Flow layer, emphasizing only the FOTC unit and the communications networks.

We begin with the analysis of the communication networks. A communication network can be simply modeled as a finite quantity of bandwidth. As messages are routed through the network, the bandwidth is depleted. When the messages cease, the bandwidth is again replenished. In our system, there are two major networks: TADIXS-A and OTCIXS. These communication links consist of several 2400 baud channels. We will model each network as two receptacles connected by a "flow pipe."[13] One receptacle will represent bandwidth available, and the other will be bandwidth used. When bandwidth is available, communication becomes possible.

Figure 4.20: Network Representation: Network traffic increasing

Figure 4.20 represents a scenario where plenty of bandwidth is available and more data is coming onto the network; hence the flow out of the "bandwidth available" receptacle.

Figure 4.21: Network Representation: Bandwidth not available

Figure 4.21 represents the complete depletion of bandwidth. In this case, communication would not be possible on this particular network.

Figure 4.22: Network Representation: Network traffic decreasing

Figure 4.22 represents a scenario where most of the bandwidth is being used. However, the traffic is decreasing and more bandwidth is becoming available as shown by the flow out of the bandwidth used. This closed system of "stocks" and "flows" will enable us to represent a finite bandwidth and monitor its usage within the system.
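This closed two-stock system can be sketched numerically. The capacity figure below assumes a handful of 2400 baud channels purely for illustration; the clamping reflects the fact that bandwidth used can never exceed the finite total, nor fall below zero:

```python
def simulate_network(traffic, capacity=2400 * 4, dt=1.0):
    """Two-stock model of a network: 'available' and 'used' bandwidth
    form a closed system, and traffic moves bandwidth between them.
    traffic is the net demand per time step (positive = more usage,
    negative = traffic subsiding). capacity is hypothetical."""
    available, used = float(capacity), 0.0
    history = []
    for demand in traffic:
        flow = demand * dt
        flow = min(flow, available)   # cannot draw more than remains
        flow = max(flow, -used)       # cannot release more than is used
        available -= flow
        used += flow
        history.append(used)
    return history
```

Because the two stocks always sum to the capacity, the model automatically exhibits the saturation behavior of Figure 4.21: once `available` reaches zero, no further communication is possible until traffic decreases.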

The worldwide data sources are sensors external to the Battle Group. These sources monitor the globe and produce contacts. The contacts are then forwarded to the appropriate Battle Group through the TADIXS-A network.

The local data sources are sensors under the control of the Battle Group. These sources are driven by the tactical situation that the Group has encountered, and they produce localization data. The data is transmitted to the units over the OTCIXS network [14].

The FOTC unit is composed of highly complex hardware and software that performs automatic correlation of the incoming data. If automatic correlation is not possible, the data is correlated manually. The FOTC also produces a broadcast of the correlated tracks for the Battle Group on a periodic basis. An overview of the FOTC is given in Figure 4.23.

Figure 4.23: Simplified Diagram of FOTC Unit

As mentioned before, we have chosen to study the FOTC in detail. This critical link in the Battle Group's Data Flow layer is of great interest to us due to its dynamic nature. The complexity of this subsystem prevents us from documenting it in the main text. However, the detailed FOTC diagram is presented in Appendix A.

The roles of the Officer in Tactical Command and the Warfare Commander are not specifically related to the information technology of the Battle Group. Their role in the process has already been clearly defined. In our analysis, their impact on the system is minimal as evidenced by their limited use of the OTCIXS communication network.

The system description phase of the analysis process lays the groundwork for quantitative modeling. As the interrelationships among the layers become more evident, we begin developing an intuition about the organization's inner workings. However, the human mind is not capable of fully grasping the complexity of this vast model. Therefore, we enter it into a computerized tool and simulate the system to gain a better understanding.

4.3.2 Computerized Modeling

We now proceed with entering the system developed in Section 4.3.1 into the computerized modeling tool. In the interest of brevity and clarity, the entire ithink system model is not included in the main text of the document. The model is highly complex, and contains extensive internal documentation. Therefore, the development of the graphical model is presented in Appendix B.

A detailed look at these appendices will reveal the tremendous complexity of a minute portion of the system. It is left to the reader to imagine the incredibly difficult task of representing the entire system in this manner. However, once the modeling process is complete, its simulation capabilities are well worth the effort.

4.3.3 Analysis of System Response to Inputs

There are an infinite number of analyses that may now be undertaken with the aid of this computerized model. We can vary several inputs within the model and see how the system reacts to the changes. These variations could take on a number of forms:

Increase in communications bandwidths

Greater processing speed

Increasing buffer space in the subsystems

Bad weather or communications interference

Improved data compression algorithms

Among other things, we now have the capability to simulate battle conditions that the carrier group may encounter. In this document, we have chosen to analyze the effects of such an encounter on the system. This analysis will give us a valuable insight into the system's shortcomings in a wartime environment. We will then be in a position to make recommendations for enhancements to the system that will increase performance and optimally support the Critical Success Factors of the Battle Group.

In order to simulate battle conditions, we must look at all of the environmental variables that could be affected by such an encounter. Upon inspection of the model, we find only one such environmental variable. This variable establishes the number of actual hostile units present. To simulate a combat situation, we structure the "hostile units" variable as follows:

Figure 4.24: Number of Hostile Units Arrivals plotted against Time

The initial value of the variable is set to zero in order to create a steady state within the model. Once the steady state is established, the number of hostile units is rapidly increased to a maximum. This value is held for a period to represent the arrival of enemy reinforcements. Eventually, the enemy forces dwindle, and the "hostile unit" rate tapers off and returns to zero. Using this input stream, we can begin our analysis of the system.
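A minimal sketch of this input stream, with hypothetical breakpoints and peak value, is the trapezoidal function below:

```python
def hostile_unit_arrivals(t, t_rise=10, t_hold=30, t_fall=50, peak=20.0):
    """Hypothetical trapezoidal input: zero at steady state, a rapid
    rise to a maximum, a plateau representing the arrival of enemy
    reinforcements, then a taper back to zero. All breakpoints and
    the peak value are illustrative only."""
    if t < 0 or t >= t_fall:
        return 0.0
    if t < t_rise:                    # rapid increase from steady state
        return peak * t / t_rise
    if t < t_hold:                    # held at maximum (reinforcements)
        return peak
    return peak * (t_fall - t) / (t_fall - t_hold)   # taper off
```

Feeding this function into the model as the "hostile units" environmental variable produces the input stream plotted in Figure 4.24.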

4.4 Performance Description

It is the intent of this document to outline a general framework for the quantitative modeling approach. There are literally thousands of possible analyses that can be performed with it. Therefore, in our discussion here, we have avoided attempting to inspect every combination in minute detail. That type of analysis would lack a focus, and prove to be extremely confusing. Instead, we have chosen to provide an intuitive description of the system performance with a technical focus on communication networks--specifically, the TADIXS-A network. This analysis will clearly and concisely illustrate our methodology while providing the reader with a tangible foundation for further investigation.

As explained in Section 4.3, the simulations are conducted by manipulating the "Hostile Units" variable in the Business Process layer. We now examine the results of this simulation, assuming the system to be initially in steady state with no hostile units present.

Figure 4.25: System at Steady State at Initial Time (t=0)

At the initial time, the simulation has been given sufficient opportunity to reach steady state. In the next increment of time, the arrival of Hostile Units in the area abruptly goes to a maximum value. This, of course, results in increased activity on the TADIXS-A network. As more Global Sensors pick up the Hostile Units, the TADIXS-A activity increases. This situation increases the reliability of the reports at the cost of consuming bandwidth. The effects of this trade-off are important to note now, as they will become more and more significant as we continue our analysis of the networks.

Figure 4.26: Abrupt increase in Hostile Unit Arrivals

Figure 4.27: Response of TADIXS-A Usage to Increase in HU

As detections continue, Local Sensors are cued to localize the contacts. At this point, the Battle Group has no way of knowing whether the detections are hostile or friendly. The Local Sensors now begin picking up the units. This, in turn, generates increased activity on the OTCIXS network.

Figure 4.28: OTCIXS Usage after Localization

The localized contacts are constantly being correlated by the FOTC. Therefore, as before, a greater number of sensors will increase reliability. However, in this circumstance, not only is communication bandwidth reduced, but the FOTC begins to get bogged down with large amounts of redundant data. As is evident, the trade-off issues become significantly more important here.

As localized data is correlated and the correlated tracks broadcast, the OTCIXS Usage increases at an even greater rate. This situation, coupled with the increased load on the FOTC, threatens to become a major hindrance to contact processing.

Figure 4.29: OTCIXS Usage after Correlation

Meanwhile, Hostile Units continue to arrive, and the TADIXS-A activity continues to increase. We will address this aspect later in this section.

The correlated tracks are received by the OTC, and the Hostile Units conveyed to the Warfare Commander. After engaging the target, the Warfare Commander provides tactical feedback to the OTC. All of this communication is conducted along the OTCIXS network as well.

Figure 4.30: OTCIXS Usage after Assignment and Engagement

The slopes in Figures 4.28 - 4.30 have been exaggerated to illustrate a point. Each step of the Business Processes consumes a portion of the available bandwidth. Of course, as the number of Hostile Tracks begins to diminish, more bandwidth becomes available. We see, therefore, that the bandwidth usage with respect to time is a curve similar to Figure 4.31.

Figure 4.31: OTCIXS Usage over the Entire Time Period

If the maximum on this curve is less than the bandwidth capability of the OTCIXS network, communications function normally. If, however, the curve reaches the threshold of OTCIXS capability, data will be lost or perish. The model is constructed such that crossing over the threshold is not possible. As a side note, data loss within the actual components is accounted for within the models of the respective components and exerts its influence on the system there.

We now proceed with the detailed analysis of the TADIXS-A network and demonstrate its relationship to the success of the Battle Group. As before, we have the Hostile Units variable and the initial TADIXS-A response.

Figure 4.32: Hostile Unit Arrivals and TADIXS-A response at t = 0

We now consider the case where none of the hostile units are destroyed. They are allowed to build up steadily. In this case, the total number of hostiles (HU) at any given time is the integral of the Hostile Unit Arrival (HUA) rate.

Figure 4.33: Hostile Units with no Destructions

The resulting TADIXS-A bandwidth usage (TBU) is the number of Hostile Units scaled by some Global Sensors function, plus background transmissions (BT) such as the detection of Friendly Units.

Figure 4.34: TADIXS-A Usage with no Destruction of HUs

We now have the hypothetical TADIXS-A Bandwidth Usage. The highest point in this graph must, of course, be less than the TADIXS-A bandwidth capability.
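In symbols, HU(t) is the running integral of the arrival rate HUA(t), and TBU(t) = g(HU(t)) + BT, where g is the Global Sensors scaling function. A sketch with hypothetical g and BT values:

```python
def bandwidth_usage(hua, g=lambda hu: 50.0 * hu, bt=200.0, dt=1.0):
    """HU(t) is the running integral of the arrival rate HUA(t);
    TADIXS-A usage is TBU(t) = g(HU(t)) + BT. The scaling function g,
    background level bt, and time step dt are illustrative only."""
    hu, usage = 0.0, []
    for rate in hua:
        hu += rate * dt               # HU(t): accumulate arrivals
        usage.append(g(hu) + bt)      # TBU(t) = g(HU(t)) + BT
    return usage
```

With no destructions, HU never decreases, so TBU plateaus at its maximum after arrivals cease, exactly as shown in Figure 4.34; that maximum must remain below the TADIXS-A capability.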

Now, we add the periodic destruction of hostile targets to the picture. As hostile targets are destroyed at t = a and t = b, the number of Hostile Units decreases.

Figure 4.35: Hostile Units with Destructions

This has immediate repercussions on the TADIXS-A Bandwidth Usage. As the communications channels are freed up, the bandwidth usage follows the number of Hostile Units.

Figure 4.36: TADIXS-A Usage with Destruction of HUs

With this plot, we can see that the number of Hostile Units and therefore TADIXS-A usage remains high even after the Hostile Units have ceased to arrive. In order to understand the full repercussions of this, we will make a few final plots.

Figure 4.37: TADIXS-A Usage related to Hostile Unit Arrivals

The plots in Figure 4.37 show the TADIXS-A bandwidth usage long after Hostile Unit Arrivals have ceased. As Hostile Units flee or are destroyed, the usage of the network decreases. When the usage has returned to or below normal, the wartime scenario has dissipated. There may still be a few Hostile Units in the vicinity of the Battle Group, but the situation is no longer an emergency.

The plots also lead us to the derivation of some important benchmarks. Two key factors in the defensive effectiveness of the Battle Group are:

Tbc: Elapsed time from the start of the attack (t = 0) to situation clear (t = Tc)

Tfc: Elapsed time from the end of the attack (t = Tf) to situation clear (t = Tc)

These benchmarks can now be minimized under a variety of scenarios by manipulation of internal variables. Once established, variations in these benchmarks can indicate a variety of internal problems from weapons damage sustained during battle to the need for upgrading of defensive capabilities. This analysis provides an intuitive grasp of the power of our modeling methodology.
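Given a sampled usage curve, these benchmarks might be computed as follows. This sketch assumes usage is sampled at fixed intervals and takes the "clear" point to be the first time usage returns to or below its normal level after the attack begins:

```python
def clear_times(usage, normal_level, attack_end_index, dt=1.0):
    """Compute the two defensive-effectiveness benchmarks:
    Tbc -- elapsed time from the start of the attack (t = 0) to the
           clear point (t = Tc);
    Tfc -- elapsed time from the end of arrivals (t = Tf) to Tc.
    usage is a list of bandwidth samples starting at t = 0."""
    t_clear = None
    for i, u in enumerate(usage):
        if i > 0 and u <= normal_level:   # skip the initial steady state
            t_clear = i * dt
            break
    if t_clear is None:
        return None, None                 # situation never cleared
    t_bc = t_clear                        # measured from t = 0
    t_fc = t_clear - attack_end_index * dt
    return t_bc, t_fc
```

Rerunning the simulation under different internal settings and comparing the resulting Tbc and Tfc values is one concrete way to carry out the minimization described above.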

This section has demonstrated the analysis potential that our methodology offers. The type of analysis conducted here can be used on virtually any portion of the system model to ascertain details about infrastructure, data flow, processes, or even CSFs. Manipulation of inputs and internal variables in this manner can lead to a more complete understanding of the actual system's dynamics.

Chapter 5: Conclusion

5.1 Information Technology's Role in Business

Organizations today require information to survive. Whether it is information generated within the firm or by external sources, these organizations must be capable of acquiring, processing, and using it to their advantage. Currently, there are many methodologies for enhancing or implementing information technology to accomplish this goal. These methodologies focus on a variety of issues ranging from optimal processing and communication capabilities to business process reengineering. However, in many cases, they fail to address the entire organization as an integrated unit. In addition, they provide no quantitative measures of the impact that the new information technology will have on the total organization's success. In this research, we have addressed both these deficiencies and have outlined a methodology that provides a basis for the solution to this difficult problem.

5.2 Lessons Learned

Several issues surfaced during the development of this framework and the case study of the Naval System. Among them were:

Complexity

Definition of scope

Availability of information

Division of efforts (distributed analysis)

An organization can be tremendously complex. Attempting to capture the "entire" system is practically an impossible task. Therefore, in order to make this modeling process more manageable, we must extract the key components that drive the system. The initial analysis that this entails can be extremely time consuming. But, executed properly, this extraction will lead to an analysis that mimics the actual system quite accurately. In performing an analysis, it is often useful to define the scope of the project. This generates a set of guidelines that will direct our efforts and minimize wasted time. It also provides a mental framework within which we can function effectively.

A major issue in any type of analysis is the availability of information. As we conduct our investigation to develop a clear idea of the intricacies of the organization, we are faced with numerous gaps in data. Sometimes, these gaps can be reconciled by consulting various organizational documents, but more often, it is necessary to conduct interviews in order to get a clear and current picture.

Finally, the daunting task of organizational analysis cannot be accomplished alone. Distributing the effort and minimizing duplication are necessary to reach milestones in a timely manner. To distribute the effort effectively, we must construct a plan that assigns responsibilities and time-lines defining the role of every member of the team. Just as the actual analysis requires this distribution of effort, so does the development of the modeling methodology itself. The following section provides a general outline of the future direction of this methodology, which Appendix A details. This plan will serve as the basis for further work on the rigorous development of the modeling methodology.

5.3 Future Direction

In this paper, we have outlined a framework for the development of a rigorous methodology that will enable organizational analysts to model an organization quantitatively. This tool will allow an analyst to assess, with reasonable accuracy, the impact of information technology on organizational performance. We have also conducted a case study of a system to demonstrate how the methodology can provide insight into the true nature of an organization's functionality. This section discusses the further work required to turn the framework into a useful and potentially commercial methodology.

Our work, in this document, consists of analyzing and modeling two major components of an organization: Business and Information Technology.

Figure 5.1: Current: Business and IT Components

We have not looked at another important portion of the organization. This part is best described as the Operational components. These components consist of two layers:

Operations that drive the Business Processes

Resources to support Operations

These additions introduce further complexity, but given the object-oriented mind-frame of the model, they should be relatively simple to incorporate. The new model would resemble the original system diagram with one change.

Figure 5.2: Future: Business, IT, and Operational Components

The addition of these Operational components will enable us to expand the model to encapsulate the human factors in the organization. These operations can then be linked to the Business Processes to provide connectivity.
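A minimal sketch of this object-oriented framing follows. All class, attribute, and instance names are illustrative assumptions of ours, not part of the actual model: each organizational layer becomes a component class, and links wire Operations upward into the Business Processes they drive and Resources into the Operations they support.

```python
# Illustrative object-oriented framing: each layer of the organization
# is a component class, connected across layers through explicit links.
# All names here are hypothetical.

class Component:
    def __init__(self, name):
        self.name = name
        self.links = []          # connections to components in other layers

    def link(self, other):
        self.links.append(other)

class BusinessProcess(Component): pass
class Operation(Component): pass
class Resource(Component): pass

# Wiring the two new Operational layers into an existing Business Process:
tracking = BusinessProcess("Maintain tactical picture")
plotting = Operation("Plot contact reports")
operator = Resource("FOTC Operator")

plotting.link(tracking)   # an operation drives a business process
operator.link(plotting)   # a resource supports an operation
```

Because each new layer only adds another component class and a set of links, extending the model in this fashion does not disturb the components already in place.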

A final enhancement to the model involves financial analysis and modeling of the organization. Although a critical consideration in any organization, this portion is beyond the scope of this document and will only be discussed briefly.

When an organization seeks to implement change, be it in business, operations, or technology, the main concern is often the financial implication of the new path. In many cases, it is impossible to quantify the benefits of change in terms of the bottom line. For example, when a new IT initiative is proposed, high-level management can be hesitant to make the investment: the change is seen as a tremendous capital outlay with no immediately quantifiable or tangible financial benefits [4]. After all, how does the ability of an employee to point and click on a Graphical User Interface (GUI) translate into increased cash flow in the grand scheme of things? This enhancement could potentially provide the answer.

Figure 5.3: Future: Addition of Financial Components

With a good deal of analysis, the implications of individual pieces of the model components can be derived and integrated into a picture of the financial implications for the organization.
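One hypothetical way such a roll-up might look is sketched below. The cost categories mirror the outline in Appendix A (processes, data flow and infrastructure, operations and resources), but every figure, the benefit estimate, the horizon, and the discount rate are invented for illustration only; they are not derived from the Naval System case study.

```python
# Hypothetical roll-up of per-component costs into a single net-present-
# value (NPV) figure for a proposed IT investment. All numbers are invented
# for illustration.

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

annual_costs = {
    "processes": 40_000,
    "data flow & infrastructure": 25_000,
    "operations & resources": 35_000,
}
initial_investment = 250_000
annual_benefit = 180_000          # estimated value of improved data quality
annual_net = annual_benefit - sum(annual_costs.values())

flows = [-initial_investment] + [annual_net] * 5   # five-year horizon
print(f"NPV at 10%: ${npv(flows, 0.10):,.0f}")
```

A positive NPV under such a roll-up would give high-level management the quantifiable, bottom-line figure whose absence so often stalls IT investment decisions.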

Now that we have shown some possible enhancements, we will provide an outline of further work that must be done in order to stabilize the modeling methodology. The following is an outline of tasks:

Methodology enhancements through case studies

Add Business components

Add IT components

Add Operations components

Add Financial (cost-benefit) components

Methodology stabilization and documentation

Create rigorous methodology for modeling of each component

Identify relevant inter-component/inter-layer links

Develop inter-component communications protocols

Software

Specify and evaluate modeling tools

Create necessary or peripheral software

Develop common interface to link tools

Integrated methodology/software documentation

The outline has purposely been left quite general, leaving ample room for additional input. Appendix A details the outline and gives future research a more structured format. A brief description of the outline is provided here.

The initial requirement is to conduct additional case studies within the loose framework provided in this document. This iterative process will test the validity of the model and suggest further enhancements to the various components. With this additional knowledge and experience, we should be able to stabilize and formally document the methodology. Armed with these formal guidelines, the next step is to integrate a software package to support the methodology. Finally, after the integrated package has been stabilized, the various documents should be consolidated and presented in a useful format.

In this document, we have presented a simple framework that enables quantitative analysis of an organization. We have also applied this framework to an actual organization and documented a case study of its information systems. This modeling methodology has considerable potential, and we are confident that further work around this framework will establish a solid foundation for organizational analysis.

Appendix A

Future Direction Outline

The following is a structured plan for further development of this methodology. This outline provides research and planning opportunities in a variety of fields.

Formulate methodology development plan

Methodology Enhancements/Case Studies

Enhance Business components

Add peripheral organizational functions

Add IT components

Data quality sub-layer

Add Operations components

Operations to support Processes

Resources to support Operations

Add Financial (cost-benefit) components

Cost of Processes

Cost of Data Flow and Infrastructure

Cost of Operations and Resources

Return on costs/investments

Integrate possible financial limitations of organization

Methodology stabilization and documentation

Rigorous methodology for modeling of

Business component

IT component

Operations component

Financial component

Identify relevant inter-component/inter-layer links

Develop inter-component communications protocols

Software

Specify and evaluate modeling tools

Business layers

IT layers

Operations layers

Financial components

Create necessary or peripheral software

Develop common interface to link tools

Integrated methodology-software document

Appendix B

Business Strategy and System Diagrams

B.1 Critical Success Factor Diagram

B.2 Business Process Diagram

B.3 Data Flow Diagram

B.4 Underlying Infrastructure Diagram

Bibliography

[1] Kaomea, Peter. Interviews with consultant on Navy's Tactical Data Quality project. October 1993-May 1994.

[2] Wang, Richard Y., ed. Information Technology in Action: Trends and Perspectives. (Englewood Cliffs, NJ: PTR Prentice Hall, 1993).

[3] Daniel, D. Ronald. "Management Information Crisis," Harvard Business Review, September-October 1961.

[4] Strassman, Paul A., et al. Measuring Business Value of Information Technologies. (Washington, DC: ICIT Press, 1988).

[5] Ward, John, et al. Strategic Planning for Information Systems. (New York: John Wiley & Sons, 1990).

[6] Rockart, John. "Chief Executives Define Their Own Data Needs," Harvard Business Review, March-April 1979.

[7] Wiseman, Charles. Strategic Information Systems. (Homewood, IL: Irwin, 1988).

[8] Lucas Jr., Henry C. The Analysis, Design, and Implementation of Information Systems. (New York: McGraw-Hill, 1992).

[9] Pratt, Vivian R. "Critical Success Factors and MIS Planning: Integrating Information Needs with Organizational Realities," Sloan Masters Thesis, 1981.

[10] United States Navy. SEW/OTHT C4I System Operating Concept (SOC). TACNOTE UT4012-1-93.

[11] Larsen, Ronald and Herman, Jeffrey. "C3I Tactical Data Quality," Presentation of Decision Support and AI Systems Branch, NCCOSC RDT&E Center, June 21, 1993.

[12] Naval Ocean Systems Center. Navy UHF Satellite Communication System Description. (San Diego, CA: United States Navy, 1991).

[13] High Performance Systems. Introduction to Systems Thinking and ithink. Software Reference Manual. (Hanover, NH: High Performance Systems, 1992).

[14] Swanson, Dave. Memo to Dr. Ronald Larsen regarding meeting with Ron Bart (expert on OTCIXS network), January 5, 1994.