SECTION 5 - The Experiment
This section concerns the thesis experiment and case study, and includes the chapters:
1. Introduction;
2. The Case;
3. The Actors;
4. The Experiment Models;
5. The Chronology;
6. The Expert Panel;
7. The Collaborative Tools;
8. The FAQ model;
9. The Institutional Response;
10. The Knowledge Acquisition;
11. The System;
12. The Public Consultation;
13. The Knowledge Gap;
14. Results Summary
5.1. Introduction
To conduct the thesis experiment, I set up a fairly large research project to test the use of some specific "state-of-the-art" information technologies in the EIA review process, in particular the public consultation process. The base guidelines for this project followed the experiment design, as described and discussed in the previous section. This section describes the case study on which it is based (the EIA review for a Solid Urban Waste Incinerator in the Lisbon Metropolitan Area), its institutional context (actors and stakeholders and their expectations), the timeline and major milestones, the work of the project team's expert panel, the software prototype ("Intelligent Multimedia System" - IMS) plus the Internet components I developed for this purpose, the IMS knowledge content and framework (canonical forms, taxonomies), and the EIA review process with public consultation and the use of the IMS prototype, including a controlled experiment. The discussion of the outcome of this experiment is left to the next section.
5.2. The Case
The antecedents: the EXPO'98 "trigger" factor; Enter Valorsul and the CTRSU proposal; The making of a good case study.
The decision to build an incinerator for solid urban waste in the Lisbon metropolitan area had many ramifications (urban waste management strategy, site location, relation with Expo'98, central and local administration responsibilities, institutional process of decision), all of which raised strong controversy.
In the section describing the design of the thesis experiment, I introduced the context and major traits of the adopted case study, including the criteria used for this choice. In this chapter I describe the main settings of the case: what was the object of decision, who was involved in it (main actors and stakeholders), how the situation had evolved by the time my research became a part of the process, and under which conditions the project was set.
5.2.1. The antecedents: the EXPO'98 "trigger" factor.
For many years, the city of Lisbon, capital of Portugal, had been dumping urban waste in an old-style sanitary landfill (not even complying with regulations) at Beirolas, an old industrial area in the northeastern zone of the Lisbon municipality.
In 1992, the Portuguese Government proposed to host the 1998 World Exhibition in this part of the city. The proposal was approved, and "Parque Expo", a state-owned company ("private enterprise of public capital"), was created to manage Expo'98.
With an exhibition area of about 100 ha, Expo 98 implied the cleaning up of an "intervention zone" of nearly 310 ha, a land strip with 3 km of river front and heavily polluted soil, including the Beirolas landfill and other polluting industrial sources. An alternative location therefore had to be found for all facilities still operating in the "intervention zone", including the urban waste landfill. Alternatives had to be functional by 1996, to allow time to clean up and build Expo'98 (Ferraz de Abreu and Joanaz de Melo 2000).
5.2.2. Enter Valorsul and the CTRSU proposal
Under these circumstances, Lisbon and three other municipalities in the metropolitan area (Loures, Amadora and Vila Franca de Xira), with about a million and a half inhabitants, created a "consortium" together with Parque Expo, plus the state-owned national electricity utility company and a state holding. The main mission of this consortium, "Valorsul", was to manage the municipalities' urban solid waste, and the core of the multi-municipal management strategy was to build an urban waste incinerator, the heart of a Solid Urban Waste Treatment Plant (CTRSU).
Created by decree on 21 November 1994 (Decreto Lei 297/94), the Valorsul "multi-municipal system" was granted a 25-year concession contract by the Ministry of Environment on 28 September 1995. Its shareholders were:
* Parque Expo 98, S.A. (26%)
* EGF-Empresa Geral de Fomento, S.A. (25%) [mostly a state holding]
* EDP-Electricidade de Portugal, S.A. (11%)
* Camara Municipal de Lisboa (20%)
* Camara Municipal de Loures (10%)
* Camara Municipal de Amadora (4%)
* Camara Municipal de Vila Franca de Xira (4%)
Naturally, the timing was critical: Expo 98 had to take place, well, in 1998. There was no room for the usual deadline slide. This meant the CTRSU had to be in full operation by early 2000, since the Beirolas landfill would be removed by 1996 and sealed beforehand, and the temporary sanitary landfill to be used meanwhile (Mato da Cruz, Vila Franca de Xira) had a small capacity. Plus, and this was no minor factor, European Union funds for the project might be lost if there was no immediate commitment and consistent progress (Ferraz de Abreu and Joanaz de Melo 2000).
Fig. 5.2.2.-1 - Municipalities in Valorsul
Fig. 5.2.2.-1 shows the area of intervention of Valorsul. The four municipalities generated, in 1994, around 590 000 tons of solid waste, corresponding to approximately 19% of the urban solid waste (USW) produced in Portugal. Among the four, Lisbon and Amadora contributed 66% of the USW of the region, and were in the most critical situation (Valorsul 1995).
Valorsul studied scenarios for three solution sets: 1) sanitary waste landfill for USW; 2) composting and waste landfill; 3) incineration, composting and waste landfill. They concluded that all solutions not integrating incineration implied large areas for the waste landfills (between 190 and 340 ha by the year 2020) (Valorsul 1995). In their view, securing such large amounts of land would be problematic, if not impossible, in a predominantly urban area.
Therefore they opted for what they called "an integrated solution", with incineration (CTRSU) at the core, also providing electricity to the national grid. Between 1993 and 1994, they selected the site, based on studies by EDP (Electricity of Portugal) and IDAD (Institute for Environment and Development), considering especially factors such as air pollution and road access: S. João da Talha, in the Municipality of Loures (Valorsul 1995).
Aiming to incinerate about 2 000 ton/day of urban solid waste produced in Amadora, Lisboa, Loures and Vila Franca de Xira, and to remain in service for 30 years, the CTRSU would produce different kinds of waste, such as scoria (non-toxic inert material), ashes and smoke treatment wastes.
While the first kind, about 20% of the waste's weight, could be put in landfills or used in construction, the others, corresponding to around 3% of the waste's weight, were hazardous wastes, requiring special handling. This pollutant group includes dioxins and furans derived from the smoke, which have such a high toxicity that even small quantities can be extremely dangerous. So even if the issue of building a solid waste incinerator was not as sensitive as the issue of building an incinerator for hazardous waste, it was impossible to evade the "hazardous" word (Ferraz de Abreu and Chito 1997).
5.2.3. The making of a good case study
Waste-related projects are always highly controversial. Waste is perceived as something bad to have in your backyard (NIMBY syndrome), even if technical studies grade the environmental impact as low. But in this case, given the proximity to very dense urban areas, burning waste could arguably result in significant pollution and public health risks. Therefore, it was reasonable to expect strong concerns of the local population about the CTRSU's impact on their lives and health, and an active participation in the public consultation process.
Also, the selected site for the plant was near a very sensitive ecosystem, the Tagus Estuary, established as a Natural Reserve, with classified fauna and flora. If nothing else, this was certain to bring the environmental NGOs' active intervention to the scene.
Public Administration handling of the EIA review process was not going to be an easy task. On one hand, the irreversible process triggered by Expo 98, as described above, with full government support, was a powerful factor pushing for an urgent adoption of this kind of facility in Lisbon's periphery; on the other hand, the EIA Review Committee could not ignore the public health risks, and other environmental concerns.
It is useful to refer also to other factors that contributed to such complexity (Ferraz de Abreu and Chito 1997):
- The Ministry of Environment was preparing a Strategic Waste Management Plan, and the project proponent ("Valorsul") was completing a regional operational plan (POGIRSU), having invited experts designated by environmentalist associations to participate. However, the CTRSU solution was adopted before these plans were completed and discussed, which impelled environmental groups to strongly oppose the whole methodology, on the grounds that the absence of alternatives was linked to the absence of a coherent policy on waste reduction and waste management at both municipal and national level;
- The project proponent, "Valorsul", was a company in which local government and state-owned companies held the majority of votes. This integration of local, state and private interests was an obvious determinant of the project choices, and raised the issue of having a review process conducted by the state, where the state itself was involved and had strategic interests at stake;
- A press conference publicized the award of the incinerator's construction contract before the beginning of the EIA review, contributing to a certain public mistrust regarding the usefulness of the review process and public participation.
In short, the case settings were such that all actors, including the local population, local and national administration and environmental NGOs, seemed well motivated to discuss the issue, although this coexisted with a strong mutual mistrust; there were strong arguments, both political and technical, for and against the project; part of the information was highly technical and not readily available to the general public; and the environmental administration, in the wake of recent European Union directives transposed into Portuguese law, was making an effort to improve public access to information.
This process was concluded with a favorable decision by the Environmental Minister on 5 August 1996, on condition that several measures concerning the CTRSU proposal were satisfied (Ferraz de Abreu and Chito 1997). In the next chapters I will present, step by step, the main and most relevant aspects of the EIA review that ended with that decision, as well as the introduction of the new IT in the process, beginning with the actors involved.
5.3. The Actors
Introduction; National Government; Local Government - Municipality of Loures; Local Government - Municipality of Lisbon; Public administration decision-makers; Public administration technical staff; EIA Review Committee; Facility promoter; Environmental NGOs; Local citizen committees; Private consultants that produced the CTRSU's EIA; Consultants in competing EIA private enterprises; CITIDEP; The author; The conspicuously absent; Summary table.
5.3.1. Introduction
With the case study selected (CTRSU of S. João da Talha) and with the basic IT tools to be used in the experiment already available (IMS prototype), I proceeded to meet with the different actors involved, in order to gather their support for the project, characterize more precisely their specific perceptions of the problems that could be addressed by the new IT, and thus map their expectations for this experiment.
I identified initially the following actors: National Government, Local Government-Municipality of Loures, Local Government-Municipality of Lisbon, Public administration decision-makers, Public administration technical staff, EIA Review Committee, Facility (CTRSU) promoter, Environmental NGOs, Local (site) citizen committees. Later, I added two other actors that were clearly relevant: the private consultants that produced the CTRSU's EIA, and consultants in competing EIA private enterprises. Finally, I added another two that ended up playing a role and were considered an intervening party by other actors: CITIDEP, a not-for-profit private research center that was created in the process and included several members of my project team, and ... myself.
It is worth noting that in the beginning all actors, without exception, were supportive of the experiment and claimed to regard as positive and important the introduction of the new information technologies in the process, even if their views on why were mostly vague and their motivations and expectations varied substantially. While part of the reason I obtained their support can be related, in some cases, to my personal and political relationships, as well as to their own political or market strategies, it became clear that they had a real interest in investing in the introduction of new IT, albeit to different degrees and in different ways. How this support evolved (and wavered, in a few cases) will be treated in more detail in the following chapters.
5.3.2. National Government
In this case, as in many others, the political decision makers at government level played a double role. Institutionally, they had the responsibility to supervise the EIA review (Ministry of Environment); but on the other hand they (the Government at large, the Ministry of Industry, the Ministry of Planning) had a stake in the object under review, since the promoter of the work was a consortium of municipalities and state-controlled companies (EXPO, Electricity, etc.), and a significant part of the funding for the urban waste incinerator (CTRSU) would come from government-negotiated European-funded programs for Portugal, which would be at risk if the work did not take place as planned.
Government actions and words indicated that a decision had most likely already been made in favor of building the incinerator. It is therefore understandable that their major concern was the potential political backlash, given the reaction of the population at the selected site, and the risk of such reaction causing critical delays in the implementation schedule (tied to EXPO 98, a deadline cast in stone), or even blocking the work. For the Government, the main problem they wanted to address, in the context of the experiment, was the predominance of emotional reactions and fears, frequently exploited politically, combined with the difficulty of conveying to common, lay people, in a convincing manner, the technical justification for the CTRSU and the selection of S. João da Talha for siting the facility.
Government support was uncharacteristically quick to be granted: I was received by the Minister of Environment one week after my audience request, and she decided on the spot to fund my thesis experiment, instructing the Head of the Central Environmental Agency (DGA) to implement the mechanisms for the funding and to, in turn, instruct the related services (DGA, DRARN-LVT, IPAMB) to cooperate with my work. It is however important to note that this was at an earlier stage, when the case study concerned an incinerator for hazardous waste, even more controversial than the CTRSU. The Minister had witnessed the violent reactions of locals at pre-selected sites, who prevented EIA teams from completing their work, and my proposal was seen as a timely contribution to addressing the above-referred concerns. Incidentally, the funding itself was more characteristically slow to arrive (more than a year later), but one must make allowances for the fact that meanwhile the Government changed, and with it changed the hazardous waste policy, canceling the projected incinerator and leading to a focus on the urban waste incinerator (CTRSU) case study instead.
The institutional expectations, as represented by the procedures and regulations in place, were that a small number of experts from stakeholders would want to consult in detail all the EIA technical data, while the population at large would be provided with (and better served by) a non-technical summary. The IMS would then be expected to increase the level of acceptance by improving the quality and reach of both kinds of documents, targeting their corresponding different audiences.
5.3.3. Local Government - Municipality of Loures
Since the planned development, the CTRSU, was sited in the Municipality of Loures, it was only natural that their local government became a key actor in the process.
The main problem they faced was the same as the National Government's (political losses arising from negative emotional reactions and fears, and the need to provide a technical justification for the CTRSU and for the selection of S. João da Talha for siting the facility), but in more acute terms. This is why the Municipality of Loures had negotiated a set of pre-conditions and compensations before supporting the CTRSU, and it became very important for them to convey the message that Loures would not accept the facility unless the EIA proved it was harmless to public health and there was full compliance with conditions such as, for instance, the construction of a highway variant to eliminate traffic problems. In other words, to convey the message that by accepting the CTRSU Loures would gain important advantages and suffer no real harm.
Another part of the equation was that the EXPO98 grounds were partially (although minimally) within Loures' jurisdiction. This made the Loures Municipality a stakeholder in EXPO98 and all its related problems, including of course the one arising from the urgent need to close the waste dump of Beirolas (as described in the previous chapter). Part of the deal the Municipality was trying to work out included the promise to transform that part of the EXPO98 area, at the time an extremely polluted zone around the estuary of the polluted Trancão river, into a leisure zone, a green area, with the aim of better serving Loures' inhabitants and indirectly acting as a compensation for the nearby CTRSU site, with its industrial character.
The Mayor of Loures promptly received me, decided to support the IMS experiment, and instructed other administrators and technical staff to fully cooperate with my work. While it didn't hurt that I was perceived as a potentially politically friendly observer, there was a genuine concern with using all possible means to facilitate explaining the decision, as well as with projecting the image that they were supportive of all efforts to increase the transparency of public consultation. This led to a genuine interest in supporting the IMS project.
5.3.4. Local Government-Municipality of Lisbon
The Municipality of Lisbon was directly involved in two ways: first, they were the major partner in the planned development, the CTRSU, and by far the largest producer of solid urban waste among the four municipalities involved (besides Loures and Lisbon, the other two were Amadora and Vila Franca de Xira, all contiguous "concelhos" in the northern metropolitan area of Lisbon), making it the one with the most at stake in solving the urban waste problem; second, most of the EXPO98 grounds were in Lisbon, not to mention most of its impact. So the main concerns were similar to the National Government's and Loures', with a shift: less concern with justifying the siting, compared to the much more pressing concerns of solving its waste problem and securing a successful EXPO98.
Just as with Loures, the Municipality of Lisbon was among the actors that quickly and warmly welcomed my project and decided to support it. Again the personal and political factor helped (I knew both the Mayor and the City Councillor for Environment from student union times, and we liked and respected each other), but nevertheless the objective and genuine interest was very much present, for the same reasons.
5.3.5. Public administration decision-makers
As political appointees, the directors of the public administration agencies are supposed to pursue government policy and orientation, and therefore they followed the lead of the Minister by offering full cooperation with my experiment. This was expressed both in the form of providing equipment and documentation to my team (IPAMB, DGA) and in setting up top-level staff meetings to introduce my project (DRARN-LVT), with a clear message of support.
By the same token, their problem formulation and their expectations regarding the introduction of the new IT did not differ from government's. However, these decision-makers are in the front line of whatever practical consequences derive from either policy implementation or pilot experiments. In particular, it is at this level that EIA Review Committees are defined and controlled. This is why I considered them an independent actor; I was counting on some differentiation of their concerns and expectations along the process, as indeed happened, as we will see.
5.3.6. Public administration technical staff
Under the orders of the political appointees (directors and their heads of departments and public services, the decision-making level), senior, middle and junior technical staff plan and execute the defined policies, in what concerns the technical functions of the public administration at all levels (national, regional or local).
Technical staff in charge of the EIA review sections, or handling tasks related to solid urban waste management, or involved in related environmental monitoring, were supportive, but skepticism predominated. Understaffed, under-funded and overworked, used to unfulfilled promises and, some of them, well set in their old routines, technical staff from environment public agencies (either national or local) formulated the problem more in terms of these chronic and endemic shortcomings of public administration. Nothing short of deep policy changes and a much higher slice of the budget would make a dent in their skepticism. This did not stop many of them from warmly welcoming the initiative and participating willingly in the experiment (not just because of the stated "official" sponsorship), but the general level of expectation was low, and in consequence I did not expect them to play a major role in the experiment. I was wrong.
5.3.7. EIA Review Committee
The EIA Review Committee is the institutional, formal entity in charge of the review process of a specific EIA. According to law and regulations, it is usually chaired either by DGA or one of the regional agencies of the Ministry of Environment, depending on the nature of the development under review. The same regulations stipulate the presence in this Committee of other related agencies, like (at the time) IPAMB (in charge of the public consultation process), ICN (Institute for the Conservation of Nature), IM (Meteorology Institute), etc.
Although formally an actor in any EIA review process, my observations quickly led me to consider that in fact this actor did not behave as a homogeneous, separate entity. For an institutional analysis, in this case, I considered it more accurate and transparent to treat it as integrated in the actor "Public administration decision-makers", in what concerns the decision level, and in the actor "Public administration technical staff", in what concerns expert review work.
5.3.8. Facility (CTRSU) promoter
The promoter of the projected facility was the consortium ("Valorsul") of the municipalities of the northern Lisbon metropolitan area with "Parque Expo" (Expo98) and others, as described in the previous chapter. Expo98 was the major shareholder and the entity with the most at stake in the timely implementation of the incinerator.
This consortium ("Valorsul") had a small staff, led by a small, strong executive body responding to a board of administrators representing the shareholders. After a demonstration of my IMS prototype, they were impressed but blunt: they did not see any advantage in supporting such IT for EIA review, mostly because they saw the danger of it creating a high demand for thorough explanations and raising expectations for real-time responses, which they were not in condition to satisfy. Thus, they risked a negative outcome for them. However, they wanted to present Valorsul as fully supporting public information and were motivated to respond positively to my research efforts. The final result was that they settled for funding a web publication of their EIA summary documentation.
Valorsul formulation of the major EIA problems did not differ much from Government's, since for the most part Valorsul itself resulted from a common Government-Local Municipalities policy and strategy to deal with solid urban waste. Their model of expectations for each of the tools for EIA review were, on the other hand, much more clearly defined.
In my view, they supported web-based information, because they considered it would reach mostly student population and environmental activists already concerned, therefore would not generate any more requests than they were expecting anyway from these groups, and it would show their willingness to facilitate access to information.
In other words, they did not expect the web component of the experiment to impact the local site's population, mostly blue-collar workers unlikely to have access to the Internet, as opposed to a real-time interactive system (IMS) available off-line to locals. To deal with the local population's concerns, they favored a series of face-to-face meetings, well before the "official" public audiences. These meetings provided a level of interactivity they could handle, with a timing and agenda of their choice.
Interestingly, as we will see, this actor was one that evolved from a more guarded and skeptical attitude to a more intense participation in the experiment (web component).
5.3.9. Environmental NGOs
There were in Portugal three major environmental NGOs intervening at the national level: Quercus, GEOTA and LPN. All three were engaged in this case, and were very supportive of the experiment. The Presidents of the first two (both Ph.D.s) were active participants in my expert panel, and representatives of all three contributed to the IMS content (knowledge base and structure) and its use.
The personal factor counted here too, but more through the perception of myself as someone sympathetic towards the environmental cause in general and public participation in particular. While I knew one of the leaders personally well, most of the activists were from a younger generation, with whom I had had little contact, having emerged during the years since I left Portugal to come to MIT (1986). So the major factor was undoubtedly the direct interest in the use of new IT in general, and of my IMS prototype in particular, after demonstrations I performed in multiple sessions for small (or individual) audiences.
The NGOs' formulation of the problems to address was different from the other actors'. In their view, there was a general lack of public participation and a deficient spread of information to the public. For this case in particular, they also perceived that the option to build a solid urban waste incinerator had been made without a previous strategic plan for urban waste management, and that therefore debating the details of the CTRSU EIA was the wrong issue. Consequently, they were concerned with conveying both to the public and to decision-makers the need to concentrate first on the strategic plan, as well as on the urban waste policy options, and only after that re-evaluate whether a CTRSU in S. João da Talha was an acceptable path.
Their expectations regarding the experiment were concentrated on improving and widening public access to information, in particular to the alternatives offered by environmentalists and the debate between them and the other actors.
5.3.10. Local (site) citizen committees
The inhabitants of S. João da Talha were, understandably, the most mobilized actor in this case. Because many feared the impact of the CTRSU on their health and their property values; because many mistrusted the state's and the promoter's reassurances, from past experience; and because many felt abandoned and betrayed by their traditional political leaders, given the multi-party, multi-municipal agreement that was behind Valorsul and the CTRSU, most of the participants were concerned with how to obtain and use any information and argument that could prevent the construction of the CTRSU, or at least contribute to postponing the decision.
Even with a predominance of blue-collar workers, self-confessedly unprepared for technical debates and with barely the basic schooling, all the ones I contacted were enthusiastic supporters of the experiment, and expected the new IT to help them bridge the gap between their lack of school education and the technical lingo, so that they could fish out useful arguments for their cause. I was intrigued by this expectation, and later decided to complement the case study with another experiment, this one controlled, to collect more evidence.
5.3.11. Private consultants that produced the CTRSU's EIA
Contracted by Valorsul to do the required EIA for the projected CTRSU, these private consultants (working for the hired EIA private companies, or independent consultants providing components like mathematical models) were keen on affirming their professional independence (concerning the EIA conclusions) towards their client, a stand that was always corroborated by Valorsul itself. This was a point of contention, since citizens from S. João da Talha and many environmental activists claimed this independence was compromised by the fact they were paid by Valorsul, and some consultants were bluntly accused of just reaching conclusions that would please the client.
Their formulation of the problems to be addressed was interesting and derived from their role in the process. For them, the focus was on producing the EIA and the most difficult EIA document was the legally required non-technical summary. They claimed they were routinely either accused of being too technical, or of being too superficial, both by the Ministry review committees and by the public.
These consultants were interested in the new IT, as an aid to their professional work and to the public presentation of their reports, and as a possible competitive advantage. One of the companies was producing a multimedia presentation for the non-technical summary and was clearly interested in the type of IT used by the IMS, giving a warm, positive evaluation after a demonstration. However, neither of them was interested in participating in the experiment, nor did they facilitate access to their documents, as will be described later. While there is no hard evidence of the rationale behind this conduct, I am inclined to interpret it as some of the persons within this actor regarding the IMS prototype and myself as competition in the same market. In other words, their expectation vis-a-vis the experiment might have been that I would eventually enter the market of EIA services with my IMS prototype.
5.3.12. Consultants in competing EIA private enterprises
It was interesting to observe that quite a few other EIA private consultants, not contracted for this job, nevertheless followed the whole process closely, and were very supportive of the experiment. Here I include some members of academia, since many faculty members or researchers affiliated with universities frequently work as consultants in EIA studies.
Their formulation of the problems to be addressed was similar to the previous actor's, that is, from the point of view of whoever is technically responsible for producing an EIA: for them also, the most difficult document was the legally required non-technical summary. Their expectation was that new IT would solve this difficult problem, or at least improve the quality of this document. Another big issue was the difficulty of integrating the work of a multidisciplinary team of consultants into a coherent report. Consistently, they were interested in, and attentive to, IT developments. Partly as a result of personal relationships, and maybe partly also because they were not in direct competition with the IMS prototype and the experiment, they were willing and enthusiastic participants, and some of them played a key role in the expert panel.
5.3.13. CITIDEP
CITIDEP - Research Center on Information Technology and Participatory Democracy, was an unforeseen actor, but nevertheless it played a role in the EIA review process, thus becoming one.
The birth of CITIDEP was directly related to the thesis experiment, more specifically to the research project (IMS Project) I set up to conduct it. Sixteen researchers and professionals agreed to join an "Expert Panel" for this project, and many more cooperated in different aspects of it. During the project meetings, it became clear that many of the participants were very interested in this kind of multi-disciplinary approach and, encouraged by the experience, wanted to prolong it beyond the time frame and substance of the IMS Project.
The general feeling (and I include myself) was that there was a certain lack of institutionalized support for this multidisciplinary research agenda in academia, and from there (and many other issues, debated in parallel in other meetings) arose the proposal of creating an independent, international research institution, able to work together with both academia and "civil society". Thus was born CITIDEP, first informally, a few months before the EIA review process, and then legally incorporated (as a non-profit research institution) a few months after. Among the 24 researchers, students and professionals that founded CITIDEP, 9 were from the IMS Expert Panel and another 3 from the IMS Web team.
I was an active party and key element in this process, since in my view it was a good initiative in the long term and the ideal organizational support for the IMS Project and its team in the short term. So after CITIDEP was created, when the time came to obtain funding to publish on the Web a special consultation-ready format of the EIA (part of the thesis experiment), it was formally executed by a CITIDEP team, led by myself.
This way, CITIDEP played a direct role in the EIA review process, even if strictly integrated in the context of the introduction of new IT that was part of the approved experiment.
Besides being an interesting spin-off of the thesis experiment, the motivation and conditions that led to its existence are worth some analysis, and will be addressed later. More relevant details on CITIDEP mission and constitution are left to the thesis appendix.
5.3.14. The author
My original intention was to be an intervening actor only in the sense that I was the source of the introduction of the new IT in the EIA review, and to remain a simple, non-obtrusive observer for all other aspects of the process. This was consistent with the early design stages of the experiment, when I viewed it as changing only one macro-variable -- the IT used in EIA review -- and observing the effect on the other macro-variable -- the EIA review process. But the situation proved to be not so linear.
By the same token, my only original concern was to deal with potential bias precisely in my non-obtrusive observer role. Since I had my own environmental and political views on the topic under review (the incinerator and its impacts), I wanted to make sure I would purge all personal involvement and be as objective as needed. In consequence, instead of ignoring the obvious personal relationships established (or pre-existent) with other actors, including the political or environmental engagé overtones of these relationships, I chose to openly characterize and identify them, 1) as my method to set a demarcation line between the personal factor and the rest, and 2) in order to provide the reader with all the information needed to form his or her own critical view of any possible bias in my observations.
This is why, during the above analysis, I included explicit notes of the personal factors involved, whenever that was the case. It is important to emphasize that this is the sole reason for mentioning them: no one in this case went out of their way, or did something out of character, just because of friendship or political proximity. It certainly helped expedite things, brought more willingness to fit some collaboration into a very busy schedule, and in general facilitated access. That is certainly relevant, but not far from real-world conditions, and it certainly did not invert or even change any actor's basic stand or position on the issues.
As it happens, my role was much more obtrusive than I had anticipated, and for totally different reasons. I must say it took me by surprise, maybe precisely because I was focused on avoiding contaminating my ability to be an independent, objective observer, rather than on contaminating the experiment by being an actor in unforeseen ways. In any event, it happened, and paradoxically provided the key to one of the interesting experiment findings, which I will present and discuss later.
5.3.15. The conspicuously absent (political parties)
The presentation of the actors would not be complete without a reference to an unusual absence: political parties.
Given the political nature of many of the issues in this case, and the fact that despite the increasing role of NGOs, political parties clearly dominate the institutional framework of government at all levels, this absence deserves an explanation.
In my view, the major political parties took a back seat in this process because the contradictions and different positions did not fracture along party lines. In fact, the CTRSU project and the Valorsul strategy were put in place during the social-democrat government (1994), before the watch of the socialist government (incumbent when the EIA review took place). Valorsul itself was a partnership where the major parties were represented indirectly, through the EXPO98 structure, and the most relevant local governments profited from the facility. The government of the Municipality of Loures was held by the communist party; Lisbon's Municipality, by a Socialist-Communist coalition, presided over by the socialists. Therefore, there was some tacit agreement that kept the political parties somewhat distanced from the direct debate.
This is a significant trait of this case, and by no means a common one. As will be discussed later, a totally different situation occurred with the case concerning the handling of hazardous waste, where there was a policy disagreement along party lines (the social-democrat leadership favored a dedicated incinerator, while the socialist leadership favored a co-incineration solution).
5.3.16. Summary Table
In Table 5.3.16-1 below I summarize the intervening actors, their perception of the problems related to the EIA review (relevant to the experiment), and their expectations for the role of the new information technology (IMS Prototype and Internet) in helping to deal with them.
Table 5.3.16-1 - Actors characterization summary

* Government (national, local)
  - Problem to address: likely strong local public opposition; exploitation of emotions and fears based on misinformation; need to demonstrate the importance of the planned facility.
  - IT expected goal: convey technical arguments to lay people; focus attention on technical arguments; promote a perception of transparency in decision-making.
  - Expectation level: Medium.

* Public administration decision-makers
  - Problem to address: likely strong local public opposition; need to demonstrate the importance of the planned facility.
  - IT expected goal: convey technical arguments to lay people; focus attention on technical arguments.
  - Expectation level: Medium.

* Public administration technical staff
  - Problem to address: lack of EIA review human resources; deficient EIA review policies and procedures.
  - IT expected goal: facilitate inter-institutional interaction; provide decision makers with a better understanding of policy implications.
  - Expectation level: Low.

* Facility promoter
  - Problem to address: likely strong local public opposition; need to demonstrate the importance of the planned facility.
  - IT expected goal: convey technical arguments to lay people; focus attention on technical arguments.
  - Expectation level: Low.

* Environmental NGOs
  - Problem to address: lack of public participation; lack of public information.
  - IT expected goal: reach and mobilize more of the public; provide public and decision makers with a better understanding of policy implications.
  - Expectation level: Medium.

* Local (site) citizen committees
  - Problem to address: fear of the facility's negative impacts; mistrust of the promoter's experts; need for political leverage; difficulty of access to and interpretation of technical knowledge.
  - IT expected goal: facilitate access to and understanding of technical data; facilitate obtaining arguments favoring their interests, as perceived by them.
  - Expectation level: High.

* Private consultants that produced the CTRSU's EIA
  - Problem to address: difficulty of producing the EIA non-technical summary; difficulty of presenting technical data; importance of maintaining an image of technical neutrality and independence.
  - IT expected goal: facilitate compilation of technical data; convey technical arguments to lay people; facilitate presentation of technical data for multi-level audiences.
  - Expectation level: Medium.

* Consultants in competing EIA private enterprises
  - Problem to address: difficulty of producing the EIA non-technical summary; difficulty of presenting technical data; deficient EIA review policies and procedures; difficulty in integrating multi-disciplinary work and teams.
  - IT expected goal: convey technical arguments to lay people; facilitate compilation of technical data; facilitate presentation of technical data for multi-level audiences; facilitate multi-disciplinary collaborative work.
  - Expectation level: High.

* CITIDEP and the author
  - Problem to address: as presented in the chapter on "The Problem".
  - IT expected goal: as presented in the chapter on "The Experiment Design".
  - Expectation level: High.
5.4. The Experiment Models
Introduction; Experiment Models of Expectations; Decision-making process model; Public participation process model; Data and knowledge representation model; Data and knowledge acquisition model; Information system user model; Scope and nature of the experiment models; Model implementation time frame.
5.4.1. Introduction
The approach I used for the thesis experiment was to introduce a specific set of new IT in the EIA review process (my software prototype, plus Internet components, plus content), with suggested guidelines. To achieve a reliable and meaningful set of knowledge content for the system, I put together a multidisciplinary panel of experts. To keep a focus throughout this complex research context, and also using the input from the expert panel, I compiled a set of models (decision-making process; public participation process; knowledge representation; knowledge acquisition; IT user behavior and performance) according to precedent in traditional settings in past cases, and then built models of expectations resulting from the introduction of my system (IT and guidelines). These models are therefore a kind of experiment test plan, derived from the overall methodology but defined in finer detail, a kind of blueprint for implementing the experiment. This chapter describes these models and the specifics of the experiment methodology.
5.4.2. Experiment Models of Expectations
The general methodology adopted, as described previously, was a case study centered on the EIA review process for a particular development (CTRSU S. João da Talha), in which we introduce a new information system with information technology (IS/IT) previously not in place or in use, and observe both the impact of the technology on the process and the performance / suitability of the technology for such a process.
Besides a good grasp of the case settings and a thorough understanding of the actors involved and their role, this implies building hypothetical models containing a description of the process as-is (before introducing new IT), of the new IT and system to insert, and then mapping the expected results in what concerns the performance of the new IT and process improvements.
Naturally, such expectations are projected in a scenario where the institutional and regulatory frameworks are left untouched; therefore, any interference observed from these frameworks may affect the outcome and prove to be an impediment to the mapped expectations. In this case, the experiment models serve more like a "proof by absurdity" (reductio ad absurdum) concept, in what concerns this facet of my hypothesis.
Having this in mind, I found it useful to build the following inter-related models for hypothesis generation:
1) Decision-making process model
2) Public participation process model
3) Data and knowledge representation model
4) Data and knowledge acquisition model
5) Information system user model
5.4.3. Decision-making model
In terms of meta-methodology, the first model to define is the decision-making model, since all the others depend on it and sometimes derive from it. In particular, this model defines the universe of IS/IT users targeted in the experiment, that is, the target audience.
The chosen approach here was to identify a synthesis of the current decision-making procedures in EIA review, and then to consider which aspects or parts of it could suffer changes deriving from the introduction of the proposed new IS/IT.
Briefly, the decision-making process in place at the time of this experiment consists of the following steps:
a) The developer / promoter of the work presents several printed copies of the EIA to the public administration agency / authority that has jurisdiction to process it. In this case, to the DGA (Direcção Geral do Ambiente), from the Ministry of Environment.
b) The public agency in charge verifies in a preliminary overview whether the EIA is in compliance with legal requirements, through a general, standard checklist (Does it include a non-technical summary? Does its scope correspond to the nature of the proposed development? Etc.). If not, the EIA report is sent back to the developer / promoter for further work.
c) The public agency in charge designates an EIA Review Committee with experts from areas related to the proposed development, whose composition is regulated by law and depends on the nature of the EIA, and which reports its conclusions and recommendations to the Ministry of Environment. Once the EIA is verified to be in compliance with the preliminary checklist, this Committee begins its work.
d) At some point, the official period of public consultation is scheduled, which is considered an official and mandatory component of the overall EIA review process; therefore, any (written) public input is a mandatory part of the final EIA Review Committee report.
e) Based on the EIA Review Committee report, but not necessarily in accordance with it (either in part or in whole), the Ministry of Environment will approve or reject the proposed development / project, or will make approval dependent on a series of conditions, which may include requirements for further EIA studies, changes in the proposed development, minimization and/or mitigation measures, etc.
f) At the time of this experiment, the approval or rejection by the Ministry of Environment did not automatically imply the corresponding final Government decision. In other words, the Ministry of Environment did not have veto power over developments / projects that failed to obtain the EIA Review approval (since then the law has changed and considerably reinforced the weight of the EIA Review).
Since the research experiment had to fit in the current legal procedure, to build the new decision-making model I considered three basic aspects of the current decision-making process where introducing new IT could make a difference.
The first aspect concerns the EIA structure and presentation (delivered by the promoter / developer).
The second aspect concerns the nature of the non-technical summary and its relationship with the overall EIA.
The third aspect concerns the "modus operandi" of the EIA Review Committee, in particular the work division between thematic areas (health, air, soil, etc.), the articulation between the technical review and the public consultation, and the evaluation of the public consultation itself.
Correspondingly, in the new decision-making model, I wanted to test:
1) In what concerns the first aspect, will the new IT allow the promoter / developer to present the EIA directly in digital form and media support and therefore:
a) organize the EIA content and structure in such a way that there is a better articulation between the overall study and its non-technical summary;
b) deliver all or part of the study through Internet and / or CD-ROM, thus providing a better format for EIA review and public consultation than current paper form.
2) In what concerns the second aspect, will the new IT allow one to re-think the nature, form and presentation of the non-technical summary, in such a way that instead of its current limitations (described in the chapters "The Problem" and "The Actors"), it will be possible to produce a digital version able to integrate multiple views, browsed at multiple levels of complexity and detail, according to the reviewer's motivation, concern and technical background.
3) In what concerns the third aspect, will the new IT/IS facilitate the cooperative working procedure of a multidisciplinary EIA Review Committee, help to identify synergetic relationships between different impact domains, and provide a better way of relating public input with the review from the EIA Review Committee's experts.
5.4.4. Public participation model
Although public participation is part of the overall decision-making process, I found it useful to enlarge this subset and define it as a model in itself. While the decision-making model denotes the process from the point of view of the Review Committee, the public participation model gives us the expectations from the viewpoint of the public.
Expanding the public participation component of the described decision-making process, we have:
a) The public agency in charge of the EIA public consultation (in this case, IPAMB, Ministry of Environment) publishes a notice informing the public about the scheduled consultation and general procedure.
b) The EIA (printed copy) can be consulted in a few public offices, such as IPAMB itself and the local municipalities affected by the project.
c) The EIA non-technical summary is also distributed, traditionally mailed to all relevant NGOs and/or local "civil society" organizations (sports and cultural cooperatives, churches, etc.).
d) IPAMB usually promotes one or more public hearing sessions, even if it is not required by law in most cases (including the one in question, CTRSU).
e) During the period of public consultation, around one month, any citizen can ask questions and /or contribute with written opinions. In the end, the public entity in charge of the public consultation (IPAMB) compiles the public input from the hearings and written statements in a "public consultation report," incorporated in the final EIA Review Committee Report. This report is public.
Again, the research experiment had to fit in the current legal procedure for public consultation. To build the new public participation model, I wanted to test that:
1) New IT/IS, including Internet and CD-ROM delivery, will allow wider access to EIA data and promote participation in the public consultation process, translated into larger numbers of citizens involved and a wider spectrum of audiences, as compared with the usual few participants from the site location and NGO activists.
2) New IT/IS, including the IMS prototype, will allow for better understanding of the EIA issues in question, therefore better informed participation and more relevant questions and public input, mainly through the following advantages:
a) Easier and more detailed access to technical and political explanations and points of view from experts and institutional representatives of all actors involved (promoter, public administration, environmental NGOs, etc.), concerning the EIA and related issues;
b) Better use of the EIA non-technical summary as an entry point to more technical material, instead of a frustratingly superficial presentation of the EIA that dead-ends when more specific questions arise from the public at large, given the more flexible integration of this summary with the overall EIA, until now reserved for experts.
5.4.5. Data and knowledge representation model
Among the remaining models, the first to build is the one concerning the knowledge representation, since the models for knowledge acquisition and system use depend on the former.
To build this model I considered different representation paradigms that emerged from this field (as discussed in previous chapters), in a series of brainstorming sessions and interviews with the panel of experts. Given the relevance of this topic, it is described in a specific chapter; in short, I adopted as the main representation paradigm for the IMS knowledge content a "question-answer" model, derived from the common one known as FAQ (Frequently Asked Questions), in lieu of my first choice (in the design stage), the rule-based representation. The choice, as discussed later, derived from factors such as suitability to the kind of knowledge in question, better responsiveness of the knowledge sources to the corresponding knowledge acquisition model, as well as feasibility, considering the short time available for implementation.
The FAQ model presumed the support from the case actors to supply both questions and answers, and implied a special attention to potential built-in biases in both, thus requiring an active, intervening effort from a moderator (myself) to achieve a balanced representation of all different points of view and agendas.
The kind of model adopted was more properly an "Intelligent Multimedia FAQ," since the question-answer template form was not restricted to text, but expandable (on the "answer" component) to other texts hyperlinked among them (including bibliography and contact business cards), sound recordings, digital video, pictures, "data trails," etc., linked and structured in such a way as to benefit from object-oriented properties (class types, inheritance, etc.). In this respect, it remained very close to what was defined in the design section.
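To make the structure of such knowledge units more concrete, the following is a minimal illustrative sketch in Python. The names (KnowledgeUnit, MediaAsset) and fields are hypothetical, chosen for illustration only; the IMS prototype had its own object-oriented implementation, and class specialization per thematic area (the "inheritance" mentioned above) is omitted here for brevity.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MediaAsset:
    """One multimedia document linked to an answer (text, sound, video, picture, 'data trail')."""
    kind: str                 # e.g. "text", "audio", "video", "image"
    title: str
    source_contact: str       # the source's 'business card' information
    location: str             # file path, CD-ROM reference or URL
    metadata: dict = field(default_factory=dict)   # the 'multimedia metadata descriptor'

@dataclass
class KnowledgeUnit:
    """A question-answer pair attached to a node of the taxonomy."""
    question: str
    answer_summary: str                       # written or transcribed summary from the source
    taxonomy_path: List[str]                  # e.g. ["Impacts", "Public health"]
    keywords: List[str] = field(default_factory=list)
    assets: List[MediaAsset] = field(default_factory=list)        # linked multimedia material
    related: List["KnowledgeUnit"] = field(default_factory=list)  # hyperlinks between answers

# Hypothetical example of one unit in the FAQ knowledge base:
unit = KnowledgeUnit(
    question="What happens to the ashes and smoke treatment wastes?",
    answer_summary="About 3% of the waste weight is hazardous and requires special handling.",
    taxonomy_path=["Impacts", "Hazardous residues"],
    keywords=["ashes", "dioxins", "furans"],
)
```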
I hypothesized that this "Intelligent Multimedia FAQ" model would be able to:
1) Anticipate the kind of questions that will be raised during the EIA review, either by the EIA Review experts or by citizens with different levels of concern and technical background. In fact, I was building an FAQ without knowing the "F" (frequency) parameter, therefore in itself it represented a working hypothesis.
2) Enable a richer understanding of technical complexities by non-experts, translated into more sensible and consistent questions and opinions from public participants, given its form, the multimedia facet and the flexibility derived from its "intelligent" representation.
5.4.6. Data and knowledge acquisition model
Derived from the adopted knowledge representation model ("intelligent multimedia FAQ"), data and knowledge acquisition had to be based on a process of compiling both questions and answers through some structured process, adapted both to the representation model and to the kind of sources available. Hence the need for a "data and knowledge acquisition model."
The basic expectation was the feasibility of collecting, directly from the sources (mainly experts or administrative and political representatives), answers in some standard FAQ-compatible form, consisting of a written or videotaped summary plus units of information to fill in a metadata descriptor or header, relating the summary with all other associated multimedia documentation each source would provide (other written documents, photographs, etc.), plus contact information.
To start up the acquisition process, I planned to ask a panel of experts cooperating with the IMS project to compile an early set of vocabulary and questions and structure them using some kind of taxonomy (concepts developed in another chapter). This set was to be used as a seed in the first round of iterations of interviews (or written requests for answers) with external sources.
Consequently, I built the "data and knowledge acquisition model" in the following fashion (a schematic sketch of the iteration loop is given after this list):
a) A panel of experts would build a seed structure for the FAQ:
i) Compiling an initial set of related vocabulary;
ii) Defining a taxonomy;
iii) Compiling an initial "question" set, attached to the taxonomy;
iv) Compiling an initial body of knowledge, with answers to the initial question set ("seed") and keywords attached to the taxonomy.
b) Data and knowledge acquisition would then proceed with structured interviews with external sources, where the guideline was:
i) The initial "question set" seed, structured according to the taxonomy;
ii) A standard "multimedia metadata descriptor" form, designed by me in accordance with the knowledge representation model (a sketch of such a descriptor is given after this list).
c) The acquisition process would consist of several iterations of these interviews (and a few written requests to some sources, such as municipalities and waste-related businesses), in which each source would be asked:
i) To suggest more questions to add to the question set;
ii) To suggest rectifications to either question formulations or to the question set structure (taxonomy);
iii) To provide answers to as many questions as they were willing to;
iv) To provide other related multimedia documentation, together with information for their corresponding metadata descriptors.
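As a rough illustration of the "multimedia metadata descriptor" referred to in item b-ii, such a form could be sketched as follows. The field names and example content are hypothetical; the descriptor actually used in the project may have differed.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MultimediaDescriptor:
    """Header attached to each multimedia document collected from a source."""
    title: str
    source: str                  # expert, agency or company providing the material
    date: str                    # e.g. "1996-03-12"
    media_kind: str              # "text", "photo", "video", "sound", ...
    taxonomy_terms: List[str] = field(default_factory=list)     # where it hangs in the taxonomy
    related_questions: List[str] = field(default_factory=list)  # FAQ entries it helps answer
    contact: str = ""            # business-card style contact information

# During an interview iteration, each new document would get one descriptor:
doc = MultimediaDescriptor(
    title="Aerial photograph of the proposed site",
    source="Municipality technical staff",
    date="1996-03-12",
    media_kind="photo",
    taxonomy_terms=["Sites"],
    related_questions=["Where will the incinerator be located?"],
    contact="(contact details of the providing office)",
)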
This model has some built-in assumptions that I wanted to test:
1) All sources from the different actors will be able to agree on a common structure (taxonomy) for the question-answer set;
2) At the end of a few iterations, the acquired knowledge units (question-answer set) will have a balanced representation of all major points of view from the main actors involved, once all input is incorporated, including criticism and suggestions from the sources concerning possible bias;
3) It will be possible to acquire a minimal "critical mass" of data and knowledge, enough to allow "real-world" conditions to test the use of the IT/IS introduced (IMS software prototype plus www), within the short period of time available for the EIA review and in particular for public consultation.
Naturally, all these hypotheses (in all models) assume an unchanged decision-making institutional framework. In fact, they also serve as a test of whether this current framework allows such improvements.
5.4.7. Information system user model
Building a simple user model was important to set up the interface conditions in order to, on the one hand, enable the implementation and testing of the public participation model and, on the other hand, allow for some kind of measure of user interaction with the technology and the IMS prototype.
Since the expectations concerning user participation and interaction with the IT/IS varied considerably from actor to actor, I chose to focus more on "tracing" devices to observe and record user action rather than on setting up tests for some specific hypothesis of user behavior.
Consequently, I defined the IT/IS user model in the following way:
a) Citizens will interact with the new IT/IS,
a.1) by visiting web-based information, or
a.2) using the IMS prototype installed in several computers in several sites open to public access;
b) Citizen input sent through the new IT/IS made available by the thesis experiment can take the form of
b.1) email messages sent to the public agency in charge of EIA review,
b.2) filling in and submitting a web-based questionnaire / survey form, or
b.3) typing comments / opinions within the IMS software prototype.
This input would be made public within the same media, meaning email messages would be published on the web, and messages typed in the IMS could be consulted in the IMS itself;
c) Web based information (at least part of the EIA FAQ set) will be organized in such a way as to facilitate consultation at different depths of technical knowledge, and with "visit counters" in all knowledge units (web pages);
d) The IMS software prototype will present the user with alternative paths to access content, and incorporate a "trace" function, recording user steps (such as sections and FAQ visited, time spent on each step, etc.), as sketched below.
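A minimal sketch of such a "trace" function is given next. This is an assumption-laden Python rendering (class name, file format and fields are mine), not the code of the IMS prototype itself.

import time, json

class UsageTrace:
    """Records each user step (section or FAQ visited) with the time spent on it."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.steps = []
        self._last_time = time.time()

    def visit(self, kind, name):
        now = time.time()
        if self.steps:
            # close the previous step with the time spent on it
            self.steps[-1]["seconds"] = round(now - self._last_time, 1)
        self.steps.append({"kind": kind, "name": name, "seconds": None})
        self._last_time = now

    def save(self, path):
        with open(path, "w") as f:
            json.dump({"user": self.user_id, "steps": self.steps}, f, indent=2)

# Example session
trace = UsageTrace("anonymous-03")
trace.visit("section", "Impacts")
trace.visit("faq", "What emissions will the incinerator produce?")
trace.save("trace-anonymous-03.json")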
Again, this model contains some built-in general assumptions, corresponding to the loose hypotheses that different kinds of users will make different use of the available alternate paths to access information, and that tracing user interaction will show some meaningful patterns. Given the absence of a specific hypothesis on user categories and user behavior classes, the intention was to compile potentially useful data rather than to test a specific expectation, as noted above.
5.4.8. Scope and nature of the experiment models
It is important to note that at first, with my earlier hypothesis formulation, these models of expectations were simply a kind of more detailed hypothesis, concerning the performance of each new IT introduced and the improvements at each step or facet of the decision-making process. After my preliminary findings, which pointed to significant constraints imposed by the current decision-making institutional framework, the experiment models were set with a different perspective.
Since the experiment settings do not change the institutional and regulatory framework, the interesting evidence from the experiment is that which points, for each of the models of expectations defined here, to one of the following possible outcomes:
a) The new IT failed to perform as expected and / or did not bring any significant improvement to the decision-making process; in either case, with no relevant institutional or regulatory constraints observed. In this outcome, my hypothesis is not proven true and may eventually be proven false.
b) The new IT performed as expected and brought the expected improvements to the process, despite institutional and regulatory constraints. In this outcome, part of my hypothesis, on the role of the new IT, is proven true, but another part of my hypothesis is proven false, since there is evidence we don't need a new decision-making institutional framework in order to profit from the new IT.
c) The new IT performed as expected and there is evidence that the expected process improvements would likely have occurred had it not been for the institutional and regulatory context. In this outcome, my hypothesis is proven true.
Naturally, real-world processes are never clear-cut, so more complex outcomes are always possible, combining these three with other, less conclusive ones. This is why the experiment was designed to focus more on understanding the factors in play than on trying to prove rigorous and detailed settings. These models of expectations must be seen in this light: they are essentially a tool to facilitate observation, providing some structure to it.
5.4.9. Model implementation time frame
With these models explicit, and keeping in mind their scope and nature, it is useful to acquire a view of the "ensemble," or synoptic view of the whole experiment, with a time frame of the implementation. The simplest approach for that purpose is to build the relevant timelines, based on the case chronology records, as presented in the next chapter.
5.5. The Chronology
Introduction; Preliminary work; Experiment phases; Chronology table; Timelines
5.5.1. Introduction
In the past chapters, I set the stage concerning this case study, providing the background for the description and analysis of the thesis experiment. This included the overview of the case, of the actors involved and of their expectations. Before delving into the details of the experiment itself, it is useful to present a chronology of its main events and actions, establishing a timeline to facilitate integrating the multiple facets of the experiment.
5.5.2. Preliminary work
As already discussed in the chapters on the design of the experiment, an important part of it was the preliminary work, first to characterize candidate case studies, then to select the most adequate, and finally to create the conditions for the feasibility of the case -- from institutional support and funding to the availability of human and technical resources. It was also during this phase that most of the IMS software prototype functions were programmed and tested.
5.5.3. Experiment phases
With the feasibility of the selected case established, and the thesis experiment as such therefore under way, we can identify three distinct phases, all of them critical to understanding the results: the preparatory period, the period of the EIA review, and the post-EIA review period.
In the preparatory period, with the input from the expert panel, plus ad-hoc collaborators, I discussed and defined the knowledge representation and acquisition model, the structure of the knowledge base and of the multimedia database; compiled a questionnaire (anticipated Frequently Asked Questions - FAQ) and several hundred answers to it; developed collaborative tools to help the acquisition and integration of independent collaboration; collected data and multimedia material; and digitized and inserted data into the system (both the IMS and the Web), including a major part of the EIA itself. But, no less important and relevant, I also had to negotiate the terms and support from all the different actors and stake holders (the government agencies, the developers, the EIA consultants, the local municipalities, the environmental NGOs, etc.).
During the official EIA review period, I continued to digitize and insert data into the system; interviewed and assisted users of the IMS prototype, including a group of workers from S. João da Talha, members of environmental NGOs, staff from the Environment Ministry and others; recorded the two public hearings and noted the questions raised, performing a paper-based opinion survey during those hearings, as well as collecting answers from the web-based survey form; introduced several improvements to the prototype user interface, responding to user feedback; and participated in a press conference promoted by the Environment Ministry concerning the tools made available to support the public review, including a demonstration of the IMS prototype.
During the post-EIA review period, I collected more feedback from the different intervening actors (developer, Ministry, experts, NGOs, groups of local citizens, etc.) concerning their perspective on the use of the prototype and the Internet; produced a CD-ROM with the system and data; discussed with my panel of experts the preliminary results and the design of a controlled experiment with students concerning the IMS prototype; prepared a "knowledge test" for that controlled experiment and performed it, with two groups of students, one of Environmental Engineering undergraduates and the other of younger Psychology undergraduates; and then reviewed and discussed the results from this controlled experiment, comparing them with informal use during the public consultation period.
Naturally, there followed a phase of analysis and discussion of the observations and collected evidence.
5.5.4. Chronology table
Table 5.5.4.-1 lists the most relevant steps and milestones of the thesis experiment. This table was extracted from the "IMS Project Chronology Research Record," a field research document equivalent to laboratory notes.
Table 5.5.4.-1 - Thesis Experiment Main Steps and Milestones

DATE | EVENT | LEVEL
1994/01/01 | Analysis of possible case studies (EXPO 98, New Tagus Bridge); development of IMS prototype major functions; encouragement and offer of support from major NGO leaders to the IMS Project | Research&Development / Political
1995/01/31 | Analysis of a case study on the EIA of a dedicated incinerator for industrial/hazardous waste | Research&Development
1995/03/00 | MEETING WITH ENVIRONMENTAL MINISTRY - APPROVAL IN PRINCIPLE OF SUPPORT TO IMS PROJECT, as a case study on the EIA of a dedicated incinerator for industrial/hazardous waste (Director of DGA present) | Political / Institutional
1995/07/15 | Document: Presentation of IMS project, with problem formulation and IMS prototype images, version 1 | Research&Development
1995/08/10 | First preliminary meeting towards foundation of CITIDEP; IMS Project seen as a role model for CITIDEP | Institutional
1995/09/01 | Document: Presentation of IMS project, final version (Portuguese) (Ferraz de Abreu 1995a) | Research&Development
1995/10/10 | NATIONAL ELECTIONS IN PORTUGAL - CHANGE OF GOVERNMENT (PSD to PS) | Political
1995/10/16 | THESIS PROPOSAL APPROVED at MIT | Research&Development
1995/11/02 | MEETING CONSTITUTING IMS EXPERT PANEL | Expert
1995/12/19 | Phone meeting w/ DCEA; PROTOCOL DCEA-DGA on IMS SIGNED; DCEA says OK to obtaining complementary Valorsul funding for a specific sub-project | Institutional
1996/02/01 | FAQ VERSION 1 | Research&Development
1996/02/14 | BEGINNING OF OFFICIAL EIA REVIEW PROCESS (120 business days) | Institutional
1996/02/15 | FAQ version 1.2 | Research&Development
1996/02/26 | IMS Expert Panel meeting; FAQ ANSWERS VERSION 1 | Expert / Research&Development
1996/03/17 | Meeting with the full EIA Review Committee, for formal presentation of the IMS project, led by the Director of DRARN-LVT (Silva Costa) | Political / Institutional
1996/03/27 | FORMAL IMS PROPOSAL presented at DRARN-LVT for IMS PROJECT GUIDELINES concerning institutional cooperation and system use | Institutional
-- | New IPAMB President: Antonio Guerreiro; new DGA Director: Marques de Carvalho | Political
1996/04/15 | FAQ version 2.8 | Expert
1996/04/16 | Meetings on IMS with actors (DGA); IMS meeting at DGA | Political / Institutional
1996/04/16 | RAISED CONCERNS ON SENSITIVITY of FAQ | Institutional
1996/04/17 | Meetings on IMS with actors (Min. of Environment / Secr. of State); CLEAR FAQ ISSUE and OBTAIN SUPPORT FROM MIN. of ENVIRONMENT FOR PROPOSED GUIDELINES | Political
1996/04/18 | CONTRACT SIGNED IMS/VALORSUL Project | Institutional
1996/04/19 | Document: Guideline on installing and using IMS (version 1) | Research&Development
1996/05/17 | Lunch w/ Mayor of Lisboa | Political
-- | Meetings on IMS with actors (S. Joao da Talha grassroots); contact w/ "Comissão de acompanhamento" and "Comissão de luta S. Joao da Talha" | Political / Institutional
1996/05/27 | BEGINNING OF PUBLIC CONSULTATION PERIOD | Institutional
1996/06/05 | INAUGURATED FIRST INTERNET ACCESS to EIA Review, at IPAMB (with the President of the Republic, J. Sampaio) | Political
1996/06/09 | IMS Expert Panel working session; STABLE VERSION FAQ-IMS (2.9.5) | Expert
1996/06/10 | Document: Guideline on installing and using IMS (final version) | Research&Development
1996/06/11 | FAQ IMS - Valorsul ON-LINE (web) | Expert
1996/06/11 | DEADLINE to deliver internal review statements within the EIA Review Committee | Institutional
1996/06/25 | PUBLIC HEARING at S. Joao da Talha (~150 present at the beginning, lasted 6 hours); available: detailed written description in my notes, my tape recording and an official report from IPAMB | Institutional
1996/06/27 | PUBLIC HEARING at LNEC, Lisbon (~55 present, lasted 3 hours); available: detailed written description in my notes and an official report from IPAMB | Institutional
1996/07/08 | INSTALLATION of "final" IMS at IPAMB, Environmental Ministry / Sec. of State, GEOTA; PUBLIC CONSULTATION SESSION USING IMS at IPAMB with my presence | Expert
1996/07/09 | PRESS CONFERENCE at Min. of Environment | Political
1996/07/09 | DEMONSTRATION OF IMS prototype to "Comite Adhoc S. Joao da Talha" (blue-collar workers), at IPAMB; PUBLIC CONSULTATION SESSION USING IMS at IPAMB with my presence | Expert
1996/07/10 | PUBLIC CONSULTATION SESSION USING IMS at IPAMB with my presence | Expert
1996/07/10 | END OF PUBLIC CONSULTATION PERIOD | Institutional
1996/08/05 | Environmental Ministry signs approval of the EIA, with conditions | Institutional
1996/09/14 | FOUNDATION OF CITIDEP | Institutional
1997/02/27 | Tests of IMS at Fac. Psychology (students) | Research&Development
1997/03/04 | Tests of IMS at DRARN-LVT (expert staff) | Research&Development
1997/03/10 | Tests of IMS at FCT-UNL (students) | Research&Development
1997/03/18 | Tests of IMS at DRARN-LVT (directors) | Research&Development
1997/06/15 | Published article on the IMS Project: Ferraz de Abreu, P. and Chito, B. (1997), "Current Challenges in Environmental Impact Assessment Evaluation in Portugal, and the Role of New Information Technologies: The Case of S. João da Talha's Incinerator for Solid Urban Waste", in Machado, J. Reis and Ahern, Jack (eds.), Environmental Challenges in an Expanding Urban World and the Role of Emerging Information Technologies, National Centre for Geographical Information (CNIG), Lisbon, Portugal, 538 pages, pp. 1-11 | Research&Development
1997/06/15 | FINAL clean VERSION of FAQ-IMS (3.0): 445 question-answer pairs | Research&Development
1997/12/31 | IMS FINAL REPORT (Portuguese version) | Research&Development
5.5.5. Timelines
Based on the "IMS project chronology research record", summarized in table 5.5.4.-1, we can build timeline tables that provide a global overview of the experiment phases and milestones.
Table 5.5.5.-1 shows an aggregated timeline view of the case studies (considered, studied, prepared and finally the one implemented, with respective aggregated phases), against the background of the development of the information technologies used in the thesis experiment.
Tables 5.5.5.-2, 5.5.5.-3 and 5.5.5.-4 show a more disaggregated view of the CTRSU Case Study Timeline, respectively for 1995, 1996 and 1997.
From these timelines, it is clear that, besides the preparatory work in meetings with the main actors involved in the EIA review process, already described in the respective chapter, the other key step to launch the thesis experiment was to assemble a panel of experts to support the IMS project. I proceed to describe, in the next chapter, the expert panel and its work.
[Tables 5.5.5.-1 to 5.5.5.-4: aggregated case study timeline and CTRSU Case Study Timelines for 1995, 1996 and 1997]
5.6. The Expert Panel
Introduction; The IMS Expert Panel; Building a vocabulary base; Knowledge Classes or Canonical Representation; Building a domain taxonomy; Building an "issue" taxonomy; Using taxonomies to structure knowledge.
5.6.1. Introduction
An Intelligent Multimedia System, just as any knowledge-based information technology and no matter how sophisticated, is nothing but an empty shell, without the essential: the knowledge content.
This is why one of my first steps, from the early stages of the experiment (August 95), was to invite experts from several domains related to this EIA to form an Expert Panel for the IMS project.
In this chapter I present the essentials of the work done by the Expert Panel and some of the problems raised (and dealt with) in the process, concerning both the knowledge structure and the requirements of a collaborative enterprise.
5.6.2. The IMS Expert Panel
The mission of the IMS Expert Panel was to provide the IMS knowledge content, and a forum for discussion and peer review of my approach to the corresponding knowledge structure and representation. Naturally, the knowledge inserted into the IMS prototype (and/or the web site) could originate from other sources, with the Expert Panel acting in this case as a review / advisory board.
The researchers and professionals that served in this Expert Panel were:
Solid Urban Waste: Engª Ana Teresa Chinita (MSc), Engª Madalena Presumido, Engª Paula Gama, Engª Deolinda Revez;
EIA Methodology: Prof. João Joanaz de Melo (Ph.D.);
Air and Emissions: Engª Luisa Nogueira, Engª Paula Carreira;
Water: Drª Ana Mata;
Traffic and Noise: Engª Maria João Leite;
Environmental Economy: Drª Angela Cacciarru, Engº Pedro Sirgado;
Environmental Psychology: Drª Sofia Santos, Prof. José Manuel Palma (Ph.D.);
Social Service and Public Health: Drª Filomena Henriques, Dr. Pedro Migueis (MD);
Social Anthropology and EIA: Prof. Timothy Sieber (Ph.D.).
The Panel composition was an interesting balance of researchers (faculty) from academia (4), experts from environmental NGOs (4), professionals from environmental private companies (3) and technical staff (plus one MD) from national, regional and local (municipal) public administration and services (4+1+3). However, they did not serve on the IMS Expert Panel as representatives of any institution, but in an individual, voluntary (non-remunerated) capacity. Besides these experts, many more contributed to the IMS knowledge base. A full list of all persons and institutions that cooperated in the IMS Project is included in the Appendix.
My role in relation to this Expert Panel was to act as the Panel moderator, the knowledge engineer, and the interface with all institutional and formal contacts. All decisions involved were my sole responsibility, including those concerning the choice of the knowledge set and knowledge structure to use in the system, although I always deferred to the Panel's opinion in all matters specifically related to their areas of expertise.
All panel members were introduced to the project by means of demonstration sessions with the IMS prototype, and a brief presentation of the project objectives and plan. The main support documents were the Portuguese project proposal, and the thesis proposal.
During a first phase (November 95 - January 96), the Expert Panel discussed the target audience for the IMS, set a strategy to organize data and concepts, built and classified a vocabulary base, and contributed to define taxonomies for the IMS knowledge units.
5.6.3. Building a vocabulary base
The first meeting (2 November 95) discussed what kind of users and user applications the prototype could have, and which ones should be defined as a priority. The audiences targeted as the primary users were: a) individual citizens, b) the EIA review committee and staff, c) environmental NGO activists. From there I suggested a preliminary strategy for organizing data and concepts.
As described in the design section, at the heart of the Intelligent Multimedia System is a knowledge base (KB). In general, we can look at any KB as made of the following base components:
First, knowledge chunks, called knowledge units (by convention);
Second, structure. This structure is defined as the means for classifying and organizing the knowledge units.
In my view, it was preferable to begin by generating a seed of basic knowledge units, and only then move in the direction of a structured format. One of the reasons for this strategy was the different backgrounds of the panel members, usually linked to different reference systems, each with its own "language." In order to establish a common reference, it was important to build together a basic language of terms and concepts, shared and understood by all (meaning the same for all), in the process of creating the knowledge base.
With this in mind, the expert panel proceeded to build a list of vocabulary related to EIA in general, and to the issue in question in particular (an incinerator for solid urban waste). This was done by means of a brainstorming session, led by a senior expert in solid urban waste management. Participants threw on the table, in an uninterrupted, free flow, words and sentences that reflected events, concepts, institutions or objects felt to be relevant. Table 5.6.3.-1 shows a small sample of the vocabulary (full list in the IMS CD-ROM).
Table 5.6.3.-1 - Sample of IMS Vocabulary

Accountability; Air pollutant; Air pollution control; Anthropogenic sources; Ash; Bag filters; BAT - Best Available Technology; Chimney effect; Compensation; Contamination; Continuous sampling; Droplet separator; Effective chimney height; Emission rate; Flute; Fly ash; NIABY - Not In Anyone's Backyard; NIMBY - Not In My Backyard; NOTOF - NOt in my Term of OFfice; Photochemical reaction; Photochemical smog; Plume; Poison; Polychlorinated dibenzo dioxin; Public consultation; Recycle; Reduce; Regeneration; Scrubbing; Soil contamination; Spot sampling; Strategic Planning; Toxic waste; Unprotected waste sites; Water contamination; Water reservoir; Water supply; Zero option
In a short sequence of meetings we had compiled close to eight hundred vocabulary units, a number that later rose to eleven hundred. It soon became crucial to prune and weed out the terms that were not really important and, at the same time, to begin a first attempt at classification. But this was no trivial task. As a result of the free-flow, brainstorming style, the vocabulary was very heterogeneous in all aspects: size (from single words to full sentences), level of abstraction, level of correlated meaning (synonyms), level of interdependency with other terms needed to pin down a precise meaning, etc.
To deal with the vocabulary properly, I elaborated and presented for discussion a definition of knowledge classes, or canonical forms of knowledge. This was a critical step, without which it would not have been possible to build a real-world-size knowledge base.
5.6.4. Knowledge Classes or Canonical Representation
My approach to dealing with the multiple-domain / multiple-source problem in building knowledge bases was to establish a non-ambiguous, mutually exclusive classification of different types of knowledge, in other words, a canonical representation. The rationale is that by encapsulating each and every knowledge unit in one of these categories, we create a virtual level of knowledge representation where the dominant traits are not domain-dependent, since we can define them at a syntactic level, instead of a semantic level.
This canonical representation was achieved by reviewing a large set of multi-domain vocabulary (more than one thousand items) and several field taxonomies (from different school curricula, job market demand and supply of domain qualifications, etc.). I did not limit myself to the IMS vocabulary list, because I wanted to create a general-purpose categorization, not one applicable only to this specific domain. As a result, I identified the following categories: Term; Concept; Definition; Model; Rule; Norm; Procedure; Methodology; Description. In Table 5.6.4.-1, I present my formal definition of these knowledge classes.
Table 5.6.4.-1 - Knowledge Classes or Canonical Representation

Term: Single word or short sentence; represents an element of technical, scientific or cultural vocabulary, or a variable in an algebraic expression; may be defined in simpler and less technical language (glossary); does not require extensive explanations or a complex theoretical foundation; its definition may contain other terms only.

Concept: Word or sentence; represents an idea or abstraction (technical, scientific or cultural), or a knowledge domain (class, sub-class, domain); may be explained in lay language, eventually requiring a more or less complex theoretical foundation; its explanation may contain terms or other concepts, of similar or lesser complexity.

Definition: One or more sentences; represents the exact, non-ambiguous explanation of a term or concept, or establishes an axiom, which should, in this case, be considered a term or concept; there may be more than one definition per concept, and they may or may not contradict each other (if they do, it implies the co-existence of several truth/belief systems); the explanation may contain other terms and concepts, other than the object being defined, of similar or lesser complexity.

Model: One or more algebraic expressions (set of variables linked by algebraic or logical operators); may establish an axiom (variables must also be considered terms).

Rule: Regular expression [IF precedent THEN consequent], in which precedent and consequent are a set of one or more conditions linked by the logical operator AND, where a condition is a 3-tuple (variable, operator, algebraic value); represents a causal or dependency relationship between phenomena, identified through investigation and not arbitrarily set.

Norm: Regular expression [IF precedent THEN consequent], in which precedent and consequent are a set of one or more conditions linked by the logical operator AND, where a condition is a 3-tuple (variable, operator, algebraic value), and the consequent part may be a set of conditions or a set of procedures; represents a causal relationship resulting from arbitrary determination.

Procedure: One or more phrases or images; represents a sequence of one or more acts (operations, interventions) of one or more agents acting on one or more target objects (people, things, entities, etc.); is conditioned by rules or norms.

Methodology: Set of norms and procedures.

Description: One or more phrases, images or sounds; factually represents things, people, entities, places, events, situations or states; may contain models, terms, concepts and other descriptions.
Besides opening the door to building a structured environment for the knowledge base, which was the main reason to do it, creating a classification standard has other potential uses. If we can fit all vocabulary units in these categories, then associate each category of knowledge representation with a certain computer implementation of that knowledge representation, and further map the preferences for the corresponding media representations (the media channels that will represent each class of knowledge), this could facilitate automating the knowledge input into an intelligent system. In other words, to add a certain knowledge unit to the system, the user would identify what class it belonged to, and the system would automatically find the best form of representing it in the computer.
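A hypothetical sketch of such an automated mapping is given below. The storage and media choices in the dictionary are illustrative assumptions of mine, not the mapping actually adopted in the IMS.

# Hypothetical mapping from canonical knowledge classes to preferred
# computer representations and media channels (the actual IMS choices may differ).
CLASS_REPRESENTATION = {
    "Term":        {"storage": "glossary entry", "media": ["text"]},
    "Concept":     {"storage": "hypertext node", "media": ["text", "picture"]},
    "Definition":  {"storage": "glossary entry", "media": ["text"]},
    "Model":       {"storage": "formula object", "media": ["text", "picture"]},
    "Rule":        {"storage": "IF-THEN structure", "media": ["text"]},
    "Norm":        {"storage": "IF-THEN structure", "media": ["text"]},
    "Procedure":   {"storage": "step list", "media": ["text", "picture", "video"]},
    "Methodology": {"storage": "linked set of norms and procedures", "media": ["text"]},
    "Description": {"storage": "multimedia document", "media": ["text", "picture", "video", "sound"]},
}

def representation_for(knowledge_class: str) -> dict:
    """Given the class chosen by the user, return how the system would store and display it."""
    return CLASS_REPRESENTATION[knowledge_class]

print(representation_for("Procedure"))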
To test my canonical representation, I distributed to the IMS Expert Panel a list of the compiled vocabulary, asking them to identify the class of knowledge for each unit. Table 5.6.4.-2 shows a sample of the classification grid, also used for other information. Besides the table columns, for each vocabulary unit there was a request, if applicable, for: a glossary entry, related experts, and related institutions. This exercise consolidated, with some refining, the proposed definitions, but also proved to be a very time-demanding task, even with the support of some computer tools I provided.
Table 5.6.4.-2 - Sample of the vocabulary classification table (knowledge class and other information)

Vocabulary | Classification (knowledge class) | Synonyms & plurals | Domain sub-classes (immediate subsets only) | Related or associated terms | Domain class
3 R's | Concept | RRR, R3 | - | reduce, re-utilize, recycle | Waste management
Acid fluoridric | Term | - | Acid fluoridric pure, Acid fluoridric solution | acid | Chemistry
Garbage packaging | Procedure | waste packaging | organic waste p., non-organic waste p. | waste compacting | Waste management
Aerosol | Term | - | ozone depleting, non ozone depleting | gas | Air
5.6.5. Building a domain taxonomy
In the process of exhaustively classifying each and every vocabulary unit, as shown in table 5.6.4.-2, I soon concluded this would be too long an effort, with the risk of "burning out" my collaborators at an early stage. However much theoretical value these efforts could bring, such exhaustive classification was not indispensable. For the practical purpose of obtaining an immediate result, that is, creating a consistent and large enough set of knowledge units to make a usable prototype for a valid experiment, what we had was enough. So I fine-tuned the approach as we went and, on the fifth or sixth iteration with the panel of experts, I started focusing on building up a domain taxonomy, taking advantage of my knowledge categories -- and the classification effort made so far -- by using only the vocabulary units classified as terms (or eventually as concepts). This focus was successful, and while it also proved to be a hard task, it was a very useful one.
To finalize this first cycle, I asked the panel to fill in a written internal survey, to establish a starting point for building a common language and common references, and to agree on a plan and priorities.
Building a domain taxonomy was not as straightforward as I imagined it would be. From my point of view, a domain taxonomy was a simple hierarchy tree, with global domain areas near the root and specialized areas near the leaves.
The first challenge was that many of the terms in the vocabulary were shared by different domains and sub-domains, so of course we had more of a general graph than a simple tree. For the purpose of simplification, I insisted that the experts try to always add a qualifier to the term, in such a way that we could be certain the term was unique in a tree structure of the taxonomy. For instance, if we had the term "quality control," which was obviously shared by several domains and even sub-domains, then we would just add a qualifier, like "quality control of an industrial procedure," "quality control of the state of the air," "of the state of the water," and so on. This worked more or less, but some of the terms became long and cumbersome.
Another problem was that there was no common, universally shared way of organizing the domains, even among specialists of the same area and the same specialization, such as some of the panel experts in solid urban waste, public health, or air pollution. Moreover, each of them had multiple ways of organizing the domains for each sub-class, and they were not sure about their own view, so they kept changing the structure.
When working together as a team, the experts would have very lively arguments on whether we should build the tree as a function of one criterion or another, such as functionality (domain applications), or the ways the domain was organized in their professional practices. So, one of the interesting outcomes of this work was to produce a small set of valuable criteria according to which we could build different taxonomies for the same domain. Two of them emerged as the most serious candidates:
1) One simple way of organizing domains is to consider the "scholar" approach, that is, how the domains are organized in academia, in terms of general degrees, specialization degrees, course topics and sub-topics, etc. Other sources for this approach are the books considered major domain references, following the way they are organized, such as their chapter structure.
2) The other possible way, or "market" approach, was to build the taxonomy around market demand, which supposedly reflected some "real world" organization of the domain. So, if people were hired because they were experts in air pollution, or experts in water quality, etc., and not sought after as experts in natural resources in general, this would reflect some kind of horizontal or vertical way in which the market divided its needs (both of private enterprises and of public services). One exemplary source was a directory called the "green directory," where many environment-related professionals were listed under areas of specialization. Such areas reflected the way the market imprinted its mark on the organization of the domain.
It was a difficult choice, so we tried both. In general, we could coalesce the academic ("scholar") approach with the market approach in a more or less coherent way. Tables 5.6.5.-1 and 5.6.5.-2 show small samples of the resulting domain taxonomy (full taxonomy included in the Appendix). But, as usual, a new difficulty arose, from another direction.
Table 5.6.5.-1 - Domain Taxonomy for Domain "Environment"

Air; Environmental auditing; Environmental impact assessment; Environmental information systems; Environmental risk analysis; Forestry ecology; Hazardous waste; Human ecology; Soil; Solid waste; Water
Table 5.6.5.-2 - Partial Domain Taxonomy for Sub-Domain "Air"

Air - sub-divisions: Emissions; Physics and chemistry of the atmosphere; Atmospheric pollution at global scale; Air quality

Emissions - sub-divisions: Emission characterization; Emission classification; Classification of emission sources; Control and reduction of emissions; Emissions from stationary sources; Emissions from moving sources; Emission inventory; Legislation and regulation of emissions; Emissions monitoring; Emission norms
5.6.6. Building an "issue" taxonomy.
The more organized the domain taxonomy grew, the clearer it became that a totally different way of looking at the taxonomy was to focus on the current issues at stake.
This was an important aspect, if not the most important, of our target for the structuring effort. If we were going to organize the domain in terms of its specific use, having in view specifically the problem of the environmental impact assessment of a solid urban waste management proposal, in particular the incinerator issue, then we would have to look at it from a totally different angle, from which it was not so useful to follow the lengthy, general-purpose academic or market approach.
On one hand, we had created a general taxonomy structure, with domains organized in a composite of "scholar" and "market" ways. For instance, at the top level we had environment, economy, medicine, chemistry, law and so on. Then "environment" was subdivided into several sub-areas and sub-sub-areas, but the "law" domain not nearly as much. So we ended up with a somewhat unbalanced domain structure: a large grid in general, and a much more filled-in and detailed tree structure in the specific areas where we had more terms, because they were more closely related to the topic they were going to be applied to.
On the other hand, we had the issues in question, easier to organize according to the class of problems or the class of questions that were going to be raised, having to do with the different aspects of the development of a solid urban waste incinerator. So, we had issues such as scheduling, construction, impacts, issues of control, and so on.
How could we find a compromise, without completely destroying the consistency of each model of building a domain taxonomy?
The best solution was to identify and build two completely different trees:
1) One, which we called the domain taxonomy, based on the scholar or market way of organizing the field. This would be a more stable kind of structure in the system, useful not only for this problem, but also for future applications of this prototype to other kinds of EIA problems, even ones very remote from the solid urban waste issue.
2) The other, which we called the issue taxonomy, a problem-oriented structure totally focused on the issue at hand. This would allow targeting the specific problems raised in dealing with a specific solid urban waste management proposal, in particular incineration. This later became the natural framework for organizing the compiled list of frequently asked questions.
This duality greatly simplified the process of building both taxonomies, and allowed us to focus in more detail on the more relevant one, the "issue" taxonomy. Table 5.6.6.-1 shows an initial version of the top level of the "issue" taxonomy. The final, complete version is presented in a dedicated chapter ("The FAQ Model"), given its special relevance; it was at this time that the Expert Panel discussed most of the substantive matters of the case, as the designation "issue taxonomy" suggests.
Table 5.6.6.-1 - Initial version of "Issue" Taxonomy, root level

Advantages; Alternatives; Compensation; Construction; Decisions; Facility; Impacts; Minimization; Operation; Precedents; Risk; Sites; Status; Technology; Transportation
Again, the natural tendency of the experts was to try to refine it more and more, and to re-question it each time there was a new term. At some point, I called for the closure of the taxonomy, since it was time to move on and begin concentrating on the next step: compiling knowledge units, such as definitions (glossary), rules and question-answer pairs, and integrating them into the taxonomy framework.
5.6.7. Using taxonomies to structure knowledge
How were these taxonomies used to structure the knowledge within the Intelligent Multimedia System? The basic plan was to link each knowledge chunk that could be collected, in the form of a written or recorded statement, a picture, a video of an interview with an expert, etc., with one of these taxonomy terms, therefore positioning the knowledge unit within a common structure.
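As a sketch only, assuming a simple node structure of my own (the names and example content are hypothetical, not taken from the prototype), linking a knowledge unit to a taxonomy term could look like this:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaxonomyNode:
    term: str
    parent: Optional["TaxonomyNode"] = None
    children: List["TaxonomyNode"] = field(default_factory=list)
    documents: List[str] = field(default_factory=list)   # knowledge units "stamped" with this term

    def add_child(self, term: str) -> "TaxonomyNode":
        node = TaxonomyNode(term, parent=self)
        self.children.append(node)
        return node

# Building a fragment of the "issue" taxonomy and attaching a knowledge unit:
root = TaxonomyNode("CTRSU issues")
impacts = root.add_child("Impacts")
emissions = impacts.add_child("Emissions")
emissions.documents.append("FAQ: What emissions will the incinerator produce?")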
What advantages does this application of the taxonomy have, besides organizing the data and knowledge units in a consistent form?
The immediate advantage is to provide by default a meaningful keyword linked to each knowledge unit, usable for search and retrieval operations.
Another advantage, less obvious, is more illustrative of the role of a taxonomy versus the simple use of keywords. For instance, we can apply object-oriented tools so that, if at some point the system tries to gather some information regarding a certain topic but cannot find any, it will move up the taxonomy tree and find the "thematically nearest" available documentation.
This mechanism is usually referred to as "inference by generalization." While the output will be somewhat more general, at least it will be more enlightening about the empty slot than if we did not have this ability. Instead of giving the user some machine feedback like "I am not programmed to answer this question," the system can respond: "I do not have an answer for that specific question, but I do have information on the general topic, here it is," and generate, for instance, a multimedia book with the corresponding data trail.
In this context, the use of synonyms was yet another advantage, because the user may ask about a certain term for which the system has no information, and still get an answer. For instance, consider "spot sampling," a term within the domain of air pollution control. If the user asks about it and there is nothing about it in the knowledge base, there might still exist information on "grab sampling," which is a synonym of "spot sampling." So the system can also move laterally, thanks to this structure of synonym, or "sibling," nodes, instead of only being able to move between parent and child nodes. This facilitates the use of a common language for querying the system.
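A minimal sketch of these two retrieval moves, generalization up the tree and lateral movement through synonyms, might look as follows. The synonym table, node class and documents are illustrative assumptions; as noted below, the real IMS implemented only a subset of this behavior.

# Minimal sketch of "inference by generalization" plus synonym lookup,
# assuming a node structure like the one sketched in the previous example.
SYNONYMS = {"spot sampling": ["grab sampling"]}   # illustrative entry

class Node:
    def __init__(self, term, parent=None):
        self.term, self.parent, self.docs = term, parent, []

def find_documents(nodes_by_term, term):
    """Return the thematically nearest documents for a queried term."""
    # 1) try the term itself, then its synonyms ('sibling' moves)
    for candidate in [term] + SYNONYMS.get(term, []):
        node = nodes_by_term.get(candidate)
        if node and node.docs:
            return node.docs
    # 2) otherwise climb towards the root until some documentation is found
    node = nodes_by_term.get(term)
    while node:
        if node.docs:
            return node.docs
        node = node.parent
    return []   # nothing found, even at the root

# Tiny example: "air pollution control" holds a document, "spot sampling" holds none
air = Node("air pollution control")
spot = Node("spot sampling", parent=air)
air.docs.append("Overview text on air pollution control")
index = {n.term: n for n in (air, spot)}
print(find_documents(index, "spot sampling"))   # falls back to the parent's document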
On the other hand, these object-oriented mechanisms posed an extra demand on the system coding and loading, in order to bring such tools to the user interface. As it happened, it became hard to exploit their full power within the extremely short time frame available, and only a small subset was implemented. Table 5.6.7.-1 shows the guidelines I distributed to the Expert Panel, and gives a more exact idea of how the taxonomy was used to structure the knowledge base.
Table 5.6.7.-1 - Guidelines for the use of taxonomies to structure IMS knowledge

1. The taxonomy is used to: a) classify ("stamp") documents (text, image, video, sound); b) identify people's specializations (responsibilities); c) identify competencies (responsibilities) of entities (and sub-divisions of entities); d) classify (catalog) questions and answers; e) identify targets for incoming mail, comments and opinions [depending on b and c].

2. A term (vocabulary item) must be present in the taxonomy if and only if it is indispensable for one of the above functions.

3. Apart from the terms present in the taxonomy (each of which has a unique place in that taxonomy), others might exist that function as keywords, and that might be associated with one or more terms of the taxonomy, at any level. These keyword terms are used to: a) enrich the system's glossary; b) ease cross-referencing between documents and answer segments in order to answer "non-anticipated" questions; c) solve multi-interpretation conflicts (for example, interpretation dependent on context).
The Expert Panel fulfilled its function remarkably well, with great dedication and enthusiasm. But not without difficulties, some arising from the complex nature of the problem, some more practical but no less formidable, arising from the challenge of merging, in a very short period of time, the contributions of many experts with very different backgrounds and, above all, very busy schedules. Its major contribution was the successful "FAQ" model, described in a dedicated chapter; but equally important, from a pragmatic point of view, was the way it managed to work together, thanks precisely, and only, to the new information technologies under analysis. The next chapter reports this endeavor.
5.7. The Collaborative Tools
Introduction; Internet for IMS project; IMS vocabulary management tool; Collaborative outcome
5.7.1. Introduction
In order for the Expert Panel to function at all, it was necessary to create a collaborative infrastructure to support it. Without it, it would not have been possible to obtain the contributions of senior experts, extremely busy with their own regular work. It was also difficult to integrate the work from the different perspectives brought by different backgrounds, and here again collaborative tools were fundamental. But the need for these tools extended beyond the Expert Panel; it reached the institutional actors in charge of the EIA review, although on a lesser scale and depth.
In reality, this issue is an integral part of the experiment, since it concerns the relationship between the performance of actors in the EIA review process and the role played by new information technology, in this case in the form of collaborative tools. In this chapter I present the conditions that led to install or implement such tools, and the way they were applied.
5.7.2. Internet for IMS project
Today, the use of the Internet as the key support for collaborative work seems trivial, but in 1996, in Portugal, Internet use was not widespread, and access was restricted and expensive. Even in universities, where most Internet access and use was concentrated, a change was only beginning to emerge in the previously predominant, albeit unwritten, policy that regarded Internet access and email addresses as a privilege of a few, usually faculty. In Portuguese society as a whole, computer use levels were low compared with other EU countries. This is reflected in the official numbers in Fig. 5.7.2.-1.
Fig. 5.7.2.-1 - Personal Computers in the European Union, 1997 (source: MCT 1999)

The panorama only got worse when moving away from the academic world. The large majority of the public administration was not connected to the Internet or, if it was, access was again restricted to one or two accounts, for the exclusive use of top-level staff. The public agencies involved with EIA reviews were just beginning to install Internet links, and practically all the staff involved had no email and were not familiar with the concept. According to official numbers, even by 1997 only 28 public administration organisms were connected to the Internet, and only 19 had web pages (Rodrigues 2002). Note that these are not percentages, but absolute quantities... which were much closer to zero in 1996, in a country with a population of 10 million and one of the largest per capita rates of public administration employment within the European Union.
It was actually the IMS project web team from CITIDEP that installed the first Internet access and email account at DRARN-LVT, the public administration agency in charge of the CTRSU EIA review, so that EIA Review Committee staff could participate in the thesis experiment, enabling them to communicate within the Committee and eventually with my team. This was, indisputably, one clear instance of introduction of new IT in the process by the thesis experiment.
As for private users, numbers for 1997 place Internet clients at 0.9 per 100 inhabitants (ICP 2000), and the overall percentage of Internet users at only 3% of the population (Rodrigues 2002). We can suppose that in 1996 this percentage was considerably lower still, since the Internet "acceleration" in Portugal began that year, with the percentage of Internet users reaching 22% in 2000 and 30% in 2001 (Rodrigues 2002). The same holds for web presence, as shown in Fig. 5.7.2.-2 and Fig. 5.7.2.-3.
Fig. 5.7.2.-2 - Web hosts in Portugal vs. EU and OECD averages (source: MCT 1999)

Fig. 5.7.2.-3 - Yearly increase in web domains in Portugal (.pt domain) (source: MCT 1999)

Again, it was the IMS project web team from CITIDEP that arranged the first email account for Valorsul -- the CTRSU proponent -- as well as registered their web domain (www.valorsul.pt), along with designing and implementing their first web page. This was an indispensable step, in order to later publish EIA information on the web, a pioneering initiative. Curiously, according to figure 5.7.2.-3, both the CITIDEP and Valorsul domains were among the first thousand registered in Portugal.
It was therefore no surprise to find that nearly half of the Expert Panel members had no regular access to the Internet, and some no access at all. The first step towards regular communication was to acquire several modems and two portable computers, and to ask one university (FCT-UNL) to create email accounts with authorized external access (another rarity). The second was to organize training sessions to familiarize members with Internet services and related software such as mail clients, web browsers, etc. This process was organized through CITIDEP, which played an important role in this aspect, as noted in the "Actors" chapter.
Creating regular habits of checking email and of using group mailing lists to communicate (even when the interlocutor was only me, so that everyone was kept involved without the need for too many meetings) seems like easy routine nowadays. At the time, it was an indispensable step and a critical requirement for the very existence of the Expert Panel, and its contribution, as a viable working group.
However, Internet-based collaborative tools beyond email and web browser software had too many limitations (all the more so in 1996) and did not satisfy all the needs of demanding and time-consuming tasks, such as vocabulary classification and taxonomy building. For that purpose, other tools had to be made available to the IMS Expert Panel, or we would suffer the consequences: either the progressive defection of panel members, exhausted by the process, or my inability to integrate all their contributions on time in a consistent way. More probably, a combination of the two would happen.
Since there was no commercial software package fitting the operation at hand, I programmed one, incorporating it in the Intelligent Multimedia System.
5.7.3. IMS Vocabulary Management Tool
In the previous chapter ("Expert Panel"), I described the groundwork done by the IMS Expert Panel towards defining a suitable structure for the IMS knowledge base. The first milestone of this work, in December 95, was a list of base vocabulary; the second, in March 96, a dual taxonomy framework ("domain" and "issue" taxonomies). But before reaching the final product (workable taxonomies, for a real-world-size knowledge base), many obstacles and difficulties had to be overcome. Understanding these obstacles, and the strategies developed to overcome them, is one of the rich contributions of the thesis experiment for any future endeavor of the kind. Here I describe a substantial part of this knowledge-engineering process and present the collaborative software I programmed, a set of modular tools working together.
I want to emphasize that all this non-glamorous, tedious, collaborative problem analysis and problem solving is not a simple detail. One of the main goals of this experiment was to test real-life conditions for creating a workable, realistic IMS prototype, which meant testing (and solving) real problems of implementation and design in field conditions.
One of the harder problems is that one is dealing with non-standard concepts and structures. Where there is no unique solution for the classification criteria, collaborative tools are needed to deal with the potential trouble spots that arise when merging the work of several team members. Without tools that support this collaborative effort, experts are forced into multiple sequences of meetings just to merge their work, which is expensive, unfeasible, or both.
By the end of the first "taxonomy building" cycle, we still had close to 800 terms, but we had created a much more focused series of terms within the vocabulary. Those were, naturally, Portuguese terms, but we translated a significant number; figure 5.7.3.-1 shows we had 516 English-translated terms.
The software I designed to help with the classification was now ready to produce entire families of each area from any node in the taxonomy tree. This was a useful way to confront the different opinions of experts who could not come to a meeting.
5.7.3.1. Inserting data
The normal tree data structure consists of a "parent" node with "children" nodes. From here we can check repetitions or inconsistencies, such as having more general terms placed downward, near the leaves, instead of near the root. I developed a series of modular tools to clean up this structure, to check inconsistencies and to facilitate multiple iterations of merging collaborative work.
Fig. 5.7.3.-1 - IMS Vocabulary Management Tool - Inserting a new group

The design of the tool is simple: there is an English and a Portuguese index, and the user can define groups of vocabulary. When choosing to insert a new group, the user can write a vocabulary list with a shared metadata header. This way, not only the new vocabulary is inserted automatically, but also all the elements of classification shared by that list, such as kind, sub-kind, domain class, domain sub-class, complexity, issue class, issue sub-class and relevance. This allows a "batch" data input process.
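A hypothetical re-creation of this "batch" group insert, in Python rather than the prototype's own environment, could be sketched as follows. The header fields mirror the ones listed above; the function name and the example terms are illustrative.

# Hypothetical re-creation of the "batch" group insert: a list of new
# vocabulary items receives a shared classification header in a single operation.
def insert_group(vocabulary_db, terms, header):
    """Add every term of the group with the same classification metadata."""
    for term in terms:
        if term in vocabulary_db:
            continue                        # skip duplicates; the real tool flagged them
        vocabulary_db[term] = dict(header)  # each record gets its own copy of the header

vocab = {}
insert_group(
    vocab,
    terms=["bag filters", "droplet separator", "fly ash"],
    header={"kind": "Term", "domain_class": "Air", "domain_subclass": "Emissions",
            "issue_class": "Impacts", "complexity": "medium", "relevance": "high"},
)
print(vocab["fly ash"]["domain_class"])   # -> "Air"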
5.7.3.2. Merging data
It was possible to import from "clones" of this tool (stacks), that is, exact copies of it containing both the vocabulary data and classification done so far by one member of the Expert Panel. Each "clone" had the full set of modular tools.
This was an important feature to smooth teamwork. There were several copies of the tool spread among the team members. The challenge was to merge different clone copies and deal with the potential contradictions, such as when two team members classified the same term differently, or placed the same term in different places of the taxonomy's hierarchy. Special care was given to the validating routines and to acquiring experience on the typical problems of merging team work.
I had to program truth-maintenance routines and merging algorithms, with a user interface in a local maintenance card. Figure 5.7.3.-2 shows the several elements or steps involved in merging two stacks, or in checking possible contradictions. These could arise not only between two members of the team but also from mistakes or contradictory inputs from a single member.
After many iterations of developing and testing this tool, we identified the following major lines of procedure:
There were two possibilities: one was to merge two (stack) clones of the same tool; the other, to merge an export from another tool into a file which could be, for instance, easier to send by electronic mail. In the latter case, the data exported from other stacks into files would be sent by e-mail and then merged into the master IMS vocabulary stack, which resided on my computer, as the moderator of the team.
Whatever the step involved, whether merging from a stack or from a file, the "target" stack had to be cleaned of possible contradictions and checked for consistency prior to the merge procedure. Only then was it reliable to proceed with checks on the new data to be imported. When merging it, we could then check, step by step, potential points of conflict. There were either sequential sub-steps or independent steps, such as checking duplicates.
Fig. 5.7.3.-2 - IMS Vocabulary Management Tool - Merge and Consistency Check Routines

On one hand, we need to identify whether there is a conflict of classification (conflict-catching routine). On the other hand, we have new terms (new records or "cards") and we must check the new structure. So it is like a multiple decision tree: if there are conflicts, then we need a procedure to deal with them, and so on.
Once a stack (clone) was cleaned and its consistency checked, there sometimes remained names with blank or strange characters, such as the "return" character. We had to rebuild the index to make sure that all the references we were using to merge were actually updated.
When adding new cards, dealing with conflicts, rebuilding the index or cleaning the stack, there are "low level" actions more or less shared by all of them, such as checking the structure. Checking the structure means, among other things, checking the parent-child node relationships and ensuring that there is no contradiction in the kind of classification.
We must recall that we are dealing here with two different taxonomy structures (domain and issue). If a term belongs to the "domain" taxonomy tree, its child nodes and its parent nodes must also belong to the "domain" taxonomy, and the same consistency must exist for any term belonging to the "issue" taxonomy.
This is complicated by the fact that people sometimes used the same terminology to identify an element of the "issue" taxonomy and an element of the "domain" taxonomy because, in fact, they were the same. We had to force an artificial distinction, as said before, and the way to do it was to add a qualifier to the term, in order to distinguish it syntactically.
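A minimal sketch of this structure check, assuming each node records which taxonomy it belongs to and its children; the data model and function names are illustrative only:

```python
# Every parent-child link must stay inside one taxonomy ("domain" or "issue"); a term used
# in both trees receives an artificial syntactic qualifier, as described above.

def check_branch_consistency(nodes: dict) -> list:
    """nodes: term -> {"taxonomy": "domain" | "issue", "children": [...]}; returns violations."""
    errors = []
    for name, node in nodes.items():
        for child in node["children"]:
            if child not in nodes:
                errors.append(f"missing child record: {child}")
            elif nodes[child]["taxonomy"] != node["taxonomy"]:
                errors.append(f"{name} ({node['taxonomy']}) has child {child} "
                              f"({nodes[child]['taxonomy']}) in the other taxonomy")
    return errors

def qualified(term: str, taxonomy: str) -> str:
    """Force the artificial distinction for a term occurring in both trees, e.g. 'noise [issue]'."""
    return f"{term} [{taxonomy}]"
```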
5.7.3.3. Dealing with conflicts
Even after identifying these major steps, we found ourselves faced with many exceptions, which somehow had to be internalized in order to make the automation of the merging procedure helpful.
The first consideration was that, instead of doing a blind merge when dealing with a conflict, we had to make a decision for each case: which term, which classification, or which position in the taxonomy hierarchy to keep, or whether one term should be kept and the other disregarded.
In some cases, I could define a weight attributed to each member of the team, according to their specialization and area of "expert authority" in the domain. For instance, if I was receiving a clone stack from a specialist in air quality, I knew that if there was a conflict between the air-related terms she was bringing and the ones already residing there from other members of the team, I should assign a priority value to her terms, because she was the specialist in this area.
I introduced several ways of handling these exceptions in the automatic process. For instance, I could define classes or sub-classes not to import or, on the contrary, classes to import. When merging a stack from someone doing classification for solid urban waste management, I could say "I don't want to import any term that has to do with air or any sub-class that has to do with that".
I provided a trace mechanism with an automatic report, so that I could trace back every action and every decision taken by the program, allowing me to go back and correct a wrong decision if needed. Of course this was critical, since building up these routines was a painful and lengthy process, with close to 40 iterations of re-programming, re-coding and re-checking.
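A minimal sketch of these three mechanisms (expert weights per area of authority, class filters on import, and the trace report), reusing the TermRecord sketched earlier; the weight table and all names are purely illustrative:

```python
# Conflict handling policy: priority by area of "expert authority", optional class filters,
# and a trace log so every automatic decision can be reviewed and reverted if wrong.

expert_weight = {("air quality", "air specialist"): 2.0}   # hypothetical weight table

def resolve_conflict(term, target_rec, source_rec, source_author, trace: list):
    area = source_rec.domain_sub_class or source_rec.domain_class
    w_source = expert_weight.get((area, source_author), 1.0)
    keep = source_rec if w_source > 1.0 else target_rec     # default: keep the master's record
    trace.append(f"conflict on '{term}': kept {'incoming' if keep is source_rec else 'existing'}"
                 f" record (author={source_author}, area={area})")
    return keep

def import_filter(rec, skip_classes=(), only_classes=None) -> bool:
    """Skip whole classes (or restrict to some) when merging someone else's clone."""
    if only_classes is not None and rec.domain_class not in only_classes:
        return False
    return rec.domain_class not in skip_classes
```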
5.7.3.4. Help in classification
To facilitate the work of my team, I also included in the tool a "help" card, with all the information needed to operate it, as well as the knowledge classification guidelines, etc., as shown in figure 5.7.3.-3.

Fig. 5.7.3.-3
- IMS Vocabulary Management Tool - Help card
I included a general guideline on how to identify a rule, a norm, a concept, a description, a term, a model, a methodology or a procedure (the canonical forms I had identified). There were also guidelines on how to deal with problems of translating to or from English / Portuguese.
5.7.3.5. Vocabulary record structure
The main record (card), shown in Fig. 5.7.3.-4, was identified by the name of a term. This name was inserted both in Portuguese and in English. There was a pop-up menu to identify which canonical form it corresponded to; this option also identified which classification tree was involved (domain or issue taxonomy).

Fig. 5.7.3.-4
- IMS Vocabulary Management Tool - Classification card, with children nodes
I had to add here two other sub-kind types: "metaclass" and "keyword only".
Metaclass was a way of keeping the structure consistent across the set of tools. Top classes of the domain taxonomy, such as administration, environment, architecture, anthropology, law, economy, etc., needed a "parent" node. These top-level layers of the domain taxonomy therefore belong to a metaclass, the "domain metaclass", thus identifying the taxonomy branch.
The "keyword only" sub-kind was an interesting product of the experience with the difficulties of trying to fit real world knowledge into this structure.
We needed some constraints in the representation, rather than a totally unrestricted environment; otherwise it would become much harder to implement it in a consistent way. Concretely, this meant that for the same input, the output must always stay the same.
Inconsistencies increase in direct proportion to the flexibility of the structure and to the possibility of ambiguity, which in this case means allowing one term to belong to more than one class (to be positioned in more than one place in the hierarchy), or allowing the same term to have different sets of children. So we had to impose some constraints, to secure some level of consistency. But there were cases where these constraints obviously impoverished the system, by creating an artificially reduced set of options that had no parallel in the real world, the real body of knowledge.
A good way to overcome this difficulty was to introduce a new sub-kind, the "keyword only". Besides the rigid taxonomies, we could allow another class or kind of terms, which we called keywords, because it was an appropriate designation. In this case, keywords did not belong to the taxonomy itself, meaning that a keyword by itself would not define a class or a subclass of knowledge, but would be a qualifier or an identifier associated with an element of the taxonomies.
In other words, each level of the taxonomy, like law, and each of the children of law, like civil law, environmental law, information law or international law, could have keywords associated with it. This allowed another web of ways of linking information without necessarily representing a relationship of the kind is-a-child-of or is-a-parent-of. Instead it would be an association, and such an association differs from a synonym because it does not have to represent an equivalent semantic component. For instance, keywords can represent more of a Thesaurus-like approach. More importantly, keywords can represent precisely those kinds of terms that were suitable to belong either to the domain taxonomy or to the issue taxonomy.
While we had to enforce that each term belonging to a taxonomy, whichever taxonomy was in question, would be unique, placed in only one specific position in the hierarchy, keywords could be freely associated with different levels of the same taxonomy trunk, or could even belong to both trunks, thereby creating a kind of horizontal path bridging the two taxonomies. This helped to lessen the handicaps imposed by the artificial constraints of the structure.
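A minimal sketch of the contrast between these two regimes (strict, unique placement for taxonomy terms versus free association for "keyword only" terms); the names used are illustrative:

```python
# A taxonomy term has exactly one parent and one position in its tree; a "keyword only"
# term can be attached to any number of nodes, in either taxonomy, bridging the two trunks.

taxonomy_parent: dict = {}     # term -> its single parent (unique position)
keyword_links: dict = {}       # keyword -> set of taxonomy nodes it is associated with

def place_term(term: str, parent: str) -> None:
    if term in taxonomy_parent and taxonomy_parent[term] != parent:
        raise ValueError(f"'{term}' is already placed under '{taxonomy_parent[term]}'")
    taxonomy_parent[term] = parent

def attach_keyword(keyword: str, node: str) -> None:
    keyword_links.setdefault(keyword, set()).add(node)      # no uniqueness constraint

place_term("environmental law", parent="law")               # domain taxonomy placement
attach_keyword("dioxins", "air quality")                    # keyword on the domain tree...
attach_keyword("dioxins", "public health")                  # ...and on the issue tree
```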

Fig. 5.7.3.-5
- IMS Vocabulary Management Tool - Classification card, with glossary
On the lower left part of the record, as shown in Fig. 5.7.3.-5, there is room for synonyms, children, keywords, entities, things and the glossary. These elements of information have different levels of significance.
Synonyms, children and keywords are associated with describing and defining the taxonomy and its immediate relations. Entities and things, on the other hand, represent relations with other specialized databases that are part of the design of the knowledge base.
Synonyms, as referred to before, are a way of creating horizontal cross-references for cases where two or more terms mean exactly the same thing, are all commonly used, and there is no strong reason to disregard some and keep the others.
Keywords, as described before, are a way of sprinkling this rigid structure with some flexibility, and therefore work more as qualifiers or identifiers that add semantic value.
The glossary is meant as a kind of dictionary entry for the term, or a description of the concept. Both keywords and the glossary were meant to enrich the information associated with each term.
Entities and things are in a totally different category. The idea is that the system was designed with a knowledge base at its core, but a knowledge base supported by a relational database whose major components were: people, entities, things, places and events. It was frequent to have entities inserted at this level, because during the brainstorming people threw onto the table terms that in fact represented institutions or groups of entities, like Associations of Municipalities. Therefore it made sense to provide a special slot of information in this tool, so that people could easily transfer the term into an entity list and eliminate it as a vocabulary term, because it did not make sense to include it in either taxonomy (issue or domain), but it did make sense to include it as a component of the corresponding database module.
5.7.4. Collaborative outcome
We completed the project with 1158 terms in the system. The goal was to acquire a reasonable set of consistently classified vocabulary. Thanks to the collaborative tools, the Expert Panel was able to put together such a structured vocabulary and build on it solid, consistent taxonomies. With these foundations, the IMS Expert Panel moved on to the next level of knowledge structure: the knowledge representation model, described in the next chapter.
5.8. The FAQ model
Introduction; Choosing the FAQ model; Issue Taxonomy for FAQ; FAQ metadata descriptors; Causal reasoning in the CTRSU issue.
5.8.1. Introduction
After building a good-sized vocabulary and classifying it, creating in the process a dual taxonomy (domain and issue), only one thing remained to complete the knowledge acquisition framework: to identify the main knowledge representation model to use in the IMS and to define its metadata descriptors.
From the design stage, I had identified two good candidates: the rule-based model, emerging from the infrastructure shortfalls domain representation, and the case-based model, with a questionnaire framework, emerging from natural resource management domain representation. The challenge here was to find out which one, or both, or another, would best be able to capture the essence of "planning knowledge" in the domain of solid urban waste management, more specifically the issues concerning an incinerator for urban waste and its impacts.
The Expert Panel option was unequivocally in favor of a variation of the case-based representation: the FAQ ("Frequently Asked Questions") model. In this chapter I briefly present this option, its rationale and form, and my view of the alternative rule representation for this case.
5.8.2. Choosing the FAQ model
While completing the classification of the vocabulary, and more in particular in the process of building the "issue" taxonomy, the Expert Panel discussed multiple aspects of the substantive matters concerning this case, exchanging views on the incineration of solid urban waste, on strategic planning for urban waste management, on the EIA structure, on the EIA review process, etc. At the same time, some of the members were actively involved in related committees: one was invited to a committee to draft a new EIA review law and regulation; another, for a planning committee on urban waste management; and so on.
One consequence of both this easy flow of discussion and these parallel engagements was that it was considerably faster to settle on the "issue" taxonomy than on the "domain" taxonomy. The dual approach clearly simplified matters by liberating the panel members from the ambiguities of the domain structure; moreover, many of them had acquired a clear view of the issues in question through these discussions and parallel assignments. The only real argument was between a structure following very closely the volume structure of the concrete CTRSU EIA study, once it became public, and a more idealized version of "what a good EIA study should be" plus "what a good EIA review and a good public consultation should consider".
Another consequence was that it became natural for them to enumerate series of questions about the CTRSU and the EIA, echoing their own concerns and parallel discussions.
By comparison, when asked to express in writing their causal reasoning (cause-consequence relationships between alternative options and their corresponding environmental impacts) in the form of IF-THEN rules, the answer would be a lengthy verbal statement explaining their views, hard to break down into rule conditions and variables, unless through a time-consuming working session with myself doing the translation from verbal expression to written rule.
It became obvious that such a rule-based "knowledge-mining" process was not viable in the short time frame available. At this stage, most of the work with the Expert Panel was done either in multiple meetings with a few different members each time, or in individual sessions with me, followed by email exchange, because it was impossible to reconcile all the busy schedules and keep the same rhythm with everyone.
The overall sense I got was that the panel members favored a kind of FAQ model, since it was so easy for them to raise questions on the subject. In consequence, I focused on debating whether it was reasonable to expect that we could anticipate the kind of questions that were going to be raised during the EIA review, and most particularly during the public consultation. If we were going to adopt a FAQ model, the main question was: since we did not have the "F" information to guide us, could we produce a good estimate?
From the point of view of the EIA Review Committee's questions, it seemed quite possible. Based on past experience from other EIA reviews, and also on the existence of some typical "check lists" for guiding both EIA studies and reviews, most experts thought it was a good bet, or at least the best option.
From the point of view of the public consultation, it was less clear. Lacking the time to organize a meaningful survey among the targeted population, I opted to also interview several experts outside the panel, including the ones in charge of past public consultations (IPAMB). The result was mixed: while some issues always came up, many times there were unexpected questions. I considered going forward with a survey on this issue, but decided that, besides the real time constraints, it would be interesting to set up a FAQ list without a previous survey, and then observe the outcome.
In this way it was settled that the main knowledge representation paradigm in the IMS should be the "FAQ" model. It satisfied key factors such as suitability to the kind of knowledge in question, better responsiveness of the knowledge sources to the corresponding knowledge acquisition model, as well as feasibility, considering the short time frame available for implementation.
The next step was to implement the model, with a concrete expression that would allow loading the knowledge base. For that, it was also necessary to settle on the final form of the taxonomies and to define the metadata descriptors corresponding to the desired FAQ content.
5.8.3. Issue Taxonomy for FAQ
Having defined the FAQ as the core of the knowledge base, and based on the rich input from the IMS Expert Panel, I put together a working version of the "Issue" Taxonomy, which would serve as a direct index for the FAQ. Naturally, during the FAQ compilation this index was adjusted through many iterations of feedback; the final version is shown in Table 5.8.3.-1.
Table 5.8.3.-1
- Issue Taxonomy for the FAQ (in the original table, underlined text identified classes and sub-classes with issues)
A. Present Situation
B. Project Characterization: B.I. General description; B.II. Proposed strategy of solid urban waste management; B.III. Advantages; B.IV. Operation/Exploration; B.V. Technology
C. Alternatives to the project: C.I. Site alternatives; C.II. Solid urban waste management strategy alternatives; C.III. Technology alternatives
D. Project Impact: D.I. Public Health; D.II. Water; D.III. Waste; D.IV. Air Quality; D.V. Hydrogeology; D.VI. Noise; D.VII. Ecology; D.VIII. Socio-Economic; D.IX. Soil; D.X. Landscape; D.XI. Patrimony; D.XII. Land use; D.XIII. Traffic
E. Risk of the Project
F. Minimization
G. Compensation
H. Decisions on the project: H.I. Content and form of the project; H.II. Review and decision process; H.III. Project Monitoring; H.IV. Project Checking
I. Public Participation: I.1. Consultation Process; I.2. NGOs' role in the consultation; I.3. Social psychology
J. General
5.8.4. FAQ Metadata Descriptors
Thanks in the first place to the structure provided by both the domain and issue taxonomies, the model adopted is more properly an "Intelligent Multimedia FAQ". This is expressed in the following facets: "classification", "presentation" and "representation".
On the FAQ classification, the question-answer pair is associated with an issue taxonomy class or sub-class, a technical level for the question and answer, a list of keywords, and identification data, such as the list of authors, the date of the answer, etc.
On the FAQ presentation, the question-answer template form is not restricted to one text, but is expandable (in the "answer" component) through hyperlinks to other media objects (files) such as additional texts, sounds, photos, videos and other composite objects, such as maps.
On the FAQ representation, the question-answer pair is connected, through corresponding relational links, to other knowledge units in the knowledge base, such as people (contact business cards, including the authors), entities, places, events, things, bibliography, glossary; and to other composite knowledge units, such as other question-answer pairs (FAQ Trails), or multimedia booklets (Data Trails); linked and structured in such a way as to benefit from object-oriented properties (class types, inheritance, etc.) deriving from the taxonomies.
The FAQ classification, presentation and representation facets constitute the FAQ metadata.
Each question-answer pair is associated to a metadata header or descriptor.
Summarizing, linked to the text of the question and the text of the answer, a typical metadata descriptor will contain the following (a schematic sketch is given below):
a) the author or authors of the answer;
b) the date of the answer;
c) sequential or precedence links to other questions;
d) the technical depth / difficulty level of the question and of the answer;
e) whether it is an official statement or a personal opinion, for each of the authors associated with the answer;
f) a series of relational links (pointers), such as authorship, entity, and related multimedia, such as photos, texts, videos and bibliography associated with the answer to the respective question;
g) the 'place' in the "issue" taxonomy, that is, the question class or subclass it belongs to;
h) keywords, which may relate to the "domain" taxonomy.
"FAQ Trails", or sequence of question-answer pairs that are inter-related and make sense to read ones after the others, are present by identifying precedent sequential links, since experiences made show that the most efficient way to automatically follow the chain of sequential links was to identify the next questions and not the previous ones.
As for the metadata on the technical depth / difficulty category, three levels were adopted, from the very technical to the simple, lay level, corresponding basically to these classifications:
a) the answer will be easily understood by a lay person;
b) at the other extreme, in-depth technical knowledge is needed to really understand the answer to the question;
c) an intermediate level, with less technical depth but not necessarily a 'totally lay person' type of question.
In the system, technical levels are visualized according to the traffic lights metaphor: green (lay), red (expert) and yellow (middle ground).
For the purpose of collecting the FAQ in a format ready for automated insertion into the system, I created a template form for each question-answer pair, including the metadata descriptors, representing in this way a single knowledge unit of the IMS. This template is shown in Table 5.8.4.-1.
Table 5.8.4.-1
Knowledge unit "question-answer" template form|
@levelQ: |
Technical difficulty level for the question |
|
|
|
|
@code: |
Code identifying the question within the Issue Taxonomy (Class - Sub-class - Issue) |
|
|
|
|
@question: |
Question text |
|
|
|
|
@author: |
Name of author(s) |
|
|
|
|
@type: |
Qualifier indicating the nature of the answer, whether it is a personal opinion or an official stand representing an entity, in which case the entity must be identified |
|
|
|
|
@levelA: |
Technical difficulty level for the answer |
|
|
|
|
@date: |
Date of the answer |
|
|
|
|
@summary: |
Text summary of the answer (shows in IMS "expert card") |
|
|
|
|
@quotes: |
Place here any text which is a direct citation from the Environmental Impact Assessment Study in discussion |
|
|
|
|
@answer: |
Place here the main body of the answers text |
|
|
|
|
@sequence: |
List of question codes (see above) that are in natural sequence of this one, including their respective difficulty level codes (for data or "FAQ trails"). |
|
|
|
|
@keys: |
Keywords associated with the answer (by default, they became linked to the question too) |
|
|
|
|
@links: |
Multimedia file names associated with the answer, either for automatic incorporation at the end of the text (like special formatted text, tables, graphs, etc.) or show as "hyperlinks" |
|
|
|
|
@end |
End of file marker (eof), for automatic processing ("parsing") |
Table 5.8.4.-2 shows a concrete example of a filled-in template form.
Note in that example pointers such as "[@tabela:aumento trafego D XIII]", which allow the system either to include the files referred to by the @ operator together with the main file, or to automatically generate a hyperlink that lets the user follow that path if he or she wants those details.
Table 5.8.4.-2
Example of a knowledge unit "question-answer" with the template form filled in
@levelQ: 1
@code: D XIII 2
@question: What is the traffic level for trucks loaded with solid residues derived from the incinerator operation?
@author: Maria João Leite
@type: answer supported directly by the EIA study, followed by a personal opinion
@levelA: 1
@date: 96/03/27
@summary: This answer is based on the data presented in the EIA, although it contains a personal evaluation of the impacts. The EIA only mentions traffic increase figures for the EN10 variant (in case it is built) or for the direct access road to the incinerator (via the CP collector road), with expected significant negative impacts for the night period (+35 vehicles/hour in 1998 and +57 in 2010). But it is also plausible to expect a significant negative impact, in particular at the junctions of the A1, Portela and Santa Iria da Azoia, given the confluence of traffic from garbage collection vehicles.
@quotes:
@answer: O EIA apenas refere valores de acréscimo de tráfego para a variante à EN10 (caso venha a ser construída) ou para a via de acesso directo à incineradora (via colectora da CP) (ver Quadro 1 - Valores de aumento de tráfego de veículos pesados, expressos em veículos/hora). [@tabela:aumento trafego D XIII] De acordo com o Quadro 1 são expectáveis impactes negativos significativos para o período nocturno 0H00-6H00 (+35 veículos/hora em 1998, e +57 veículos/hora em 2010). É plausível esperar um impacte negativo significativo particularmente nos nós da A1, Portela e Santa Iria da Azoia, dada a confluência de tráfego de veículos de recolha de lixo. Para além do aumento de veículos de transporte de lixo induzido pela CTRSU, há ainda a considerar outras fontes geradoras de tráfego. [@texto: fontes trafego] Se pretender mais informação, pode consultar ainda uma análise comparativa de quilómetros totais gastos por cada uma das duas alternativas à CTRSU (alternativa 1 - três aterros controlados de grandes dimensões; alternativa 2 - instalação de uma unidade de compostagem complementada por um aterro controlado)
@sequence: 1 D XIII 1,1 D XIII 3,1 D XIII 8,1 F 5
@keys: transporte de residuos solidos urbanos,estradas
@links: kilometros D XIII 2+texto,fontes trafego+texto,aumento trafego D XIII+tabela,zona CTRSU+foto
@end
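A minimal parsing sketch for files in this "@tag:" template format (the tag names are taken from Table 5.8.4.-1, but the parser itself, including the regular expressions and function names, is a hypothetical illustration, not the original routine):

```python
# Split a template file into {tag: content}, stopping at the @end marker, and extract the
# embedded "[@tabela:...]" / "[@texto:...]" pointers used for file inclusion or hyperlinks.

import re

TAGS = {"levelQ", "code", "question", "author", "type", "levelA", "date",
        "summary", "quotes", "answer", "sequence", "keys", "links", "end"}

def parse_unit(text: str) -> dict:
    fields, current = {}, None
    for line in text.splitlines():
        m = re.match(r"@(\w+):?\s*(.*)", line)
        if m and m.group(1) in TAGS:
            current = m.group(1)
            if current == "end":
                break                                   # end-of-file marker
            fields[current] = m.group(2)
        elif current:
            fields[current] += "\n" + line              # multi-line field bodies
    return fields

def embedded_pointers(answer: str) -> list:
    """Return (kind, name) pairs for pointers like '[@tabela:aumento trafego D XIII]'."""
    return re.findall(r"\[@(\w+):\s*([^\]]+)\]", answer)
```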
5.8.5. Causal Reasoning in the CTRSU issue
As mentioned above, I also considered the use of sequences of if-then cause-consequence reasoning, which could represent an important component of the problem under debate in the environmental impact assessment review. Despite the fact that it turned out not to be feasible, it remains in my view a legitimate and adequate model of knowledge representation for this kind of planning knowledge. This is why I include here a brief example of my effort to identify chunks of causal reasoning.
For this if-then reasoning, there were some interesting examples, such as:
On the project developer / proponent side, it was considered that if the incinerator was not the chosen alternative to solve the general problem of solid urban waste management, then there would not be a realistic solution in time for closing and cleaning up the Beirolas waste dump site, which in any case was already saturated. The basic underlying reasoning was therefore that any other alternative considered, such as reducing, recycling and re-utilization, which were the main thrust of the environmentalists' proposals, would not be sufficient as a solution in general, and in particular as a secure solution in time for Expo 98.
The sequence of this reasoning was the following: if we do not build the incinerator, and since we do not believe that the three R's (Reducing, Recycling and Re-utilization of solid urban waste) are going to be enough in the short term, then this implies an immediate need for large capacity in waste dump sites. If this is the case, then in the short run we need the Beirolas dump site or new dump sites, requiring an amount of land that would mean wasting good soil needed for other purposes, that might very well be impossible to find in our mostly urban areas, and that in any event would contradict all municipal and land-use plans; if, on the other hand, we do not close Beirolas, then EXPO98 will be in serious trouble, and since Beirolas is already saturated this is not a feasible solution in the medium to long term; etc.
As secondary lines of reasoning within this if-then sequence, there were the worst-case scenario consequences for Expo 98 (of not having a solution for the Beirolas waste dump site), the possible side effects on public health of the continuing, unsolved solid urban waste problem caused by open-sky garbage sites, and similar economic, social and political consequences (like the waste of agricultural soil for dump sites, the effect on adjacent land values, etc.).
On the environmentalists' side, there was also a well-established causal reasoning. For instance:
If we opt for incineration, then, given the actual volume and composition of the urban waste and the incinerator's projected capacity, hazardous substances will be generated in the process, and the enormous investment made will imply maximizing the use of the incinerator. If we need to maximize the incinerator's function, then the more garbage is burned, the better; if garbage volume is no problem, then there will be no incentives at all for reducing, recycling and re-utilization strategies; on the contrary, there will be no control over the continuing growth in the volume of solid urban waste produced in the metropolitan area. The consequences of that would also have secondary lines of if-then reasoning, such as the economic, social and public health side effects of this strategic choice (if hazardous substances are generated through incineration (emissions, ashes), then there will be a negative impact on public health and we will need dump sites for the ashes anyway; etc.).
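Purely as an illustration of the sidelined alternative, the sketch below encodes a few of the above cause-consequence chains as naive IF-THEN rules with forward chaining; the condition names and the simplifications are mine, not validated expert rules:

```python
# Naive rule base paraphrasing the two lines of reasoning above; each rule maps a set of
# condition facts to consequence facts, and forward chaining fires rules until a fixed point.

RULES = [
    # proponent's line of reasoning
    ({"incinerator_built": False, "three_Rs_enough_short_term": False},
     {"need_large_landfill_capacity": True}),
    ({"need_large_landfill_capacity": True},
     {"contradicts_land_use_plans": True, "beirolas_must_stay_open": True}),
    ({"beirolas_must_stay_open": True},
     {"expo98_in_trouble": True}),
    # environmentalists' line of reasoning
    ({"incinerator_built": True},
     {"hazardous_emissions_and_ashes": True, "incentive_to_maximize_burning": True}),
    ({"incentive_to_maximize_burning": True},
     {"disincentive_for_three_Rs": True, "waste_volume_keeps_growing": True}),
]

def forward_chain(facts: dict) -> dict:
    """Fire every rule whose conditions hold, until no new consequence is added."""
    changed = True
    while changed:
        changed = False
        for conditions, consequences in RULES:
            if all(facts.get(k) == v for k, v in conditions.items()):
                for k, v in consequences.items():
                    if facts.get(k) != v:
                        facts[k] = v
                        changed = True
    return facts

# Example: the proponent's premises lead, by chaining, to trouble for Expo 98.
print(forward_chain({"incinerator_built": False, "three_Rs_enough_short_term": False}))
```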
As mentioned, I decided to sideline the if-then model, given that it would be hard, in a reasonable time, to acquire the knowledge in the form of rules and to identify all the foundations of each cause-consequence reasoning. It must be said that many of those foundations could also have several inter-dependencies and side effects on political, social and economic interests that would not be easy to establish and prove.
It is precisely some of the rich complexities of the institutional response, arising from the substantive issues in question, that I will present next.
5.9. The Institutional Response
Introduction; Environmental NGOs; Public administration technical staff; Public administration decision-makers; National Government; The Author - System Content and Use Guidelines; Local Government - Municipalities; Local citizens committees; Consultants in EIA private enterprises; Facility promoter; Private consultants that produced the CTRSU's EIA.
5.9.1. Introduction
Following the work generated by the IMS Expert Panel, I began circulating early versions of the FAQ structure, with a seed list of questions, among all institutional actors, namely the environmental NGOs, the facility promoter, governments (national and local) and the public administration. The main purpose was to obtain feedback on the proposed structure, to gather more suggestions for questions and to collect answers and other supporting documentation. Meanwhile, Valorsul presented the final EIA study, marking the beginning of the official EIA review period (14 February), and with it the beginning of a new phase of the experiment. In this chapter I present the essentials of the institutional response.
5.9.2. Environmental NGOs
Environmental NGOs (ENGOs) were actively involved in the controversy around the incinerator for solid urban waste, but at odds with the process.
In their view, the first step concerning the solid urban waste problem should focus on evaluating the global conditions and dynamics, defining a strategy and elaborating the corresponding regional / zoned plans for integrated management of urban waste. Only then should the incinerator alternative be considered, let alone choosing a site for it.
At some point, such a planning process was put in motion by the government, the public administration and Valorsul itself. ENGOs were invited to participate in institutional panels and committees and accepted, but they remained critical of the process, because they viewed the planning effort as built around options already made -- such as the incinerator.
Nevertheless, the intervention in these institutional panels and committees, together with activities to mobilize local citizens from the site area around their views, remained the focus of their efforts, and not much attention was paid to discussing details of the EIA during the review. Significantly, their joint opinion document on the EIA was delivered to the Review Committee on the last day of the legal public consultation period, and a great deal of the document was dedicated to the strategic options (Quercus, GEOTA & LPN 1996).
This perspective of the problem, together with a general sense that debating EIA details would help to legitimize a process they adamantly opposed, clearly marked the mode in which the ENGOs participated in the IMS experiment. They remained very supportive, but more in their willingness to help what they perceived as positive and important, rather than by integrating the IMS in their work process concerning the CTRSU case.
In all fairness, other relevant factors constrained their use of the IMS. They were extremely busy and overextended, responding to several cases at the same time; also, and foremost, the IMS only became operational, with meaningful data, very late in the process, for reasons that will be presented later in this chapter. Even so, it was mostly outside their actual work procedures that they contributed to the IMS content. The main product of the NGOs' participation, their combined statement, was produced completely outside the IMS; and the issues raised in it only permeated slowly into the IMS prototype through my persistence in asking them to set some time aside to answer questions -- which they did willingly and without reservation.
It is also interesting that some NGO representatives, in order to present their priority concerns, usually preferred to answer new questions they introduced themselves into the FAQ at that moment, rather than answering the ones already on the list. This suggests that those concerns were not in line with the large majority of the FAQ, collected from the expert panel and especially from public administration technical staff and private sector consultants, as presented next. Discussion of this phenomenon is left for the respective section.
5.9.3. Public administration technical staff
Technical staff from the public administration, national, regional and local (municipal), but especially the last two, had to deal with many issues directly related to the incinerator and its impacts as part of their job responsibilities. For instance, the regional administration (part of the Environment Ministry) had licensing, monitoring and sometimes management responsibilities in areas like air quality, waste dump sites, natural reserve areas, etc. Local (municipal) administrations were in charge of garbage collection, recycling initiatives, etc. So they were particularly attentive to the Valorsul process and its implications for their work.
Even so, I was surprised to observe that they were among the earliest and most active contributors of questions to the FAQ list. Although this collaboration was naturally concentrated in a few technical staff, in areas directly related to the CTRSU or its impacts, those few were very supportive and their contribution was much larger in volume than I expected. The other unexpected facet was that their contribution focused on suggesting questions, not on providing answers to them. I had assumed the opposite: that they would welcome an opportunity to express their opinions and would therefore be keener on writing answers than on thinking about questions. This was a symptom of an interesting pattern, which I realized (and discuss) later in the process.
Also, national and regional environmental agencies (part of the public administration) were either in charge, or a component, of the EIA review process, according to law and regulations. This meant that several members of their technical staff were called to serve on the EIA Review Committee, or to give technical support to the Committee's work. In consequence, they had early access to the EIA presented by Valorsul.
Public administration technical staff participation in the IMS experiment was therefore concentrated on their active role of contributing questions to the FAQ (nearly 50% of the total).
Their use of the new IT, like the Internet and the IMS, is less conclusive. Some of them began using email for the first time, including through email accounts set up at the initiative of the IMS team, but with most of their colleagues off-line (and other constraints referred to next), the Internet did not become a part of their normal working procedure.
Concerning the IMS prototype, however, the fact that a meaningful data set was only available at a later stage was undoubtedly a factor. Many of them tried the IMS "shell" with a small "seed" data set and later on with the full set. All were positive in their opinion that it could be very useful for them if available fully loaded at a much earlier stage. A joint paper published with a senior staff member, then on the Review Committee, expressed two of the difficulties they felt, which could profit from a fully operational IMS to level the field:
" - On the terminology and working methods of each expert of the Evaluation Committee: for instance, to make the terminology used by DGA experts, about the different generated wastes, understandable by the all the elements included in Committee;
- On the different experience of each Committee member, in particular in previous EIA processes: for instance, some of the Committee elements had more than 5 years of experience in this kind of processes (e.g.. some DRARN-LVT experts), while other elements had only participated in one or two processes (e.g.. some DGA and ICN experts), with reflects on approach and methodology" (Ferraz de Abreu and Chito 1997)
and point to the main reason why IMS could not help:
"- By the time the system was available with full information, it was already past the early stages of the review, when it can be more useful to the Evaluation Committee." (Ferraz de Abreu and Chito 1997)
Finally, another interesting detail is the marked difference between the participation of junior, middle and senior technical staff. The first two contributed practically all of the questions suggested for the FAQ by this actor, and very few answers; while senior staff provided practically all of the (few) answers but nearly zero questions.
Again, the discussion of these facts is left for the respective section, but at the time I observed that all junior and even middle staff were very wary of hierarchy rules and of overstepping their usual "invisible" status in public administration, leaving all public statements to the decision-makers, while senior staff were usually more at ease speaking out.
Since the authorship of questions in the FAQ list remained anonymous, while evidently all answer authors had to be identified, I could not help but notice this possible effect on the question / answer imbalance. There was even the case of one middle-level staff member who contributed answers, but asked for them to be identified as coming from someone else, for instance from the IMS Expert Panel.
I immediately took the initiative of seeking official approval for specific guidelines under which all staff, including junior and middle level, would be formally allowed to give their input to the IMS. They were approved, but this did not change the above-described behavior one iota. On the other hand, these guidelines had to be enlarged to address another unexpected institutional response, which I present next.
5.9.4. Public administration decision-makers
Public administration decision-makers are either directly or indirectly dependent on government appointments (national or local). It is only natural that the support from government level to the IMS experiment had a clear impact on how the heads of public administration agencies such as IPAMB, DRARN-LVT and DGA welcomed the project.
DGA was the agency funding the project, and I was given direct access to the heads of the departments related with the case. Initial interviews were very supportive and there was a lot of curiosity about the project, with many questions asked on the experiment. The general procedure to incorporate the experiment steps into the EIA review process was discussed, and all doors seemed open.
This agency was in the middle of a major effort to build an intranet, with an Internet connection, and little by little some email accounts became accessible to some of the staff related to EIA reviews. However, there were several restrictions, and at the time it was not clear whether they originated from technical implementation glitches or from the fuzzy evolution of a fuzzy policy on in-house Internet use. For instance, at a certain point there were individual email accounts for several middle staff (an exception to the dominant "top-level-accounts-only" trend), but no access to web servers other than the internal site; some services wanted to restrict email use, after a more open start in others; etc. Finally, at a late stage in the process, there were technical staff not involved in the IMS experiment who followed the IMS progress through the web, and some of them tested the IMS prototype on local computers and gave useful feedback.
IPAMB was the agency in charge of the public consultation component, and the one apparently keener on quick IT progress and concerned with making the best of new IT in the short term. After meetings with the agency head and then again with his successor, immediate support was given to both the process and technology facets.
On the technology facet, the agency lent my team a portable computer and access to desktop computers to install the prototype for public access. The portable computer was a critical resource for the knowledge acquisition process. IPAMB desktop computers became the base for many demonstrations and preliminary use of the IMS prototype by different actors, including citizens from S. João da Talha. Equally important was the attentive follow up of the IMS project by senior staff, who gave frequent feedback and discussed the potential uses and audience for the system.
On the process facet, it was decided to inaugurate the use of the Internet to support public consultation and, by coincidence or by design (as a courtesy to my experiment), the first experience was with the CTRSU EIA case. The plan was to create an email address dedicated to the EIA public consultation, where citizens could send their input or ask questions, and to publish on a web site the EIA non-technical summary (NTS), together with general and EIA-specific information.
The Internet connection was inaugurated with pomp and circumstance, with the presence of the President of Portugal, but there was a serious restriction on the practical use of email in the EIA review process. According to legal requirements, only "written" input could be incorporated in the official public consultation report. The legal department was not convinced that an email had equal legal status within the current law, and therefore a decision was made to ask all citizens who wanted to use email to send their opinions to also send them on "regular" paper with their signature and identification. This was not an arbitrary choice, but a weighed one. There was a concrete fear that citizens or NGOs would use any pretext, such as that technicality, to contest in court the legal validity of the decision on the EIA.
DRARN-LVT was the regional environmental agency of the public administration with a major role in EIA reviews. In some cases, like this one, it had the responsibility of chairing the respective Review Committee. Perhaps in consequence, this was the agency where senior staff at decision-maker level (the regional director and heads of services) followed the IMS experiment most closely and in person. It was also the agency least prepared for Internet access among the ones involved.
As described in a previous chapter (Collaborative Tools), it was the CITIDEP IMS web team that installed the first Internet connection and email accounts, thanks to the support of the FCT-UNL university. Despite their interest and goodwill, attending Internet and IMS training sessions, the degree of unfamiliarity, together with their limited availability, made it impossible to bring them up to speed in the short period of time remaining for the EIA review. It was clear that DRARN Review Committee members did not feel sufficiently at ease to use the new IT, or to rely on it even for simple things such as scheduling meetings or exchanging information. Also, the lack of a network infrastructure meant that many of them, in order to use e-mail, had to go to a different room, sometimes to a different building, and borrow access from a different department or division.
Nevertheless, their support was instrumental in the institutional integration of the IMS in the EIA review process and in the knowledge acquisition process. In the first case, the Director assembled the whole EIA Review Committee, including members from other agencies, with the purpose of formally introducing me and the IMS experiment. In the second case, they provided a sizable set of answers for the FAQ (the largest contribution from this actor).
It was also DRARN-LVT senior staff in the Review Committee who later confirmed the good match between many of the questions in the FAQ list and the issues that were raised during the review. This was indirectly confirmed by the facility promoter itself, Valorsul, when one of its executives jokingly said that there must be some overlap of people between my IMS Expert Panel and the EIA Review Committee, since many of the questions the Committee asked them to answer coincided with the Panel's FAQ.
With the acceptance of the EIA study delivered by Valorsul (after a pre-check procedure verifying compliance with base rules), the EIA review period began (120 business days), and with it came new kinds of institutional responses.
Because some of the events touch on sensitive areas, and given that the purpose of scientific research is the objective description of phenomena, with no more and no less detail than required to allow their scientific analysis, I will not specifically identify the agencies or people involved, except when relevant to the research goal.
Although public administration decision-makers essentially share the agenda of government decisions (otherwise their appointments would be a political blunder by the political leaders), they are in the front line of the execution of political decisions, and therefore the first to feel the reactions to government policy. For this reason I had foreseen that they could behave in a different manner, making them an independent actor in this case. My observation confirmed this expectation, although I did not anticipate the shape it took.
The first symptom that support for the IMS project was wavering among some actors came during an institutional meeting at which I was present, when one senior staff member (at decision-maker level) remarked that my system should be funded by environmental NGOs and not by the government or the public administration, since "it was something that interested mainly them" and "favored them" (the ENGOs). While this was an isolated voice, with no impact on the overall behavior, it was meaningful that the same person had earlier expressed full support for the project.
Meanwhile (February 1996), I had begun the process of circulating several iterations of the FAQ (the list of questions only). After a couple more iterations, by mid-April, my presence was requested at a meeting with top-level decision-makers in one agency.
In this meeting, I was told, very diplomatically, that there were concerns about the sensitivity of the questions raised in my FAQ list, with the undertone that I was taking an adversarial stand towards a stated government policy (building the incinerator), which also raised the touchy question of people seeing a project against public policy being funded with public money.
What was more, one of those present noted, my IMS prototype allowed more than one answer, from different experts with different points of view, for each question. "Do you realize the confusion this is going to raise in common people's minds?", I was asked.
I began by clarifying that the questions on the FAQ were not suggested by me; I was just compiling and circulating them. I said that I did not see this as harmful to government policy, on the contrary, since it gave them advance warning of the issues that were going to surface during the public consultation; but the important aspect was that I was just an observer, and therefore they, as the decision-makers in charge, should tell me the ground rules and I would abide by them.
More specifically, concerning the issue of allowing more than one answer per question, rather than discuss the democratic concepts underlying the objection, I reminded them that I was doing a thesis in planning and not in computer engineering; therefore, the more problems and obstacles, the more interesting it would become to write about those obstacles and analyze them. Again, it was up to them to set clearly the boundaries of what new information technology I was allowed to introduce into the process.
The senior person present immediately responded that they did not want to censor my research. He only felt responsible for the consistency of the agency's acts. He gave as an example the admonition the agency had suffered in the wake of another funded project, an agenda-calendar with environmental events and a glossary, in which the authors had inscribed harsh criticism of the agency itself. He did not object to the right to criticize, but he did not agree that a document printed and distributed in the name of the agency should have content undermining the agency's image. The same problem arose with funding a system like my IMS, which would be presented to the public as supported by an agency of the Ministry of Environment, when that system's content would undermine the Ministry's policy.
They considered many of the questions in the FAQ list to be biased against the incinerator, which would brand the system as adversarial. It could also lead to some confusion between the official positions and statements of the Ministry of Environment and personal opinions from, for instance, environmental NGO leaders. The public could interpret their joint presence in the IMS as the Ministry of Environment condoning and promoting those opinions, because they were side by side on the same public consultation system.
I answered that I understood their problem and that, setting aside my personal opinions on public funding policy and on the merits of mandatory segregation of different opinions into different publications, I wanted to accommodate their concerns, for the same reason I had stated before: I was an observer, not a stakeholder in the issue in question.
I explained that it was wrong to present the IMS prototype as responsible for its content, since all answers had their authors identified. Moreover, I did not limit or control any actor's contribution to the FAQ list; it was up to each one to decide which questions, and how many, they wanted to include in the FAQ. Nevertheless, I thought it was possible to be more specific and rigorous in the IMS presentation of the content's sources, and I recognized there was a clear imbalance in the FAQ list (in Appendix), towards questions "tinted" with a critical presumption (an interesting fact in the experiment). So I offered a solution:
a) I suggested that the IMS could be presented to the public, on day one of the public consultation period, with only information directly extracted from the EIA or public domain information (on regulations, technical concepts, etc.); only after that would I insert the other input, including the different views and critical stands, just as any other citizen could do during that period. I reminded them that even their own official report from the EIA review was going to include such input from the critics, and the fact that it was published in the same volume, by the Ministry, did not lead anyone to think that the Ministry supported those critical views;
b) I would propose specific "user content" and "system use" guidelines, along these lines, addressing their concerns about separating the system support from the content, and submit these guidelines for their approval;
c) I would make a special effort to target less "represented" actors, inviting more suggestions for questions, with a more balanced FAQ in mind.
I include the proposed (and approved) guidelines later in this chapter, given their relevance and self-explanatory nature, rather than describing them. This is also why the FAQ metadata descriptors (presented in the FAQ model chapter) include a field on the nature of the answer (official statement or personal opinion).
My suggestion was accepted by the senior person present and the meeting ended on this conciliatory note, although it was visible that some of those present no longer supported the experiment.
The rapport with some senior staff changed after that meeting, as emphasized by the apparent exclusion of one IMS Expert Panel member, who had previously been assigned to the EIA Review Committee work, from the meetings of this Committee. Although this process was a little fuzzy, since apparently it was not the object of formal decisions (more a matter of creating a de facto situation, by not telling the member about the meetings and sending another person instead), the fact that it was known from the very beginning that this person was on the IMS Expert Panel most likely had a bearing on the sequence of events.
This is the right time to note that I had left it to the discretion of the IMS Expert Panel members how to handle the compatibility criteria concerning their double role as panel members and eventual members of the EIA Review, or related, official committees. For instance, in another case, one member left the IMS Expert Panel the moment he or she was assigned to the Review Committee. Retrospectively, given these developments, this seems a wise choice. But it also raises the issue of institutional constraints on the work needed to introduce new IT, such as the IMS, and on a critical component of it: an independent expert task force, such as the IMS Expert Panel.
Also at this time, some decision-maker staff made statements aimed at restricting the use of email. These statements, however, included considerations like "we cannot have people making calls to the USA to send email to you, who would pay for it", revealing some lack of understanding of the inner workings of the Internet.
Finally, the head of the department that first received Valorsul's EIA refused to release it to the IMS team before the beginning of the public consultation period (only 30 days, near the end of the 120 business days of the EIA review), on the grounds that it was confidential until that moment. Members of the IMS Expert Panel contested this interpretation of the law, but I did not want to dispute an institutional decision, in keeping with my (and the IMS Expert Panel's) role as an observer in everything except, strictly, the introduction of the new IT into the process.
As a result, the IMS team could not begin to work with the concrete EIA data in order to fine-tune the FAQ and, more importantly, begin the laborious work of indexing EIA content to questions in the FAQ list and finally inserting data and knowledge units into the IMS.
This was no minor issue. The EIA delivered by Valorsul comprised 14 hefty volumes plus a synthesis report and a non-technical summary. The prospect of handling many thousands of pages of complex data, from content analysis to structuring, digitizing and insertion, in only a fraction of the 30 days - to allow some time for actual use of the IMS during the public consultation - with a team of very busy experts working on a volunteer basis, was unrealistic. This institutionally imposed delay not only killed the possibility of the Review Committee testing the use of the IMS with real data, but also compromised its chances even for the public consultation itself. In consequence, I took the initiative of seeking support directly from the EIA study's "owner", Valorsul, as described later in this chapter.
I must emphasize that these occurrences never gave rise to a general context of institutional hostility towards the IMS experiment. Many decision-makers and senior staff from all agencies, including the one where these concerns were raised, went on giving their contributions to the IMS, answering questions in interviews without any reservation, and indeed with a very professional and positive demeanor.
5.9.5. National Government
The Ministry of Environment, once it had approved the support for my project, kept at some distance, delegating its handling to the respective agencies and simply restating its support when asked to confirm it.
It is worth mentioning that this support never wavered; it stood firm, across different governments from different political parties, all through the experiment.
The evolution of circumstances described above shows how important it was to begin the process of obtaining support for the IMS experiment at the top political level - the national government.
Given the sensitivity of the issues raised by public administration decision-makers in the middle of the experiment, I wrote a set of guidelines for the IMS content and use (described next in this chapter) and presented them to the heads of the agencies, but only after I had a meeting with the cabinet chief of the Secretary of State for the Environment. I wanted to be sure these guidelines had support at the political decision-making level.
The Secretary of State's cabinet chief was attentive to the problem but supportive without any reservation, approving my proposed guidelines. Although no comment was made in either direction, it became clear to me that those concerns had not originated at the political level, but with the senior staff and decision-makers present at the above-described meeting.
On the other hand, it is also noteworthy that the same firm, unwavering support was kept regarding their stand on Valorsul and the incinerator.
The national government accepted the usefulness of the EIA review for minimizing negative impacts, but in no way was it even considering putting into question the basic decision and its incinerator-based strategy. A manifestation of this was a press conference held by the Ministry of Environment announcing the selection of the contractor that was going to build the incinerator. This press conference occurred before the end of the EIA review and long before the beginning of the public consultation period. Even if formally such a contract was contingent upon the approval of the CTRSU's EIA, everyone knew how to read the writing on the wall.
Government cabinets were also a source of documentation for the IMS: a collection of VHS videos on Denmark's experience with incineration of solid urban waste. They also had the IMS prototype installed on one of their office desktop computers, and attended a press conference where, among other items on the agenda, I presented the final loaded version, ready for the public consultation.
A final note, just reflecting yet another element of the political context of the time: sometime after this process, the Ministry of Environment appointed a new Regional Director for one of the agencies referred to above. As it happens, the new appointee had been a member of the IMS Expert Panel.
5.9.6. The Author - System Content and Use Guidelines
Besides my work with the IMS Expert Panel, I played a role by circulating the FAQ question list (not the answers, until they were loaded in the system), inviting participation, videotaping or recording interview answers to the FAQ, collecting documentation, presenting the IMS prototype, etc. I also handled the different institutional responses to the experiment, among which the concerns raised about the sensitivity of the FAQ question list, described above, were the most significant.
To address these concerns I wrote the "System Content and Use Guidelines". In its preamble I appeal to the contribution of all actors, "from political and administrative managers, to technical staff and scientists, either from the Central or Local Administration, either from Universities or other similar institutions", an effort to obtain more balanced content; I also explain the FAQ format and that the "answers can be given on either a personal, private basis or on a formal and official basis". Table 5.9.6.-1 shows the actual text of the Guidelines:
Table 5.9.6.-1
- System Content and Use Guidelines
In harmony with recommendations from the DGA and the Review Committee, it was considered important to adopt a set of norms for the transparency of the process, safeguarding the principles of impartiality and non-interference in the functions of the Review Committee, and of clear distinction between what is 'official' Public Administration information and what are opinions from citizens or other entities, however divergent, that the system might include. Therefore, the utilization of this system in two distinct circuits was suggested (and approved):
Public Circuit (open): Up to the beginning of the public consultation period, it presents only answers to standard-questions, opinions or information that do not contain any evaluations concerning the EIA in question; after the beginning of the public consultation period, it can contain any opinions from anyone, which shall be clearly identified as such.
"Review Committee -- R.C." Circuit (closed): Can be used by the R.C. for the identification (or modification) of answers to any standard-questions, for private use and/or to support the work of the R.C.; for instance, to use some standard-questions as a check-list, to support the internal debate and identification of questions to be clarified, to help prepare meetings, the elaboration of reports, etc. Access to this Circuit, installed on only some micro-computers, is limited to those persons to whom the R.C., and only the R.C., will give the access codes.
More specifically, the following guidelines were adopted for the Public Circuit:
a) All standard-questions directly concerning the EIA in question will be (or already were) presented to Valorsul;
b) All standard-questions regarding procedural and normative information will be (or already were) presented to the Review Committee and to the respective Entities/Departments, as well as to interested Municipalities, so that they appoint person(s) who might answer;
c) All standard-questions that refer exclusively to matters outside the scope of the present study, not implying any right/wrong assessments of the EIA being evaluated (for instance, description of the current situation; explanation of concepts, models, methodologies), can be answered by departments of the public administration, central and local, as usually happens in the day-to-day management of the department in response to similar demands (for instance, students' papers, journalists' articles);
d) All answers given on a personal basis, always identified as such, will be included in the system only after the beginning of the public consultation, so that there is no possible ambiguity on whether they are officially condoned or not.
These Guidelines were accepted at all levels, as referred to above. It remained to handle the problem of not having access to the EIA study through the institutional channels.
5.9.7. Local Government - Municipalities
The Loures Municipal Government was taking most of the heat from the population's reactions to the siting of the incinerator in S. João da Talha. Among other issues making this an intricate case, they insisted on the construction of a highway variant to minimize traffic problems, a not so peaceful issue because some ENGO experts claimed its route violated building constraints within natural reserves and their respective buffer zones. Again, this was a symptom of the high-level compromises made at the political level, since no one could reasonably justify this violation from the point of view of a strictly technical EIA review. Environmental NGOs were denouncing the "package" approach, without a corresponding and proper evaluation of each item - incinerator plus road variant impacts.
The Lisbon Municipal Government had to face increasing pressure to make sure that EXPO'98 progress would not be delayed or sidetracked by the incinerator issue.
Consequently, and in line with expectations, both the political decision-makers and the technical staff of the Municipalities involved were supportive and kept their support for the IMS project throughout the process; even the Municipality of Oeiras, not part of Valorsul, ceded interesting documentation. The Mayor of Lisbon and the City Councilmen for the Environment granted videotaped answers to some FAQ questions. Administrators and experts from the municipal services of Lisbon, but especially of Loures, answered many of the FAQ listed and provided rich documentation, including related videos and photographs.
5.9.8. Local citizens committees
Many citizens of S. João da Talha and their committees were actively seeking support for their efforts to avoid the siting of the incinerator in their area, or at least to postpone the construction. Because of the multi-party, multi-municipal arrangement, they found themselves isolated from many of the traditional support structures (unions, political parties). That left the environmental NGOs, with whom they met frequently, absorbing arguments to use in the public consultation period. They also met several times with Valorsul representatives, seeking information and debating the incinerator plans with them.
In keeping with my option of testing the validity of a FAQ list compiled without a public survey, I did not try to collect questions or answers from local citizens before the public consultation period. Although there were brief contacts before, they only participated in the experiment during this public consultation period, described in the respective chapter.
5.9.9. Consultants in EIA private enterprises
While on a lesser scale than public administration staff, some consultants from EIA companies not contracted for this particular EIA were active in contributing FAQ questions. Interestingly, they were also among the actors more motivated to suggest questions than to provide answers. Nevertheless, they did contribute answers when asked, and their input was important, among other things, because it enriched the IMS's variety of points of view.
5.9.10. Facility promoter
The proponent of the CTRSU, Valorsul, was naturally at center stage during all phases of the process, but even more so after delivering the EIA study for review (January-February 1996).
Valorsul set in motion a careful and well-thought-out plan to handle the expected reactions from environmental NGOs and from the local citizens of the chosen site.
Concerning the environmental NGOs (ENGOs), Valorsul invited them to participate in an expert panel of their own, with the mission of providing a critical view of the POGIRSU (Operational Plan for an Integrated Management of Solid Urban Waste), a plan covering Valorsul's area of intervention, whose first stage had been delivered by other hired consultants in 1995. Members of this panel were funded by Valorsul (a fact Valorsul did not forget to point out every time their EIA study was accused of bias because it was paid for by them). At least some of the panel members also had trips to European countries with experience in incineration funded as part of their work. The report produced by the panel was considered by Valorsul a key component of their decision process.
In this way Valorsul tried to "internalize" the critical views of the ENGOs, making them part of a multi-pronged input: the consultants' report, the critical report, and other input from similar enterprises, recycling task forces, etc. They reserved for themselves, naturally, the last word on the plan's content; but they explicitly assumed the "unavoidable reality that transforming the POGIRSU Proposal into the POGIRSU (plan) depends on compromises of alliances and articulation of actions with the set of institutions with intersecting areas of intervention" (Valorsul 1996), in which they included the ENGOs.
Concerning the local citizens of S. João da Talha, Valorsul's favored strategy was to promote multiple informal "face-to-face" meetings, long before the EIA review and the public consultation period. In these meetings they began by being shouted at, insulted, etc., but they kept at it, and after a certain point some dialog began. It is clear that even the most hostile inhabitants of S. João da Talha recognized that at least Valorsul was there listening to them, in contrast to the general abandonment they felt they had been subjected to by all other institutions, including their traditional supporters (party, etc.). People tend to respond to the courage to face adversarial ground with some degree of respect, and although hostility still prevailed, as was seen at the public hearing later in the process, there is no doubt that these meetings took some of the steam out of the angry population before the public consultation period began.
One significant element of this strategy was that in this way Valorsul chose the ground, the agenda and, most importantly, the timing of the harshest confrontation, consequently far from the attention of the media, which was used to focusing on the public consultation period as the traditional showcase of controversy.
Meanwhile, my own IMS Expert Panel was collecting more questions than answers, and there was a predominance of questions -- and answers -- from critical points of view. I needed to add Valorsul's point of view, in part to address the legitimate concerns raised about this imbalance. I needed answers from the EIA itself, but for that I clearly needed paid consultants dedicating intensive time to this task, which required more funding.
It is in this context that Valorsul showed lukewarm support for the IMS experiment and little interest in the IMS prototype, because of the expectations it could raise, as described in the chapter on the actors of the case. Nevertheless, they wanted to respond positively to my efforts to improve the public consultation process, and suggested a web publication instead.
So I initiated a sub-project with a CITIDEP team of paid consultants, funded by Valorsul, to use the EIA study to answer a large set of FAQ questions chosen by them, indexing specific content in the EIA volumes to each FAQ question, and publishing the result on a web site. Since Valorsul did not have a web site, my team was also funded to register a domain and build a web site with general information about Valorsul. My goals for this sub-project were to have 1) a real-size knowledge base in the system; and 2) a balanced offer of points of view in the system.
This sub-project was a very intense process and a rich experience in knowledge acquisition for this kind of subject. At the beginning, Valorsul was not very enthusiastic and did not pay much attention to it. However, this attitude changed considerably and at some point they became actively involved. They began suggesting many new questions, which allowed them to better express their points of view, and providing their own answers; to such an extent that I had to switch from promoting their contributions to asking them to bring to a close what seemed a never-ending procession of new questions and answers. The factors involved in this phenomenon, as well as the whole process, filled with challenges, are described in more detail in the next chapter (Knowledge Acquisition).
In the end, the EIA content actually dominated the volume of information in the IMS, although perhaps not the impact that the different knowledge base components had. What is beyond any doubt is that, without funding assigned specifically to this indexation, it would not have gotten done; it is too much work to depend on voluntary contributions.
5.9.11. Private consultants that produced the CTRSU's EIA
The private consultants and their companies, hired by Valorsul to produce the CTRSU's EIA, had a major role as sources for the IMS, and in particular for the CITIDEP IMS team in charge of indexing the EIA to the FAQ and publishing it on the web. In fact, several problems arose and became a significant factor in further delaying the knowledge acquisition, in what was already a very short time frame.
Since consultants type their documents on computers, it makes no sense to waste considerable time and money digitizing thousands of pages and images from a printed version; but that is exactly what happened in many cases.
The consultants were reluctant to provide their digital source documents, requesting in some cases special written instructions from Valorsul, despite verbal confirmation that Valorsul authorized and supported our work. Two reasons for this reluctance were advanced by one of them: that providing the source files in digital form was not part of the contract with Valorsul, and that it was dangerous to give them in this format, because "anyone can change the text in a diskette".
They also stated that they had difficulties in gathering the digital files, distributed among many individual computers in unknown places, given the non-existence of a single medium with a complete compilation. Moreover, some documents, like maps, were not in digital form, and were delivered by means of paper cut-and-paste, photocopying, etc.
All these obstacles also had some effect on Valorsul's open-access policy. Initially, they declared that the CITIDEP IMS team could have access to any and all EIA documents and their sources, except possibly those concerning proprietary mathematical models. As mentioned before, the EIA was composed of a non-technical summary (NTS), a synthesis report, and 14 specialized detailed volumes by area of impact. After all the back-and-forth with the consultants' reluctance and obstacles, Valorsul began to move towards a more restrictive stand: access to source materials was fine for the NTS and the synthesis report, but we should forget about the other volumes, since, in their view, everything necessary to answer the FAQ was in those two documents. As I could observe later, that was not the opinion of the experts I hired for the job. In any event, Valorsul still gave permission to consult all volumes in the printed version.
Finally, even for just the synthesis report, several key documents only arrived many days later, after the deadline.
Together with the institutional obstacles raised in accessing the EIA before the public consultation, and despite the good will and support from Valorsul in giving access to their documents, the combined effect of these difficulties was significant.
The practical result was that we could only begin to select, index, compile and load all the data (including the question-answer pairs) into the IMS after the beginning of the public consultation period. Given that this period is typically around 30 days, and given the very large volume of data in question (thousands of pages and files, hundreds of question-answer pairs), this meant that users could only benefit from the IMS a few days before the end of the legal period of consultation.
This leads us to the bulk of the knowledge acquisition process, presented in the next chapter.
5.10. The Knowledge Acquisition
Introduction; Guidelines for question / answer compilation; FAQ questions sample; Problems with content; Problems with structure; Web site implementation and management; Web Site implementation problems; Final content
5.10.1. Introduction
With a good-sized list of questions structured in the FAQ model (near 300 at this stage, by mid-April), many interesting answers had been collected from the actors involved in the case. However, as described in the previous chapter (Institutional Response), a majority of those answers reflected some critical point of view, and very few of them presented information on the EIA study itself. This is why it was very important to have obtained support from the facility promoter (Valorsul) to gather EIA-related answers, making them accessible through the world wide web.
Whether targeting the information in Valorsul's EIA volumes or the opinions and information from other actors, the compilation of the question-answer pairs required a standard data form and very clear guidelines for the acquisition process. All the more so because, in the process of collecting answers, many questions were added, or even modified to better fit the available answer. This was addressed by defining the metadata descriptors (as presented in the FAQ model chapter) and by writing new guidelines for question/answer compilation.
Also, collecting such a high volume of information, spread through many different documents and sources, and publishing it on the web in such a short time, was a challenge and provided many insights into the "real-world" problems faced by anyone dealing with this kind of task.
In this chapter I present the guidelines I defined for the question/answer compilation process, a sample of the questions included in the final FAQ, and especially the process of compiling, formatting and publishing the EIA-related answers.
5.10.2. Guidelines for question / answer compilation
It was important to gather a vast set of standard-questions, either anticipating questions that could arise during the EIA review (in the Review Committee and in the public consultation), or questions that would allow explaining concepts, points of view and stands (governmental or non-governmental). Following this line of thought, it was desirable to gather several answers per question whenever possible, to show different points of view, either complementary or contradictory. Here I present the brief guidelines I set for the question/answer compilation work.
Contributions could be focused on one or more of the following aspects:
The questions:
Suggesting more questions (from the collaborator's professional point of view, and also regarding different audiences of the EIA review process); Criticizing the wording of questions (giving options or corrections); Suggesting improvements in the question grouping structure (offering new categories and sub-categories of questions, or moving questions into another category); Within each sub-category, suggesting a question hierarchy, for instance from general to particular, from comprehensive to specific (or also suggesting other questions to anyone who wants to dig deeper into a part of the theme).
The answers:
Providing answers (either on a personal basis or as an entity); Identifying specific parts of the EIA related to each question; Suggesting supporting documents for each answer (articles, books or book chapters, regulations, photographs, videos, etc.); Identifying entities that have responsibility in each answer's theme; Suggesting names of experts and decision makers as possible sources for answering the questions.
Methodology to follow (For each question/answer):
Table 5.10.2.-1 shows the methodology indicated to all persons contributing to the FAQ.
Table 5.10.2.-1
Methodology to follow (for each question/answer):
1. Indicate, on a 1 to 3 scale, the technical difficulty degree of each question, in your opinion. Identify the category or sub-category to which the answer should belong. Indicate which other questions should come before and after it. Suggest names (1 to 3) of other possible answer sources.
2. Identify the answer's point of view, possibly rewording the question, that is, for instance (standard cases): a) private, particular interests; b) common-good, collective interests; c) consultant responsible for the EIA; d) project developer / promoter; e) central and regional administration (MA, DGA, DRARN, etc.); f) municipal administration (Municipalities, Juntas de Freguesia); g) NGOs; h) independent expert / scientist (e.g. Universities, etc.).
3. Choose the kind of answer, i.e.: a) answer based on the EIA (summary and specific index of which pages/paragraphs/people); b) answer with a critical opinion or stand, on a private basis or as an entity (in this case, indicate which position within the entity); c) conditional technical answer, not implying EIA knowledge (e.g. "if the situation is this and that, then we should consider this and that and there may be these and those consequences"), and with advice as to which questions should be asked to clarify a given theme; d) strictly technical or procedural answer with background knowledge: description of the present situation (state of things), explanation of concepts, models, methodologies, norms, processes, etc.
4. Choose the answer format, i.e.: a) in writing (some paragraphs with an answer summary, and possibly an enclosed extended document, photos, videos, recommended bibliography, etc.); b) video interview, or voice recording; c) fraction of the EIA or Non-Technical Summary that contains the answer; d) list of sub-questions to ask so as to reach the desired answer.
5. Each answer must always have an identified main author (the selected standard-questions in the system are my sole responsibility). All authors will have the opportunity to revise their questions before the system is put to use.
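For illustration only, the sketch below shows how one compiled question/answer record could capture the fields listed in this methodology, written here as a simple Python dictionary; the field names and sample values are my own assumptions, not the actual form layout (the real forms used the field-identifying template presented in the FAQ model chapter).

# Illustrative sketch of one compiled FAQ record; field names and values
# are hypothetical, mirroring methodology items 1-5 above.
faq_entry = {
    "question": "What is the POGIRSU?",
    "technical_difficulty": 1,                        # item 1: 1 to 3 scale
    "category": "B.II",                               # item 1: issue taxonomy sub-category
    "previous_questions": ["B.II.1"],                 # item 1: suggested reading order (hypothetical ids)
    "next_questions": ["B.II.3"],
    "other_possible_sources": ["Valorsul", "DGA"],    # item 1: 1 to 3 suggested answer sources
    "point_of_view": "project developer / promoter",  # item 2
    "kind_of_answer": "answer based on the EIA",      # item 3
    "answer_format": "in writing",                    # item 4
    "main_author": "Valorsul",                        # item 5: identified main author
}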
5.10.3. FAQ Questions Sample
We compiled 445 questions in the last version. The complete list of questions is included in the Appendix. In Table 5.10.3.-1 I present a small sample of questions for each category, as they are very useful for getting a sense of the substantive issues raised during the FAQ collection, which led to the reactions on FAQ sensitivity mentioned earlier.
Table 5.10.3.-1
FAQ questions sample
A. Present Situation
Will this proposal allow meeting the recycling goals established by the European Union directives on package waste?
What are the current tendencies in solid urban waste treatment in the European Union?
What happens to the garbage after the citizen puts it in the container?
What is the experience in Portugal with the selection and recycling of solid urban waste?
B. Project Characterization
B.I. General description
What kind of energy will the plant produce?
B.II. Proposed strategy of solid urban waste management
Which were the terms of the contract between Valorsul and the municipalities for the reception and delivery of solid urban waste?
What is the POGIRSU?
Considering the European community policy tendencies for reducing, re-utilizing and recycling (the 3R's), why was the incinerator chosen?
B.III. Advantages
Can the supply of steam, produced in the plant, to nearby industries bring any benefit to the air quality in the surroundings of the plant?
What is the advantage of the "incinerator" option compared with the "dump site" one?
B.IV. Operation/Exploration
How many stations are foreseen for the Air Monitoring Network?
In relation to air quality, which pollutants will be monitored?
Will the energy produced cover the operation costs of the entire system?
B.V. Technology
How will the plant be able to adapt to possible restrictions of the emission limit values presently legislated for solid urban waste incineration?
Can the filters remove the breathable particles (<10 µm)?
Is the chosen incineration technology the most advanced one?
C. Alternatives to the project
Are there alternatives to the project? Which are they?
C.I. Site alternatives
C.II. Solid urban waste management strategies' alternatives
Should one consider that the study now being discussed really corresponds to an impact assessment of a waste management system?
C.III. Technology alternatives
Why are (sleeve) filters going to be used for removal of the combustion gas particles instead of electrostatic precipitators?
D. Project Impact
D.I. Public Health
What are the risks of the project to public health?
Are the local public health authorities considering any action to develop proper epidemiological monitoring and watch systems, and to articulate them with environmental monitoring systems?
D.VI. Noise
What is the expected noise level in the area where I live? (followed by specific areas)
D.XIII. Traffic
What is the traffic level of waste trucks brought on by the incinerator?
What is the trajectory of the solid waste trucks on their way to the incinerator?
Are new access roads to the incinerator foreseen (to avoid further traffic aggravation)?
E. Risk of the Project
Can the plant be considered a high-risk industry?
What are the expected consequences in case of an earthquake?
What are the effects of a failure in the gas treatment equipment lasting two days?
F. Minimization
Which organisms will be checking the monitoring?
Which measures are foreseen in order to control the noise produced by the incinerator? Will there be acoustic barriers?
G. Compensation
Will there be compensations for the area where the incinerator will be built?
H. Decisions on the project
H.I. Content and form of the project
What are the established criteria for deciding the need for a fourth incineration line?
H.II. Review and decision process
What is the Environmental Impact Evaluation (EIE)?
What is the composition of the EIA Evaluation Committee?
How does the EIA Evaluation Committee work?
Is the evaluation decision essential for the project licensing?
Is it possible during the EIA evaluation to suggest alterations to the project?
H.III. Project Monitoring
Which entity will be responsible for operating the air quality measurement network?
If there will be an air quality measurement network, will it begin operating before the plant?
Which entity and/or laboratory will be responsible for the analysis of dioxins, furans and heavy metals?
H.IV. Project Checking
Considering that the constructor for the incinerator was already chosen, what is the track record of that constructor in incinerators already working? Are any deficiencies known in those?
I. Public Participation
What is the use of giving my opinion if the site has been chosen and the type of treatment to be given to the solid urban waste has been chosen? Hasn't the project and the construction of the incinerator been awarded already?
I.1. Consultation Process
Which opportunities did the public have to participate in the process of choosing the solid urban waste management model for the municipalities of the area of intervention of Valorsul?
I.2. NGOs' role in the consultation
Are the ADA ("Associações de Defesa do Ambiente"; environmental NGOs) in favor of or against solid urban waste incineration?
Why did some ADAs accept to be part of the POGIRSU's expert consulting board?
I.3. Social psychology
Is the population's concern completely senseless?
J. General
What is the difference between a managed waste dump site and an open-sky garbage dump site?
What is reduction, re-utilization and recycling of solid urban waste?
What is solid urban waste composting?
What is an Environmental Impact Assessment (EIA)?
What is the Environmental Impact Review process?
How does a solid urban waste incineration plant work?
The distribution of the 445 questions compiled per each section of the FAQ (issue taxonomy) was:
section A | 28 | section F | 32
section B | 110 | section G | 5
section C | 18 | section H | 18
section D | 124 | section I | 76
section E | 22 | section J | 12
5.10.4. Problems with content
Several difficult but interesting problems arose concerning the content of the knowledge base.
5.10.4.1. on consultant creativity
In the project for indexing Valorsul's EIA to our FAQ set, the first problem regarding content was to make sure that the researchers and consultants working under my coordination understood that their job was to provide the exact or "best fit" match between the answer found inside Valorsul's EIA and the question to be answered.
It is useful to recall that the questions were compiled from a list volunteered by several experts, most of whom were not part of this team.
This issue was raised because some of my consultants found minor errors in the EIA and were volunteering some mild corrections. I had to emphasize that our role was to be a faithful publisher of the content. So I issued written instructions specifying that the product of their work was either to literally extract answers from a page or several pages of the EIA document, or to produce an answer from a compilation of several extracts of text found in different parts of the document or in different volumes. In this last case they could provide their own wording to summarize and glue together these different pieces, but they had to be extremely careful not to change anything concerning facts, direct quotes, interpretation, or even the way the facts were presented. Their role was to mirror, as best they could, the exact spirit and wording of the EIA document.
In order to provide some release, and also because it was of interest to the project, I suggested to my consultants that, when they felt tempted to contradict some information or interpretation in the EIA document, they annotate it in a separate notepad and eventually introduce it later into the prototype, as their own opinion.
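To make this instruction concrete, the following is a minimal sketch (with hypothetical field names; it is not the team's actual data model) of how an indexed answer record could stay tied to its exact EIA sources, keep any connecting wording separate from the literal extracts, and park the consultant's own disagreements in a private note, as suggested above.

from dataclasses import dataclass
from typing import List

@dataclass
class EIAExtract:
    volume: str      # e.g. "Synthesis Report" or one of the 14 specialized volumes
    pages: str       # page or page range as cited
    text: str        # literal excerpt copied from the EIA, unchanged

@dataclass
class FAQAnswer:
    question_id: str             # FAQ question identifier
    extracts: List[EIAExtract]   # one or more literal extracts
    connecting_text: str = ""    # optional glue wording added by the consultant
    private_note: str = ""       # consultant's own disagreement, kept out of the answer

def render_answer(a: FAQAnswer) -> str:
    """Join the literal extracts with the connecting text and always append
    the source citations, so the published answer mirrors the EIA wording."""
    parts = ([a.connecting_text] if a.connecting_text else []) + [e.text for e in a.extracts]
    sources = "; ".join(f"{e.volume}, p. {e.pages}" for e in a.extracts)
    return "\n\n".join(parts) + f"\n\n[Source: {sources}]"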
5.10.4.2. on multiple sources
Another issue regarding content was the decision on whether the only answers to be provided on the Internet were going to be the EIA itself, plus whatever other answers and comments Valorsul wanted to make.
It was hard enough to deal, in such a short time, with 260 questions and to identify 260 answers, plus managing the verifications that Valorsul had to perform on our work before we could publish it. Considering also that we had only two to three weeks before the scheduled time, it was easy to foresee that there was a tremendous amount of work to be completed in a very short period and that the chances of failing were extremely high. Therefore, it was safer to secure the ability to provide at least a coherent set of answers in time for the public consultation period.
In consequence, I decided that all the other opinions, including the official public opinions regarding procedure or the content of the study, or positions and statements from the environmental NGOs, would only be included in the IMS prototype. The IMS was already designed and prepared to receive multiple opinions for each question, which was not the case for the web site, because it had to be built from scratch and supporting multiple answers per question would have required a more complex design.
The third issue regarding content was that Valorsul's executive officers were interested in using this opportunity to correct some of the content of the EIA document produced by their consultants.
From previous informal conversations with some of those consultants, I was aware they claimed that Valorsul was not totally pleased with some of the results of their study. The general stand of the consultants who produced the EIA was that the document, even if in the name of Valorsul, bore their signature and they were responsible for whatever technical analyses and data were in it; therefore, they would not allow any changes to it. It must be said, in all fairness, that Valorsul itself proudly pointed to these differences of opinion as proof of the independent nature of the EIA study. Also, they always claimed they respected the consultants' independence.
It is common knowledge that these kinds of tensions arise. It suffices to say that it was reasonable to assume that Valorsul wanted to, so to speak, correct some of the EIA statements by complementing the EIA text that would be presented on the Internet. In consequence, I took some steps in order to deal with both aspects of it.
On one hand, I was very much interested in enabling Valorsul to have a voice besides that of their own consultants. It was in fact interesting to see whether there was some significant difference between the answers extracted from the document produced by the consultants paid by Valorsul and the answers provided by Valorsul's executive officers themselves.
On the other hand, I wanted to make sure that there was going to be no confusing design that could lead people to mistake statements provided by Valorsul executives for the statements being presented for public debate in the official document, the EIA delivered by Valorsul's consultants.
Therefore, it was defined, in the terms of the contract established with Valorsul, that there would be two clearly divided areas for each answer: one area for an "EIA answer", the one extracted or compiled from the EIA document; the other area for a "Valorsul answer", where Valorsul wrote additional comments or whatever else they wanted. This last area was entirely their sole responsibility, meaning that my team would not write a single word for it, and was only responsible for filling the areas under the title "EIA answer".
This was one of the major symptoms of the change of attitude of Valorsul towards the IMS experiment.
5.10.4.3. on the evolution of Valorsul engagement in the IMS experiment
As mentioned before, they had not been so keen on supporting the Intelligent Multimedia System, since they considered that it would raise expectations regarding Valorsul's ability to provide answers in real time, in such depth and breadth, during the public consultation period, expectations to which they would be in no condition to correspond.
At the beginning, it was more or less clear that they considered the Internet a more innocuous medium, because they felt that the real target audience, the population of S. João da Talha (plus other political actors), was not the audience that was going to be reached by the Internet. Their view was that the Internet audience would be constituted mainly by a couple of intellectuals and some students (in the community), and would really not have that much of an impact.
The matter of fact is that, at some point, they began to realize that many of the questions listed by my team to be published on the Net were actually not addressed by the EIA volumes; therefore, the only way some answers could be provided to the public at large was to provide them themselves.
The combination of these two factors (the motivation to complement and correct several statements in the EIA, and the need to cover some areas not in the EIA) was probably what triggered Valorsul's stronger involvement in answering the FAQ.
We expected, by contract, only a few dozen - closer to twenty - answers to be provided by Valorsul. In fact they ended up providing us with more than sixty, nearly seventy, of those answers, including one very extensive answer regarding the POGIRSU, the operational plan for waste management.
There was strong criticism from ENGOs because the POGIRSU was not in place by the time of the EIA. This probably added to their motivation to work hard to complement the FAQ, providing a long, detailed answer to the question "What is the POGIRSU?", which, in comparison, had only a single line in the EIA.
It was also more or less clear that at some point they felt that the work and time they were investing in providing in-depth answers to some of these questions not addressed by the EIA study were useful to them as well, to feed journalists and reporters who were knocking on their doors. The FAQ question-answer pairs had presented them with a kind of ready-made script, of which they made the most.
5.10.4.4. on alleged contradictions within the EIA study
Finally, another issue on content came up.
As described in the previous chapter (Institutional Response), there was a problem in obtaining the EIA source documents, in digital form, from the consultants that produced Valorsul's EIA. Valorsul then became inclined to consider that the non-technical summary plus the synthesis report should be enough to answer any and all questions, without the need for the specialized volumes (the bulk of the EIA). In fact, my consultant team found that many of the studies in the fourteen volumes were not addressed in the synthesis report or, even worse in the opinion of some of them, that there were contradictions between the synthesis report and the data contained in the specialized volumes. It is interesting to note that this opinion was shared by other persons outside my team, and was actually one of the points addressed during the public consultation.
Summing up, the basic content of the web site was a selected subset of about 260 of the roughly 400 questions compiled by my team at that point (445 in the last version). Those 260 questions were linked to corresponding answers, some of them extracted from the EIA, and some provided by Valorsul. The design was organized in such a way that the two sources of answers were clearly identified and no confusion could be made between them. Naturally, all these answers were also loaded into the IMS prototype.
5.10.5. Problems with structure
One of the issues raised during the compilation of questions was the need to structure them in a way more natural for reading the document, instead of the traditional table of contents. The described "FAQ Trails" and the corresponding technical level classifications were an answer, but very few of my collaborators assigned levels of depth and technical difficulty to questions.
As was later observed, there was a good reason for that: the technical level was not determined only by the question itself, but above all by the type of answer, since the same question could be answered either in superficial, lay language or with in-depth, very technical terminology. Therefore, it made sense not to spend a lot of time predetermining the classification of questions in terms of the depth of their technical knowledge.
5.10.5.1. Uniformity of "Issue" Taxonomy
However, we spent considerable time structuring the questions, as already discussed in another chapter. In this process, one of the lengthiest problems to solve was the multiple-belonging problem: some questions seemed to belong to several of the classes and clusters of questions, and at some point we had to make a choice.
This meant that the same question could show up in different parts of the taxonomy hierarchy. From the point of view of the structure of the questions, this did not seem to be an issue; however, because of the program that had to manage and produce the HTML code in such a short time, we found very quickly that this could be a major hurdle in terms of implementation. While there were no theoretical constraints against multiple assignment of the same question to different subclasses of the structure, implementing this multiplicity would greatly increase the complexity of the programming. This led me to decide that each question must be assigned to one specific group of questions.
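A minimal sketch of this single-assignment rule follows, under assumed names and a toy taxonomy (the questions shown are shortened from the FAQ sample): with one class per question, every question maps to exactly one page and one index entry, whereas multiple-belonging would have required duplicate pages or extra cross-links, which is where the implementation cost lay.

# Toy taxonomy: each issue sub-class maps to an ordered list of questions.
taxonomy = {
    "B.II": ["What is the POGIRSU?", "Why was the incinerator chosen?"],
    "I.1":  ["Which opportunities did the public have to participate?"],
}

def build_index(taxonomy):
    """Single-assignment rule: a question may appear in exactly one class,
    so the index is a simple question -> (class, position) lookup."""
    index = {}
    for issue_class, questions in taxonomy.items():
        for position, question in enumerate(questions, start=1):
            if question in index:
                raise ValueError(f"multiple-belonging not allowed: {question!r}")
            index[question] = (issue_class, position)
    return index

print(build_index(taxonomy))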
It was not a peaceful, clear-cut solution, which means there is no "natural", obvious structure for all these questions, and possibly many other ways of organizing them would have been equally adequate. There was actually an issue, even if mild, between some members of my team and Valorsul, who had different opinions on that.
5.10.5.2. Implementation of trails
Our goal was to provide examples of information trails with different levels. Each answer, and therefore each question (since in the case of the FAQ for the EIA there was only one answer per question), was attributed a green, yellow or red dot identifying the technical level, as presented before.
Together with this classification and a natural structure of groups of questions, we provided something like a table of contents. Some pages listed the main classes of questions; clicking on one of those classes would bring up a page with the sub-classes and some of the questions of each sub-class, and clicking on one of the questions of those sub-classes would go to the page with the answer.
Besides this structure and the classification by color dots - traffic lights - that you could see before committing yourself to the choice of a question, we also made an effort to define some natural sequences of questions and answers.
In a way, we tried to model and anticipate not only the frequently asked questions but also a sequence for exploring them. At least we wanted to offer, as much as possible, alternative sequences, so that after the user asked some entry-point question, there would be an offer of multiple sequences. The user could choose between a green dot sequence or a yellow or a red dot one. Theoretically, one could navigate through the questions and answers following a path of only green, only yellow or only red questions, or, at any point, choose to jump from the green trail to the yellow or red trails and vice versa.
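The following sketch (illustrative only; the names and structure are my assumptions, not the actual implementation) shows one way such trails could be represented: each question carries the color level of its answer and an ordered list of follow-up questions, and the reader either stays on the preferred level or jumps trail when no follow-up at that level exists.

from typing import Dict, List, Optional

LEVELS = ("green", "yellow", "red")   # increasing technical depth

class TrailNode:
    def __init__(self, question_id: str, level: str, followups: List[str]):
        self.question_id = question_id
        self.level = level            # depth of the answer, not of the question
        self.followups = followups    # candidate next questions, any level

def next_question(nodes: Dict[str, TrailNode], current: str,
                  preferred_level: str) -> Optional[str]:
    """Return the first follow-up on the preferred level; if none exists,
    return any follow-up (i.e. the reader jumps to another trail)."""
    for qid in nodes[current].followups:
        if nodes[qid].level == preferred_level:
            return qid
    followups = nodes[current].followups
    return followups[0] if followups else None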
In fact, although we provided some of these trails, they were not as complete as desirable, even in a universe of nearly three hundred questions. It was hard to find many long sequences and, in particular, it was hard to find many parallel sequences of green, yellow or red trails. Some of the questions had follow-up questions offered, but many had only one or two, and sometimes both of the questions offered as alternative sequences were of the same level of technical depth.
Nevertheless, we found that by providing this structure we could implement a first trial of the notion of information trails with different levels.
5.10.6. Web site implementation and management
The time constraints mentioned above - an estimated three weeks between the beginning of the compilation of the answers and the time of publication on the Web (for the public consultation period) - demanded good planning and good tools.
Given the high volume of answers to be provided; given the fact that, under the contractual procedure, the answers provided by my team had to be reviewed and eventually corrected by Valorsul; and given the foreseeable bugs and errors and the consequent need to re-deploy the whole site or parts of it, I decided early on to rely as much as possible on management tools, produced and customized by myself or someone on the team, with this specific type of application in view.
At the time, web site management tools were beginning to show up in commercial packages but were still lacking in many respects. Even nowadays it is hard to rely on just one of the commercial packages, despite considerable gains in sophistication. But, at the time, web site management tools were very crude.
What we did was to establish a clear path between the data mining and the final publication, in such a way that we could reproduce this path, automated and routinized as much as possible, for each web site change.
The process of changing such a complex web site is not trivial. It is not just a matter of changing a piece of text. For instance, if you need to change one page, it is obviously simple to do it manually. But if you have to change one or two hundred pages two or three times a week, then the manual process is clearly doomed. Also, the team and the resources I had were limited; there was no unlimited funding to pay for consultants. I had to make a strategic choice to concentrate the best part of the available money on highly reliable consultants on the content side, because of the legal responsibilities that could arise from a serious mistake. In consequence, there were not many people available even if I had wanted to try the manual path; it would have been unthinkable to sustain this page rate with only two to three people taking care of the web site coding, implementation and management.
We designed it using the metadata form described previously, which any non-expert could easily fill in as simple text, in ASCII format, with any standard word processor, but using templates with field-identifying characters that provided coding signals to automate the "cut and paste" and generate the HTML code.
The sheer number of pages involved (near 600 A4 pages) and the rate of changes were already a challenge, but we were also dealing with complex linking that could change as well during this process - for instance, the sequence of questions, or moving one question from the cluster it belonged to into another. These two are good examples of the complexity involved, because they implied updating a considerable number of relative links and rebuilding the table of contents. What our web structure management tools did was to automate the generation of the HTML code, of the index pages (table of contents), and of the organization of the sequence of questions into trails.
A sample of the metadata template was presented in a previous chapter (FAQ model). The template included a kind of "mark-up" language, to identify the files to insert in the middle of another and other kinds of information. The advantage of this system was that the same kind of metadata file could be used both for compiling the web content and for inserting the same material into the IMS prototype. This created a unified system on the source side, thus isolating the content provider from what was going to be coded and from its final destination (whether the Internet or the IMS).
The web team implemented a scripting system that read the metadata of each question, reorganized the sequence of questions and answers, generated the appropriate HTML code for the links and, finally, generated the index pages.
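A minimal sketch of this kind of site-generation script is given below. The field markers ("#ID:", "#QUESTION:", etc.), file layout and HTML skeleton are assumptions for illustration, not the templates actually used; the point is only the mechanism: non-experts fill plain ASCII forms, and one script regenerates every question page and the index page on each change, keeping the "EIA answer" and the optional "Valorsul answer" in clearly separated areas.

import glob
import os

def parse_form(path):
    """Read one plain-text metadata form into a dict of fields.
    A field starts with '#NAME:'; following lines continue that field."""
    fields, key = {}, None
    for line in open(path, encoding="ascii", errors="ignore"):
        if line.startswith("#") and ":" in line:
            key, _, value = line[1:].partition(":")
            key = key.strip()
            fields[key] = value.strip()
        elif key is not None:
            fields[key] += "\n" + line.rstrip()
    return fields

def render_page(f):
    """Generate one question page with separate EIA and Valorsul answer areas."""
    valorsul = f.get("VALORSUL_ANSWER", "")
    return ("<html><body>"
            f"<h2>{f['QUESTION']}</h2><p>Level: {f['LEVEL']}</p>"
            f"<h3>EIA answer</h3><p>{f['EIA_ANSWER']}</p>"
            + (f"<h3>Valorsul answer</h3><p>{valorsul}</p>" if valorsul else "")
            + "</body></html>")

def build_site(src_dir, out_dir):
    """Regenerate all question pages and the index page from the forms."""
    os.makedirs(out_dir, exist_ok=True)
    toc = []
    for path in sorted(glob.glob(os.path.join(src_dir, "*.txt"))):
        f = parse_form(path)
        name = f["ID"] + ".html"
        with open(os.path.join(out_dir, name), "w") as out:
            out.write(render_page(f))
        toc.append(f'<li><a href="{name}">{f["QUESTION"]}</a></li>')
    with open(os.path.join(out_dir, "index.html"), "w") as out:
        out.write("<html><body><ul>\n" + "\n".join(toc) + "\n</ul></body></html>")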
We lacked a battery of routine tests to identify mistakes and bugs that came out in the final procedure. For instance, after several tests that lasted two weeks, we finally identified the reason why some of the metadata-driven generation - the classification of the technical depth of each answer and the sequencing of questions - seemed random, not responding to any specific pattern. This was because the supposedly text-only format of Microsoft Word in fact includes hidden characters, so when we thought there was a zero in plain view, some other strange character, interpreted as another number by our program, was there instead. In other words, what you see is definitely not what you get.
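In retrospect, a small defensive cleaning step of the kind sketched below (illustrative only) would have caught this class of bug: strip anything that is not printable ASCII before interpreting a numeric metadata field, and fail loudly when what remains is not a digit.

def clean_field(raw: str) -> str:
    """Keep only printable ASCII characters; hidden word-processor characters
    that merely look like blanks or digits on screen are dropped."""
    return "".join(ch for ch in raw if 32 <= ord(ch) < 127).strip()

def parse_level(raw: str) -> int:
    """Parse a technical-level field, refusing silently corrupted values."""
    value = clean_field(raw)
    if not value.isdigit():
        raise ValueError(f"non-numeric level field: {raw!r}")
    return int(value)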
Another common problem was the formatting of tables. There were plenty of very complex tables in the answers, and the formatting tools lacked reliability. This implied a lengthy and annoying process of cleaning up all the formatting.
Finally, we had to deal with a problem of source quality. Most of the pictures had to be scanned from low-quality photocopies. One of the most ridiculous cases was that many maps were color-coded; however, legally, the EIA promoter is not bound to present color photocopies, and therefore the copies of some volumes were totally useless for the interpretation of map features.
5.10.7. Web Site implementation problems
One of the vulnerabilities of the design of the Web component of the system was our dependence on the Internet Service Provider (ISP) server for the page-visit counters. Before registering the "Valorsul" web domain, the ISP indicated our addresses for counters and for other links. The ISP counter implementation was a CGI routine requiring, for each HTML file, another single text file, the container for the variable (counter). This counter file had to be kept at a specific URL (Uniform Resource Locator), identified in the CGI call in the HTML file.
We already had close to 300 files with a complex web of links when, halfway through the period of public consultation, I realized that some of the links were not working, and the page (visitor) counters were not working either.
What happened is that, without any warning, the ISP updated the domain registration so that, instead of being under a special directory with an alias to recognize the domains www.valorsul.pt and www.citidep.pt, we were assigned a specific URL on the server to be recognized by the Internet domain servers. For practical purposes, suddenly, all the link URLs were obsolete. For internal references within our work this was not a problem, because all addresses were relative to the web site. But all external links, including the survey and counter URLs, were now wrong.
In consequence, there was a period of around a week, almost a quarter of the public consultation period, in which people who tried to access the survey could not, because they would get a non-existent URL reference.
Two types of references/links had been changed. The first were all references to outside sites, such as references to the CITIDEP site, project documentation, and the on-line survey (which was at the CITIDEP web site and not Valorsul's, as a means of emphasizing the total independence between Valorsul and the survey, which was solely my responsibility). Those references were the easiest to correct, by manually adjusting something like 15 to 20 pages. But changing the counter links on each page meant regenerating the whole site from scratch, because it was unthinkable to manually change close to 300 files.
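For what it is worth, the mechanical part of such a bulk fix is simple, as the sketch below suggests (the directory, encoding and URL prefixes are hypothetical); the real cost, as explained in the next paragraphs, was that every regenerated page still required careful manual review.

import glob

OLD_PREFIX = "http://isp.example/alias/valorsul/"   # hypothetical obsolete absolute prefix
NEW_PREFIX = "http://www.valorsul.pt/"              # hypothetical new prefix

def fix_external_links(html_dir: str) -> int:
    """Rewrite obsolete absolute URL prefixes in every HTML file; return the
    number of files changed."""
    changed = 0
    for path in glob.glob(f"{html_dir}/*.html"):
        with open(path, encoding="latin-1") as fh:
            text = fh.read()
        fixed = text.replace(OLD_PREFIX, NEW_PREFIX)
        if fixed != text:
            with open(path, "w", encoding="latin-1") as fh:
                fh.write(fixed)
            changed += 1
    return changed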
The problem with regenerating the whole site was that we could not even be sure we would not generate new mistakes in other pages that had previously been correct. Because there was no way to trace a pattern in the bugs, the only way to be sure was a manual review of the 300 HTML files.
Errors had potentially legal consequences. If we published wrong information, our team could be held responsible for those mistakes, for instance if they led to some kind of consequence in the public consultation. That meant the file review had to be done carefully, on a manual basis. Since this was not practical, the only possible solution was to re-establish the counters only on the main index pages and simply give up on any information about specific page access counts.
This imposed another limitation on the data I had planned to gather for analysis, but in itself it provides a good example of the kind of difficulties a system like this faces. Whatever the level of sophistication of your management tools, you still have the final problem of responsibility, and you cannot take the human review out of the loop. Therefore, there is a built-in limitation on how much you can really shorten the time through automation, and there is always a need for significant human resources assigned to this kind of work.
5.10.8. Final content
The knowledge acquisition process was forced to a premature end, before answers to all compiled questions had been gathered, in order to allow at least a few days of public access to the loaded IMS prototype, and a few more to the web version.
The legal public consultation period began on 27 May (1996); the CITIDEP web team had the 260 web-based answers on-line by 11 June, and the IMS prototype was fully loaded only by 8 July, that is, 3 days before the end of the public consultation. Given the formidable obstacles we had to overcome, it was a bravura performance, and it allowed at least some feedback in the real setting of the review process.
Tables 5.10.8.-2 to 4 summarize the final set of question-answer pairs compiled and inserted in the system, according to their source and taxonomic class of issues. Table 5.10.8.-1 recalls the top-level classes of the Issue taxonomy.
Table 5.10.8.-1
- Issue Taxonomy top-level classes
A - Present Situation | F - Minimization
B - Project Characterization | G - Compensation
C - Project Alternatives | H - Decision process
D - Project Impacts | I - Public Participation
E - Project Risks | J - General
As these tables show, despite the enormous time constraints, an impressive number of the answers collected was inserted in the IMS prototype.
The final system presented for public use - the IMS prototype and the Web component, with all components loaded - is presented, with more details on the final content, in the next chapter, "The System".
Table 5.10.8.-2
- Source of FAQ questions compiled, by Issue class
Issue Class | EIA | Valorsul | Government | Decision-makers | Technical staff | Private consultants | ENGOs | All
A | 1 | 1 | 0 | 0 | 12 | 5 | 9 | 28
B | 20 | 11 | 0 | 0 | 68 | 5 | 6 | 110
C | 0 | 0 | 0 | 0 | 3 | 1 | 14 | 18
D | 39 | 10 | 0 | 0 | 62 | 8 | 5 | 124
E | 4 | 0 | 0 | 0 | 10 | 2 | 6 | 22
F | 1 | 1 | 0 | 0 | 22 | 4 | 4 | 32
G | 0 | 0 | 0 | 0 | 2 | 1 | 2 | 5
H | 0 | 0 | 0 | 5 | 12 | 0 | 1 | 18
I | 41 | 5 | 0 | 2 | 8 | 2 | 18 | 76
J | 0 | 0 | 0 | 0 | 2 | 5 | 5 | 12
TOTAL | 106 | 28 | 0 | 7 | 201 | 33 | 70 | 445
Table 5.10.8.-3
- Source of FAQ answers collected, by Issue class
Issue Class | EIA | Valorsul | Government | Decision-makers | Technical staff | Private consultants | ENGOs | All
A | 13 | 5 | 9 | 12 | 6 | 12 | 5 | 62
B | 64 | 22 | 11 | 6 | 2 | 5 | 3 | 113
C | 2 | 2 | 0 | 0 | 0 | 2 | 15 | 21
D | 58 | 14 | 0 | 0 | 9 | 0 | 2 | 83
E | 13 | 4 | 0 | 0 | 0 | 1 | 0 | 18
F | 25 | 2 | 0 | 0 | 2 | 0 | 0 | 29
G | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 3
H | 7 | 7 | 0 | 15 | 0 | 0 | 0 | 29
I | 44 | 5 | 5 | 2 | 0 | 0 | 18 | 74
J | 0 | 0 | 0 | 2 | 8 | 6 | 5 | 21
TOTAL | 228 | 62 | 25 | 37 | 27 | 26 | 48 | 453
The column labeled EIA refers, in effect, to the work of my CITIDEP IMS project team. Note the disparity between the number of questions suggested by public administration technical staff and the number of answers provided by them.
Table 5.10.8.-4
- Source of FAQ answers inserted in IMS prototype, by Issue class
Issue Class | EIA | Valorsul | Government | Decision-makers | Technical staff | Private consultants | ENGOs | All
A | 13 | 5 | 2 | 8 | 6 | 9 | 2 | 45
B | 64 | 22 | 3 | 0 | 2 | 4 | 3 | 98
C | 2 | 2 | 0 | 0 | 0 | 2 | 8 | 14
D | 58 | 14 | 0 | 0 | 9 | 0 | 1 | 82
E | 13 | 4 | 0 | 0 | 0 | 1 | 0 | 18
F | 25 | 2 | 0 | 0 | 2 | 0 | 0 | 29
G | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 3
H | 7 | 7 | 0 | 11 | 0 | 0 | 0 | 25
I | 44 | 5 | 3 | 2 | 0 | 0 | 18 | 72
J | 0 | 0 | 0 | 1 | 8 | 4 | 5 | 18
TOTAL | 228 | 62 | 8 | 22 | 27 | 20 | 37 | 404