SWAMI, How I Love Ya
Foreword to D. Wright, S. Gutwirth, M. Friedewald, E. Vildjiounaite and Y. Punie, Safeguards in a World of Ambient Intelligence. Springer, 2008.

By Gary T. Marx

For I dipt into the future, far as human eyes could see,
saw the world, and all the wonders that would be

                    —Alfred Lord Tennyson, "Locksley Hall"

And you will have a window in your head.
Not even your future will be a mystery
Any more. Your mind will be punched in a card
And shut away in a little drawer.
When they want you to buy something
They will call you...
So friends, every day do something
That won’t compute...

                    —Wendell Berry, "The Mad Farmer Liberation Front"

These poems reflect polar images of science and technology in western societies. Such contrasting views are daily expressed in our literature, popular culture, politics, policies and everyday life. We are enthralled by and fearful of the astounding powers new technologies may bring. We hope with Edison that "whatever the mind of man creates" can be "controlled by man’s character", even as we worry with Einstein that technological progress can become "like an axe in the hand of a pathological criminal."

In our dynamic and very unequal worlds of such vast system complexity, there is much to be worried about. But there is also much to be optimistic about. This volume is a welcome contrast to much of the disingenuous commentary on new information technologies offered by technical, commercial and political advocates who command the largest megaphones. The book strikes a balance between encouraging the wonders that could be, while reminding us of the dark forces of history and society, and that nature is filled with surprises. We cannot and should not stop invention, but neither should we uncritically apply it, absent the careful controls and continual evaluation the book recommends.

Before our age of avaricious, data-hungry sensors that can record everything in their path, to say that a person "really left a mark" implied that they willfully did something special. Now, merely being passively present – in a physical or biological sense – let alone actively communicating, moving or consuming, leaves remnants as well. In an age when everyone (and many objects) will continuously leave marks of all sorts, that phrase may become less meaningful.

The book’s topic is ostensibly the embedding of low visibility, networked sensors within and across ever more environments (called ambient intelligence or AmI in Europe and ubiquitous computing or networked computing in America and Japan). But the volume is about much more. It offers a way of broadly thinking about information-related technical developments. It is the most informative and comprehensive policy analysis of new information and surveillance technologies seen in recent decades.

Those wishing to praise a book often say, "essential reading for anyone concerned with…" But I would go beyond that strong endorsement to say Safeguards in a World of Ambient Intelligence (SWAMI) should be required reading for anyone concerned with public policy involving new communications and surveillance technologies. This should be bolstered by frequent certifying quizzes (and maybe even licenses) for those developing and applying information technology and for those on whom it is applied. The goal is to keep ever in view the multiplicity of analytical factors required to reach judgments about technologies which so radically break with the limits of the human senses and of space and time. In encouraging caution and steps to avoid worst-case scenarios, such analysis can limit unwanted surprises occurring as a result of interactions within very complex networked systems.

How do I like this book? Let me count the ways. If this were a musical comedy, the first song would be "SWAMI, How I love Ya, How I love ya" (with apologies to George Gershwin). First, it creatively and fairly wends its way through the minefields of praise and criticism that so inform our contradictory views of technology. It avoids the extremes of technophilia and technophobia implied in the poems above and often found in superficial media depictions and in the rhetorical excesses of the players. It also avoids the shoals of technological, as against social and cultural, determinism. There is nothing inherent in technology or nature that means specific tools must be developed or used. The social and cultural context is central to the kind of tools developed and their uses and meaning. Yet technologies are rarely neutral in their impact. They create as well as respond to social and cultural conditions.

The book suggests a flashing yellow light calling for caution and analysis rather than the certainty of a green or a red light. This can be seen as a limited optimism or a qualified pessimism, but what matters is the call for humility and continual analysis. As with much science fiction, the dark scenarios the book offers extrapolate rather than predict. They call attention to things that could happen in the hope that they won’t.

While the report is a product of 35 experts, numerous meetings, work teams and consultants, it does not read like the typical pastiche committee or team report. Rather it is smooth flowing, consistent and integrated. The product of specialists from many parts of Europe, it nonetheless offers a common view of the issues that transcends the particularities of given cultures and language. As such, it speaks to an emerging European, and perhaps global, sense of citizenship fostered by standardized technologies that so effortlessly transcend traditional national borders, as well as those of distance and time.

While the United States is the major player in the development and diffusion of new information technologies, it sadly lags far behind Europe in providing deep and comprehensive analysis of the social and ethical consequences of such technology. Not only does it lack privacy and information commissions, but there is no longer a strong national analytical agency concerned with the impact of new technologies. The short-sighted Congressional elimination of the nonpartisan analytical Office of Technology Assessment in 1995 has deprived the United States of an independent public interest voice in these matters.

The volume offers a very comprehensive review of the relevant literature from many fields, at least for the English language. As a historical record and chronicle of turn-of-the-century debates and concerns raised by these developments, the book will be of inestimable value to future scholars confronting the novel challenges brought by the continuing cascade of new information technologies. I particularly appreciated some of the metaphors and concepts the book uses in its analysis, such as data laundering, AmI technosis, technology paternalism, coincidence of circumstances, digital hermits, and the bubble of digital territory.

Much of the extensive supporting documentation and reference material is available online, making it easy and inviting for the reader to pursue topics in more detail or to check on the book’s interpretations. However, I hope this won’t soon come with an AmI program that, seeing what was accessed, makes recommendations for future reading, offers discounts for related book purchases or, worse, sends political messages seeking to alter the assumed positions of the user/reader.

The book’s strength is in raising basic questions and offering ways of thinking about them. Answers will vary depending on the context and time, but the social factors and trade-offs that must be considered remain relatively constant. Rules and regulations will differ depending on the setting and the phase. A given use can be approached through a temporal process as we move from the conditions of collection to those of security and use. Or settings can be contrasted with respect to issues such as whether individuals should be given maximum, as against no, choice regarding the offering/taking of their personal data; questions around the retention or destruction of personal information; and whether the data should be seen as the private property of those who collect it, those about whom it is collected, or as a public record. A related issue involves whether AmI systems are viewed as public utilities in principle available to all or as private commodities available only to those who can afford them and/or who qualify.

It has verisimilitude both in its treatment of the policy issues and in its scenarios. It offers an encyclopedia of safeguards and calls for a mixture of available means of regulation. While the volume gives appropriate attention to technical controls and those involving legislation and courts at many levels (national, European community, international) and notes the role of markets, it stands apart from the voluminous policy literature in attending to civil society factors such as the media, public awareness and education, cultural safeguards and emerging tools such as trust marks, trust seals and reputation systems. The informal, as well as the formal, must be part of any policy considerations and analysis of impact.

An aspect of the book’s reality check is its consideration of the trade-offs and tensions between conflicting goals and needs. In spite of the promises of politicians and marketers and the fantasies of children, we can’t have it all. Hard choices must often be made and compromises sought.

In the rush to certainty and in the pursuit of self-interest, too much discussion of technology shows a misguided either/or fallacy. But when complex and complicated topics are present, it is well, with Whitehead, not to find clarity and consistency at the cost of "overlooking the subtleties of truth." We need to find ways of reconciling, both intellectually and practically, seemingly contradictory factors.

In considering issues of computers and society, there are enduring value conflicts and ironic, conflicting needs, goals and consequences that require the informed seeking out of the trade-offs and continual evaluation the volume recommends.

These can be considered very abstractly as with the importance of liberty and order, individualism and community, efficiency and fair and valid treatment. When we turn to AmI, we see the tensions more concretely.

Thus the need for collecting, merging and storing detailed personal information in real time, on a continual basis across diverse interoperable systems, is central for maximizing the potential of the AmI. But with this can come tension between the goals of authentication, personalized service and validity and those of privacy and security (the latter two can, of course, also be in tension, as well as mutually supportive). The generation of enormous databases presents monumental challenges in guarding against the trust-violations of insiders and the damage that can be wrought by outsider hackers. How can the advantages of both opacity and transparency be combined such that systems are easy to use and in the background and hence more egalitarian and efficient, while simultaneously minimizing misuse, and encouraging accountability and privacy? As the song says, "something’s got to give." Personalization with its appreciation of the individual’s unique needs and circumstances must be creatively blended with impersonalization with its protections of privacy and against manipulation. We need solutions that optimize rather than maximize with a keen awareness of what is gained and what is lost (and for whom under what conditions) with different technical arrangements and policy regimes.

Under dynamic conditions, the balance and effort to manage competing needs must be continuously revisited. Some changes are purposive, as individuals and organizations seek to undermine AmI once its operation becomes understood; others involve growth and development, as individuals change their preferences and behavior and environmental conditions change.

The dark scenarios are particularly refreshing given the predominance of good news advocacy stories in our culture. The bad news stories offered here are hardly the product of an unrestrained dystopian imagination rambling under the influence of some banned (or not yet banned) drug. Rather, they reflect a systematic method relying on cross-observer validation (or more accurately review and certification). This method should be in the toolkit of all analysts of new technology. Unlike the darkness of much science fiction, these stories are reality-based. The methodology developed here involves both a technology check (are the technologies in the stories realistic given current and emerging knowledge and technique?) and an actuality check (have the outcomes to some degree actually occurred, if not all at the same time or in exactly the same way as the story describes?).

These restrictions give the scenarios plausibility absent in fiction bounded only by the imagination of an author. However, for some observers, requiring that similar events have actually happened might be seen as too stringent. For example, by these standards the Exxon Valdez oil spill (prior to its occurrence) could not have been a scenario. This is because something like it had never happened, and the chance of it happening was so wildly remote (requiring the coming together of a series of highly improbable events) that it would have been deemed unrealistic given the above methodology.

An extension or reversal of George Orwell?

The aura of George Orwell lies behind many of the critical concerns this book notes. In some of its worst forms (being invisible, manipulative and exclusionary, not offering choice, furthering inequality and ignoring individuality and individual justice in pursuit of rationality and efficiency across many cases), ambient intelligence reflects 1984. It could even bring horrors beyond Orwell, where surveillance was episodic rather than continual and relied on fear, lacking the scale, omnipresence, depth, automatism and power of ambient intelligence. With the soft and unseen dictatorship of design, the individual could face ever fewer choices (for example, being unable to pay with cash or to use an anonymous pay telephone) and, if able to opt out and do without the benefits, becomes suspect or at least is seen as socially backward as a result of non-participation. Rather than mass treatment which, given its generality, left wiggle room for resistance, the new forms drawing on highly detailed individuated information could greatly enhance control.

Orwell’s treatment of language can be applied. With "Newspeak" and phrases such as "war is peace", Orwell’s satire emphasizes how concepts can euphemize (or maybe euthanize would be a better term) meaning. To call this book’s topic "ambient intelligence" brings a positive connotation of something neutral and supportive in the background, maybe even something warm and fuzzy. Ambience is popularly used to refer to a desired environmental condition. Like surround sound, it envelops us, but unlike the latter, we may be less aware of it. Ambien is the name of a popular pill that induces somnolence. What feelings would be induced if the book’s topic was instead called "octopus intelligence" or, given a record of major failures, "hegemonic stupidity"?

But there are major differences as well. In Orwell's Oceania, the centralized state is all-powerful and the citizen has neither rights nor input into government. Mass communication is rigidly controlled by and restricted to the state. There are no voluntary associations (all such organizations are directly sponsored and controlled by the state). The standard of living is declining and all surplus goes into war preparation rather than consumption. Society is hierarchically organized, but there is little differentiation, diversity or variety. Everything possible is standardized and regimented. Individuals are isolated from and do not trust each other. Private communication is discouraged and writing instruments are prohibited, as are learning a foreign language and contact with foreigners.

Yet empirical data on communications and social participation for contemporary democratic societies do not generally reflect that vision, even given the restrictions and enhanced government powers seen since 9/11. Indeed, in its happier version, ambient intelligence can be seen as the antidote to a 1984 type of society – networked computers relying on feedback going in many directions can bring decentralization and strengthen horizontal civil society ties across traditional borders. Differences – whether based on space and time or on culture – that have traditionally separated persons may be overcome. The new means vastly extend and improve communication and can offer informed end-users choices about whether, or to what degree, to participate. Pseudonymous means can protect identity. In the face of standardized mass treatment, citizens can efficiently and inexpensively be offered highly personalized consumer goods and services tailored to their unique needs.

The potential to counter and avoid government can protect liberty. On the other hand, privatization can bring other costs including insulation from regulation in the public interest and increased inequality. Those with the resources not to need the advantages the technology offers in return for the risks it brings may be able to opt out of it. Those with the right profiles and with the resources to participate, or to pay for added levels of security, validity and privacy for their data will benefit, but not others.

In many ways, we have moved very far from the kind of society Orwell envisioned in 1948. His book remains a powerful and provocative statement for a 19th century kind of guy who never rode on an airplane and did not write about computers. Yet if forced to choose, I would worry more (or at least as much) about the threat of a crazily complex, out-of-control, interventionist society that believes it can solve all problems and is prone to the errors and opaqueness envisioned by Kafka than about Orwell’s mid-20th century form of totalitarianism. Hubris was hardly a Greek invention.

While there is societal awareness of mal-intentioned individuals and groups – to the extent that "Orwellian" has become clichéd – the threat posed by rushing to technologically control ever more aspects of highly complex life through constant data collection and feedback, interaction and automated actions is less appreciated and understood. The emergent dynamism of the involved interdependent systems and the difficulty of imagining all possible consequences must give us great pause.

The book’s scenarios offer a cornucopia of what can go wrong. Ideally, we wish to see well-motivated people and organizations using good and appropriate technology. The book’s dark scenarios suggest two other forms to be avoided:

Bad or incompetent people and/or organizations with good technology. The problem is not with the technology, but with the uses to which it is put. There may be an absence of adequate regulation or enforcement of standards. Individuals may lack the competence to apply the technology or end users may not take adequate protections and may be too trusting. As with identity theft, the wrongful cleansing and misuse of legitimately collected data, and machines that are inhuman in multiple ways, malevolent motivation combined with powerful technologies is the stuff of our worst totalitarian nightmares. But consider also the reverse:

Good people and/or organizations with bad or inappropriate technology. This suggests a very different order of problem – not the absence of good will, competence and/or legitimate goals, but technology that is not up to the job and spiraling expectations. Achieving the interoperability and harmonization among highly varied technical and cultural systems that AmI networks will increasingly depend on can bring new vulnerabilities and problems. For technical, resource or political reasons, many systems will be incompatible, and varying rates of change in systems will affect their ability to cooperate. Technology that works in some settings may not work in others, or in conflict settings may be neutralized. Here we also see the issue of "natural" errors or accidents that flow from the complexity of some systems and the inability to imagine outcomes from the far-flung interactions of diverse systems. Regular reading of the Risks Digest can not only give one nightmares; it can make getting out of bed each day an act of supreme courage.

From one standpoint, there are two problems with the new communication and information technologies. The first is that they don’t work. The second is that they do. In the first case, we may waste resources, reduce trust, damage credibility and legitimacy and harm individuals. Yet if they do work, we risk creating a more efficient and mechanical society at the cost of traditional human concerns involving individual uniqueness and will. Given power and resource differentials, we may create an even more unequal society further marginalizing and restricting those lacking the resources to participate and/or to challenge technical outcomes. There will be new grounds for exclusion and a softening of the meaning of choice. The failure to provide a detailed profile, or of a country to meet international standards, may de facto mean exclusion.

The book notes the importance of (p. 24) "focusing on concrete technologies rather than trying to produce general measures." Yet in generating specific responses, we need to be guided by broad questions and values and the overarching themes the book identifies. These change much more slowly, if at all, than the technologies. That is of course part of the problem. But it can also be part of the solution in offering an anchoring in fundamental and enduring human concerns.

An approach I find useful amidst the rapidity and constancy of technical innovation is to ask a standard set of questions. This gives us a comparative framework for judgment. The questions in Table 1 below incorporate much of what this book asks us to consider.

A central point of this volume is to call attention to the contextual nature of behavior. Certainly these questions and the principles implied in them are not of equal weight, and their applicability will vary across time periods depending on need and perceptions of crisis and across contexts (e.g., public order, health and welfare, criminal and national security, commercial transactions, private individuals, families, and the defenseless and dependent) and particular situations within these. Yet common sense and common decency argue for considering them.

Public policy is shaped by manners, organizational policies, regulations and laws. These draw on a number of background value principles and tacit assumptions about the empirical world that need to be analyzed. Whatever action is taken, there are likely costs, gains and trade-offs. At best, we can hope to find a compass rather than a map and a moving equilibrium instead of a fixed point for decision-making.

For AmI, as with any value-conflicted and varied-consequence behavior, particularly those that involve conflicting rights and needs, it is essential to keep the tensions ever in mind and to avoid complacency. Occasionally, when wending through competing values, the absolutist, no-compromise, don’t-cross-this-personal line or always-cross-it standard is appropriate. But, more often, compromise (if rarely a simplistic perfect balance) is required. When privacy and civil liberties are negatively affected, it is vital to acknowledge, rather than to deny this, as is so often the case. Such honesty can make for better-informed decisions and also serves an educational function.

These tensions are a central theme in this book, which calls for fairly responding to (although not necessarily equal balancing of) the interests of all stakeholders. Yet it only implicitly deals with the significant power imbalances between groups that work against this. But relative to most such reports, its attention to social divisions that may be unwisely and unfairly exacerbated by the technology is most welcome.

In a few places, the book lapses into an optimism (perhaps acceptable if seen as a hope rather than an empirical statement) that conflicts with its dominant tone of complexity and attention to factors that should restrict unleashing the tools. The book (p. 32) sets for itself the "difficult task of raising awareness about threats and vulnerabilities and in promoting safeguards while not undermining the efforts to deploy AmI," and it suggests (p. 29) that "the success of ambient intelligence will depend on how secure its use can be made, how privacy and other rights of individuals can be protected, and, ultimately, how individuals can come to trust the intelligent world which surrounds them and through which they move." The book argues (p. 20) that "matters of identity, privacy, security, trust and so on need to be addressed in a multidisciplinary way in order for them to be enablers and not obstacles for realizing ambient intelligence in Europe." [Italics added.]

Is the task of the public interest analyst to see that public policy involves "enablers and not obstacles for realizing ambient intelligence in Europe"? Should the analyst try to bring about the future, guard against it (or at least prevent certain versions of it), or play a neutral role in simply indicating what the facts and issues are?

Certainly, where the voluntary cooperation of subjects is needed, the system must be trusted to deliver and to protect the security and privacy of valid personal information. Showing that people will be treated with dignity can be good for business and government in their efforts to apply new technologies. Yet the book’s call to implement the necessary safeguards will often undermine (if not prevent) "the efforts to deploy AmI."

Here we must ask "what does success mean?" One answer: AmI is successful to the extent that the broad value concerns the book raises are central in the development of policy and practice. But another conflicting answer, and one held by many practitioners with an instrumental view, is that AmI is successful to the extent that it is implemented to maximize the technical potential and interests of those who control the technology. The incompatibility between these two views of success needs to be directly confronted.

TABLE 1: Questions for Judgment and Policy

  1. Goals—Have the goals been clearly stated, justified and prioritized? Are they consistent with the values of a democratic society?
  2. Accountable, public and participatory policy development—Has the decision to apply the technique been developed through an open process and, if appropriate, with participation of those to be subject to it? This involves a transparency principle.
  3. Law and ethics—Are the means and ends not only legal, but also ethical?
  4. Opening doors—Has adequate thought been given to precedent-creation and long term consequences?
  5. Golden rule—Would the controllers of the system be comfortable in being its subject, as well as its agent? Where there is a clear division between agents and subjects, is reciprocity or equivalence possible and appropriate?
  6. Informed consent—Are participants fully apprised of the system’s presence and the conditions under which it operates? Is consent genuine (i.e., beyond deception or unreasonable seduction or denial of fundamental services) and can "participation" be refused without dire consequences for the person?
  7. Truth in use—Where personal and private information is involved does a principle of "unitary usage" apply, whereby information collected for one purpose is not used for another? Are the announced goals the real goals?
  8. Means-ends relationships—Are the means clearly related to the end sought and proportional in costs and benefits to the goals?
  9. Can science save us?—Can a strong empirical and logical case be made that a means will in fact have the broad positive consequences its advocates claim (the "does-it-really-work" question)?
  10. Competent application—Even if in theory it works, does the system (or operative) using it apply it as intended and in the appropriate manner?
  11. Human review—Are automated results with significant implications for life chances subject to human review before action is taken?
  12. Minimization—If risks and harm are associated with the tactic, is it applied so as to minimize these, showing only the degree of intrusiveness and invasiveness that is absolutely necessary?
  13. Alternatives—Are alternative solutions available that would meet the same ends with lesser costs and greater benefits (using a variety of measures not just financial)?
  14. Inaction as action—Has consideration been given to the "sometimes it is better to do nothing" principle?
  15. Periodic review—Are there regular efforts to test the system's vulnerability, effectiveness and fairness and to review policies and procedures?
  16. Discovery and rectification of mistakes, errors and abuses—Are there clear means for identifying and fixing these (and in the case of abuse, applying sanctions)?
  17. Right of inspection—Can individuals see and challenge their own records?
  18. Reversibility—If evidence suggests that the costs outweigh the benefits, how easily can the means (e.g., extent of capital expenditures and available alternatives) be given up?
  19. Unintended consequences—Has adequate consideration been given to undesirable consequences, including possible harm to agents, subjects and third parties? Can harm be easily discovered and compensated for?
  20. Data protection and security—Can agents protect the information they collect? Do they follow standard data protection and information rights as expressed in documents such as the Code of Fair Information Protection Practices and the expanded European Data Protection Directive?

Emile Aarts, who has played an important role in the development and spread of ambient intelligence, notes in the other Foreword to this volume that the technology’s promise will "only work if we can settle the ethical issues that are connected to it." Yet we must always ask just how well we want it to work, what "to work" means, who it works for and under what conditions. Furthermore, the day we settle the ethical and social issues we are in deep yogurt. Given the inherent conflicts and trade-offs and dynamic and highly varied circumstances, we need to continually encounter and wrestle with unsettling and unsettled issues. This volume offers us an ideal framework for that ongoing process.

