Privacy and Technology
This is a revision of material that appeared in The World and I, September 1990, and Telektronik, January 1996.

Gary T. Marx

It's a remarkable piece of apparatus.

-- F. Kafka, The Penal Colony

The industrial age was dependent on technologies that extracted value from the earth, trees, and water. Our age too relies on extractive technologies. However, the technologies are not pumps or drills, nor is the substance extracted valued because of its physical properties. The technologies are computers, transmitters, spectrographs and video lenses. A major substance extracted is personal information.

In 1928 United States Supreme Court Justice Brandeis wrote: "Discovery and invention have made it possible for the government, by means far more effective than stretching upon the rack, to obtain disclosure in court of what is whispered in the closet. The progress of science in furnishing the government with means of espionage is not likely to stop with wiretapping." His haunting and prescient words clearly apply today, as the line between science and science fiction is continually redrawn.

New technologies for collecting personal information which transcend the physical, liberty-enhancing limitations of the old means are constantly appearing. They probe more deeply, widely and softly than traditional methods, transcending barriers (whether walls, distance, darkness, skin or time) that historically protected personal information. Absent special precautions, the boundaries which have defined and given integrity to social systems, groups and the self are increasingly permeable. The power of governmental and private organizations to compel disclosure (whether based on technology, law or circumstance) and to aggregate, analyze and distribute personal information is growing rapidly.

We are becoming a transparent society of record, such that documentation of our past history, current identity, location, communication, and physiological and psychological states and behavior is increasingly possible. With predictive profiles and DNA there are even claims to be able to know individual futures. Information collection often occurs invisibly, automatically and remotely, being built into routine activities. Awareness and genuine consent on the part of the subject may be lacking.

The amount of personal information collected is increasing. New technologies have the potential to reveal the unseen, unknown, forgotten or withheld. Like the discovery of the atom or the unconscious, they surface bits of reality that were previously hidden or that offered no informational clues. People are in a sense turned inside out.

To be alive and a social being is to automatically give off a constant flow of information--whether in the form of heat, pressure, motion, brain waves, perspiration, cells, sound, odors, waste matter, or garbage, as well as more familiar forms such as communication and visible behavior. These remnants are given new meaning by contemporary surveillance technologies. Through a value-added, mosaic process, machines (often with only a little help from their friends) may find significance in surfacing and combining heretofore meaningless data.

The ratio of what individuals know about themselves (or are capable of knowing) to what outsiders and experts can know about them has shifted away from the individual. Data in diverse forms, from widely separated geographical areas, organizations and time periods, can be easily merged and analyzed. In relatively unrestrained fashion, new (and old) organizations are capturing, combining and selling this information, or putting it to novel internal uses.
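The mechanics of such merging are simple. Here is a minimal sketch, in Python, of how records held by unrelated organizations, once joined on a shared identifier, yield a composite profile that no single source possesses (the identifier, sources and data below are all invented illustrations):

```python
# Hypothetical record fragments held by three unrelated organizations,
# each keyed by the same invented identifier.
bank = {"123-45-6789": {"loan_balance": 2100}}
pharmacy = {"123-45-6789": {"prescriptions": ["insulin"]}}
tenant_screening = {"123-45-6789": {"evictions": 1}}

def merge_profiles(*sources):
    """Combine per-person records from many sources into one dossier."""
    profiles = {}
    for source in sources:
        for person_id, record in source.items():
            profiles.setdefault(person_id, {}).update(record)
    return profiles

# A single identifier now links finances, health and housing history.
print(merge_profiles(bank, pharmacy, tenant_screening))
# {'123-45-6789': {'loan_balance': 2100, 'prescriptions': ['insulin'], 'evictions': 1}}
```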

In the United States we celebrated the 200th anniversary of the Constitution, a document that extended liberty. Unfortunately, the bicentennial of another important document, one that restricted liberty, has gone virtually unnoticed: the 1791 publication of Jeremy Bentham's Panopticon or the Inspection House.

Bentham offered a plan for the perfect prison in which there was to be constant inspection of both prisoners and keepers. His ideas helped give rise to the maximum security prison. Recent developments in telecommunications, along with other new means of collecting personal information give Bentham's image of the Panopticon great contemporary significance.

The stark situation of the maximum security prison can help us understand societal developments. Many of the kinds of controls and information-gathering techniques found in prison, and in the criminal justice system more broadly, are diffusing into the wider society. We may become a "maximum security society."

Such a society is transparent and porous. Information leakage is rampant. Indeed it is hemorrhaging, as traditional boundaries disappear. Actions, as well as feelings, thoughts, pasts, and even futures are made visible, often independent of the individual's will or knowledge. The line between the public and the private is weakened; we are under increased observation, ever more goes on a permanent record, and much of what we say, do and even feel may be known and recorded by others we do not know--whether we will this or not, and even whether we know about it or not.

As the technology becomes ever more penetrating and intrusive, it becomes possible to gather information with laser-like specificity and with sponge-like absorbency. If we think about the information gathering net as being parallel to a fishing net, then the mesh of the net has become finer and the net wider.

It is easy to get carried away with science fiction fantasies about things that might happen. But looking just to the next decade we will likely see technical developments with implications for personal privacy such as the following:

DNA screening and monitoring. Beyond identifying persons likely to develop serious illnesses or to have children at risk of illness, this may lead to claims to identify tendencies to alcoholism, homosexuality, and poor work habits.

Vehicle tracking devices as part of "intelligent" highway systems

Personal tracking devices via chips implanted under the skin (this is now available for pets)

Pressure to use "smart cards" which could contain all of an individual's health, financial and legal records

The expanded commercial use of spy satellites, capable of producing photographic images that resolve objects of a square meter or less

Smart image-recognition systems which could permit computer matching of faces in large crowds in an effort to locate persons of interest

Wireless portable personal communication devices, in which persons might be assigned a phone number at birth that they would be expected to always have with them, and ever smarter telephones that deliver video images and information about the person (or at least the number) a call comes from

Paperless electronic safety deposit boxes

Ever more intense work monitoring

Smart homes in which the flows of data (electricity, communications, temperature) into and out of the home are all part of an integrated system

The increased use of the internet and of various tracking mechanisms (e.g., "cookies," which keep a record of what sites are visited and what information is accessed)
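The cookie mechanism just mentioned is mundane in its workings: a site attaches a small named value to its response, and the browser returns it with every later request, letting separate visits be linked into a running record. A minimal sketch using Python's standard http.cookies module (the identifier and the logged pages are invented):

```python
from http.cookies import SimpleCookie

# Server side: mint an identifier for this visitor and send it along
# with the page in a Set-Cookie header.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"               # hypothetical identifier
cookie["visitor_id"]["path"] = "/"
print(cookie.output())                        # Set-Cookie: visitor_id=abc123; Path=/

# On later requests the browser echoes the cookie back, so the pages
# viewed can be tied to the same visitor over time.
returned = SimpleCookie("visitor_id=abc123")
visit_log = {returned["visitor_id"].value: ["/news", "/health", "/shopping"]}
print(visit_log)                              # a running record of what was visited
```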

The head of a computer database company that provides reports on potential tenants to landlords says, "the more you know about somebody else, the better off everybody is." The assertion, typical of a view held by many persons in the United States, ignores the strategic, aesthetic, diplomatic and self-definitional aspects of personal information. It is increasingly easy to know "more" about others without their knowledge or consent. Technology creates new possibilities for the invasion of privacy and other problems with which our laws, policies, manners and culture have not kept pace.

The increased availability of personal information (whether in audio, visual, telemetric, bio-chemical, or database forms) is a tiny strand in the constant expansion of knowledge witnessed in the last two centuries and in the centrality of information to the workings of contemporary society.

As a sociologist, my research interest is in the new technologies and questions and themes these raise about the individual and society. Beyond any given technology, this reflects a more general interest in the discovery/revelation and concealment/protection of personal information. Under what conditions is it appropriate to gather personal information and what are the social correlates and consequences of revealing or concealing it? A morality for the collection of personal data ought not to depend on how weak or powerful a means is, but on more transcendent ideas about what is right and wrong and the social consequences.

The new information technologies involve larger issues regarding the complex inter-relations of technology and society; the growth of large governmental and private organizations; information flows and restrictions in a democratic society; the social functions and dysfunctions of anonymity; the public and the private; and the nature and experiencing of trust and distrust, the social bond, and social control in mass society. To the extent that the vast increase in what can be known about the individual is joined with a declining ability to protect that information, the implications are profound for behavior and social organization.

Most analyses focus on only one technology, such as telecommunications, computer databases, drug testing or location monitoring, or apply only one perspective--technical, ethical, legal, social or policy. In a forthcoming book tentatively called Windows into the Soul: Surveillance and Society in an Age of High Technology, I seek to be integrative and comprehensive--looking across technologies, disciplines and methods.

I treat the various extractive technologies as a unit and from a variety of perspectives. I suggest cross-cutting analytic dimensions which permit uniting seemingly dissimilar, and separating seemingly similar, phenomena. I offer a set of questions and concepts intended to help in understanding and contrasting extractive technologies, regardless of their specifics. In a previous study of undercover police practices I focused on human informers and infiltrators as the means of data collection, while for this project material technologies are central.

The new information technologies raise at least four broad types of question: social scientific, cultural, comparative and ethical. With respect to social science, we need to empirically describe the technologies (what are the facts?) and to classify them (what are the types, dimensions and contexts of variation, and the generic or ideal-typical forms?). We also need explanation: what theories or ideas best account for the observed patterns and trends? Why have we seen such a rapid expansion in the diffusion of these technologies in the last decade? What inhibits or facilitates the use of extractive technologies? What social processes of facilitation and resistance can be identified? What are the implications for the changing nature, and the social functions and dysfunctions, of borders and boundaries? What type of society would we have if there were greatly enhanced visibility for our current actions and thoughts, as well as for the past and the future?

Secondly, the techniques occur within and against a cultural backdrop and personal experience which must be understood. How are these techniques treated in popular culture as represented by advertisements, cartoons, jokes, music, art and surveillance toys for children? What images and symbols predominate? What does this material tell us about the lived experience of being either the watcher or the watched?

This material raises major questions for our comparative understanding of different societies. This is particularly important in an age of globalization where communications technologies weaken borders. Societies (and regional complexes such as Northern vs. Southern Europe or Europe vs. North America, Asia vs. the West) differ in how they view, experience and treat the borders that define personal and social space as it involves these technologies.

An issue of particular importance is that of public policy and ethics. How should the technologies be judged? What is at stake? What competing values are present and how can and have conflicts between them been responded to? What is most problematic or desirable about extractive technologies? What are the major forms of abuse and how can they be minimized? What are the social consequences of control as a result of etiquette, organizational policies, laws and the design of technology?

The first part of the Appendix suggests a number of general questions that can be asked of any new technology. The second part deals with information technologies and, by way of illustration, with the information highway specifically. The questions offered there can guide research and policy. In the remainder of this paper I wish to do three things: 1) discuss the issue of privacy and why it is important, 2) list some techno-fallacies regarding information technology, and 3) list some principles that can guide us in the development and use of these new technologies.

Does Privacy Matter?

The new technologies may raise a variety of troubling issues including injustice, intrusion, denial of due process, absence of informed consent, deception, manipulation, errors, harassment, misuse of property and lessened autonomy. Privacy as it involves the control of personal information is central to many of the social concerns raised by new information technologies.

The United States does not have the recent European experience with totalitarian governments, and it has a rather uncritical view of technology. Those factors, when joined with the value placed on free enterprise, rampant consumerism, freedom of speech and information, and concerns over declining productivity in a global economy, AIDS, drug use and crime, mean that in the United States the laws and policies for the protection of personal information are much weaker than in Europe. For example, there are no data protection commissions or commissioners. Personal information is commodified; e.g., a list of the magazines one subscribes to can be bought and sold without the consumer's knowledge or consent.

One response to privacy concerns, often expressed by industry spokespersons and many citizens, is simply: "So what? Why worry?" These technologies fill deeply felt needs. We increasingly live in a world of strangers, rather than in homogeneous rural communities in which people knew those with whom they had contact. The United States Supreme Court said in its famous Katz decision that privacy is only protected when it can be reasonably expected. Technology changes, and social expectations can't remain static. With more powerful technologies we can reasonably expect less and less, and hence privacy must become more restricted. Most so-called "privacy invasions" are not illegal in the United States.

Given the free market, you can also buy technologies to protect yourself from privacy invasion. Personal information is often viewed as just another commodity to be sold like any other. Companies have an obligation to stockholders to make money. Government must find the guilty and protect the innocent.

In addition we are an open society that believes that visibility in government brings accountability. With respect to individuals a valued legacy of the 1960s is personal openness and honesty. The only people who worry about privacy are those who have something to hide. Right? Wrong!! There are at least 10 reasons why privacy and anonymity are important:

The ability to control information about the self is linked to the dignity of the individual, self-respect and the sense of personhood. Self-presentations and back-stage behavior are dependent on such control.

Anonymity can be socially useful in encouraging honesty, risk-taking, experimentation and creativity.

Confidentiality (via dissemination protections) improves communication flows and is vital to trust in professional (doctors, lawyers, psychologists) and corporate settings.

Privacy is a resource in inter-personal relations, doled out and exchanged as relationships progress. Intimacy is based partly on the voluntary sharing of personal information with others. Individuals feel free to be "themselves" as they get to know others better, and reciprocal exchanges take place.

The control of information is a strategic resource in impersonal relations. Trade secrets and copyrights are a formal expression of this.

Group boundaries are maintained partly by control over information. Individuals are in or out, and occupy organizational positions based partly on what they are entitled to know and have access to.

Privacy makes possible the American ideal of starting over and the fresh start.

Fairness can be protected by denying access to information which could be put to unfair use. For example, while religious discrimination is illegal, if employers, schools, and landlords could ask about religion (as in most cases they now cannot), such protections would be weakened.

Privacy can help provide the solitude and peace necessary to mental health and creativity in a dynamic society. Here it is a question of control over what is taken in, rather than what is given out.

There is a broader, all-encompassing symbolic meaning of practices that protect privacy. Such practices say something about what a nation stands for and are vital to individualism. By contrast, a thread running through all totalitarian systems, from the prison to the authoritarian state, is lack of respect for the individual's right to control information about the self. It has been said that the mark of a civilization can be seen in how it treats its prisoners; it might also be seen in how it treats personal privacy.

Privacy is a value which may only be appreciated once it is lost. It is important that individuals be made aware of what is at stake and what their rights are. It is not a foregone conclusion that technology will develop in such a way as to reduce the power of the individual relative to large organizations and the state, although the forces favoring this tend to be stronger than those opposing it. Schools and religious organizations should deal more directly with the individual's rights with respect to means such as third-party records, computer dossiers, drug testing, and the polygraph. It is important that citizens act back and ask organizations about their information policies. Assertions such as "the computer says" or "that is the policy" must lead to questions: Is the computer reliable? Why is it the policy? What moral and legal assumptions underlie it? What alternatives are there? How was the data gathered? How is it protected and used?

It is also important that the technology be demystified and that citizens not attribute to it powers that it doesn't have. There is a chilling danger in the "myth of surveillance" when the power of information technology is oversold. On the other hand, when technologies are revealed to be less powerful than authorities claim, legitimacy declines. There should be truth in communications policies, just as we have truth in advertising and loan policies. The potentials and limits of the technology must be understood.

Yet to note the social functions of privacy is certainly not to deny that privacy taken to an extreme can be harmful, or that privacy will never conflict with other important values such as the public's right to know and the First Amendment to the United States Constitution, or accountability, health, security, and productivity.

Unlimited privacy is hardly an unlimited good. It can shield irresponsible behavior -- protecting child and spouse abusers, unsafe drivers, and money launderers. Taken too far it destroys community. Without appropriate limitations it can trigger backlash, as citizens engage in unregulated self-help and direct action. The private subversion of public life carries dangers as well as the public intrusion into private life.

Contemporary information-extractive technologies can of course also be used to protect liberty, privacy and security. Without the incriminating tapes secretly recorded by President Nixon, Watergate would have remained a case of breaking and entering; without the Xerox machine the Pentagon Papers might never have reached the public; and without the back-up computer records kept in NSC files, which Oliver North thought he had erased, we would know far less about the Iran-Contra affair. Aerial surveillance can monitor compliance with pollution standards and help to verify arms control treaties. Tiny transmitters can help locate lost children or skiers caught in an avalanche. Devices that permit fire fighters to see through smoke may save lives. Remote health monitors can protect the elderly living alone (in one form an alarm is sent if a day goes by without the refrigerator being opened).

But elements of a Greek tragedy are also present. The technology's unique power is also its tragic flaw. What serves can also destroy, absent increased public awareness and new public policies. With a topic as complicated and changing as this one, it is easier to ask the right questions than to come up with the right answers. The two parts of the Appendix list some questions which might be asked about the new technologies.

Information Age Techno-Fallacies

The belief that privacy is not important and should matter only to those who have something to hide is one of a large number of what I see as tarnished silver-bullet "information age techno-fallacies" (the silver bullet image refers to an American popular culture figure, "The Lone Ranger," who always left the locals with a silver bullet as he rode off into the sunset, having subdued the bad guys). Privacy is affected not only by laws, customs and a constant dialectic between privacy-invading and privacy-protecting technologies, but also by the cultural assumptions that underlie attitudes about technologies.

As an ethnographer, in watching and listening to the rhetoric around information technology, I often hear things that simply sound wrong to me, much as a musician hears notes that are off key. A sampling of such techno-fallacies:

"Turn the technology loose and let the benefits flow."

"Do away with the human interface."

"When you choose to make a phone call, you are consenting to have your telephone number released."

"Only the computer sees it."

"Those of us who are involved in consumer marketing are the best agents for protecting the consumer's privacy."

"That's never happened."

"The public interest is whatever the public is interested in watching."

"There's no law against this."

"The technology is neutral."

The fallacies which I discuss below differ in kind-- some can be shown to be empirically false or logically suspect, and hence if the argument is correct (whether factually or logically), persons of diverse political perspectives can agree that they are fallacious. Others are normative fallacies and will be rejected only when there is agreement about the values, or value priorities on which they are based. But even here, I think the values that I am expressing are central to American and western society.

Table I lists 30 techno-fallacies. In this limited space I will comment on only some of them.

The fallacy of the free lunch or painless dentistry (a frequent assumption of the techno-boosters) holds that a technical change will involve only benefits and no costs, and that it therefore must be adopted since it is basically free. Of course this is nonsense--there are no free meals, and your teeth may hurt when the novocaine wears off. If nothing else, a given use of resources involves forgone opportunities: the resources might have been used for some other purpose.

The fallacy of quantification is particularly important in the United States where policy setting tends to be dominated by economists and lawyers. It's important to realize that there are values that can't be measured by bottom lines and market-driven phenomena.

The fallacy of the short run speaks for itself. There's a wonderful story about a farmer who was having a hard time making ends meet. Someone advised him to feed his animals less, so he cut their feed by 25%. It worked--he saved a lot of money. He then said, "hey, this is great, I'm going to cut their feed in half," and he saved even more money. And of course he kept on reducing their feed, and you know what happened.

The legalistic fallacy is often expressed by advocates of a technology. The basic idea is that if you have a legal right to do something, it therefore must be the right thing to do. But we ought to start with the law, not stop with it. The fact that a practice is legal (sometimes because it is too new to have resulted in restrictive legislation, or because power differentials prevent that) does not mean that it is right or wise.

The pragmatic or efficiency fallacy holds that the most important thing is whether or not the technology gets the job done. But there is more to collective life than pragmatism. Certainly, given scarce resources and a scientific ethos, we must ask that question. But an affirmative answer shouldn't lead to the automatic unleashing of the technology and the overruling of other competing values. Values that are difficult to measure rarely receive adequate attention at conferences inspired by a particular innovation or problem. Pragmatism must be weighed alongside other values such as fairness, equity, and the external costs imposed on third parties.

The fallacy of the lowest common denominator morality assumes that if your side doesn't use the technology your opponents will, giving them an unfair advantage--and that you are therefore justified in using it too.

A common fallacy is to assume that personal information (whether deduced from broader aggregate data or collected from a particular individual) is simply another commodity. It is believed that if you are able to gain access to the data, it's yours to use as you wish. But personal information has a special quality, something that's almost sacred. It is not the same as raw materials or office furniture. Europe has recognized this to a greater extent than has the United States.

There's the fallacy of assuming that the facts speak for themselves. But the "facts" are socially generated and interpreted. Any human knowledge, no matter how powerful and useful, is always abstracted out and partial. It is only a sample or a fraction of what might be attended to. Alternative information or a fuller picture might suggest a different meaning. To adequately interpret, we need a context and a broader picture. When you apply acontextual data to human beings you run terrible risks of error and injustice in particular cases (although in the abstract the system may be rational).

Now, to deal with broader context you of course have to have more data, and that requires more money. This leads to another (and in some ways opposed) fallacy: that if some information is good, more is better. This equation of bigger with better is particularly strong in the United States. It is no doubt related to capitalism and has a gender component. It is simply not true that things will necessarily improve if only we spend more money and create more powerful technologies. There are issues of scale and threshold, not to mention the hubris of thinking that terribly complex problems existing within contexts of human conflict will always yield to technical solutions. There is nothing inherently good or bad about the increased power of a technology. Our judgments must flow from analysis, not from the fact that a tool exists or might exist. In this sense technology differs greatly from artistic expression.

Just because privacy expectations are historically determined and relative, it is a fallacy to assume that they must become weaker as technology becomes more powerful. As noted, such a view is reflected in some United States court decisions. A related fallacy holds that because privacy as we know it in our complex, industrial democratic society is a historically new phenomenon, not experienced, or perhaps even valued, by much of the world's population, it cannot be very important.

The populist fallacy assumes that the public knows best. There is the opposite, technocratic fallacy that élites know best. There is also the fallacy that the means will never determine the end. It has been said that to a person with a hammer, the whole world looks like a nail. Yet a major critique of industrial society is that means increasingly determine ends. It is vital for civilization (if not always for self or organizational interests) that we start with goals and ask what we want to accomplish, instead of starting with a tool and asking how we can apply it.

There is the dangerous fallacy of believing that because it is possible to successfully skate on thin ice, it is acceptable to do so. A standard response to social critics of technology is "OK, it could happen, but so far it's hypothetical." Foresight is better than hindsight, even if it sometimes errs in its conservatism. There was a time when the United States nuclear accident at Three Mile Island and the large Exxon oil spill in Alaska had not happened either. It may be fun to skate on thin ice, but it's a dumb thing to do.

There is the fallacy of permanent victory. Here we have the assumption that environments, especially those where there are conflicts of interest, are passive rather than reactive. But we know that isn't the case.

There is the danger of delegating decision-making authority to a machine. It is also often assumed that technology is necessarily good because it's new, that you can't stop progress, and that if we can do something we should. I study social control, and one of my favorite quotes is from a police chief who said, "If we have the technology, why not use it?" That is a frightening assertion absent a wide-ranging consideration of multiple consequences. Such statements ought to be approached as empirical and ethical questions and not unreflectively accepted as conclusions.

There is the very important fallacy of thinking that the only meaning of a technology lies in its application. Of course we have to be concrete; we have to think about whether or not the technology will work. But technologies involve social and human meanings and historical referents. The meaning of a technology does not lie only in its ostensible use; technologies also have symbolic meanings. Police dogs can be an efficient crowd control device. Yet if you were the police chief in Birmingham, Alabama (where there are vivid television memories of police dogs attacking civil rights demonstrators), would you use dogs for crowd control?

There is the fallacy of not considering issues of precedent, which assumes that "we will just do this one time and never again." And finally there is the fallacy of rearranging deck chairs on the Titanic instead of looking for icebergs. The problem here is looking at superficial issues, or at symptoms, rather than at deeper causes.

I realize that some of my assumptions could be turned around and called fallacies (or worse). Someone holding different values might come up with a different list including items such as "The fallacy of listening to academics who make broad generalizations." The fallacies also differ in seriousness and some are in conflict. My basic point is not to argue strenuously for this particular list, but to argue for the importance of undertaking a critical examination of the assumptions that we make about new information technologies whether this involves new forms of telecommunication, Geographic Information Systems, drug testing, electronic location monitoring, DNA predictions, or computer matching and profiling. In doing this we need humility in the face of complex and interdependent problems.

One way to deal with these issues is to have publicly accepted principles by which new information technologies can be assessed. Here we need to clarify values and rights in order to reasonably balance what is at stake. An important example of the kind of principles needed is the Code of Fair Information Practices developed in 1973 for the U.S. Department of Health, Education and Welfare. The Code involves five principles:

There must be no personal data record-keeping whose very existence is secret

There must be a way for a person to find out what information about the person is in a record and how it is used

There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person's consent

There must be a way for a person to correct or amend a record of identifiable information about the person

Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precaution to prevent misuses of the data

These are generally reflected in European data protection standards. In addition, other related principles are needed: for example, a principle of minimization, such that only information directly relevant to the task at hand is gathered; a principle of restoration, such that in a communications monopoly context those altering the privacy status quo should bear the cost of restoring it; a safety net or equity principle, such that a minimum threshold of privacy is available to all; a principle of timeliness, such that data are expected to be current and information which is no longer timely is destroyed; a principle of joint ownership of transactional data, such that both parties to a data-creating transaction must agree to any subsequent use of the data and must share in any gains from its sale; a principle of consistency, such that broad ideals rather than the specific characteristics of a technology determine privacy protection; and a principle of redress, such that those subject to privacy invasions have adequate mechanisms for discovering and being compensated for violations. Table II lists the principles which I believe are central.
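By way of illustration only, here is a minimal sketch, in Python, of how a record-keeping system might operationalize three of these ideas: purpose limitation (the Code's third practice), timeliness, and subject access. Everything here--the names, the one-year retention threshold, the structure--is a hypothetical simplification, not a description of any actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Record:
    subject: str                   # whom the data is about
    data: dict                     # the personal information itself
    purpose: str                   # the purpose it was collected for
    collected: datetime
    consented: set = field(default_factory=set)  # purposes the subject agreed to

class RecordStore:
    MAX_AGE = timedelta(days=365)  # hypothetical timeliness threshold

    def __init__(self):
        self.records = []

    def add(self, record):
        record.consented.add(record.purpose)  # consent covers the original purpose
        self.records.append(record)

    def use(self, record, purpose):
        # Purpose limitation: no secondary use without the subject's consent.
        if purpose not in record.consented:
            raise PermissionError(f"no consent to use data for {purpose!r}")
        return record.data

    def purge_stale(self, now=None):
        # Timeliness: destroy information that is no longer current.
        now = now or datetime.now()
        self.records = [r for r in self.records if now - r.collected <= self.MAX_AGE]

    def report_to(self, subject):
        # Subject access: a person can find out what is recorded about them.
        return [r for r in self.records if r.subject == subject]

store = RecordStore()
store.add(Record("J. Doe", {"magazines": ["Harper's"]}, "subscription", datetime.now()))
store.use(store.records[0], "subscription")      # permitted: the original purpose
# store.use(store.records[0], "marketing-list")  # would raise PermissionError
```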

It is unrealistic to have principles which apply equally in all contexts and across all technologies. To do so, they would have to be so general as to be vapid, or else simply too rigid. Yet I think principles such as these must be considered when new public policies are developed with respect to information technology.

Sunrises and Sunsets

Former Supreme Court Justice William O. Douglas has written that the United States Constitution with its Bill of Rights

...guarantee to us all the rights to personal and spiritual self-fulfillment. But the guarantee is not self-executing. As nightfall does not come at once, neither does oppression. In both instances, there is a twilight when everything remains seemingly unchanged. And it is in such twilight that we all must be most aware of change in the air-- however slight-- lest we become unwitting victims of the darkness.

We are in such a time period now with respect to new information technologies. This goes far beyond particular laws to what it means to be human and to how we think about society.

There is the possibility of becoming an even more stratified society, based on unequal access to information, in which individuals live in glass houses while the external walls of large organizations are one-way mirrors. There is a significant (and perhaps growing) gap between the capabilities of the new surveillance technologies and current cultural, legal and technical protections.

Powerful social and psychological forces work against any easy assumption that a decent society is self-perpetuating. The masthead of a civil rights era black newspaper in Sunflower County, Mississippi reads "Freedom is a Constant Struggle." This heralds an important truth: there are no permanent victories in democratic society. As past and contemporary events of this century indicate, liberty is fragile. These technologies require new cultural standards and public policies. The technologies offer wonderful possibilities. Yet they are also reminiscent of Franz Kafka's short story The Penal Colony, in which a prison officer invents a sophisticated machine for punishing inmates. The story ends with the officer being killed by the machine he created. There is no guarantee that hard-won conceptions of the dignity of the person, the autonomy of organizations and the rights of citizenship will stay won or be extended in the face of continual social and technical change--absent knowledge, wisdom and vigilance.

Table I. Some Techno-Fallacies of the Information Age
 
General Techno-Fallacies
1) The fallacy of immanent development and use, which holds that if a technology can be developed it should be, and if it is developed its use cannot be stopped
2) The fallacy that greater expenditures and more powerful technology will continually yield benefits in a linear fashion
3) The fallacy that pragmatism and/or efficiency should automatically overrule other values such as fairness, equity, and external costs imposed on third parties
4) The fallacy of thinking that the meaning of a technology lies only in its practicality or material aspects and not in its social symbolism and historical referents
5) The fallacy that the means will never determine the end (or if you can't fix the real problem fix whatever the technology permits you to fix)
6) The fallacy of the free lunch or painless dentistry
7) The fallacy of perfect containment or non-escalation (or the Frankensteinian fallacy that technology will always remain the solution rather than become the problem)
8) The fallacy of thinking that a given, carefully circumscribed change will not create a precedent
9) The fallacy of technical neutrality
10) The fallacy of societal consensus and homogeneity, in which it is assumed that conflicts and divisions are non-existent and that what's good for those with economic and political power is necessarily good for everyone else
11) The fallacy of implied consent and free choice
12) The fallacy of quantification
13) The fallacy of the short run
14) The legalistic fallacy that just because you have a legal right to do something it is the right thing to do
15) The technocratic fallacy that the experts always know what is best
16) The populist fallacy that the people always know what is best
17) The fallacy of lowest common denominator morality in which if the competition or others push moral limits, you are justified in doing the same 
18) The fallacy of permanent victory
19) The fallacy of the 100% fail-safe system
20) The fallacy of delegating decision making authority to the machine
21) The fallacy of a passive, non-reactive environment
22) The fallacy of believing that because it is possible to successfully skate on thin ice, that it is acceptable to do so
23) The fallacy of assuming that if a critic questions the means he or she must also be against the ends.
The following apply particularly to information technologies:
24) The fallacy of assuming that only the guilty have to fear the development of intrusive technology (or if you've done nothing wrong you have nothing to hide). 
25) The fallacy of assuming that personal information on customers, clients and cases in the possession of a company is just another kind of property to be bought and sold the same as office furniture or raw materials 
26) The fallacy of assuming that data are simply there waiting to be delivered or plucked from the data tree (the social and political factors involved in collection/construction are not seen)
27) The fallacy that the facts speak for/produce themselves
28) The fallacy of assuming that because our privacy expectations are historically determined and relative, they must necessarily become weaker as technology becomes more powerful.
29) The fallacy that if a value such as privacy is relatively new or new in form, or applies to only a fraction of the world's population, it can't be very important 
And finally a more general fallacy:
30) The fallacy of re-arranging the deck chairs on the Titanic instead of looking for icebergs.

 

Table II. Some Privacy Protection Principles
 
1) Informed and consenting subjects
2) Minimization
3) Restoration
4) Safety net/equity
5) Timeliness, validity, relevance of data
6) Joint ownership of transactional data
7) Unitary usage and non-migration of data
8) Consistency
9) Subject involvement in standard setting
10) Reciprocity
11) Inspections
12) Correction-commentary procedures
13) Data security 
14) Confidentiality and anonymity where appropriate
15) Redress
16) Human review of machine decisions

 

Appendix

Some Questions for Social Research and Public Policy Raised by New Information Technologies

A. Some General Questions To Ask About Any New Technology

What human needs or goals is the technology intended to serve?

What other means are (or might be) available for obtaining the same goals?

What logical, empirical and normative assumptions are made about the technology? Who needs or wants the technology? Where does the pressure to develop and apply it come from?

What groups are most involved in making decisions about the form of the technology and how it will be used?

Who will the technology be available to and who will control it?

What groups are likely to profit most from, or be hurt most by, the technology and in what ways?

What are the likely social impacts of the technology on things such as the economy, the environment, work, education, mental and physical health, the arts, leisure and interpersonal and group relations?

What are the likely impacts on values such as democracy, equality, civil liberties and civil rights, fairness, accountability, individual autonomy, choice, growth and achievement, creativity, tolerance, inclusion, centralization and decentralization, standardization and differentiation, social participation, and beauty?

What new forms of crime and victimization, as well as of social control, might appear?

Will market forces provide for the efficient and equitable distribution of the benefits of the technology?

How valid, reliable and effective is the technology?

What can go wrong as well as right? What are the major short- and long-run risks associated with the technology, and the likelihood of their occurring?

What unintended positive and negative consequences might occur? How sure can we be that the technology will only be used for its intended purposes? How great is the danger that the machine will control us rather than the reverse? Is there adequate provision for human vigilance and discretion?

What forms of recourse are available if the technology is misused and individuals and groups are wrongly harmed by it? How easily can this be discovered?

What precedents will use of the technology create and where might this lead?

What symbolic meanings does the technology communicate?

What legislative, judicial and administrative/bureaucratic policies are needed? Is there a role for industry-wide standards and policies? Are new manners needed? What new technologies may be needed to deal with the problems of the one in question?

What lessons can we learn from previous technologies?

What are the best and worst scenarios involving the technology that can be imagined for the next 5-10 years? The next 50 years? What factors are operating to push us toward or away from these outcomes?

B. Some Questions Specific To Electronic Highways

Access

How useful is the highway analogy when the vehicular highway was built with government funds and the information highway is likely to be built with private funds?

Will information highways be freeways or toll roads?

Will the predominant form be like a public library in which all have easy access, or will it be commodified? What are the myriad and conflicting consequences of treating information as a commodity or as a public good?

What does "universal access" mean in a context where there are multiple services and providers? Where will the funds to maintain and use systems come from even if they can be initially made available? What motivation, skills, and money are required beyond access, if the technology is to be effectively used at a mass level? How will schools that don't have money to buy library books suddenly be able to afford access to new forms of information delivery, even if they are given the necessary hardware and software? Will simply providing equal access increase inequality, given different starting points?

Will we see new forms of inequality between the information rich and poor? Can a way be found to benefit from the positive aspects of de-regulation while not losing the subsidization benefits of regulation (a key factor in universal phone service)?

Producers and distributors

Who will provide which services-- telephone, cable television, or entertainment companies, non-profit organizations, government (and in what combinations with what consequences)?

Will we see a horizontal switched network like the telephone system, in which any user can communicate with any other, or a vertical network in which messages only go one-way, as with traditional television?

Is the Internet a realistic model, given its subsidization and technically relatively sophisticated users?

Will companies invest when the legal and regulatory climate is so muddled and uncertain?

What lessons can be learned from the history (and hype) that accompanied the appearance of the telephone, radio and television?

Uses

Will there be balance among the various potential uses such as interactive means of communication for far flung citizens, education, public service information and discussions, shopping, and entertainment, or will the commercially driven forms predominate?

What are the social and ethical implications of the general public favoring entertainment and shopping over the lofty cultural and educational goals of net visionaries? Does a democracy that is inseparable from the market have a built-in contradiction relative to the notion that professional élites have an obligation to tell the public what is best for them?

Legal issues and responsibilities

Will electronic communication come to be fully covered under the Fourth Amendment's protection against unreasonable government searches and seizures?

Will communications over a network be given the freedom of speech and assembly protections of the First Amendment? Should the providers of a service be entitled to censor messages and restrict participation?

For public policy purposes should e-mail be viewed as a post-card, a first class letter, or a telephone conversation, or something different from each of these?

Is a posting on a bulletin board best seen as a form of publishing or simply a conversation?

Should network system operators be responsible (both morally and legally) for what is sent out over their networks?

Will medical and other professionals feel at ease to disseminate their knowledge if they have to worry about liability?

Should electronic communications and software be entitled to the same copyright protections as a book? If so, will this inhibit use of the medium and be impossible to enforce? If such protection is not granted, does that lessen the incentive to innovate? How free should the recipient of electronic information be to change and retransmit it?

Can networks be secured against malicious hackers and criminals?

Can children be encouraged to use networks, yet be protected from exploitative communication?

Can systems be user friendly and inexpensive and yet secure?

Surveillance and privacy

There are tire marks all over the highway. Electronic trails create unprecedented possibilities for knowing where a person is, whom they are communicating with, what is being expressed, and what information they are using. Will this facilitate demands for a national ID card? Will it lead to even greater sale of personal information without the knowledge, consent or profit of the subject, and to new forms of inclusion and exclusion beyond the control of the individual?

Can systems be technically, legally and socially designed such that their advantages do not come at the cost of the destruction of personal privacy? Can the social functions of anonymous communication be balanced with the dysfunctions? How wiretap-friendly should the technology be? How can the authenticity of a sender and a document be ensured? Do we want the government to be the only locksmith in town holding the keys to data encryption? How should the protection of individual privacy be balanced against the broader needs of the society for protection?

Implications for social interaction and psychological well-being

Will the new "virtual" communities and interactions that occur in cyberspace mean greater equity in communication (e.g., race, gender, age and physical condition are not readily apparent on a computer screen), increased chances for social participation, and reduced social isolation?

Will such interactions be as satisfying as those in the world of face-to-face interaction? Or will social skills decline and interaction become more mechanical and emotionless as a result of being electronically mediated?

What are the reciprocal effects of machine and face-to-face interactions likely to be? Will the first lead to new forms of the second? Will traditional relationships be enhanced? Will face-to-face interactions become more social, expressive and playful because there will be less need to functionally exchange information that can now be obtained through the computer? Will there be a "freeing up" of time for pure sociability? Will computer communications become less functional and more socially expressive as human needs play a greater role in shaping the technology (e.g., the appearance of "smileys" [:-)])?

Can information overload be avoided given the potential number of communicators and amount of information? Is more always better? Will clear road maps, stop signs and rest stops be available to prevent individuals from getting lost, feeling invaded and overwhelmed and tuning out? Can unhealthy escapism and even addiction to the computer be avoided? Will individuals find it difficult to separate, or navigate between virtual and the other reality?

In permitting the formation of narrowly specialized groups, will there be increased fragmentation at the societal level? Will it encourage withdrawal and even secession from local affairs?

What neo-Luddite social backlashes might occur, such as a refusal to cooperate with the IRS or the Census because of fear that privacy is inadequately protected and that information linkages have gone too far, or attacks on computing centers via viruses or more traditional means?

What will be the consequences of the blurring of traditional boundaries (e.g., between work and home or weekdays and weekends) that the technology makes possible?

What happens to national borders when the state can no longer control the information flowing into and out of it (e.g., the centrality of the fax and e-mail to the 1989 protests in China)? Will the citizens of cyberspace form a new nation? Will a relatively homogeneous, commercial, Western-oriented culture overwhelm local, previously isolated cultures lacking the resources to communicate back?

Will rural-urban and low-high density population distinctions become less important for life chances as geography becomes less salient for interaction? Is there really a free lunch and are we as smart as we think we are?

References

Bennett, C. 1992 Regulating Privacy: Data Protection and Public Policy in Europe and the United States. (Ithaca: Cornell University Press).

Cohen, S. 1985 Visions of Social Control. (Cambridge: Polity Press).

Flaherty, D. 1989 Protecting Privacy in Surveillance Societies. (Chapel Hill: University of North Carolina Press).

Foucault, M. 1977 Discipline and Punish: The Birth of the Prison. (New York: Vintage).

Gandy, O. 1993 The Panoptic Sort: Towards a Political Economy of Information. (Boulder, CO: Westview Press).

Laudon, K. 1986 The Dossier Society: Value Choices in the Design of National Information Systems. (New York: Columbia University Press).

Lyon, D. 1994 The Electronic Eye. (Cambridge: Polity Press).

Lyon, D. and Zureik, E. (eds.) 1996 Computers, Surveillance and Privacy. (Minneapolis: University of Minnesota Press).

Marx, G. 1988 Undercover: Police Surveillance in American Society. (Berkeley: University of California Press).

Regan, P. 1995 Legislating Privacy: Technology, Social Values and Public Policy. (Chapel Hill: University of North Carolina Press).

Rule, J. 1973 Private Lives, Public Surveillance. (London: Allen Lane).
 
 
