Surveys and Surveillance
In F. Conrad and M. Schober (eds.), Envisioning the Survey Interview of the Future. John Wiley & Sons, 2008.

By Gary T. Marx

A survey is a form of surveillance. Survey and surveil sound alike and are synonyms. The former, however, does not usually conjure up Orwellian images. Rather, the survey in its best light has been seen as a component of democratic society in which citizens can voluntarily inform leaders of their attitudes and needs and help clarify public policy questions, all under presumably neutral, scientific conditions that can be trusted. Survey respondents are encouraged to participate in order to "make your voice heard." As one pollster observes, "polling is an integral part of the democratic process, it gives everybody a chance to have an equal voice" (Krehbel, 2006).

That lofty potential is present, but so is its opposite. There is risk of a kind of scientific colonialism in which various publics are expected to offer unlimited access to their personal data, but rather than serving the public interest, this can facilitate manipulation by elites pursuing selfish and/or undemocratic ends. We might ask the optimistic pollster of the paragraph above, "Does everyone also have an equal chance to determine what the questions are, who gets questioned and how the data will be used?" Consider the explicit merging of social control and social research in Henry Ford’s 1910 Sociology Department. The department supervised a large number of informers on the factory floor and also visited the homes of workers, carrying out interviews on finances, sexual behavior, and related personal matters to be sure workers behaved in ways considered appropriate by the company (Hooker, 1997). The varied relation between these potentials in different contexts, the kinds of personal information sought, and the conditions under which it is gathered raise significant questions. In this chapter I draw from a broader approach for analyzing surveillance to consider some social and ethical implications of survey research, and how they may evolve in the survey interview of the future.

This approach analyzes the structure of surveillance settings and the characteristics of the means used, the data collected, and the goals sought, along with a concern for ethics. This perspective is developed in Marx (forthcoming). Initial work on basic concepts and goals, means, types of data, and ethics is given in Marx 2005, 2004, 2006a, and 1998, respectively—available at

Distinct ethical questions may be raised about the data collection means and process (the topic most relevant to this book). This involves the inherent and/or applied characteristics of the means and the competence of its practitioners. In addition, ethical assessments need to consider the conditions of data analysis (including merging with other data apart from that directly collected) and storage and uses of the data. I note conditions under which surveillance is most likely to be questioned and the subject of controversy. I then ask how these relate to traditional and recent developments in the large scale survey.

Having benefited as a young scholar from the work of Paul Lazarsfeld, Sam Stouffer, and Hadley Cantril through direct contact with Charles Glock, Hanan Selvin, and S.M. Lipset, I am now pleased for the chance to revisit some of the social and ethical questions of concern to survey researchers in the 1960s and to consider new questions. 1

A concern of some social researchers, particularly in the later 1960s and early 1970s, was with the impact of survey research as part of a broader critique of social science (e.g., Gouldner, 1970; Colfax & Roach, 1971; Mills, 2000). Whom did the surveys serve? Who set the research agenda? Were potential users aware of the survey's sponsors and their reasons for seeking the information? What did the survey offer to those disadvantaged and beyond the mainstream who were disproportionately the subjects of research? What did surveys really tell us about social reality (Blumer, 1948)? Those issues have not gone away, but, as this volume makes clear, new issues are appearing.

Academic social researchers, like their brethren in the discovery business (market researchers, investigative reporters, police, private detectives, national security agents), are surely more spies than spied upon, even if for academics this usually occurs in benign contexts. We all seek to find things out about others that they may, or may not, wish to reveal and that may hurt, harm, help, or be irrelevant to them, whether as individuals or, more indirectly, as group members (Marx, 1984).

In their other roles, academics are of course also watched. 2 The researcher as agent of surveillance contrasts with the researcher as subject of surveillance as a citizen, consumer, communicator, and employee (note the citation database searches involved in promotion and tenure for faculty members at universities).

With respect to organization, problems seem more likely in the surveillance of an individual by an organization (and one that is minimally accountable) and when data collection is nonreciprocal and initiated by the surveillance agent. This is related to power and resource imbalances within stratified societies and organizations.

With respect to any technology, social and ethical concerns regarding surveillance are most likely to be raised by critics when the means are coercive, secret, involuntary, and passive and involve harm, risk, or other costs to the subject, whether during the data collection process or in subsequent uses, particularly when there is no formal juridical or policy review and when the playing field is inequitably tilted. Such review may apply to a general surveillance practice, as well as to a given application.

Controversy is also more likely when the organization’s goals are in conflict with the legitimate interests and goals of the subject. Consider, for example, selling lists of pharmaceutical drugs used to potential health insurers or employers.

Concerns about crossing informational borders are also more common when the subject’s specific identity is known and he/she is locatable and when the data collected are personal, private, intimate, sensitive, stigmatizing, strategically valuable, extensive, biological, naturalistic, predictive, attached to the person, reveal deception, and involve an enduring and unalterable documentary record and when the data are treated as the property of the collector and are unavailable to the subject, whether to review and question or to use.

When compared to many forms of government, marketing, and work surveillance, let alone that of freelance voyeurs, academic social research, including survey research in particular, is in general less problematic. At least that is the case with respect to the structure of the data collection context, the means used, and the goals. However, it is potentially not the case when considering the kind of sensitive data the survey may collect.

With respect to structure, academic survey research can be categorized as role relationship surveillance carried out by an organization with an individual subject. The individual is part of an external constituency. The action is surveillance-agent initiated. There is a nonreciprocal information flow from the subject to the agent (the respondent does not usually ask the interviewer about his or her income or birth control practices). 3 On these dynamics in the survey research context, see Schober, Conrad, Ehlen, & Fricker (2003).

These structural conditions of the survey can be associated with misuse. However, this is lessened in general because the goals of the agent are likely to be either neutral or supportive of the subject's goals, rather than in conflict with them. Of course, questions can be raised about the goals and about who determines and carries out the research agenda and has access to the results. Consider, for example, the aborted Project Camelot, a social science counterinsurgency effort (Horowitz, 1967). The data are also likely to become publicly available (both with respect to their content and for secondary use), rather than kept secret or considered the exclusive property of the surveyor. Ironically, this can undercut the privacy of the data, particularly when the bin containing selected subgroups is very small (e.g., in a known location, such as a small town, when sorting by variables that successively narrow the size of the pool—age, gender, education, and religion). We also need to ask, public to whom? Making the individual data available to the respondent differs from making it available to anyone who wants to do secondary analysis. There are irreconcilable conflicts and often competing goods between publicity and privacy. They can, however, be more sensitively managed when these concerns are acknowledged.

The major goal in academic research is the advancement of knowledge, whether for reasons of scholarship or of public policy and planning. There are ostensibly broad public interest goals (although that concept can be elusive): serving health, education, and welfare, supporting the economy, and ameliorating social problems (apart from whether or not these are met). These goals are unlikely to be in direct conflict with the legitimate personal goals or interests of the subject, in contrast to some similarly structured bank, insurance, or employment settings.

While the surveying organization has greater resources than the subject of the interview, and data flow in only one direction, this need not lead to abuse. The survey is rooted in an institutional context, which provides standards and reviews. Universities and government agencies have expectations and procedures regarding responsible research. IRBs can serve as a brake on unrestrained research (that they can also break desirable research is a different issue). Public and private funding sources exert control. Peer networks, the professions, and their associations, through socialization and codes of ethics, may also serve to limit research excesses.

The survey subject is obviously aware of the data collection and of the interviewer as surveillance agent (although not necessarily of the survey's full purposes or of what will count as data). The subject also consents. An ironic cost of this can be intruding into a person's solitude and taking his/her time away from other activities.

The respondent (Milgram effects to the contrary) is presumably free to refuse to be interviewed. According to a 2003 Pew Research Center survey, about seven out of ten telephone calls answered do not result in completed interviews (a significant increase over the seven years before). Contrast that with the difficulty of avoiding video, credit card, and Internet surveillance. Respondents' decisions to participate in surveys seem related to their perceptions of disclosure risk and risk of harm (Singer, 2003; see also Fuchs in this volume and the discussion of Bob Groves's work in Schober and Conrad's introduction to this volume).

Beyond opting out when contacted, the individual may refuse to answer a question or terminate the interview at any point. The interview setting also offers (or at least traditionally has offered) room for the subject to cover and protect information he/she would prefer to keep private. This may permit a dignity for the subject not found under coercive, secret, and involuntary conditions. The subject is free to decide just how much to reveal and how honest to be, and even to walk away or hang up. Survey data often involve sensitive material that could harm the individual. Such harm may be appropriate or inappropriate in some larger sense. For example, if survey information on crime is reported to law enforcement, the harm is not to justice but lies in the violation of the promise of confidentiality and data protection offered when the information was collected. The harm in data collection from being asked to recall and report sensitive information may differ from that accompanying the involuntary and coercive crossing of a physical border, as with taking blood, a urine sample, or a body cavity search. However, the chance of harm is lessened because the goal is aggregate data on populations, rather than data on the subject per se. As a result, the individual is unlikely to suffer direct adverse consequences. Certainly very personal data offer temptations for misuse (whether commercial or prurient) to those in the survey research process. Yet the deindividualization of the data works against this (National Research Council, 1993).

Confidentiality and anonymity (with respect to both name and location) are likely to be promised. Identity is deleted or hidden via masking. The divorce of subject ID from the data can overcome the problems created in other similar structural and data settings, where the subject is identified and can be located. Preliminary evidence suggests there is relatively little chance of intruders identifying respondents by linking individual characteristics reported in the survey to other data sources (Raghunathan, 2007). 4

In summary, many of the correlates of surveillance abuse noted earlier are irrelevant to the typical survey context or if they apply, do so in a nonproblematic way. Considering the broad array of contemporary surveillance forms, the traditional survey seems a rather mild invasion and intrusion, which is not unduly or unreasonably costly to the subject. This does not call for a lessening of vigilance.

The Interview of the Future: New Wine, New Bottles?

Yet what of new interviewing technologies involving paradata, as discussed by D'Mello, Person and Olney in chapter 10 and by Couper in chapter 3 of this volume? These techniques are examples of the new surveillance. Along with video cameras, facial recognition, RFID chips, and drug testing or DNA analysis based on a strand of hair, they are automated and can be involuntary, passive, invisible, and unavailable to the subject, while extending the senses of the surveiller. 5

It is easy to envision a sci-fi (or is it?) scenario involving duped or simply unaware respondents enveloped in a shield (or a vacuum?) of ambient intelligence. Consider a large interdisciplinary National Séance Foundation project done in the near future. This involves a collaborative relationship between police investigators and social scientists. The group had previously cooperated on improving methods of data collection in contexts of minimal or no cooperation, where there was reason to suspect at least some dishonesty, regardless of whether subjects were survey respondents or interrogated suspects. 6 The project involves five universities and is concerned with developing better tools to identify, in the hope of preventing, problems related to (1) drug use, (2) political extremism, (3) crime, and (4) sexual behavior.

Given the sensitive nature of the topics and the difficulty of obtaining adequate and valid data, researchers have sought unobtrusive methods (Hancock, 2004). Such methods can enhance the face-to-face interview by probing beneath the deceptive veneer of the seemingly "authentic intentionality" found when relying exclusively on the subject's words. The methodology draws from recent communications research on the ease of conversational deception and the limits of any survey that gives the respondent space for impression management. 7 A mostly noninvasive multimodal approach (NIMMA) exploiting all channels of communication is used. This includes the methods Person (pp. --) describes, such as PEP, fMRI, EEG, EKG, BPMS, and Wmatrix, and some additional means still under beta test, such as ZOWIE, WAMIE, and BMOC ©. Validity and completeness of response are greatly enhanced by access to unobtrusive measures and by comparisons to data found beyond the interview situation. This approach is also environmentally sound and efficient, as it doesn't waste data. It seeks to find new meaning by creating a mosaic of previously meaningless, unseen, unconnected, and unused data. The scientist who relies entirely on words is a rather unscientific, profligate, one-trick ancien pony who needs to get with the program.

The interview and related detection occur in one of ten ambient-intelligence, pastel living rooms matched to the social and psychological characteristics of the subject (e.g., age, education, gender, lifestyle). This is inspired by the clustering of respondents into types pioneered by marketing research. One size hardly fits all. The rooms can be internally rearranged to accommodate 68 distinct types of respondent.
Respondent characteristics are determined by a preliminary research encounter, which includes an electronically measured handwriting sample, a Google search, and a search of public and (under carefully controlled conditions) restricted databases regarding sensitive behavior. 8 Music the respondent is likely to find pleasing is softly played, and a subliminal voice repeats "be truthful." A barely noticeable scent of pine (believed to be calming) is delivered through the air duct system. The subject is told that, in order to accurately reflect his/her experiences and opinions, a variety of "state of the art" technologies are being used. However, this message is nonspecific: the subject is not told that the chair seat measures wiggling, body temperature, and electrodermal response, or that facial expression, eye patterns, voice microtremors, and brain wave emissions are recorded. Nor is there notice that all verbal utterances are recorded, as is the timing of responses. Some inferences are made about the respondent based on answers to questions unrelated to the topics of direct interest. The internal consistency of answers is analyzed, and answers are compared to data found by cross-checking a variety of public (because this is such an important government-sponsored study) and private databases.

The room is slightly warmer than most, and subjects are provided with a full complement of free beverages. They are encouraged to use the restroom during or at the conclusion of the two-hour interview. They are not told that a variety of automated biochemical urine and olfactory assays are performed, since such offerings by subjects are voluntary.

All of the data from the interview are available in real time, via a password-protected Web page, to professional observers at the cooperating research agencies. To reduce performance anxiety, respondents are not told about the remote observation. The interviewer wears a tiny earpiece device that permits feedback and suggestions from the remote observers. A case agent in another room monitors all data flows and quietly informs the interviewer if the respondent is deceptive, frustrated, stressed, or fatigued, or doesn't understand the question. The agent is a kind of coach quietly offering information to the interviewer.

Respondents are of course promised confidentiality and told that only those who need to know their identity and their data will have them. However, they are given the chance to waive this protection (and a large percentage do, particularly those of lower status backgrounds), should they wish to benefit from a generous frequent-shopper offer and the chance to win fabulous prizes. Funds for this are provided by leading marketing researchers and helping agencies who are eager to identify customers and clients in need of their goods and services, and the sponsors don't mind the educational-purpose tax benefit they receive.


How should this scenario be viewed? Is it merely satire? How far are we from such a situation? In their introduction to this volume, editors Frederick Conrad and Michael Schober note the importance of anticipating the issues raised by new methods. What questions should be asked about any surveillance? What questions seem specific to these new forms of data collection?

The implications of the new methods for sociology of knowledge and truth issues are fascinating. We know that variation may appear with differences in question wording, order, and the match between the respondent and the interviewer with respect to factors such as gender and ethnicity, as well as between face-to-face and computer-administered surveys. But some of the new methods raise additional issues (Hancock, 2004; Holbrook et al., 2003; Schaeffer, 2000; Schober et al., 2003; Tourangeau & Smith, 1998).

In the traditional interview, the respondent has volitional space to appear knowledgeable and as a good citizen, whether warranted or not. It is not only that respondents may indeed have things to hide, but that they may be motivated to help or please the interviewer and hence not speak candidly. "Help" by giving untruthful answers that shorten the interview by leading to shorter pathways through the questionnaire (Hughes et al 2002) and "please" by providing desirable answers that conform to what is socially desirable rather than what is necessarily truthful.

Traditionally, we accepted this as an inherent limitation of the method and as the cost of gaining cooperation, even as we sought to compensate for it by internal and external validity checks. But what happens when we have new techniques that may go further in identifying lying and in suggesting likely answers (or at least associated emotional responses) when no, or minimal, information is directly and knowingly offered? One form involves powerful statistical models, as with election and risk forecasting, in which only modest amounts of information are required for very accurate aggregate predictions. 9 Some of the techniques may yield better data by going beyond the deceptive, invalid, or incomplete information that may be found in the traditional interview. What if ways of characterizing and understanding respondents were possible that went far beyond the substantive content of an answer to a question, in settings where respondents thought they were merely offering information on their attitudes and experiences? One wonders what Herbert Blumer (1969) and Erving Goffman (1959) might say about such research tools, given their emphasis on getting genuinely empirical and going beyond the socially prescribed self-presentations that may be gathered/induced by the formal interview.

What is gained and what is lost when respondents are not as able to manage their impressions, as they could before these technical developments in communication and interrogation? 10

How does the interview situation, both empirically (regarding possible effects on results) and morally (regarding appropriateness), change when there are hidden, or at least unseen, observers; when meaningless data are converted to information; when unwitting subject offerings are taken as cues to internal states and truthfulness; and when interview data are matched to other sources of data on the subject available beyond the interview setting?

How should enhanced data quality be weighed against manipulating "subjects" in order to maximize what can be learned about them, when they may be unwilling, or unable, to tell us what we desire to know? How should the gains from overcoming limitations on data quality be balanced against the harm to a respondent's self-image in being shown to be a liar, ignorant, or a bad citizen? That, of course, assumes respondents would be informed of the power of the technique to reach judgments about consistency, accuracy, and honesty and would have access to their own data and any subsequent additions, coding, and simulations. What would meaningful informed consent look like in such settings? Is it even possible to have this without boring people and significantly increasing refusal and incompletion rates? How can all possible uses/users of the data ever be known? Would offering informed consent significantly increase refusal rates among subgroups who may be central to the research (Singer, 2003)? For the broad public, would knowledge of the increased power of surveys spill over into even higher rates of refusal? Will future research be hurt if potential respondents come not to trust interview situations, after the inevitable appearance of newsworthy exposés? Should there be policy limits on the extent to which people may be seduced by rich rewards into selling their most intimate and sensitive information?

Clearly, in taking away some of the respondent’s ability to offer a given presentation of self, something is lost. Yet much might be gained as well. The moral haze and the trade-offs here, as with other new surveillance techniques, are what makes the general topic so interesting and challenging.

More Questions

Broad ethical theories involving teleology, deontology, and casuistry are available, as are the ethical codes of the social science professions and principles such as those in the Belmont Report (National Commission, 1979). However, these are all rather abstract, and in practice, with the appearance of new means, it is often not at all clear when they apply. An alternative approach, while encouraging the researcher to be mindful of the above, asks for self-reflection (or, better, group reflection among the researchers and sponsors) with respect to specific questions about the data collection, analysis, and use contexts.

Table 1 lists questions that can be asked of surveillance generally. I have put an asterisk by those most relevant to the new survey means. I will argue that the more these can be answered in a way that affirms the underlying ethical principle, the more appropriate the means may be.

TABLE 1: Questions for Judging Surveillance

  1. *Goals: Have the goals been clearly stated, justified, and prioritized?
  2. *Accountable, public, and participatory policy development: Has the decision to apply the technique been developed through an open process and, if appropriate, with participation of those to be surveilled? This involves a transparency principle.
  3. Law and ethics: Are the means and ends not only legal but also ethical?
  4. Opening doors: Has adequate thought been given to precedent-creation and long-term consequences? 11
  5. *Golden rule: Would the watcher be comfortable being the subject rather than the agent of surveillance if the situation were reversed? Is reciprocity/equivalence possible and appropriate?
  6. *Informed consent: Are participants fully apprised of the system's presence and the conditions under which it operates? Is consent genuine (i.e., beyond deception or unreasonable seduction), and can "participation" be refused without dire consequences for the person?
  7. *Truth in use: Where personal and private information is involved, does a principle of "unitary usage" apply, in which information collected for one purpose is not used for another? Are the announced goals the real goals?
  8. *Means-ends relationships: Are the means clearly related to the ends sought and proportional in costs and benefits to the goals?
  9. Can science save us: Can a strong empirical and logical case be made that a means will in fact have the broad positive consequences its advocates claim? (This is the "does it really work?" question.)
  10. *Competent application: Even if in theory it works, does the system (or operative) using it apply it as intended?
  11. Human review: Are automated results with significant implications for life chances subject to human review before action is taken?
  12. *Minimization: If risks and harm are associated with the tactic, is it applied so as to minimize intrusiveness and invasiveness?
  13. *Alternatives: Are alternative solutions available that would meet the same ends with lesser costs and greater benefits (using a variety of measures, not just financial)?
  14. Inaction as action: Has consideration been given to the "sometimes it is better to do nothing" principle?
  15. *Periodic review: Are there regular efforts to test the system's vulnerability, effectiveness, and fairness and to review policies?
  16. *Discovery and rectification of mistakes, errors, and abuses: Are there clear means for identifying and fixing these (and, in the case of abuse, applying sanctions)?
  17. *Right of inspection: Can individuals see and challenge their own records?
  18. Reversibility: If evidence suggests that the costs outweigh the benefits, how easily can the surveillance be stopped (e.g., considering the extent of capital expenditures and available alternatives)?
  19. Unintended consequences: Has adequate consideration been given to undesirable consequences, including possible harm to watchers, the watched, and third parties? Can harm be easily discovered and compensated for?
  20. Data protection and security: Can surveillants protect the information they collect? Do they follow standard data protection and information rights, as expressed in the Code of Fair Information Practices and the expanded European Data Protection Directive? 12 (U.S. HEW, 1973; EU, 1995; Bennett & Raab, 2006)

References


Bennett, C. & Raab, C. (2006). The governance of privacy. Cambridge: MIT Press.

Blumer, H. (1948). "Public opinion and public opinion polling." American Sociological Review, 13, 542-549.

Blumer, H. (1969). Symbolic interaction: Perspective and method. Englewood Cliffs, N.J. :Prentice Hall.

Colfax, J. D. & Roach, J. D. (1971). Radical sociology. New York: Basic Books.

European Union. (1995). Directive 95/46/EC of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (OJ No. L281). Brussels.

Gilliom, J. (2001). Overseers of the poor. Chicago: University of Chicago Press.

Goffman, E. (1959). The presentation of self in everyday life. New York: Doubleday.

Gouldner, A. (1970). The coming crisis in Western sociology. New York: Basic Books.

Habermas, J. (1986). The theory of communicative action. Boston: Beacon Press.

Hancock, J. (2004). [need to get full citation]

Holbrook, A. L., Green, M. C., & Krosnick, J. A. (2003). "Telephone versus face-to-face interviewing of national probability samples with long questionnaires: Comparisons of respondent satisficing and social desirability response bias." Public Opinion Quarterly, 67, 79-125.

Hooker, C. (1997). "Ford's sociology department and the Americanization campaign and the manufacture of popular culture among assembly line workers c. 1910—1917" The Journal of American Culture, 20(1), 47–53.

Horowitz, I. L. (1967). The rise and fall of Project Camelot. Cambridge, MA: M.I.T. Press.

Hughes, Chromy, Giacoletti, & Odom. (2002). In Gfroerer et al. (Eds.). [need to get full citation]

Keen, M. (1999). Stalking the sociological imagination. Westport, CT: Greenwood Press.

Krehbel, R. (2006, July 9). "Poll takers count on public for answers." Tulsa World.

Lipset, S. M. (1964). "Coughlinites, McCarthyites and Birchers: Radical rightists of three Generations." In D. Bell (Ed.), The radical right. New York: Doubleday.

Leo, R. (forthcoming) Police Interrogation and American Justice. Cambridge, Ma.: Harvard University Press.

Marx, G. T. (1962). "The social basis of the support of a Depression Era extremist: Father Coughlin, Monograph No. 7." Berkeley: Univ. of California Survey Research Center.

Mills, C. W. (2000). The sociological imagination. New York: Oxford.

National Commission. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. Washington DC: HEW.

National Research Council. (1993). Private lives and public policies. Washington DC: National Academy Press.

Price D. (2004). Threatening anthropology: McCarthyism and the FBI’s surveillance of activist anthropologists. Durham: Duke University Press.

Schaeffer, N. C. (2000). "Asking questions about threatening topics: A selective overview." In A. A. Stone, et al. The science of self-report: Implications for research and practice (pp. 105-122). Mahwah, NJ: Lawrence Erlbaum.

Schober, M. F., Conrad, F. G., Ehlen, P., & Fricker, S. S. (2003). "How Web surveys differ from other kinds of user interfaces."

Singer, 2003. [need to get full citation]

Tourangeau, R., & Smith, T. (1998). "Collecting sensitive information with different modes of data collection." In M. P. Couper, R. P. Baker, J. Bethlehem, C. Z. F. Clark, J. Martin, W. L. Nicholls, II, & J. M. O'Reilly (Eds.), Computer assisted survey information collection. New York: John Wiley & Sons, Inc.

United States Department of Health, Education, and Welfare (HEW). (1973). Records, computers, and the rights of citizens: Report of the Secretary's Advisory Committee on Automated Personal Data Systems. Washington, DC: HEW.



  1. My master’s thesis, using American Institute of Public Opinion data on the 1930s radio priest Father Coughlin, was part of a broader project seeking to understand correlates (and, more optimistically, causes) of presumed threats to democracy (Lipset 1964; Marx 1962, 2007b). Protest and Prejudice (Marx 1967) sought to understand sources of black support for the civil rights movement. Both inquiries were in a positivist social engineering vein in which it was rather naively believed that survey analysis would yield instrumental results for specific goals (combating extremism and furthering responsible civil rights militancy). Marx (1972, 1984) suggests a more tempered view, stressing that surveys and related research are more likely to affect climates of opinion than to offer clear directives for intervention.

  2. Consider the FBI’s interest in sociologists and anthropologists during the Cold War period (Keen 1999; Price 2004) and other encounters sometimes triggered by the researcher’s proximity to dirty data topics involving crime and deviance. Psychologists were also no doubt of interest. The ironies can be striking. In an epilogue to his study of government welfare surveillance, Gilliom (2001) reports on his serendipitous encounter with some involuntary participant observation data, as he became the subject of unwarranted state surveillance.

  3. These distinctions are developed in Marx (2007a). For example, role relationship surveillance involves normatively patterned interactions, whether in a family or a formal organizational setting. This contrasts with the nonrole relationship surveillance found with the voyeur, whose watching is unconnected to a legitimate social role.

  4. Discomforts in the collection of sensitive information, such as embarrassment and painful recall, may remain. New data collection techniques involving interaction with a computer may eliminate some of this (see Couper, Chapter 3 in this volume).

  5. Marx (2006b) considers a number of soft forms of surveillance.

  6. See Leo (forthcoming) for a highly informative account of other mergers of social science and law enforcement in a context of seeking withheld information.

  7. In their zeal to find the real deal, the researchers of course fail to note that deceptive data are nonetheless data worthy of analysis, apart from their empirical veracity.

  8. For example, lists of magazines subscribed to, charities contributed to, Web usage, and criminal justice records, but not legally protected information such as medical records.

  9. This relates to a broader conflict between the economic rationality and efficiency associated with aggregate data on the one hand and expectations of individualized fair and accurate treatment on the other. From the aggregate standpoint, the occasional errors that come from judging a given case relative to a model based on a large N (about which much more is known) are seen to balance each other out.

  10. In the best of all possible worlds, communication as suggested by Habermas (1986) has a reciprocal, rather than a one-way, quality. The scenario described (and indeed the framing of this book’s concerns) fits within a communications perspective, but it is communication from the subject. To be sure, via targeted messages and propaganda, the results may then lead to communication back to the subject based on the interview results, but this is asymmetrical compared to the give and take of ordinary exchanges.

  11. The expansion of a technology introduced in a limited fashion can often be seen. Extensions beyond the initial use, whether reflecting surveillance creep or, in many cases, surveillance gallup, are common. Awareness of this tendency requires asking of any new tactic, however benign its presented means and ends, “Where might it lead?” Consider examples of surveillance expansion: the Social Security number, which Congress intended only for tax purposes, has become a de facto national ID number; and drug testing, once restricted to those working in nuclear-power facilities and the military, is now required of bank tellers and, in some places, even junior high school students.

    Once a surveillance system is established it may be extended to new subjects, uses, and users. Many factors contribute to this. Economies of scale are created that reduce the per-unit cost of such extensions. Experience is gained with the technique and precedent is established. What was initially seen as a shocking intrusion may come to be seen as business as usual and extensions may be taken for granted.

  12. These offer standards for the data gatherers with respect to the collection, retention, and use of personal information.

