Contents/Introduction
Part 1. Values and Value Judgments
Part 2. Ethical Requirements on Action
Part 3. Moral Character and Responsibility
Part 4. Privacy, Confidentiality, Intellectual Property and the Law
Fine Points
Notes
The first point to consider is the difference between being desirable or worthy in some respect, and simply being desired, liked or preferred by some person or group. This distinction is crucial to my later discussion of ethical judgments and standards for engineering practice. Consider the sentences below:
"I like fried peppers."
"I am unalterably opposed to having cats in the neighborhood."
These are statements of preference, that is of liking and disliking, rather than judgments about whether something is good or bad in some respect.
Unlike a value judgment, the statement of a preference, such as "I like fried peppers," is a statement about the speaker. More specifically, it is a statement about the speaker's feelings, views or attitudes toward the thing in question. Statements of preference are false only if they are not true of the speaker.
It is normal to feel some repugnance at wrongdoing, but the strength of one's feelings is not a reliable guide to the gravity of an offense. As people mature, they learn to distinguish between their feelings on a subject and their moral judgments. For example, someone may believe that, ethically speaking, shooting a person is much worse than shooting a dog. However, if someone recently shot and killed his dog, and he has never been personally acquainted with anyone who was shot to death, he might have a much stronger emotional reaction on hearing about the shooting of a pet than on hearing about the murder of a person.
Like this speaker, a person may know the origins of her preferences and attitudes and may give causal explanations in terms of psychological factors that have contributed to their development. For example:
"We always served fried peppers at celebrations when I was growing up."
"When I was a young child, my closest friend was attacked by a cat.
Alternatively she may analyze her preferences to identify more precisely what it is she likes or dislikes:
"I can't stand the sound of cats fighting."
Such a person may even give you reasons for thinking that what she prefers is desirable or at least desirable for her, such as:
"Cats carry disease."
"I am extremely allergic to cats."
However, the speaker need not give any reasons for a preference. For some matters, such as preferring one flavor of ice cream to another, people usually do not have reasons for their preference. When you state your preference, you are stating your own attitudes or feelings, not giving a reasoned judgment. A person may have a strong preference for something without believing it fulfills some high standards or brings about some good. He may not even know how he came to prefer what he does.
To claim that something is good or desirable is to make a statement about the thing itself, rather than about the person who likes it. As Aristotle argued, and many subsequent philosophers have agreed, to say that something is good or desirable is to say that it has qualities it is rational to want (in a thing of that sort). For example, a good knife is one with the properties it is rational to want in a tool with one blade used for cutting, such as being sharp, well-balanced and comfortable to grip. To claim that something has the qualities that it is rational to want in a thing of that sort is to claim that there are good reasons for wanting it.
Given the differences between value judgments and statements of preference, you can expect others to ask you to back up your judgments and your preferences in different ways. If you make a value judgment, others are likely to ask you for the reasons you judge it rational to want (or not to want) whatever is the object of your judgment. If, on the other hand, you merely state your preference, you need give no further reasons for your liking or disliking. You may or may not have reasons underlying your preference.
Statements--both ordinary statements and statements in specialized areas of study--along with hypotheses, research studies, theories and designs for experiments are also judged to be good or bad (or something in between) in terms of what are sometimes called knowledge values or epistemic values. These include truth, informativeness, precision, accuracy and significance. For example, research is regularly judged not only by whether it reveals a relationship that is unlikely to have occurred by chance--that is, it reaches "statistical significance"--but also by its larger implications, assuming its findings to be true. Research is also judged in terms of its fruitfulness, that is, of the further lines of inquiry the research questions or results suggest. Hypotheses are judged in terms of plausibility, the scope of the phenomena explained, and testability.
Plans and strategies are common objects of prudential judgment. When someone speaks of a good (prudent or effective) strategy or a bad (stupid, short-sighted) plan, they are making a prudential judgment about the efficacy of the plan or strategy in question, that is, whether it will achieve certain ends. Behind most prudential judgments are other value judgments that certain ends are worth achieving. A special case of an end that is generally assumed to be valuable and, therefore, something that should not be casually jeopardized, is survival, either biological survival or survival as a member of some group. A plan or idea is generally judged imprudent or stupid to the extent that it neglects matters crucial to the biological survival of those involved (or, by extension, to their continued membership in some cultural or organizational group, to their career or economic well-being). People do speak colloquially of "survival value." When two people disagree in their prudential judgments, they may be disagreeing about what is risked in some course of action or whether that thing should be put at risk. For example, in response to the warning "If you want to survive in this organization, you will not report corruption, so it would be stupid to stick your neck out," one might reason that one does not want to risk one's integrity by being part of a corrupt organization. If corruption does permeate the organization so that reporting wrongdoing will be punished, there might be more survival value in leaving than in staying. In this case, one puts the maintenance of one's integrity ahead of continuance with the organization.
The final type of value other than ethical value I will discuss is religious value. The terms of evaluation include "sacred" and "holy" as contrasted with "profane" and "mundane." Purely religious standards are often applied to people, writings, objects, times, places, liturgies, rituals, stories, doctrines, and practices. Religions that emphasize the importance of doctrine are called "doctrinal;" others emphasize liturgy, the order of worship. Some religions understand life in terms of sacred stories. Other religions emphasize non-liturgical practices, such as forms of yoga or meditation, either for spiritual training or for their own sake. Some emphasize care of less fortunate members. One emphasis may co-exist with others. (These differences are noted because a surprising number of philosophers write as though doctrine were the central element in religion. In fact, doctrinal issues are not the central concern of many religions.) Some emphases change over time. For example, in Judaism before the exile, a place--the Temple at Jerusalem--had central importance. After the exile, when people could no longer go to the Temple, scripture--the Torah--became central.
Most existing religions, and all major world religions, have moral, or ethical, standards which they apply to moral agents--to their character traits, motives or actions. Religions vary in their relative emphasis on the development of spiritual and moral virtues of individuals, on the realization of a particular kind of family or community, and on the faith of people as a whole and its practice in community. Confucianism puts great emphasis on the family, for example. Buddhism, in contrast, emphasizes enlightenment of the individual. Judaism emphasizes the relation of the whole people of Israel to God, and praiseworthy individuals are those who make the relation between God and community flourish. Because Christianity emphasizes individual salvation, it is generally regarded as more individualistic than Judaism, notwithstanding a continuing emphasis on the community of the faithful or "the Church." Islam emphasizes the duty to form an equitable society where the poor and vulnerable are treated decently. (These are rough generalizations about each of these major religions and do not take account of differences among their branches.) Although major religions do have ethical components, a religion need not have a morality associated with it other than enjoining piety toward divine beings or forces, especially if, according to that religion, divine beings or forces are amoral and unconcerned with the behavior of humans toward one another.
There is not space here to examine all the many views about how the different types of value relate to one another, but here are two examples: aesthetic criteria, such as beauty and symmetry, are commonly held to enter the assessment of scientific theories. Conversely, many argue that great art gives a profound insight into reality, which brings aesthetic value close to religious, or perhaps scientific, value. So, although I have distinguished various types of value here, it is an open question whether there are fundamental connections among them.
Notice that all of the types of value discussed differ from market value. When one assesses market value, one is not making a value judgment of what is good or bad in some respect. Rather, one is simply determining the price at which the supply of an item equals demand for it. For example, we all need breathable air for health and survival--health is commonly regarded as a fundamental good. Since there is no scarcity of air of breathable quality in most areas, no one needs to buy it. Therefore, breathable air has no market value.
Just as valuable items, like clean air, may have no market value, so high "market value" may attach to items that are not good by any reasonable standards. Market value depends on the relation of supply and demand. So it may depend on the strength of preference of those who have the means to pay for an item and on the willingness of those who have it, or can make it, to sell it. An addictive and physiologically destructive drug with analgesic or euphoric properties might have high market value. Such a drug would not be "good" even in the sense of having properties it would be rational to want in an analgesic or euphoric drug.
It is important to distinguish between ethical subjectivism and another view that also is called "relativism" or "cultural relativism." Cultural relativism with regard to ethics is the view that ethical judgments, rules and norms reflect the cultural context from which they are derived and cannot be immediately applied to all other cultural contexts. Cultural relativists put the burden of proof on those who think that they can generalize from one social context to another.
Many, like philosophers Alasdair MacIntyre and Annette Baier, who do not consider themselves relativists, argue that moralities are social products constructed by particular people in particular societal contexts and must be understood in relation to those societal contexts. For example, the Hippocratic oath specifies extensive duties towards those who have taught one medicine. In this oath physicians pledge to respect and care for their teachers as for their own parents. The societal context in which these duties of physicians were formulated was very different from what it is in industrialized nations today. It is implausible that the same duties should apply to physicians in all societies, but this does not mean that they did not have validity when the oath was first formulated. What makes the difference is not a person's opinion, but the social reality in which the person participates.
One must cope with the social reality to be effective in it, regardless of one's ethical assessment of it. Suppose that in one culture the person who is expected to educate a child in certain respects is that child's father. Suppose further that in a second culture the person who usually oversees that aspect of a child's education is the maternal uncle. These factors may make a significant difference to how someone acting within that culture might go about meeting a need of a child within that society. A teacher who saw a child was having difficulty in the area of learning that is overseen by the father in the first culture, and the uncle in the second culture, would be wise to try to consult the father, if the child was from the first culture, or the maternal uncle if the child was from the second. This has nothing to do with the teacher's moral belief about who, if anyone, in a child's family ought to take care of these things. The teacher may be from the first, or second or some third culture. Understanding how a culture influences opportunities for effective action is objectively important for understanding how to effectively carry out one's responsibilities.
Further, there is a more modest relativism that recognizes the development of specific ethical standards over time. To judge an action in some other period by today's standards is simplistic, but this does not mean that the action can be criticized only by the criteria commonly used in that period. For example, informed consent for medical experiments is a standard that has developed in the United States and other industrialized democracies only since the Second World War. The implicit prior standard was, "First do it [the experiment] on yourself." Someone who used the prior standard conscientiously in 1940 is not subject to the same moral criticism as would be a person who tried reverting to that standard today. However, informed consent is arguably a superior standard; we would think highly of someone who sought his subjects' informed consent for experiments in 1940.
The term "ought" is sometimes used to mean what is desirable or advisable, other things being equal. For example, "You ought to avoid bad company" means that other things being equal, you should avoid bad company, rather than that there are no circumstances in which you ought, ethically speaking, to be in the company of morally corrupt people. Sometimes it means what is advisable, all things considered--as in "in these circumstances what you ought to do is start over." Often, as in the above examples, when the general case is being discussed, "ought" without further specification is understood as "necessary, other things being equal." When a specific instance is being discussed, "ought" is understood as "ought, all things considered."
In this book, the term "ought" without further qualification should be understood according to the generality or specificity of the case under discussion. Therefore, if I am discussing a specific case, "ought" without further qualification means "ought, all things considered." If I am discussing a general type of situation, "ought" without further qualification means "ought, other things being equal." The qualifiers "all things considered" and "other things being equal" may be added in some contexts for clarity.
Moral and Amoral Agents
Acts, agents and the character or motives of agents are the objects of moral evaluation. However, only certain agents have their acts, character or motives morally evaluated. For example, the statement "the storm was responsible for three deaths and heavy property damage" means that the storm caused these outcomes. Although the storm was the agent of destruction, the actions of the storm are not subject to moral evaluation. The storm is not guilty of murder or even manslaughter. Those whose actions, character and motives can be morally evaluated are called moral agents. A competent and reasonably mature human being is the most familiar example of a moral agent. In contrast, most "lower" (that is, non-human) animals are generally understood to be amoral. To say they are amoral is to say that morality is not a factor in their own behavior and, therefore, that questions of morality are not appropriate in evaluating them and their acts.
To say that lower animals are not capable of acting morally or immorally is not to deny that there are moral constraints on the way moral agents should treat them. Moral constraints on the way lower animals are treated are a matter of the animals' moral standing, not their moral agency. I discuss moral standing in Part 2, in the section Moral Obligations, Moral Rules and Moral Standing. Any moral agent has moral standing, but the prevalent view is that some beings are not moral agents yet have moral standing. For example, it is generally held that it is wrong to be cruel to animals--even though they are incapable of moral action.
Although one can evaluate the behavior of amoral beings in other ways (as being stupid or clever, instinctual or learned, adaptive or maladaptive), their behavior is neither moral nor immoral, because those beings are incapable of considering moral questions. Some animals, like pet dogs, do sometimes sacrifice themselves for humans, but this behavior is (perhaps mistakenly) taken to be motivated by their attachment to their owners rather than by moral considerations. Recall the earlier discussion of the difference between emotions and ethical evaluation.
Human beings and human groups such as nations are the most familiar moral agents. However, some other species, like porpoises, are often alleged to qualify as moral agents, even though they are not human. The Planet of the Apes portrays apes as moral agents. Science fiction often describes non-human extra-terrestrials as persons and moral agents. Various religious traditions speak of beings, such as angels or dakinis, who seem very much like people and count as moral agents. Humans may be the most common example of moral agents, but they are not the only example.
Moral agents are not necessarily morally good individuals. They are those who can and should take account of ethical considerations. Moral agents are those of whom one may sensibly say that they are either moral or immoral, ethical or unethical in contrast to the amorality of most other beings.
Common terms of moral praise for agents include "good," "a person of high moral character," and "virtuous." Particular character traits praised as moral virtues include "kindness," "honesty," "courage," and "bravery." Acts are judged as right or wrong according to three criteria: the nature of the acts--e.g., "Murder is wrong;" the specific circumstances of a particular act--e.g., "Arthur's unprovoked assault on Cecil was wrong;" and the motives with which the agent committed the act--e.g., "Cedilla's criticism was destructive and motivated by hostility rather than a sincere attempt to improve performance."
When the consequences considered are ones that can be quantified, the formal technique of cost-benefit analysis may be helpful in clarifying the tradeoffs involved in exchanging one harm or benefit for another. (Of course, the same action may produce both harms and benefits. For example, some water purification measures currently in use result in the introduction of minute quantities of carcinogens into the water.) In cost-benefit analysis one compares different courses of action by multiplying the probability that a given course of action will produce some consequence by the magnitude of the harm (or benefit) of that consequence, and comparing this quantity to the quantity resulting from alternative actions. For example, one might compare business plans with respect to their success in acquiring a substantial share of the market for one's product.
Cost-benefit analysis is a technique for making tradeoffs among consequences (harms and benefits) that can be represented as arithmetic quantities, such as number of deaths, days of illness or monetary cost. The probability of some consequence is multiplied by a magnitude representing the degree of harm (or benefit) of that consequence. This quantity, the probability multiplied by the degree of harm, defines risk in the technical sense. This notion of risk is a bit different from the ordinary one.
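To make the arithmetic concrete, here is a minimal sketch in Python. The plan names, probabilities and dollar figures are hypothetical, invented only to illustrate the calculation just described; they are not drawn from any real analysis.

    # Risk in the technical sense: the probability of a consequence multiplied
    # by the magnitude of its harm (here, a monetary figure). All numbers are
    # hypothetical, chosen only to illustrate the arithmetic.

    def risk(probability, harm_magnitude):
        """Expected harm: probability of the consequence times its magnitude."""
        return probability * harm_magnitude

    # Two hypothetical courses of action, each with an estimated probability of
    # a harmful consequence and an estimated magnitude of that harm.
    plan_a = risk(0.01, 50_000)   # rare but costly consequence: expected harm 500
    plan_b = risk(0.20, 1_000)    # common but minor consequence: expected harm 200

    # A cost-benefit comparison of this kind favors the alternative with the
    # smaller expected harm, here plan_b.
    print(plan_a, plan_b)

The comparison is only as meaningful as the decision to measure both harms on the same scale, a limitation discussed below.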
"Risk" as the term is commonly used means a danger or hazard that arises unpredictably, such as being struck by a car or capsizing in a boat. The "unpredictable" element in the ordinary notion of risk links it to the notion of an accident. The term "risk" is also used for the likelihood of a particular hazard or accident, as when someone says, "You can reduce your risk of capsizing, by sailing only in light or moderate winds."
Risk analysis, risk assessment, and risk management use the technical sense of "risk," that is, the probability or likelihood of some resulting degree of harm. The harms (or benefits) often considered are increased (or decreased) probability of death (mortality risk) or monetary cost (or receipt). Engineering oversight for safety or environmental protection often employs the technical sense of risk in contexts that have nothing to do with accidents. Risk of illness and risk of death are often calculated for a person's chronic exposure to some pollutant, for example. In the technical sense one focuses on the resulting harm and not just the harmful event. So one would speak of the risk of death by drowning or exposure as a result of capsizing, rather than simply of the risk of capsizing. With this notion of risk one can compare, say, the average person's risk of death from crossing the street with the average person's risk of death from cancer in a given period, or the relative mortality risk of traveling a certain distance by automobile and traveling the same distance by airplane. One can also compare the risks associated with harms of different magnitudes. For example, consider two monetary risks: the rather likely event of losing money in a broken vending machine, and the rarer event of having one's money stolen in a holdup. In some locales there is a greater risk of monetary loss from malfunctioning vending machines than from being held up and robbed.
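As a worked illustration of comparing risks associated with harms of different magnitudes, here is the vending machine and holdup comparison in the same sketch form; the probabilities and dollar amounts are purely hypothetical and stand in for whatever local data an actual analysis would use.

    # Comparing a likely small monetary loss with a rare large one. Both risks
    # are expressed as expected monetary loss. All figures are hypothetical.
    p_vending, loss_vending = 0.05, 2.00      # losing $2 to a broken machine
    p_holdup, loss_holdup = 0.0001, 200.00    # losing $200 in a holdup

    risk_vending = p_vending * loss_vending   # 0.10 expected dollars lost
    risk_holdup = p_holdup * loss_holdup      # 0.02 expected dollars lost

    # On these invented numbers, the expected monetary loss from vending
    # machines exceeds that from holdups, even though each holdup costs more.
    print(risk_vending > risk_holdup)         # True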
Notice that use of the technical sense of risk requires that one be able to treat the resulting harms meaningfully as arithmetic quantities (i.e., not only quantify them so as to be able to rank order them, but assign the harms quantities that may be added and subtracted). At the least, this requirement usually results in considering one type of harm, a type that can be so represented, and ignoring others. Assessment of the magnitude of the harm of some event is notoriously difficult to do in a way that is not arbitrary, which is why the technique is most readily applicable only to certain kinds of harm. For example, in the case of monetary loss to vending machines as compared with monetary loss to robbers, we did not consider the greater emotional trauma associated with being held up. It would be a mistake to conclude that someone was behaving irrationally in taking more precautions against being held up than against using malfunctioning vending machines, even if the risk of monetary loss from the latter was greater.
Comparison of different sorts of harms or benefits is especially difficult, not only because in a pluralistic society there are different notions of the good life and what promotes or frustrates it, but because a specific harm will have different implications for people in different circumstances. Consider whether it is worse to increase one's chance of death by 25% or to be painfully disabled for ten years of one's life. When such choices are made in healthcare, we say that the individual patient has the right to make them. Similar decisions, such as decisions about what risks to tolerate in order to purify the public water supply, must be made for the general population. When using cost-benefit calculations in such cases, it is not possible to get meaningful assessments of degree of harm and benefit. Instead what is often used is a measure or estimate of the degree to which each consequence would be preferred by most people, converted to the dollar amount that people would be willing to pay to achieve the benefit or avoid the harm. However, as we have already seen, preferences are subjective, and strongly influenced not only by value commitments and practical circumstances, but by personal history. Therefore, a measure of degree of preference is not a measure of magnitude of harm or benefit. Furthermore, measuring harms and benefits in terms of willingness to pay ignores the fact that people vary in the value that money has for them (for example, because of differences in their ability to pay).
If the type of harm or benefit is held constant, and one simply compares different courses of action, all of which might lead to that outcome, it is not necessary to assign a magnitude to the harm; one can simply compare the actions in terms of their likelihood of producing that consequence. When the harm or benefit is held constant, it is common to speak of "risk-benefit" rather than cost-benefit analysis. (Assessments of those probabilities are often very difficult to make. The field of risk assessment has developed many sophisticated means for making these assessments, and such assessments often raise subtle value questions by the way they focus attention on some consequences rather than others.) However, risk-benefit calculations avoid the crude substitution of market preferences for judgments of harm and benefit.
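When the harm is held constant in this way, the comparison reduces to comparing probabilities, as in the sketch below. The option names and probabilities are hypothetical, introduced only to illustrate the form of the comparison.

    # Risk-benefit comparison with the type and magnitude of harm held constant
    # (say, one case of waterborne illness): no magnitude needs to be assigned,
    # only the probability of that harm under each option. The options and
    # probabilities are hypothetical.
    options = {
        "treatment_x": 0.004,    # estimated probability of the harm
        "treatment_y": 0.001,
        "no_treatment": 0.020,
    }

    # The option least likely to produce the harm is preferred on this measure.
    best = min(options, key=options.get)
    print(best)    # treatment_y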
There are, however, other morally significant matters that risk-benefit calculations may obscure. One such question is whether some measure not only reduces the net risk of harm or increases net benefits, but harms one group while benefiting another. When those who are harmed are different from those who are benefited, it is called "risk shifting." Even if the net risk is lessened, there is an ethically significant question of the fairness of the transfer of risk. The responsible use of cost-benefit and risk-benefit techniques requires a clear understanding of their ethical limitations.