6.031 — Software Construction
Fall 2019

Reading 29: Ethical Software Engineering

Software in 6.031

  • Safe from bugs. Correct today and correct in the unknown future.
  • Easy to understand. Communicating clearly with future programmers, including future you.
  • Ready for change. Designed to accommodate change without rewriting.


After this reading, you should be able to:

  • Explain the ethical principles to consider when you design, build, and maintain software.
  • Apply four different moral lenses as you examine the ethical ramifications of a particular system.


The three technical objectives for software in 6.031 have appeared at the top of every reading and in the discussions during every class this semester. There are many other properties of good software that we have not discussed: performance, security, usability, …. Each may be as critical to the success of a software project as correctness. The topic of this reading is another such property: ethicality.

Your practice of software construction, whether as a professional software engineer or in one of the innumerable other positions where you might write software, does not take place in a vacuum:

  1. You work with other people
  2. on software designed by other people
  3. that is used by other people
  4. whose use of your software affects other people.

Writing software that is correct, clear, and changeable is your professional obligation to (1) the others you work with and (2) the organization you work in. Correctness is also an obligation to (3) the users of your software, who expect it to function as described. Expanding outward from those users, we encounter both (4) the other people who are affected by your software, even if they don’t use it themselves; and the wider world in which all those people live.

Social networking systems provide one very visible example of these two expanding circles of influence. Even someone who has managed, miraculously, never to divulge a single piece of information to a social networking site will still be deeply affected by their friends’, family’s, community’s, and society’s use of social networking.

And the impact of a social networking system on any individual is not merely a function of the system and that person, but also how the person and system interact with community, society, etc. That’s true of any software system: a major challenge of building and maintaining ethical systems is to anticipate and design for emergent properties that arise when the system is used by real people in a complex world.

We’ll start, however, with some ethical principles around writing code, before we go on to consider larger questions.

Starting small, one ethical issue you’re familiar with is plagiarism — we asked about it on Problem Set 0. The Academic Integrity at MIT handbook says: “honesty is the foundation of good academic work.” The 6.031 collaboration policy and the handbook section on Writing Code describe how plagiarism violates that obligation to honesty. Any time we use code written by others, we must attribute it to the authors and give them credit for their work.

The 6.031 collaboration policy also describes when and how sharing your work from the class is and is not allowed. Software engineering has a strong history and culture of sharing. The free software movement considers unethical any software that users are not free to run however they want, modify to suit their needs, and redistribute in both original and modified form. The open source movement advocates for sharing source code as a practical advantage over keeping it proprietary. Many pieces of software that provide the foundation for modern computing are free software (meaning freedom-preserving, not simply $0). The culture of sharing is also visible in places like Stack Overflow, where programmers have collaboratively created an enormous wealth of explanations and examples from which we all benefit.

The authors of free and open-source software grant others the rights to do things like modify or redistribute their source code by releasing it under a license that permits those uses, perhaps with certain obligations. Any time you use code written by others, you must have a license for that use. Copying and using source code to which you do not have a license is illegal and unethical, as it violates the authors’ copyright — our understanding that authors of a creative work can control, at least for some time, how their work is used. (If the authors waive their copyright and place the code in the public domain, that’s an exception: no license is required for any use.)
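As a concrete illustration, reused code is usually accompanied by a comment attributing the original authors and naming the license that permits the reuse. The sketch below shows one common shape for such an attribution; the author name, URL, and license named here are hypothetical placeholders, not a real source.

```python
# Sketch: attributing reused code. The author, URL, and license below are
# hypothetical placeholders -- in real code, cite the actual source you
# adapted from and the license under which it was released.

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high].

    Adapted from an example by A. Author,
    https://example.com/snippets/clamp (MIT License).
    The MIT License permits reuse and modification provided the
    copyright notice and license text are preserved.
    """
    return max(low, min(value, high))

print(clamp(15, 0, 10))  # → 10
print(clamp(-3, 0, 10))  # → 0
```

The attribution serves both ethical obligations at once: honesty about authorship, and documentation that the use is actually licensed.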

A code of ethics

As our next step for understanding ethical software engineering, we will use the Association for Computing Machinery (ACM) Code of Ethics and Professional Conduct. The ACM is the leading international professional organization for computer scientists. (Among other things, the ACM gives out the Turing Award, the CS equivalent of a Nobel Prize.)

General ethical principles


ACM Code Section 1, General Ethical Principles

This section describes obligations to be honest (1.3) and to respect authorship and copyright (1.5), similar to the discussion above. It also describes an obligation around confidentiality (1.7), especially to an employer. And it introduces several new dimensions to consider:

  • You have an obligation to human well-being (1.1) and to prevent harm (1.2).
  • You have an obligation to be fair and inclusive (1.4).
  • And you have an obligation to build systems that respect people’s privacy (1.6).

reading exercises


Write a subsection number 1 through 7…

Which of those principles (1–7) do you personally feel most confident about applying?

And which (1–7) do you personally feel least confident about?


Ethical structures, not just ethical individuals

Section 2, Professional Responsibilities, provides a list of obligations you could consider not only as an individual, but also when you join an organization. For example, you should “strive to achieve high quality in both the processes and products” of your work (2.1) and “maintain high standards of professional competence, conduct, and ethical practice” (2.2). But these are not goals you can achieve alone.

When a large organization builds a system that is detrimental to human well-being — it causes harm, it discriminates, it increases inequality, it violates privacy — it is tempting to look for a single bad actor who was the evil mastermind behind a nefarious plot to hurt people. If that’s the case, that bad actor probably is an actor, because you’re probably watching a movie where the storytelling is driven by individual villains and heroes.

Much more likely is that the large organization builds a harmful system because it doesn’t have a structure in place to ensure that the collective effort of individual participants has an ethical result. Worse, because we live in a world and society where so many existing structures are ethically unsound — they discriminate, they perpetuate inequality, they deny autonomy, etc. — an organization that pays no particular attention to acting ethically will end up reproducing in its work the ethical failures of the society around it. (See, e.g., Prof. Susan Silbey on How Not to Teach Ethics.)

When you interview at a company, you might ask: “how do you do code review?” With that question, you’re probing one part of how that company creates a high-quality software process (2.1), shares technical knowledge (2.2), and provides professional feedback (2.4). You should also ask, for example:

  • “how do you review new features for potential abuse?”
  • or “which different people or groups do you discuss when you design a new product?”
  • or “was there a time you changed a planned feature for ethical reasons — how did you decide to do that?”

Ask how the company understands and reviews the impact of its projects on people, because merely hiring good employees and setting out on a good path is not enough. If the company hasn’t designed its own structure so that ethical questions are asked and answered, and the objectives of whole teams and projects are updated in response, you should be as skeptical as you would be if your interviewer said: “we don’t need code review, we trust every engineer to write great code on their own.”

Social network data

Facebook’s API for third-party developers allowed an app to harvest and misuse the personal information of all the friends of the app’s users. No single engineer designed or built or deployed that API. If an engineer raised ethical questions about abuse of the data, was Facebook as an organization designed to amplify and examine those questions, or silence them?

Social network misinformation

In response to anti-vaccination misinformation, Pinterest decided that they could not rely on an algorithmic approach to promote truthful posts and demote lies. Instead, they stopped returning search results for hundreds of health-related keywords, showing info from hand-picked public health sources instead. How many employees contributed to deciding on and then implementing this policy?

With great power…

Finally, Section 3, Professional Leadership Responsibilities, describes ways that leaders should build ethical processes into their organization.

In explaining the general principle to avoid harm (1.2), the Code references an item from section 3: “recognize and take special care of systems that become integrated into the infrastructure of society” (3.7). Wouldn’t we all love to build a system so wonderful and widely used that it becomes a part of everyone’s everyday lives? Perhaps you can think of some such systems (social networks, again, are the ready examples). The builders of many of today’s widely-used software systems absolutely did not recognize the need for special care. As the rate of adoption of their products went up, so too did their ethical responsibilities. They failed to anticipate and design for the societal implications of their systems. The next section will offer a procedure that you and your world-changing startup can use to do better.

Autonomous vehicles

It’s entertaining to ponder how self-driving cars should navigate impossible ethical tradeoffs in trolley-problem scenarios. But as of this writing, the only pedestrian killed by a self-driving car died even though the software eventually recognized that emergency braking was required: the system wasn’t programmed to actually apply the brakes immediately. Did Uber consider the ethics of that design tradeoff? Have they considered how widespread self-driving cars might affect society, and what their own responsibilities are?

Law enforcement

Suppose police use facial recognition just to track passengers through international airports. How do the ethical questions change if they track everyone within 1 mile of the airport? Or 10 miles, or 100?

Suppose police wear body cameras to increase accountability. How do the ethical questions change if the cameras are on 1%, 10%, or 100% of the time? Or as the recordings are stored longer, or made available to more people? Or fed to that facial recognition system?

Moral lenses

“Professional competence starts with technical knowledge and with awareness of the social context in which their work may be deployed. Professional competence also requires skill in communication, in reflective analysis, and in recognizing and navigating ethical challenges.” (ACM Code 2.2)

How can we begin to do that recognizing and navigating?


Moral Lenses (MIT certificate required)

This reading describes a process for ethical evaluation. It applies four moral lenses, different viewpoints on a project’s positive and negative impacts:

  • the project’s costs and benefits
  • the way they are achieved
  • patterns of good and bad outcomes and processes
  • considering the project as a person

The practice exercises at the end are for you to consider on your own.

reading exercises


If your software records user activity and sells the data to third parties without users’ knowledge, which moral lens best captures the way in which this is bad for users?



Suppose you work on a cell phone that unlocks by recognizing users’ faces. The accuracy of your facial recognition is significantly worse for users with darker skin. Which moral lens captures the way in which this is bad?



Compared to an interface with physical buttons, a touchscreen might be much less usable by someone who cannot see the screen.

You’re designing a touchscreen cell phone. Which moral lens leads you to develop features like iOS VoiceOver or Android TalkBack?


You’re implementing a touchscreen car center console. Which moral lens captures concerns about drivers diverting their attention to look at the screen?


You’re building a touchscreen restaurant menu tablet. User testing shows that people don’t understand what the tablet is if the screen turns off, so you keep the screen on all the time… showing flashy video ads. Which moral lens definitely reveals a problem?


Code it

Alice implements a method with unreadable code and an incomplete spec, and her teammate Bob, who is about to write code that calls the method, misunderstands what it does. Which moral lens captures the way in which Alice’s actions were bad?


Commit it

Charlie implements a method with perfectly working code, doesn’t test it at all, and commits it to the team repository. Which moral lens captures the way in which Charlie has done something wrong?


Software beyond 6.031

Correct, clear, changeable software is a starting point. Without correctness, virtually no other desirable property is achievable, because we can’t rely on the code to do what we intended. And without clear, changeable code, multiple authors will be unable to collaborate, and even a single author will eventually lose the battle with complexity and be unable to build further.

Although we don’t address them in 6.031, properties like performance, security, and usability cannot be bolted on later: they must be baked into the system as part of its iterative design process.

Ethicality is no different. Wherever you go on to build software, consider and test its ethical properties just as you would any others. Do not rely on yourself or anyone else to simply “get it right.” We rely on testing to tell us whether our implementation successfully satisfies a spec, and when we put on our testing hat, we work as hard as we can to break our own code. With your ethical testing hat, consider as pessimistically as possible all the ways that what you build might be misused and abused, using the moral lenses from this reading. Fixing these bugs might be tricky, because they involve so much more than just the system and its source code. Luckily, you have plenty of experience fixing tricky bugs.
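The testing-hat mindset above can be made concrete with a small sketch. The function and inputs below are hypothetical, but they show the habit: with the testing hat on, we feed our code the inputs most likely to break it — boundaries, empty input, hostile input — not just the cases we expect.

```python
# Hypothetical example of the testing-hat mindset: choose inputs designed
# to break the code, not just the happy path.

def display_name(raw):
    """Return a trimmed display name, or "anonymous" if nothing usable remains."""
    name = raw.strip()
    return name if name else "anonymous"

# Happy-path test:
assert display_name("Alice") == "Alice"
# Adversarial tests: empty and whitespace-only input.
assert display_name("") == "anonymous"
assert display_name("   ") == "anonymous"
```

The ethical testing hat applies the same pessimism one level up: instead of asking “what input breaks this function?”, ask “what use of this system hurts someone?”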