MIT Faculty Newsletter  
Vol. XXXI No. 2
November / December 2018

Ethics and Liberal Arts in the
Schwarzman College of Computing

Bernhardt L. Trout

The whole campus is excited about the announcement of the new Stephen A. Schwarzman College of Computing. Everyone is trying to figure out how best to participate, and no doubt the administration is trying to process an overwhelming barrage of ideas, suggestions, critiques, and just plain lobbying. A billion-dollar academic initiative is indeed bold, and the administration should be applauded for developing it and for having already raised a good part of the funds. Of course, this scale of investment in Artificial Intelligence (AI) pales in comparison to that of Silicon Valley, not to mention the initiatives of companies worldwide across just about every sector. In my area, pharmaceuticals, companies have been implementing AI methods for years, and our lab has played a small part in AI.

Nevertheless, despite the massive effort in AI outside of MIT, given the Institute’s technical expertise and creativity, we will no doubt figure out how to have a major technical impact. However, our senior leaders have correctly stated that the technical aspects are only a part of the initiative for AI. The ethical and societal implications are intimately linked with the technical development of AI and need to be addressed hand in hand with it. Below is a vision of how the ethical and societal implications of AI could be addressed so that we are able to achieve the potential of the Schwarzman College envisioned by Mr. Schwarzman and MIT’s leaders.

As President Reif writes in his October 15 announcement, “In this pivotal AI moment, society has never needed the liberal arts – the path to wise, responsible citizenship – more than it does now. It is time to educate a new generation of technologists in the public interest.” In the MIT News article released that day and the accompanying Q&A, “ethics” and its derivatives are used 25 times. Key statements include: “The College will . . . transform education and research in public policy and ethical considerations relevant to computing and AI.” “The College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice versa, as well as to think critically about the human impact of their work.” President Reif is also quoted: “‘We must make sure that the leaders we graduate offer the world not only technological wizardry but also human wisdom – the cultural, ethical, and historical consciousness to use technology for the common good.’” Stephen Schwarzman himself makes it clear that ethics and society are of major concern: “‘The College’s attention to ethics matters enormously to me, because we will never realize the full potential of these advancements unless they are guided by a shared understanding of their moral implications for society.’” In his announcement of the same day, Provost Schmidt writes, “A central idea behind the College is that a new, shared structure can help deliver the power of computing, and especially AI, to all disciplines at MIT, lead to the development of new disciplines, and provide every discipline with an active channel to help shape the work of computing itself.”

The administration’s focus on intertwining ethics and policy with technical advances is laudable. In thinking about how to make this vision a reality, we might wish to consider the following questions: Are disciplines the right way to think about ethics and human wisdom? Can ethics be advanced or transformed? How can the liberal arts best be the path to wise, responsible citizenship? Let’s take these questions in turn.

Understanding the ethical and societal implications of AI necessitates an understanding of both the artificial and intelligence. Understanding the artificial necessitates understanding together the artificial and the natural. Understanding intelligence, or what might better be referred to as mind, necessitates understanding together mind and matter. Furthermore, understanding all of these must incorporate an understanding of how human beings fit with them. Such an understanding both connects and divides each of these elements. Thus, understanding artificial intelligence together with its connections to ethics and society necessitates understanding the whole in which each of these parts fits. A reductionist approach may be convenient for some purposes, but just as one cannot understand a human being by studying only the chemical elements that compose that human being, one cannot understand the full implications of AI by separating those implications into parts. Achieving the full potential of the Schwarzman College necessitates developing an understanding of the whole and, therein, of the place in which AI fits within that whole. Such an understanding cannot be gained through any one discipline, or even through interdisciplinary studies, which themselves presume that the whole is divided into disciplines that can interact but cannot be put together into something greater than themselves.

Whether or not ethics can be advanced or transformed is questionable. We must be careful not to impose the methods that have been enormously successful in modern science on realms in which they are not applicable.

Ethics and society may not be amenable to the modern scientific method. Certainly, society has been transformed by science, but it is unlikely that human nature itself has changed. The fundamental questions that are innate to humans as humans have not changed: What is our relationship with nature? What is the best political regime? What is justice? What is a good human being? What is the best life? Certainly, opinions regarding the answers to these questions can be affected by the particulars of a given society, but there is nothing to indicate that society alters or advances the answers to those questions. Aristotle is taught today both in the School of Engineering and in our Philosophy Department not for historical or sentimental reasons, but because of the wisdom he can bring to bear on these questions.

President Reif makes explicit the connection between the liberal arts and civic education. This is no doubt an invitation to explore this connection and also the divergence. While it would be a challenge to make the case that studying Bach, Botticelli, or Browning makes one a more responsible citizen, studying them may make one a wiser citizen. Perhaps, in addition to the liberal arts broadly, we should think about how to educate toward an informed citizenry. This should include teaching about political institutions and about the regimes in which they exist, in particular democracy. Perhaps it would be important to include the best analyst of American democracy, Tocqueville, an author of penetrating insight whom students also find most appealing.

A more specific idea to consider for the Schwarzman College would be including our Society, Ethics, and Engineering (SEE) Program, which is housed in the School of Engineering. We have educated over 1000 engineering students in ethics and engineering, and in the past few years over 10 percent of each MIT class. We include in our classes the ethical issues behind AI and have developed many bespoke versions of our Ethics for Engineers courses to meet the needs of various departments and programs. Seven of the eight School of Engineering departments, in addition to GEL (the Bernard M. Gordon-MIT Engineering Leadership Program), have partnered with us, and these departments and their students seem to love our courses; since the courses are not required, students take them because they wish to engage with the material. A large part of our success stems from our courses viewing ethics and engineering as parts of a larger whole: not as two separate disciplines melded together, but as two parts, each of which points both inward to itself and outward to something greater.

In our Ethics for Engineers course, we start with four fundamental theories of ethics (utilitarianism, duty ethics, rights ethics, and virtue ethics). Our students read excerpts from the authors who developed these theories, including Bentham, Kant, Locke, and Aristotle – we think that thinkers who have stood the test of time are the best introduction to their own theories. We then turn to the political and social context in which ethical (and other) decisions are made, with readings in Tocqueville and others (including Sutherland, who was highlighted by our Faculty Chair, Susan Silbey, in her article on teaching ethics in the previous Faculty Newsletter). We then look at the modern engineering project, with readings from Bacon and others, and we address fundamental ethical challenges faced by engineers in our time, especially in biotechnology and AI. For each class we read case studies about real situations faced by real engineers, from the notorious Ford memo of the 1970s to the recent Volkswagen scandal, and from the seminal 1966 Beecher article, which led to the requirement of consent in medical studies, to recent ethical quandaries at Google, Facebook, Theranos, and elsewhere. The enthusiastic student reviews that we have received over many years tell us that we are on the right track in our approach to teaching ethics to future engineers.

In the Schwarzman College, we can go much deeper. For in order to address well the ethical and societal implications of AI, we must think directly about technology within the whole. That human endeavor which best approaches the study of the whole is philosophy, the architectonic science that has rightly been called “the queen of the sciences.” It would be tremendously beneficial if the faculty of the Department of Philosophy and those in diverse disciplines across the Institute could work together to create a vision for the future of AI that we could teach our students. My own efforts to achieve this in our grassroots SEE program have not been successful, but I hope that the vision for the Schwarzman College can promote such broad thinking and bring together scholars from philosophy, science, and engineering to think and teach subjects together in ways that have not been possible before.


Those who work in AI or related engineering and science fields (and all engineering and science fields are related to AI, as our leaders argue) need to grapple with the profound ethical questions themselves. Are science and technology ultimately utopian or dystopian, and what criteria do we use to decide? Are there things that we had best not do in the name of science? How do politics play a role, and how do we promote a healthy politics that maintains freedom while not losing sight of equality and dignity? Without a doubt, those who consider themselves professional ethicists could have a role to play, but the questions ultimately come down to basic human judgments informed by experience, study, and technical knowledge of the subject.

Our science and engineering colleagues, as I have found both through individual discussions and talks that I have given (including one on the ethics of AI), are open to and interested in discussing foundational issues of ethics. Many of them have a mechanistic view of nature, consonant with the Baconian science that they pursue, but they are open-minded enough to realize that the world may be larger than just their opinions.

Of course, examining opinions is only the beginning of addressing these foundational questions that advances in AI force us to address. We need to go beyond those opinions with serious study to question the prevailing views. We need to study ethics seriously, but we also need to make ethical choices that are informed by an in-depth understanding of technology. Without both of these, we will not be able to judge what our technologies can do and therein decide what we should do. Our leaders need to discourage the narrow and ultimately false notion that ethics is a discipline that only licensed members of the guild can practice. The ethics of AI needs to start first and foremost with those practicing AI and be informed by those interested in the whole.

In fact, there is a long tradition at MIT of scientists and engineers interested in the whole. Two luminaries of the Institute whose far-ranging investigations are directly relevant to ethics and society in the Schwarzman College are Norbert Wiener and Marvin Minsky. Another part of this tradition at MIT can be found in the Lewis Report of 1949. Warren K. Lewis, “Doc” Lewis as he was affectionately known, was formerly head of the Department of Chemical Engineering before he led the Committee on Educational Survey, which, among other things, led to the establishment of what has now become SHASS. Despite founding this School, his report made clear: “In practice the professional and the general elements should not be isolated; they should not be assigned to separate subjects or to separate teachers. All parts of an educational program should contribute to both ends.” (p. 19) That vision has regrettably disappeared from the mainstream of MIT. Perhaps the Schwarzman College is an opportunity to get it back, but it will not happen organically. It will need, as is often said about difficult things, leadership.

Thinking about those who consider that science has a governing impact on society might turn us to seek out the wisdom in the founders of modern science and their views of ethics. As we did with Aristotle, we might suspect that these founders have something to teach us that we would otherwise be hard pressed to learn.

For example, in Part 5 of Descartes’ Discourse on Method, the founder of analytic geometry makes the claim that machines can never fully mimic humans. Given the work of Turing and those who followed him, we might suspect that Descartes simply got it wrong. However, if Descartes was anything, he was not simple, and we should recall that in Part 1 of the same work, Descartes had told us, “. . . I offer this writing only as a story, or if you prefer, as a fable . . .”. Thus, perhaps, Descartes is not proffering simple conclusions as facts, nor is he simply presenting the blueprint for a project. On the contrary, Descartes is helping to inform us of human possibilities that we have ignored in the slumber of our present thinking.

Given the tremendous ethical and societal challenges that AI promises for us, we might also recall that modern science, by design, planted at its very beginning the seeds of forgetting its own assumptions. Warnings about the consequences of cultivating those seeds appear, for example, in Bacon’s New Atlantis, which describes a futuristic scientific institution. That literary description influenced the formation of the Royal Society, through it Benjamin Franklin’s American Philosophical Society, and, by inspiration, MIT itself. Bacon’s ideas led to the creation of disciplines, which after some time forgot their origins and the warnings that Bacon gave against staking too much on disciplines.

I urge our leaders to seek out the still and small voices of those in their midst and those far away who can help them to realize the true potential of AI for the betterment of human beings.
