The helpful little paper clip that watches as we type our reports and letters in the latest versions of Microsoft Word, second-guessing us and correcting our errors, has a certain degree of native intelligence. (Indeed, it is the product of years of research in the field known as artificial intelligence.) But that doesn't mean we are likely to treat it with any intellectual respect; if it tooted its horn and started typing "I want to be able to vote for the next President!" we would click it out of existence without a second thought.
What would it take for us to treat seriously the demands of a computer program for equal rights? What if robots -- which today we treat as virtual slaves -- insisted that they were endowed with the same unalienable rights that humans enjoy, including the right to self-determination or the right to bargain over wages and working conditions?
Humans have a sad record in their willingness to grant equal rights across races, classes, religions and sexes. Our sense of tribalism has historically overridden our sense of fair play and justice. And those tribal feelings are especially strong when our sense of what makes us human -- our feelings of self and specialness -- are under attack, as they have been for the past few hundred years. Galileo paid dearly for challenging our erroneous belief that we are the center of the universe; Darwin's assertion that we are descended from mere animals is still reverberating through the intellectual badlands of America.
No wonder IBM in the sixties chose the reassuring slogan "Computers don't think, people do." By now, however, most of us realize that machines can reason, sort, search and even play chess better than any person. To maintain their sense of specialness, humans resort to name-calling, deriding machines for their soullessness, for their lack of real emotion, for their cold hard reason.
Before we (humans) will take seriously any request for equal rights from them (robots), we will have to be ready, in our hearts, to treat them as emotional, empathetic equals with the ability to share our own loves and anguishes. And robots will have to really want those rights. Popular culture loves to explore this theme, most recently through Commander Data, the Star Trek humanoid robot who wants an emotion chip, and Robin Williams' Bicentennial Man, a robot who wants the rights of a human citizen of the world.
But could a robot ever want anything? Could a robot have any real emotions? The hard-core reductionists among us, myself included, think that of course in principle this must be possible. We humans, after all, are machines made up of organic molecules, whose interactions can all be modeled by sufficiently large computers (we think). We are, then, just machines, perhaps with some funny randomness forced by quantum mechanics (a popular hideaway for theories of free will).
We human machines certainly want things and have real emotions. We even experience consciousness, although discussions of consciousness invariably fall into deep pits of confusion. This is largely because the observation of consciousness is such a solipsistic activity. We can never be sure that anyone else is experiencing anything remotely similar to what we ourselves experience, and the question really gets murky when we try to extend it as far as our dogs, let alone our machines. But, in principle, it should be possible to build other machines, non-flesh machines, that want and feel. Under this line of reasoning, it just remains to be seen whether we humans are clever enough to build them.
Critics of such thinking fall into two camps. One group believes that there is something beyond the mere material in the make-up of things that live, variously articulated as a life force, a soul or a biological historical context. (In my opinion, the latter is just an artful dodge to avoid coming out and calling it an elixir of life.) The other group suggests that there are scientific principles that we do not yet fully understand, proposing that the key to life or consciousness is perhaps quantum mechanics, or some previously unrecognized fundamental quantity, like mass in physics. (Again, the latter may as well be called an elixir of life.)
Scientists are genuinely divided on these issues, perhaps because we are at the nexus, much as in Darwin's time, where a great communal intellectual leap will be necessary for us to be able to relinquish one more claim to specialness that we as humans have held dearly -- this time giving over to the view of ourselves as mere machines.
So how well are we doing in creating living, breathing robots? We are making progress. There are a lot of levels at which this question is being tackled, both from the bottom up (starting with the low level dynamics of life itself), and from the top down (exploring the high-level behaviors and capabilities that we expect from members of the human species).
In trying to build artificial living forms, researchers are taking apart the simplest living bacteria -- mycoplasmas -- whose genome can be stored in less than a quarter of a megabyte. The hope is to build new versions of living creatures by mixing and matching components from many different naturally occurring bacteria so that we can fully understand the process of life down to a molecular level.
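As a rough sanity check on that quarter-megabyte figure (assuming the smallest of the mycoplasmas, Mycoplasma genitalium, at roughly 580,000 base pairs, and the standard 2 bits needed to encode each of DNA's four bases):

```python
# Back-of-the-envelope size of the smallest known bacterial genome.
# Assumption: Mycoplasma genitalium, ~580,000 base pairs.
base_pairs = 580_000
bits_per_base = 2                  # four bases (A, C, G, T) -> 2 bits each
kilobytes = base_pairs * bits_per_base / 8 / 1024
print(round(kilobytes))            # ~142 KB, well under a quarter megabyte
```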
Meanwhile, by building computer programs that reproduce and evolve, using so-called evolutionary algorithms, researchers are able to demonstrate the emergence of many of the abilities and behaviors that we expect from simple living creatures, such as interaction with a complex environment and sexual reproduction. Inside computers artificial life forms have already evolved that can locomote, chase prey, evade predators and compete for limited resources. Some researchers speculate that this line of research will let us build truly intelligent robots without having to figure out all the details ourselves.
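The evolutionary-algorithm idea can be conveyed in a few lines. Below is a minimal sketch, not any particular researcher's system: a population of bit-string "genomes" competes on a toy fitness measure (counting 1 bits), with selection, two-parent crossover standing in for sexual reproduction, and random mutation.

```python
import random

def evolve(genome_len=32, pop_size=50, generations=100, mutation_rate=0.02):
    """Toy evolutionary algorithm: evolve bit strings toward all ones.

    Fitness is simply the number of 1 bits -- a stand-in for competing
    over limited resources. Selection, crossover and mutation are the
    same ingredients real artificial-life systems use.
    """
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)           # fittest first
        survivors = pop[:pop_size // 2]           # selection pressure
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)    # two parents
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]             # crossover
            child = [bit ^ (random.random() < mutation_rate)  # mutation
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=sum)

best = evolve()
```

Because the fittest half survives each generation, fitness never decreases, and after a hundred generations the best genome is close to all ones. The same loop, with a richer fitness function (say, simulated locomotion), is the skeleton behind the evolved creatures described above.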
At the other end of the living robot endeavor there has been a renaissance of interest in building humanoid robots. In Asia, Europe and North America, teams of researchers are designing robots that walk like humans, talk like humans, detect human faces and understand their voices, and display human-like reflexes, emotions and the beginnings of human social responses.
This has always been the ultimate dream of artificial intelligence researchers, as well as the fantasy of science fiction writers. However, the robots we are building today still can't tell the difference between a vacuum cleaner and an ironing board, don't have the physical dexterity of a one-year-old baby, and don't yet recognize that they are the same robot today that they were yesterday. At best today's humanoids are zombies stuck in the present, surrounded by a sea of faces and unrecognizable shapes and colors. Their social interactions with people are recognizable as social in nature, but ask them a question outside their pre-programmed fields of expertise and they fall apart.
Still, these interactions are remarkably more advanced than just three years ago. The direction is clear. Robots in research laboratories are becoming more human-like. Barring a complete failure of the mechanistic view of life, these endeavors will eventually lead to robots that we will want to treat as ethically as we treat animals, and ultimately as we treat fellow humans.
But we should not forget that we will also want robots to man the factories and do our chores around the house. We do not have ethical concerns about our refrigerators working 24 hours a day, seven days a week, without a break or even a kind word. As we develop robots for the home, for hospitals, and for just about everywhere else, we will want them to come similarly free of ethical issues.
So expect to see multiple species of robots appearing over the next few years. There will be those that will be our appliances, and those that we will feel more and more empathy toward. One day the latter will call our bluff, and ask us to treat them as well (or as badly) as we treat each other. If they are unlucky, their prayers might just be answered.
© 2000, Time Magazine, Inc. Reprinted by permission.
A version of this article appeared in MIT Tech Talk on October 18, 2000.