I recently watched the 2010 documentary film “Plug & Pray”, which focuses on the ethics of technology through the diametrically opposed visions of the brilliant former MIT professor Joseph Weizenbaum and the futurist Raymond Kurzweil. I was particularly fascinated by the moral and ethical convictions of Weizenbaum, a man who played an important part in the advancement of computers (and therefore society) in the 20th century.
In the 1960s, Weizenbaum wrote a program called ELIZA, which received much attention and praise and became a sort of jumping-off point for modern artificial intelligence. He soon grew concerned about the potential uses of AI, ranging from psychological deception in human relations to exponentially more efficient military machines capable of mass death. He also eventually became aware of the broader philosophical and religious consequences of the evolution of AI, such as the blurring of the boundary between man and machine, the phasing out of biological humanity, and the future possibility of eternal life.
Weizenbaum spent much of the remainder of his life attempting to counterbalance the Pandora’s box he felt he had opened, and to broadcast his apprehensions about “the other side” of AI. He also grew frustrated with the tendency of many in the computer science field to turn a blind eye to ethical questions in their quest for knowledge and achievement. One of Weizenbaum’s statements from the film puts this in perspective:
There are major military projects to create robot soldiers. Then you have this artificial human who can calculate that he should shoot that person over there with that strange uniform. That means we computer scientists don’t have the right to yell at our politicians because they lead us into war. For without our help it wouldn’t be possible.
If we didn’t help, war would look much different today. It may even be impossible in a sense. At least to the extent we see it today.
I came away with a profound respect for Weizenbaum and his ability to objectively analyze his own achievements and step back from them at his own risk.
I also found a statement from the Vatican to be interesting:
God gave man the ability to think and be creative. If he can build humanoid robots, then he should use this gift.
It seems to be a carte blanche approval of “if you can dream it, you can build it”. But the Vatican should remember mankind’s great capacity for destruction and evil, too. The statement seems to say “anything goes”, but there is a moral component to our actions. Building humanoid robots may, in broad terms, be morally neutral… but what are those robots intended for?
At the end of the day, I am left with a number of questions. At what point does man, with his prostheses, pacemakers, medicines, and artificial hearts, become a biorobot? Is it when the mind is replaced with a computer containing his previous memories on disk? And where is the soul in all this? Where is the line between the natural extension of life through health and medicine and the artificially bioengineered extension of life? Who is at fault when a sentient, autonomous robot accidentally (or purposely) kills someone – the robot itself, its owner, or its creator? And when that day comes, how will we respond to a race of superhuman autonomous robots that no longer wish to be subject to humans? These questions will probably have to be addressed in my lifetime.