Robots are excellent chess players and better drivers, and they are already becoming doctors.
If a robot provides a better service than a human being, because it reaches a more accurate diagnosis in less time and at a lower cost, performs a more reliable surgery, or monitors our health with full accuracy, then many people (if not most) will prefer a robot as their doctor, whatever a robot is, simply because the robot is more useful and produces better results. The same is already happening with taxi drivers.
Let’s begin the discussion from a utilitarian ethical paradigm, and afterward explore alternative ethical and political paradigms.
Robots are useful because they remove human error; they have no flaws. Daniel Kahneman, Olivier Sibony, and Cass Sunstein, in Noise: A Flaw in Human Judgment (2021), describe many flaws in medicine, even in reading X-rays: too often, different doctors reach different diagnoses, particularly when the patient is a child. Doctors are no better than judges in this respect, since both make predictive as well as evaluative, value-based judgments. It is true that intelligent algorithms may dehumanize patients by reducing people to statistics, but algorithms fed massive data by sensors and digital devices can be much more precise, fair, and reliable than a human being. It is therefore rational for people to choose robots as doctors and judges, not just as taxi drivers or chess players.
Robots are free from emotion and distraction. They are value-free and have no prejudices. Networks of interconnected devices, from intelligent implants to external prostheses and all sorts of autonomous artifacts, will learn and improve very fast by themselves, simply by exchanging massive data about a particular human being or a given group of people, in a given environment, under specific circumstances, and over time. Humans may not know why robots make one decision or another, since algorithms tend to become black boxes, but this obscurity may seem a reasonable price to pay if the robot's decision is, in the end, right.
Robots can provide objective advice to patients based on facts, as observed and measured scientifically. They do not mix up values with facts, or feelings with thoughts. Robots are free from human error and never face "tragic dilemmas", as defined by Martha Nussbaum: decisions with no alternative free from ethical controversy.
Moreover, robots will likely become affordable, not just to the wealthiest populations but to everybody, as more robots are manufactured by other robots. Therefore, like it or not, all sorts of robots used in medicine are likely to provide "the greatest happiness for the greatest number", the key political decision-making criterion of the utilitarian ethical view, first formulated by Jeremy Bentham and then refined by John Stuart Mill, the great liberal philosopher, in the mid-19th century. Governments should support robots acting as doctors, since they are more useful and cost-effective than human doctors. Moreover, interacting with a robot will become increasingly similar to interacting with a human doctor, because robots may be designed with human-like interfaces and the capacity to interact. Since Alan Turing proposed his famous test in the 1950s, distinguishing between a robot and a human has become increasingly difficult. We often come across CAPTCHA tests online that ask us to prove that we are not robots.
My brother-in-law, Pablo, is a good doctor; he practices in Berkeley, California. Unlike his father, who is a psychoanalyst, Pablo has a scientifically oriented mindset; in fact, he finished a PhD in Molecular Biology at Harvard before studying medicine. If a patient asks for his opinion, he always tells the truth, and nothing but the truth. "Your life expectancy is about three months", he may say. If the patient keeps asking what to do, Pablo may answer: "You could accept it and live as peacefully as possible for the next few months, or go to a good private hospital to get the best treatment available for half a million dollars, with perhaps a 10% chance of surviving." Implicitly, Pablo is applying a utilitarian ethical paradigm, since he is balancing costs and benefits, financial and emotional.
A robot would be able to give better advice. What is the best alternative from a pure cost-benefit analysis? It depends on the age of the patient, as well as their income level. Just by answering in terms of costs and benefits, Pablo is sharing his own utilitarian ethical frame. A robot could perform the cost-benefit calculus in a more refined way than Pablo, based on the official US cost-benefit guidelines and the so-called "Value of a Statistical Life". What rational price, in dollars or euros, should a given human being be willing to pay to gain a few more months or years of life? This calculation is perverse, "grotesque" as Charles Taylor put it, but it is needed, since public resources are scarce and some rule is required to allocate them: to reduce car accidents, finance public research on cancer, hire more doctors in public hospitals, or simply build schools or another art museum.
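To see how mechanical this calculus becomes in a robot's hands, here is a minimal illustrative sketch of the decision Pablo describes. The three-month life expectancy, the half-million-dollar treatment, and the 10% survival chance come from his example; the Value of a Statistical Life figure and the remaining life-years after a cure are assumptions for illustration only, not official guideline values.

```python
# Illustrative utilitarian calculus for Pablo's patient.
# ASSUMPTIONS (not official figures): a VSL of $10M and ~40
# remaining life-years if cured. From Pablo's example: 3 months
# of life expectancy, a $500,000 treatment, a 10% cure chance.

VSL = 10_000_000           # assumed Value of a Statistical Life, in dollars
MONTHS_PER_LIFE = 12 * 40  # assumed remaining months if fully cured

value_per_month = VSL / MONTHS_PER_LIFE

# Option A: accept the diagnosis, live ~3 more months at no cost.
net_a = 3 * value_per_month

# Option B: pay $500,000 for a 10% chance of a cure;
# with 90% probability, the same ~3 months remain.
net_b = (0.10 * MONTHS_PER_LIFE * value_per_month
         + 0.90 * 3 * value_per_month) - 500_000

print(f"Option A (accept): net value ${net_a:,.0f}")
print(f"Option B (treat):  net value ${net_b:,.0f}")
```

Under these assumed numbers the treatment "wins", but the result flips as soon as the assumed VSL or remaining life-years shrink, which is exactly why the calculation depends so heavily on the patient's age and income, and why Taylor calls it grotesque.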
Based on cost-benefit decision-making criteria, letting robots substitute for human doctors may indeed seem a sound decision, meaning it is very useful. But what does it mean for humans to live in a society where doctors, psychologists, judges, and (why not?) best friends and lovers are increasingly replaced by robots? Focusing just on doctors, it might mean that human bodies can be manipulated and fixed like machines and, of course, the same holds for animals, engineered to develop organs to be transplanted into human beings. Not only will the scholastic distinction between body and mind or soul be removed, but the concepts themselves. A human body becomes an imperfect organism that needs to be improved; death, in the end, an error of nature to be solved. This is the implicit ethics applied in engineering, embodied in a pragmatic and positivist mindset, progressive rather than conservative. Every invention is useful if it solves a problem in a cost-effective manner, if it improves human wellbeing. There is no absolute and universal truth independent of humanity, as Richard Rorty stated. Our values are evolutionary, like our biology; they come from the stories we tell ourselves once we experience the consequences of our previous decisions. The more useful values survive and remain. Animals have no rights other than those useful to human beings. Transhumanism, as claimed by Ray Kurzweil, named Engineer of the Year in the USA in 1992, is likely and not too distant and, according to Jacques Ellul, our unstoppable future.
Let's now explore an alternative ethical paradigm, based on deontology and rights. Some patients would rather claim their right to die with dignity than spend their last days and months surrounded by robots. Tobias was Californian, like Pablo, but migrated to Europe in the late eighties and opened a chiropractic clinic in the Olympic Village of Barcelona. He enjoyed sailing very much and dreamed of one day sailing across the Atlantic. He even bought a lovely second-hand sailing boat that needed care and expensive repairs, and devoted a lot of time and money to it. I remember a sailing trip where we spent almost ten hours alone, discussing vitalism: Tobias was convinced that our body can cure itself if we take care of it, particularly if we make sure that our nervous system functions well and sends proper messages to our brain. He rejected being vaccinated, long before the coronavirus pandemic. He never took medicines and almost never saw a doctor. One day he got a very late diagnosis of cancer, when his family finally took him to see one. The Spanish public health system could treat him for free, and he also had enough savings to fly back to the US, to the best hospital in Houston. Instead, Tobias decided to equip his boat as best he could, say goodbye to his two sons, and sail with his wife from Barcelona to America. He thought the joy of fulfilling his sailing dream and the peace of sailing for weeks would be good for him; he would live his remaining time intensely. Tobias died much earlier than expected, days before reaching the Canary Islands, and his wife sailed alone for a few days to reach Palma, with Tobias's body tied to the cabin, covered by a blanket. Tobias's death was premature, indeed, but it was not sad. Robots had no role at all in Tobias's story.
Surely Pablo would admire Tobias, because his decision was human; only a human being can decide this way. Since Ancient Greece, doctors have had a deontological code beyond pure utilitarian ethics. Both doctors and patients have rights to be protected. Should a doctor help a patient to die if they ask for it? How do we understand this kind of help: as avoiding pain and suffering? As switching off the system that keeps the person alive? Or even as providing the means to bring about the patient's death? The engineers who design and build all sorts of intelligent autonomous devices, such as robots, do not usually ask themselves these questions; they have no deontological code. Moreover, engineers do not feel the need for one, since they hold an implicit utilitarian or consequentialist ethic. How can we develop a deontological code for robots? The European Parliament is studying the legal status of "electronic beings". Who is accountable for a robot's wrongdoing? Do doctors need to rethink and update their ancient deontological code? Do lawyers need to redefine the meaning of law?
There is a third ethical paradigm to be considered, based neither on the consequences of a decision as evaluated by each individual, nor on the rights and moral imperatives stated by Immanuel Kant. From Aristotle to Charles Taylor, Alasdair MacIntyre, or Michael Sandel, there is an ethical tradition that emphasizes the concept of "virtue": values that can neither be imposed as norms nor evaluated in terms of costs, benefits, or usefulness, but are shared by a community. A robot can hardly be sensitive to virtues. Pablo's patient may prefer not to ask his sons or daughters for money, and may wish to spend his last days together with all of them in a nice mountain resort. Tobias invested all his savings in his boat and left for the Canary Islands, instead of paying for his sons' university degrees, because he wanted his sons to live by themselves, without needing him.
There is a restaurant in Lima, Peru, called "The Good Death". What is a good death? A robot will never have the sensitivity to understand this question. Many doctors, at least decades or centuries ago, had this sensitivity. Robots had no role in the death of my grandmother. "I will die happily", she said to all of us, "because all of you are here, with me." Like the ancient Stoics, she believed that we humans have "to obey the law of life" to live peaceful lives, not worrying about anything we know we cannot change. But it is part of the human condition not to obey these laws but to change them. We do not know how to die as happily (this was her word) as my grandmother, because it is not easy in a rationalized, disenchanted world.
An engineer's mindset is a problem-solving mind; engineers see life and nature as an exciting environment full of problems to be solved. They tend to believe that results matter and to think in terms of utilities more than rights or virtues. And they have endless curiosity. Hannah Arendt, in The Human Condition (1958), stated that engineers do not know the meaning of what they are doing, since they are focused only on usefulness. Somebody, a humanist, a philosopher, Plato's philosopher-king, has to look over engineers' shoulders and decide for them. Maybe a humanistic psychologist, in the tradition of Husserl, Maslow, Sartre, or Merleau-Ponty, has to look over doctors' shoulders as well. Are doctors becoming the first engineers, on their way to becoming robots? Engineers are naïve; they play like children and do not care much about the long-term implications of their inventions, nor about the science that may explain them. Instead of thinking, engineers do, try, test, and learn by doing. Technology drives itself, according to Jacques Ellul's The Technological Society (1954).
Martin Heidegger was optimistic enough to still devote time to educating engineers, making them aware of the meaning of replacing nature with machines, in his Letter on Humanism (1947). Richard Sennett, a student of Arendt, had a more positive view of engineers. For Sennett, they have to become craftsmen, as he stated in The Craftsman (2008): able to work for the sake of doing things well, independently of cost or benefit, avoiding excessive standardization and alienation, preserving as much authorship as possible. Engineering should be considered a humanistic activity, focused not on replacing the world or the body but on inhabiting them. Engineers have to become the humanist doctors of old times.
Richard Rorty, the most famous American pragmatist, claimed in Philosophy and Social Hope (1999) that philosophers have to become useful, like doctors or engineers: less interested in abstract metaphysics and more in ethics. But we claim that doctors and engineers also have to become interested in ethics and politics, and sensitive to practical philosophy; aware not just of the consequences but also of the values, rights, and meanings involved in what they are doing. At least here, let's agree with Heidegger.