Notes on the 24 November 2015 conference on machine ethics

The day before yesterday, I attended a German-speaking conference on robot and machine ethics, organized by the Daimler and Benz Foundation and the Cologne Center for Ethics, Rights, Economics, and Social Sciences of Health. Speakers included Prof. Oliver Bendel, author of a German-language blog on machine ethics, and Norbert Lammert, president of the German Bundestag. The conference was not meant for researchers only (though a great many scientists were present), so most talks were of an introductory nature. Ignoring the basics, which are covered, for example, in the collection on machine ethics by Anderson and Anderson and in the book by Wallach and Allen, I will summarize below some thoughts on the event.

Poster from the conference website

Conservatism

Understandably, the conference focused on the short-term relevance and direct application of machine ethics (see below). Robots with human-level capabilities were only alluded to as science fiction. Nick Bostrom’s book Superintelligence was not even mentioned.

From my brief research on the speakers, it also seems that most of them have not commented on such scenarios before.

Immediately relevant fields for machine ethics

Lammert began his talk by saying that governments are usually led to change or introduce legislation when problems are urgent, but not before. Accordingly, significant parts of the conference were dedicated to specific problems in machine ethics that robots face today or might face in the near future. The three main areas seem to be

  • robots in medicine and care of the elderly,
  • military robots, and
  • autonomous vehicles (also see Lin (2015) on why ethics matters for autonomous cars).

Lammert also argued that smart home applications might be relevant. Furthermore, Oliver Bendel pointed to some specific examples.

AIs and full moral agency

There was some agreement that AIs should not (or could not) become full moral agents, at least within the foreseeable future. For example, when asked about the possibility of users programming robots to commit acts of terrorism, Prof. Jochen Steil argued that illegitimate use can never really be ruled out and that the moral responsibility lies with the user. With full moral agency, however, robots could in principle resist any kind of illegal or immoral use. AI seems to be the only general-purpose tool that can be made safe in this way, and it seems odd to pass up the chance to use this property to increase the safety of such a powerful technology.
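To make this point a little more concrete, here is a minimal, purely illustrative sketch of a machine that checks requests against its own constraints instead of executing them blindly. All names and rules are invented, a keyword filter falls far short of full moral agency, and nothing like this was presented at the conference:

```python
# Purely illustrative sketch: an AI system, unlike a conventional tool, can in
# principle inspect a request and refuse it. The keyword list and all names
# are made up; a real safeguard would have to be far more sophisticated.

PROHIBITED_KEYWORDS = {"attack", "detonate", "poison"}  # invented example list

def execute_command(command: str) -> str:
    """Carry out a command unless it appears to violate the use constraints."""
    words = set(command.lower().split())
    if words & PROHIBITED_KEYWORDS:
        # A hammer or a car simply does whatever it is used for; an AI system
        # can at least check the request before acting on it.
        return "Refused: this request appears to violate my use constraints."
    return f"Executing: {command}"

if __name__ == "__main__":
    print(execute_command("detonate the charge"))  # -> refused
    print(execute_command("water the plants"))     # -> executed
```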

In his talk, Oliver Bendel said that he was opposed to the idea of letting robots make all moral decisions. For example, he proposed that robot vacuum cleaners could stop when coming across a bug or spider, but ultimately let the user decide whether to suck in the creature (a sketch of this handoff follows below). He would also like cars to let him decide in ethically relevant situations. As some autonomous-vehicle researchers in the audience pointed out (and Bendel himself conceded), this will not be possible in most situations: ethical problems lurk around every corner, and quick reactions are required more often than not. Asked why machines should not make certain crucial decisions, he argued that people and their lack of rationality were the problem. For example, if autonomous cars were introduced, people whose relatives were killed in accidents involving these vehicles would complain that the AI had chosen their relatives as victims, even if autonomous vehicles decreased the overall number of deaths. I don't find this argument very convincing, though. It is a descriptive point rather than a normative one: of course it would be difficult for people to accept machines as moral agents, but that does not mean that machines should not make moral decisions. The violated preferences, the additional unhappiness, and the public outcry caused by introducing autonomous vehicles are morally relevant, but people dying (and therefore also more relatives being unhappy) is much more important and should be the priority.
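As an illustration of Bendel's vacuum-cleaner proposal, here is a minimal sketch in Python. All function and parameter names are hypothetical; they do not refer to any real robot API or to Bendel's own work:

```python
# Hypothetical sketch of the "defer to the user" idea for a robot vacuum
# cleaner. Every name here is invented for illustration.

def handle_obstacle(sensor_reading, ask_user):
    """Decide what to do when the vacuum encounters something on the floor.

    sensor_reading: dict describing the detected object,
        e.g. {"kind": "spider", "alive": True}
    ask_user: callback that presents a question to the user and
        returns "continue" or "avoid".
    """
    if sensor_reading.get("alive"):
        # Ethically relevant case: a living creature is in the way.
        # The robot does not decide itself; it pauses and asks the user.
        choice = ask_user(
            f"A {sensor_reading['kind']} is in my path. Vacuum it up or go around?"
        )
        return "vacuum" if choice == "continue" else "detour"
    # Inanimate debris: no moral question, just clean it up.
    return "vacuum"

if __name__ == "__main__":
    # Simulated user who always spares the animal.
    decision = handle_obstacle({"kind": "spider", "alive": True},
                               ask_user=lambda prompt: "avoid")
    print(decision)  # -> "detour"
```

As the objection from the autonomous-vehicle researchers suggests, this handoff pattern only works when the decision can wait for human input; in driving situations that require split-second reactions, there is no time to ask.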

Weird views on free will, consciousness and morality

Some of the speakers made comments on the nature of free will, consciousness, and morality that surprised me. For example, Lammert said that morality necessarily has to be based on personal experience and reflection, and that this makes machine morality impossible in principle. Machines, he said, could only be "perfected to behave according to some external norms," which has nothing to do with morality; another speaker agreed.

Also, most speakers took it for granted that machines of the foreseeable future will not possess consciousness or free will, which I disagree with (see this article by Eliezer Yudkowsky on free will, Dan Dennett's Consciousness Explained, or Brian Tomasik's articles on consciousness). I am not so much surprised by the disagreement itself, since many of the ideas of Yudkowsky and Tomasik would be considered "crazy" by most people (though not necessarily by philosophers, I believe), but by how confident the speakers were, given that free will, consciousness, and the nature of morality are still the subject of ongoing discussion in mainstream contemporary philosophy. Indeed, digital consciousness seems to be a possibility in Daniel Dennett's view of consciousness (see Consciousness Explained), in Thomas Metzinger's self-model theory of subjectivity (see, for example, The Ego Tunnel), and in theories like computationalism in general. All of this is quite mainstream.

The best way out of this debate, in my opinion, is to talk only about the kind of morality that we really care about, namely "functional morality": acting morally without necessarily (if that is even possible) thinking morally, feeling empathy, and so on. I don't think it matters much whether AIs really reflect consciously about things or whether they just act morally in some mechanical way, and I expect most people to agree. I made a similar argument about consequentialism and machine ethics elsewhere.

I expect that machines themselves could become morally relevant and maybe some are already to some extent, but that’s a different topic.

AI politicians

Towards the end, Lammert was asked about politics being robosourced, i.e., taken over by machines. While he said that he is certain this will not happen within his lifetime (Lammert was born in 1948), he also said that politics will probably develop in this direction unless it is explicitly prevented.

In the preceding talk, Prof. Johannes Weyer mentioned that real-time data processing could be used for making political decisions.

Another interesting comment on Lammert's talk was that many algorithms (or programs) essentially act as laws in that they direct the behavior of millions of computers and thereby of millions of people.

Overall, this leads me to believe that besides the applications in robotics (see above), the morality of artificial intelligence could become important in non-embodied systems that make political or perhaps management decisions.

Media coverage

Given the presence of Norbert Lammert and the other high-profile speakers, and the large fraction of media people on the list of attendees, I expect the conference to receive a lot of press coverage.

Utilitarianism and the value of a life

Utilitarians are often criticized as cold-hearted because they assign numbers to suffering and happiness (or to the number of lives saved) and make ethical decisions based on calculations. With regard to this, Russell and Norvig write in the third edition of their famous artificial intelligence textbook (section 16.3.1):

Although nobody feels comfortable with putting a value on a human life, it is a fact that tradeoffs are made all the time. Aircraft are given a complete overhaul at intervals determined by trips and miles flown, rather than after every trip. Cars are manufactured in a way that trades off costs against accident survival rates. Paradoxically, a refusal to put a monetary value on life means that life is often undervalued. Ross Shachter relates an experience with a government agency that commissioned a study on removing asbestos from schools. The decision analysts performing the study assumed a particular dollar value for the life of a school-age child, and argued that the rational choice under that assumption was to remove the asbestos. The agency, morally outraged at the idea of setting the value of a life, rejected the report out of hand. It then decided against asbestos removal — implicitly asserting a lower value for the life of a child than that assigned by the analysts.

So, if one actually cares about lives not being destroyed, then the optimal approach is to assign as accurate a value to a life as possible. Deliberately not doing so makes sense only if you care more about something else and don't mind implicitly assigning lower values.
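To spell out the kind of calculation the decision analysts in the asbestos example presumably performed, here is a back-of-the-envelope sketch. All numbers are invented for illustration; the quoted passage does not report the actual figures used in the study:

```python
# Back-of-the-envelope version of the asbestos decision described by Russell
# and Norvig. All figures below are invented for illustration.

value_per_life = 10_000_000      # assumed dollar value assigned to a child's life
expected_deaths_prevented = 0.5  # assumed expected number of lives saved by removal
removal_cost = 2_000_000         # assumed cost of removing the asbestos

expected_benefit = value_per_life * expected_deaths_prevented

if expected_benefit > removal_cost:
    print("Remove the asbestos: expected benefit "
          f"${expected_benefit:,.0f} exceeds the cost of ${removal_cost:,.0f}.")
else:
    print("Do not remove the asbestos at this cost.")

# Rejecting removal at this cost implicitly values a statistical life at less
# than removal_cost / expected_deaths_prevented:
implicit_value = removal_cost / expected_deaths_prevented
print(f"Rejecting removal implies valuing a life at less than ${implicit_value:,.0f}.")
```

With these made-up numbers, refusing the removal would implicitly value a child's life at less than four million dollars, which is exactly the kind of implicit undervaluation the quoted passage describes.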