Thursday, December 14, 2017

The Ethics of Apologies: Why, when and how




I apologise a lot. Whenever I write an email, I apologise for the tardiness of my response (“I’ve just been so overwhelmed; please forgive me!”). Whenever I pass a sarcastic comment that offends someone, I apologise for the offence. Whenever I forget someone’s birthday or anniversary or other significant life event, I apologise for being so thoughtless.

These infractions are, of course, relatively minor. Other people have to apologise for far more serious things. Recent revelations about historic sexual harassment (and in some cases assault) in Hollywood and media have led to a lot of public apologies. Many of these apologies have been criticised. They are said to be insincere and calculated. Too little, too late. And yet the apologies have to be given. Without them, people would decry the lack of moral accountability on the part of the wrongdoers.

I’ve watched these debates from the sidelines. I’m curious as to why apologies are so hotly contested and carefully evaluated. I share the sense that it is important to apologise when you have done something wrong, but I’m not sure what the criteria for a morally appropriate apology actually are. What are people looking for from an apology? What is their moral function? What standards can we use to evaluate them? As I listened to people confidently deconstruct and critique the current spate of public apologies, I realised how little I know about the ethics of apologies.

So I decided to do some reading. Nothing too elaborate. I asked people on Twitter for some recommendations and did some research. I read a handful of articles on the philosophy of apologising. This reading brought some clarity to my thinking, but also revealed important disagreements and points of contention among philosophers concerning the nature, function and propriety of apologies. I thought I would use this post to share some of what I learned. I will do so by asking and answering three questions: (i) What is the nature and purpose of an apology? (ii) Is there a paradox of apologies? and (iii) What makes for an effective apology?


1. The Nature and Purpose of Apologies
Let’s start by trying to clarify what an apology actually is. The most common understanding of an apology is that it is an acknowledgment of wrongdoing, directed at a specific person (or persons), often coupled with a commitment to repair or rectify the wrong, and usually containing an explicit or implied request for some response from the person to whom it is directed. Apologies are consequently necessarily interpersonal: they are directed at other people. They are very different from public acknowledgements of wrongdoing. I could stand up in front of an audience and expostulate on my moral failures for several hours, but this would not amount to an apology unless it was directed at someone else who had been a victim of these failures.

Why are apologies directed at other people? Because they are given in recognition of the fact that those people have been negatively affected by your actions, and so have good reason to resent you for those actions (or, if you don’t like ‘resent’, you could replace it with ‘negatively evaluate’). The purpose of the apology is to get them to reconsider this negative evaluation — to switch from an attitude of resentment to one of forgiveness. Indeed, forgiveness is often viewed as the main purpose behind apologies, though, as we shall see below, they may serve other purposes too.

Apologies should be distinguished from excuses, though the two are closely related. An excuse is also usually directed at another person. It provides an explanation for the wrong done to that person, and is offered in the hope that the person will withdraw their negative evaluation once the excuse has been given. The crucial difference is that an excuse is an attempt to ‘explain away’ the wrongdoing. If the excuse is well-grounded, it absolves you of any culpability. It says to the other person ‘look, you may think I did something to you that deserves your moral censure, but actually you are wrong; I’m not to blame’. An apology is given in full recognition of the fact that you are culpable. You are asking the victim for mercy; you are not trying to explain away what you did. There is a thin line between apology and excuse, and people often stray from one side to the other.

Apologies are peculiarly powerful. Adrienne Martin’s excellent article on the power of apologies starts with this observation. She thinks it’s odd that a simple formulation of words carries so much weight in our social dealings. She argues that this is because apologies have ‘reason-giving power’: if done right, they give victims good reasons to change how they think about wrongdoers. She favours an account of this reason-giving power that is based on Peter Strawson’s work on reactive attitudes. Strawson argued that we are sometimes justified in directing negative moral emotions (disgust, indignation, resentment) towards others if they have transgressed some norm of social and moral life. One reason for this is that when someone has transgressed a norm we become rightfully wary that they may do it again. The power of apologies, according to Martin, lies in the fact that they help to mitigate this threat of future transgression. They indicate that the person recognises the transgression and is willing to take steps to make sure it does not happen again.


2. Is there a paradox of apologies?
Oliver Hallich has written a number of articles defending the notion that there is something paradoxical about apologies. His arguments can be a little complex, and to fully wrap your head around them you have to follow the back and forth between himself and other contributors to the literature, but the gist of it is that there is a tension between the ‘attitudinal’ and ‘directive’ aspects of an apology:

Attitudinal Aspect: To be appropriate, an apology must be offered in a spirit of humility, i.e. the apologiser must recognise that they have done wrong to the victim, that the victim is entitled to their feelings of resentment, and that they (the apologiser) have no right to forgiveness.

Directive Aspect: Apologies are directive speech acts, i.e. they are uttered with the intention/aim of achieving forgiveness.

The tension arises from the fact that if the apologiser were really acting in a spirit of humility, they would not be seeking forgiveness. If they were really remorseful or repentant, they would recognise that the victim is justified in feeling the way they do, and so they have no right to be looking for the very thing that apologies are hoped to provide. After all, if you acknowledged that someone was feeling justifiably happy, you wouldn’t try to intervene and change their feelings. The only way to dissolve the tension, according to Hallich, is if the victim seeks the apology; otherwise, the wrongdoer should be turning away in shame.

There are several responses to this paradox, but Hallich dismisses them all. For example, some people respond by arguing that apologies are not necessarily about forgiveness. They are about admitting guilt or wrongdoing, or expressing remorse for what one has done. But Hallich is unpersuaded by this because he thinks that you can admit guilt and express remorse without apologising. Remember, apologies are always directed at a specific person. There is no reason why you couldn’t admit guilt or express remorse without directing your utterances at the victim of your wrongdoing. Hallich’s reasoning seems right to me on this score. Apologies ought to be kept conceptually distinct from other forms of public accountability.

Nevertheless, I’m not persuaded by all of Hallich’s arguments. One view, which I’ve already noted and is defended in Adrienne Martin’s article, but which is dismissed by Hallich, is that apologies are not about forgiveness but rather about ensuring ongoing membership in a moral community. The apologiser is not looking to be forgiven for a past act of wrongdoing, but rather asking not to be excluded from the moral community to which he/she belongs, and, perhaps, not to be excluded from some relationship with the victim, in the future. They are saying that they have learned their lesson and they will try to avoid transgressing norms of respect in the future (‘try to’ being key here since no one is perfect). Hallich dismisses this by saying that a truly humble and repentant wrongdoer would not try to reestablish themselves as an equal member of a moral community after a transgression. They would realise that by doing wrong by their victim they have forfeited some of the rights to equal status, at least with respect to their moral transgression. So, for example, a person who wronged another by leaking confidential information about them to a third party would, if they accept they are in the wrong, realise that they have forfeited any right to be viewed as a trustworthy person in the future.

Hallich moderates his position a little, arguing that the wrongdoer does not forfeit the right to equal status forever. They can, over time and through reform, reestablish equal status. But, again, apologies are not essential to this process of reform and reestablishment. I think this might be a little misleading and a little harsh. This is for two reasons. First, I think that for certain transgressions, instant reform may be possible. In fact, in some cases, realising that you have done wrong by someone may, in itself, be enough to disincentivise future transgressions. In those cases, it seems to me that an apology could be an important signalling tool. Second, I think that people need to be given reassurance from their moral communities that they will be given the opportunity for moral reform, i.e. that they are not, forever and always, beyond the moral pale. Issuing and accepting apologies could play an important signalling function in these cases too. They can set the terms for conditional reacceptance into the relevant moral community. Victims can effectively say to the wrongdoers ‘Okay, I’m glad you’ve acknowledged the wrongdoing; you realise you have some work to do; if you do it, I will not exclude you from the community of moral equals forever.’ Obviously, this is very much contingent on the gravity of the wrongdoing. There may be some cases where wrongdoers truly are beyond the moral pale (though, for a variety of reasons, I think we should reach that conclusion very reluctantly).


3. What makes for an effective apology?
At this stage, the inquiry takes on a more Machiavellian air. We’ve covered some of the moral debates about the nature and purpose of apologies. But suppose you actually had to apologise to someone? What’s the best way to do this? How can you ensure forgiveness or re-acceptance into the community of moral equals? David Boyd’s article, ‘Art and Artifice in Public Apology’, is a useful practical guide. Boyd reviews a lot of empirical literature on what makes for an effective apology, and then draws the findings together into a linear model for distinguishing the ‘artful’ apologies from the ‘non-artful’ ones. As an added bonus, Boyd uses this model to evaluate a number of famous public apologies, including Tiger Woods’s apology for his extra-marital affairs and Lloyd Blankfein’s apology for Goldman Sachs’s role in the sub-prime mortgage crisis. Tiger Woods does surprisingly well.

Boyd’s model says that there are seven distinct steps to an effective apology. There is an artful and a non-artful way of dealing with each of these steps (I sketch a simple way of tallying them after the list). As follows:

1. Revelation: You must admit that something has happened and try to account for it. The artful way of doing this is to offer an explanation; the non-artful way is to offer an evasion, i.e. to avoid acknowledging your actions. There are different methods of evasion: (i) dissociation, i.e. where you avoid associating yourself with the relevant actions; or (ii) diminution, i.e. trying to diminish the severity or gravity of what you did.
2. Recognition: You must show some awareness of the harm that you have done to others. The artful way of doing this is to empathise with their plight. The non-artful way is to be estranged from their plight, i.e. to ignore or overlook the harm done.
3. Responsiveness: This refers to how soon after the revelation of wrongdoing you apologise. The artful way is to be timely, i.e. to respond very quickly. The non-artful way is to be tardy, i.e. leave a long gap between the wrongdoing and the apology. (Note: it’s interesting that Boyd’s framework focuses on the date of revelation as opposed to the date of the original wrongdoing; I tend to think the latter would be better)
4. Responsibility: You must admit that you are responsible for the wrongdoing. The artful way to do this is to internally attribute responsibility, i.e. to locate responsibility within yourself. The non-artful way is to externally attribute, i.e. to locate responsibility in external factors beyond your control. There are two different ways of doing this: (i) dispersal, i.e. blame many others as well or (ii) displacement, i.e. lessen what you need to take responsibility for.
5. Remorse: You must express/convey remorse and shame for what you did. The artful way to do this is to express genuine guilt. The non-artful way is to rely on guile, i.e. to merely affect guilt rather than genuinely feel and convey it.
6. Restitution: You must try to repair the damage done to the victim(s) as best you can. The artful way to do this is to offer compensation. This need not be monetary in nature. You could try to restore someone’s reputation or career. The non-artful way is to abrogate, i.e. avoid any efforts at compensation.
7. Reform: You must make some effort to change yourself (or your organisation) in order to ensure that the same thing does not happen again. The artful way to do this is to demonstrate change, i.e. to show that you are actually doing something. The non-artful way is to be complacent, i.e. to not demonstrate any changes.
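
To make the structure of the model a little more vivid, here is a rough sketch of how one might encode Boyd's seven steps as a simple checklist and tally an apology against it. The step labels follow the list above; the one-point-per-artful-step scoring and the example assessment are my own illustrative assumptions, not Boyd's.

```python
# A rough sketch of Boyd's seven-step model as a checklist.
# The one-point-per-artful-step scoring is an illustrative assumption, not Boyd's.

BOYD_STEPS = {
    # step name:      (artful handling, non-artful handling)
    "revelation":     ("explanation", "evasion"),
    "recognition":    ("empathy", "estrangement"),
    "responsiveness": ("timeliness", "tardiness"),
    "responsibility": ("internal attribution", "external attribution"),
    "remorse":        ("guilt", "guile"),
    "restitution":    ("compensation", "abrogation"),
    "reform":         ("demonstrated change", "complacency"),
}

def score_apology(assessment):
    """Count how many of the seven steps were handled artfully.

    `assessment` maps each step name to True (artful) or False (non-artful).
    """
    return sum(1 for step in BOYD_STEPS if assessment.get(step, False))

# A hypothetical assessment, roughly in line with the discussion of Louis CK's
# statement later in this post.
example = {
    "revelation": True, "recognition": True, "responsibility": True,
    "responsiveness": False, "remorse": False, "restitution": False, "reform": False,
}
print(score_apology(example), "of", len(BOYD_STEPS), "steps handled artfully")  # 3 of 7
```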

Boyd’s framework is perhaps a little bit too ‘neat’ (the desire to name every step and sub-step using a word that starts with the same letter feels a little forced), and his instructions are clearly aimed at those in the public eye (as opposed to those of us with more mundane lives), but it is still instructive. It can tell us something about why people react negatively to certain apologies. Consider, as an example, Louis CK’s recent attempt to apologise for his past sexual misconduct. This apology attracted much attention and criticism at the time, with some people arguing that it wasn’t really an apology at all. The full text is available here. I encourage you to read it before proceeding any further.


Now, full disclosure, I was (and probably still am to at least some extent) a fan of Louis CK’s comedy and was surprised and disappointed to learn of his misconduct (even more surprised since apparently the rumours had been circulating for years). That may well colour my evaluation of his apology. I think Louis does well on some of Boyd’s metrics. He admits to and explains his wrongdoing; he does not evade it. He acknowledges that the accounts presented by the victims are true. He also clearly adopts an internal attribution of responsibility, not blaming the victims or anyone else for his mistakes. Some people challenge this reading by suggesting that he is deflecting blame onto his celebrity and power over the women by constantly mentioning their admiration for him. I tend to think that there is a more charitable reading: by acknowledging the power dynamic he could be recognising the fact that the ‘consent’ he received from these women (assuming he did receive it) was not morally transformative. He also clearly recognises the harm he has done to the women in question, saying that ‘he cannot wrap his head around the hurt [he] brought on them’. He goes even further by recognising the harm he has done to peripheral others who have been affected by his actions (e.g. those who starred in and funded his recent movie). Finally, he tells us that he is remorseful and cannot forgive himself for what he has done, which indicates a level of guilt.

But this is also where he starts to score less well. There is some diminution of responsibility through the claim that he always ‘asked first’ (and this is probably to avoid admitting legal liability). One thing that I’ve heard other people say, and that I tend to agree with, is that although Louis says many of the right things, the language is somewhat formulaic and robotic. He says he is remorseful and acknowledges the hurt done, but does he really feel it? It is very difficult to convey these emotions effectively in the written word, but I have to say that I do get a sense from the apology that Louis realises that there are certain things he ought to say and so he says them, but the sincerity is not obvious. There is also the fact that Louis was not timely with his apology. As mentioned, the rumours about this misconduct had been circulating for years, and some of the incidents in question go back nearly 15 years (if I have my facts right). He admits that he ran from the issue in the past, so he is obviously very late to the game in admitting his wrongdoing. Ironically, Boyd might score him well on responsiveness since he was pretty quick to respond to the revelation in the New York Times, but I score him less well on this front because I measure timeliness in a different way. He also doesn’t make any clear attempt at restitution or reform. He says he will step back from the limelight for a while and ‘listen’, but it’s not clear what this means. That said, reform is ultimately something that is best demonstrated through actions, not through words, so perhaps it is too early to judge him on this front. Also, any attempt at restitution in this case might be judged insensitive or inappropriate.

Another problem with the apology is that he doesn’t actually say sorry to any of the women he harmed (i.e. he doesn’t use the words ‘I am sorry for what I have done to you’). People have really taken him to task for this in the media. But I’m not sure how important that really is. As per Martin’s account above, I don’t think the power of an apology lies in the formula of words so much as in what it tells us about the character of the offender. And as per Hallich's argument, you could separate apologies from other forms of public accountability and expressions of remorse. That said, the cultural meaning that attaches to the words ‘I’m sorry’ may be such that avoiding their use does tell us something significant about someone’s moral character.

Of course, I don't want to offer an apologia for Louis CK's apology; I just want to outline a way in which to assess it and the many others that are currently doing the rounds.




Wednesday, December 13, 2017

The Rise of the Robots and the Crisis of Moral Patiency




I have a new paper looking at the potential social impacts of AI and robotics. This one is quite speculative and I wrote it some time ago (about three years ago in fact), but it does cite Futurama and WALL-E and talks about moral agency and moral patiency. Check out the abstract and links below.

Title: The Rise of the Robots and the Crisis of Moral Patiency
Journal: AI and Society
Links: Official; Philpapers; Academia; Researchgate
Abstract: This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.
 
 




Monday, December 11, 2017

Episode #33: McArthur and Danaher on Robot Sex


In this episode I talk to Neil McArthur about a book that he and I recently co-edited entitled Robot Sex: Social and Ethical Implications (MIT Press, 2017). Neil is a Professor of Philosophy at the University of Manitoba where he also directs the Center for Professional and Applied Ethics. This is a free-ranging conversation. We talk about what got us interested in the topic of robot sex, our own arguments and ideas, some of the feedback we've received on the book, some of our favourite sexbot-related media, and where we think the future of the debate might go.

You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).


Show Notes

  • 0:00 - Introduction to Neil
  • 1:42 - How did Neil go from writing about David Hume to Robot Sex?
  • 5:15 - Why did I (John Danaher) get interested in this topic?
  • 6:49 - The astonishing media interest in robot sex
  • 8:58 - Why did we put together this book?
  • 11:05 - Neil's general outlook on the robot sex debate
  • 16:41 - Could sex robots address the problems of loneliness and isolation?
  • 19:46 - Why a passive and compliant sex robot might be good thing
  • 21:08 - Could sex robots enhance existing human relationships?
  • 25:53 - Sexual infidelity and the intermediate ontological status of sex robots
  • 31:23 - Ethical behaviourism and robots
  • 34:36 - My perspective on the robot sex debate
  • 37:32 - Some legitimate concerns about robot sex
  • 44:20 - Some of our favourite arguments or ideas from the book (acknowledging that all the contributions are excellent!)
  • 54:37 - Neil's booklaunch - some of the feedback from a lay audience
  • 58:25 - Where will the debate go in the future? Neil's thoughts on the rise of the digisexual
  • 1:02:54 - Our favourite fictional sex robots
 

 

Wednesday, December 6, 2017

Ethical Behaviourism in the Age of the Robot




[Thanks to the Singularity Bros podcast for inspiring me to write this post. It was a conversation I had with the hosts of this podcast that prompted me to further elaborate on the idea of ethical behaviourism.]

I’ve always been something of a behaviourist at heart. That’s not to say that I deny conscious experience, or that I think that external behavioural patterns are constitutive of mental states. On the contrary, I think that conscious experience is real and important, and that inner mental states have some ontological independence from external behavioural patterns. But I am a behaviourist when it comes to our ethical duties to others. I believe that when we formulate the principles that determine the appropriateness of our conduct toward other beings, we have to ground those principles in epistemically accessible behavioural states.

I think this is an intuitively sensible view, and I am always somewhat shocked to find that others disagree with it. But disagree they do, particularly when I apply this perspective to debates about the ethical and social status of robots. Since these others are, in most cases, rational and intelligent people — people for whom I have the utmost respect — I have to consider the possibility that my view on this is completely wrongheaded.

And so, as part of my general effort to educate myself in public, I thought I would use this blogpost to explain my stance and why I think it is sensible. I’m trying to work things out for myself in this post and I’d be happy to receive critical feedback. I’ll start by further clarifying the distinction between what I call ‘mental’ and ‘ethical’ behaviourism. I’ll then consider how ethical behaviourism applies to the emerging debate about the ethical and social consequences of robots. Then, finally, I’ll consider two major criticisms of ethical behaviourism that emerge from this debate.


1. Mental vs Ethical Behaviourism

Mental behaviourism was popular in psychology and philosophy in the early-to-mid twentieth century. Behaviourist psychologists like John Watson and BF Skinner revolutionised our understanding of human and animal behaviour, particularly through their experiments on learning and behavioural change. Their behaviourism was largely methodological in nature. They worried about the scientific propriety of psychologists postulating unobservable inner mental states to explain why humans act the way they do. They felt that psychologists should concern themselves strictly with measurable, observable behavioural patterns.

As a methodological stance, this had much to recommend it, particularly before the advent of modern cognitive neuroscience. And one could argue that even with the help of the investigative techniques of modern cognitive neuroscience, psychology is still essentially behaviouristic in its methods (insofar as it focuses on external, observable, measurable phenomena). Furthermore, methodological behaviourism is what underlies the classic Turing Test for machine intelligence. But behaviourism became more than a mere methodological posture in the hands of the philosophers. It became an entire theory of mind. Logical behaviourists, like Gilbert Ryle, claimed that descriptions of mental states were really just abbreviations for a set of behaviours. So a statement like ‘I believe X’ is just a shorthand way of saying ‘I will assert X in context Y’, ‘I will do action A in pursuit of X in context Z’ and so on. The mental could be reduced to the behavioural.

This is what I have in mind when I use the term ‘mental behaviourism’. I have in mind the view that reduces the mental — the world of intentions, beliefs, desires, hopes, fears, pleasure, and pain — to the behavioural. As such, I think it is pretty implausible. It stretches common sense to believe that mental states are actually behavioural, and it is probably impossible to satisfactorily translate a description of a mental state into a set of behaviours.

Despite this, I think ethical behaviourism is pretty plausible and common sensical. So what’s the difference? One difference is that I think of ethical behaviourism as essentially an application of methodological behaviourism to the ethical domain. To me, ethical behaviourism says that the epistemic ground or warrant for believing that we have certain duties and responsibilities toward other entities lies in their observable behavioural relations and reactions to us (and the world around them), not in their inner mental states or capacities.

It is important to note that this is an epistemic principle, not a metaphysical one. Adopting a stance of ethical behaviourism does not mean giving up the belief in the existence of inner mental states, nor the belief that those inner mental states provide the ultimate metaphysical warrant for our ethical principles. Take consciousness/sentience as an example. Many people believe that conscious awareness is the most important thing in the world. They think that the reason we should respect other humans and animals, and why we have certain ethical duties toward them, is because they are consciously aware. An ethical behaviourist can accept this position. They can agree that conscious awareness provides the ultimate metaphysical warrant for our duties to animals and humans. They simply modify this slightly by arguing that our epistemic warrant for believing in the existence of this metaphysical property derives from an entity’s observable behavioural patterns. After all, we can never directly gain epistemic access to their inner mental states; we can only infer them from what they do. It is the practical unavoidability of this inference that motivates ethical behaviourism.

It is also important to note that ‘behaviour’ needs to be interpreted broadly here. It is not limited to external physical behaviours (e.g. the movement of limbs and lips); it includes all directly observable patterns and functions, such as the operation of the brain. This might seem contradictory, but it’s not. Brain states are directly observable and recordable; mental states are not. Even in cognitive neuroscience no one thinks that observations of the brain are directly equivalent to observations of mental states like beliefs and desires. Rather, they infer correlations between those brain patterns and postulated mental states. What’s more, they ultimately verify those correlations through other behavioural measures. So when a neuroscientist tells us that a particular pattern of brain activity correlates with the mental state of pleasure, they usually work this out by asking someone in a brain scanner what they are feeling when this pattern of activity is observable.


2. Ethical Behaviourism and Robots
Ethical behaviourism has consequences. One of the most important concerns comparative claims to moral status. If you are an ethical behaviourist and you’re asked whether an entity (X) has certain rights and duties, you will determine this by comparing their behavioural patterns to the patterns of another entity (Y) that we think already possesses those rights and duties. If the two are behaviourally indistinguishable, you’ll tend to think that X has those rights and duties too. The only thing that might upset this conclusion is if you are not particularly confident in the belief that those behavioural patterns justify the ascription of rights to Y. In that case, you might use the behavioural equivalence between X and Y to reevaluate the epistemic grounding for your ethical principles. Put more formally:

The Comparative Principle of EB: If an entity X displays or exhibits all the behavioural patterns (P1…Pn) that we believe ground or justify our ascription of rights and duties to entity Y, then we must either (a) ascribe the same rights and duties to X or (b) reevaluate our use of P1…Pn to ground our ethical duties to Y.
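
If it helps, the structure of the principle can be restated semi-formally (this is just my own shorthand for the prose above; nothing hangs on the notation). Write $P_i(X)$ for ‘X exhibits behavioural pattern $P_i$’, $G(P_1 \ldots P_n, Y)$ for ‘the patterns $P_1 \ldots P_n$ ground our ascription of rights and duties to Y’, and $R(X)$ for ‘we ascribe those rights and duties to X’. Then:

$$\Big(\bigwedge_{i=1}^{n} P_i(X)\Big) \;\wedge\; G(P_1 \ldots P_n, Y) \;\rightarrow\; R(X) \;\vee\; \neg G(P_1 \ldots P_n, Y)$$

That is, behavioural equivalence forces a choice: extend the rights and duties to X, or give up the claim that those patterns were doing the epistemic work in the case of Y in the first place.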

Again, I think this is a sensible principle, but it has significant implications, particularly when it comes to debates about the ethical status and significance of robots. To put it bluntly, it maintains that if there is behavioural equivalence between a robot and some other entity to whom we already owe ethical duties (where the equivalence relates specifically to the patterns that epistemically ground our duties to that other entity) we probably owe the same duties to the robot.

To make this more concrete, suppose we all agree that we owe ethical duties to certain animals due to their capacity to feel pain. The ethical behaviourist will argue that the epistemic ground for this belief lies not in the unobservable mental state of pain, but rather in the observable behavioural repertoire of the animal, i.e. because it yelps or cries out when it is hurt, because it recoils from certain pain-inducing objects in the world. Then, applying the comparative principle, it would follow that if a robot exhibits the same behavioural patterns, we owe it a similar set of duties. Of course, we could reject this if we decide to reevaluate our epistemic grounding for our belief that we owe animals certain duties, but this reevaluation will, if we follow ethical behaviourism, result in our simply identifying another set of behavioural patterns which it may be possible for a robot to emulate.

This has some important repercussions. It means that we ought to take much more seriously our ethical duties towards robots. We may easily neglect or overlook ways in which we violate or breach our ethical duties to them. Indeed, I think it may mean that we have to approach the creation of robots in the same way that we approach the creation of other entities of moral concern. It also means that robots could be a greater source of value in our lives than we currently realise. If our interactions with robots are behaviourally indistinguishable from our interactions with humans, and if we think those interactions with humans provide value in our lives, it is also possible for robots to provide similar values. I’ve defended this idea elsewhere, arguing that robotic ‘offspring’ could provide the same sort of value as human offspring, and that it is possible to have valuable friendships with robots.

But isn’t this completely absurd? Doesn’t it shake the foundations of common sense?


3. Objections to Ethical Behaviourism
Let me say a few things that might make it seem less absurd. First, I’m not the only one who argues for something along these lines. David Gunkel and Mark Coeckelbergh have both argued for a ‘relational turn’ in our approach to both animal and machine ethics. This approach advocates that we move away from thinking about the ontological properties of animals/machines and focus more on how they relate to us and how we relate to them. That said, there are probably some important differences between my position and theirs. They tend to avoid making strong normative arguments about the moral standing of animals/machines, and they would probably see my view as being much closer to the traditional approach that they criticise. After all, my view still focuses on ontological properties, but simply argues that we cannot gain direct epistemic access to them.

Second, note that the behavioural equivalence between robots and other entities to whom we owe moral duties really matters on this view. They must be equivalent with respect to all the behavioural patterns that are relevant to the epistemic grounding of our moral duties. And, remember, this could include internal functional patterns as well as external ones. This means that the threshold for the application of the comparative principle could be quite high (though, for reasons I am exploring in a draft paper, I think it may not be that high). Furthermore, as robots become more behaviourally equivalent to animals and humans, we could continue to reevaluate which behavioural patterns really count (think about the shifting behavioural boundaries for establishing machine ‘intelligence’ over the years).

This may blunt some of the seeming absurdity, but it doesn’t engage with the more obvious criticisms of the idea. The most obvious is that ethical behaviourism is just wrong. We don’t actually derive the epistemic warrant for our ethical beliefs from the behavioural patterns of the entities with whom we interact. There are other epistemic sources for these beliefs.

For example, someone might argue that we derive the epistemic warrant for our belief in the rights and duties of other humans and animals from the fact that we are made from the same ‘stuff’ (i.e. biological, organic material). This ‘material equivalence’ gives us reason for thinking that they will share similar mental states like pleasure and pain, and hence reason for thinking that they have sufficient moral status. Since robots will not be made from the same kind of stuff, we will not have the same confidence in accepting their moral status.

It’s possible to be unkind about this argument and accuse it of thinking that there is some moral magic to being made out of flesh and bone. But we shouldn’t be too unkind. Why matter gives rise to consciousness and mentality is still essentially mysterious, and it’s possible that there is something about our biological constitution that makes this possible in a way that an artificial constitution would not. I personally don’t buy this. I believe in mind-body functionalism. According to this view the physical substrate does not matter when it comes to instantiating a conscious mind. This would mean that ‘material equivalence’ should not be the epistemic grounding for our ethical beliefs. But it actually doesn’t matter whether you accept functionalism or not. I think the mere fact that there is uncertainty and plausible disagreement about the relevance of biological material to moral status is enough to undercut this as a potential epistemic source for our moral beliefs.

Another argument along these lines might focus on shared origins: that one reason for thinking that we owe animals and other humans moral duties is because they came into being through a similar causal process to us, i.e. by evolution and biological development. Robots would come into being in a very different way, i.e. through computer programming and engineering. This might be a relevant difference and give us less epistemic warrant for thinking that robots would have similar rights and duties.

There are, however, several problems with this. First, with advances in gene-editing technology, it’s already the case that animals are brought into being through something akin to programming and engineering, and it’s quite possible in the near future that humans will be too. Will this cause them to lose moral status? Second, it’s not clear that the differences are all that pronounced anyway. Many biologists conceive of evolution and biological development as a type of informational programming and engineering. The only difference is that there is no conscious human designer. Finally, it’s not obvious why origins should be ethically relevant. We usually try to avoid passing moral judgment on someone because of where they came from, focusing instead on how they behave and act toward us. Why should it be any different with machines?

This brings me to what I think might be the most serious objection to ethical behaviourism. One critical difference between humans/animals and robots has to do with how they are owned and controlled, and this gives rise to two related objections: (i) the deception objection and (ii) the ulterior motive objection.

The deception objection argues that because robots will be owned and controlled by corporations, with commercial objectives, those corporations will have every reason to program the robot to behave in a way that deceives you into thinking that you have some morally significant relationship with them. The ‘hired actor’ analogy is often used to flesh this out. Imagine if your life were actually a variant on the Truman Show: everyone else in it was just an actor hired to play the part of your family and friends. If you found this out, it would significantly undercut the epistemic foundations for your relationships with them. But, so the argument goes, this is exactly what will happen in the case of robots. They will be akin to hired actors: artificial constructs designed to play the part of our friends and companions (and so on).

I’m not sure what to make of this objection. It’s true that if I found out that all my friends were actors, it would require a significant reevaluation of my relationship to them. But it wouldn’t change the fact that they have a basic moral status and that I owe them some ethical duties. There are different gradations or levels of seriousness to our moral relationships with other beings. Removing someone from one level does not mean removing them from all. So I might stop being friends with these actors, but that’s a separate issue from their basic moral status. That could be true for robots too. Furthermore, I have to find out about the deception in order for it to have any effect. As long as everyone consistently and repeatedly behaves towards me in a particular way, then I have no reason to doubt their sincerity. If robots consistently and repeatedly behave toward us in a way that makes them indistinguishable from other objects of moral concern, then I think we will have no reason to believe that they are being deceptive.

Of course, it’s hard to make sense of the deception objection in the abstract because usually people are deceptive for a particular reason. This is where the ulterior motive objection comes into play. Sometimes people have ulterior motives for relating to us in a particular way, and when we find out about them it disturbs the epistemic foundations of our relationships with them. Think about the ingratiating con artist and how finding out about their fraud can quickly change a relationship from love to hate. One claim that is made about robots is that they will always have an ulterior motive underlying their relationships to us. They will be owned and controlled by corporations and will ultimately serve the profit motives of those corporations. Thus, there will always be some divided loyalty and potential for betrayal. We will always have some reason to be suspicious about them and to worry that they are not acting in our interests. (Something along these lines seems to motivate some of Joanna Bryson’s opposition to the creation of person-like robots).

I think this is a serious concern and a reason to be very wary about entering into relationships with robots. But let me say a few things in response. First, I don’t think this objection upsets the main commitments of ethical behaviourism. Divided loyalties and the possibility of betrayal are already a constant feature of our relationships with humans (and animals), but this doesn’t negate the fact that they have some moral status. Second, ulterior motives do not always have to undermine an ethically valuable relationship. We can live with complex motivations. People enter into intimate relationships for a multiplicity of reasons, not all of them shared explicitly with their partners. This doesn’t have to undermine the relationship. And third, the ownership and control of robots (and, more importantly, the fact that they will be designed to serve corporate commercial interests) is not some fixed, Platonic truth about them. Property rights are social and legal constructs and we could decide to negate them in the case of robots (as we have done in the case of humans in the past). Indeed, the very fact that robots could have significant ethical status in our lives might give us reason to do that.

All that said, the very fact that companies might use ethical behaviourism to their advantage when creating robots suggests that people who defend it (like me, in this post) have a responsibility to be aware of and mitigate the risks of misuse.


4. Conclusion
That’s all I’m going to say for now. As I mentioned above, ethical behaviourism is something that I intuit to be correct, but which most people I encounter disagree with. This post was a first attempt to reason through my intuitions. It could be that I am completely wrong-headed on this and that there are devastating objections to my position that I have not thought through. I’d be happy to hear about them in the comments (or via email).





Sunday, December 3, 2017

Is Technology Value-Neutral? New Technologies and Collective Action Problems



We’ve all heard the saying “Guns don’t kill people, people do”. It’s a classic statement of the value-neutrality thesis. This is the thesis that technology, by itself, is value-neutral. It is the people that use it that are not. If the creation of a new technology, like a gun or a smartphone, has good or bad effects, it is due to good or bad people, not the technology itself.

The value-neutrality thesis gives great succour to inventors and engineers. It seems to absolve them of responsibility for the ill effects of their creations. It also suggests that we should maintain a general policy of free and open innovation. Let a thousand blossoms bloom, and leave it to the human users of technology to determine the consequences.

But the value-neutrality thesis has plenty of critics. Many philosophers of technology maintain that technology is often (perhaps always) value-laden. Guns may not kill people themselves but they make it much more likely that people will be killed in a particular way. And autonomous weapons systems can kill people by themselves. To suggest that the technology has no biasing effect, or cannot embody a certain set of values, is misleading.

This critique of value-neutrality seems right to me, but it is often difficult to formulate it in an adequate way. In the remainder of this post, I want to look at one attempted formulation from the philosopher David Morrow. This argument maintains that technologies are not always value neutral because they change the costs of certain options, thereby making certain collective action problems or errors of rational choice more likely. The argument is interesting in its own right, and looking at it allows us to see how difficult it is to adequately distinguish between the value-neutrality and value-ladenness of technology.


1. What is the value-neutrality thesis?
Value-neutrality is a seductive position. For most of human history, technology has been the product of human agency. In order for a technology to come into existence, and have any effect on the world, it must have been conceived, created and utilised by a human being. There has been a necessary dyadic relationship between humans and technology. This has meant that whenever it comes time to evaluate the impacts of a particular technology on the world, there is always some human to share in the praise or blame. And since we are so comfortable with praising and blaming our fellow human beings, it’s very easy to suppose that they share all the praise and blame.

Note how I said that this has been true for ‘most of human history’. There is one obvious way in which technology could cease to be value-neutral: if technology itself has agency. In other words, if technology develops its own preferences and values, and acts to pursue them in the world. The great promise (and fear) about artificial intelligence is that it will result in forms of technology that do exactly that (and that can create other forms of technology that do exactly that). Once we have full-blown artificial agents, the value-neutrality thesis may no longer be so seductive.

We are almost there, but not quite. For the time being, it is still possible to view all technologies in terms of the dyadic relationship that makes value-neutrality more plausible. Unsurprisingly, it is this kind of relationship that Morrow has in mind when he defines his own preferred version of the value-neutrality thesis. The essence of his definition is that value-neutrality arises if all the good and bad consequences of technology are attributable to praiseworthy or blameworthy actions/preferences of human users. The more precise formulation is this:

Value Neutrality Thesis: “The invention of some new piece of technology, T, can have bad consequences, only if people have vicious T-relevant preferences, or if users with “minimally decent” preferences act out of ignorance; and the invention of T can have good consequences, on balance, only if people have minimally decent T-relevant preferences, or if users with vicious T-relevant preferences act out of ignorance” 
(Morrow 2013, 331)

A T-relevant preference is just any preference that influences whether one uses a particular piece of technology. A vicious preference is one that is morally condemnable; a minimally decent preference is one that is not. The reference to ignorance in both halves of the definition is a little bit confusing to me. It seems to suggest that technology can be value neutral even if it is put to bad/good use by people acting out of ignorance (Morrow gives the example of the drug thalidomide to illustrate the point). The idea then is that in those cases the technology itself is not to blame for the good or bad effects — it is the people. But I worry that this makes value-neutrality too easy to establish. Later in the article, Morrow seems to conceive of neutrality in terms of how morally praiseworthy and blameworthy the human motivations and actions were. Since ignorance is sometimes blameworthy, it makes more sense to me to think that neutrality occurs when the ignorance of human actors is blameworthy.

Be that as it may, Morrow’s definition gives us a clear standard for determining whether technology is value-neutral. If the bad or good effects of a piece of technology are not directly attributable to the blameworthy or praiseworthy preferences (or levels of knowledge) of the human user, then there is reason to think that the technology itself is value-laden. Is there ever reason to suspect this?


2. Technology and the Costs of Cooperation and Delayed Gratification
Morrow says that there is. His argument starts by assuming that human beings follow some of the basic tenets of rational choice theory when making decisions. The commitment to rational choice theory is not strong and could be modified in various ways without doing damage to the argument. The idea is that humans have preferences or goals (to which we can attach a particular value called ‘utility’), and they act so as to maximise their preference or goal-satisfaction. This means that they follow a type of cost-benefit analysis when making decisions. If the costs of a particular action outweigh its benefits, they’ll favour other actions with a more favourable ratio.

The key idea then is that one of the main functions of technology is to reduce the costs of certain actions (or make available/affordable actions that weren’t previously on the table). People typically invent technologies in order to be able to do something more efficiently and quickly. Transportation technology is the obvious example. Trains, planes and automobiles have all served to reduce the costs of long-distance travel to individual travellers (there may be negative or positive externalities associated with the technologies too — more on this in a moment).

This reduction in cost can change what people do. Morrow gives the example of a woman living three hours from New York City who wants to attend musical theatre. She can go to her local community theatre, or travel to New York to catch a show on Broadway. The show on Broadway will be of much higher quality than the show in her local community theatre, but tickets are expensive and it takes a long time to get to New York, watch the show, and return home (about a 9-hour excursion all told). This makes the local community theatre the more attractive option. But then a new high speed train is installed between her place of residence and the city. This reduces travel time to less than one hour each way. A 9-hour round trip has been reduced to a 5-hour one. This might be enough to tip the scales in favour of going to Broadway. The new technology has made an option more attractive.
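
To make the structure of this choice explicit, here is a minimal sketch of the cost-benefit calculation. All of the figures (the value of each show, ticket prices, and the value she places on an hour of her time) are invented for illustration; the only thing that matters is how the train changes the time cost.

```python
# An illustrative sketch of Morrow's Broadway example. All figures are invented;
# only the structure of the calculation matters.

def net_utility(show_value, ticket_cost, travel_hours, value_of_an_hour=10.0):
    """Value of seeing the show, minus the ticket price and the time cost of travel."""
    return show_value - ticket_cost - travel_hours * value_of_an_hour

local_theatre       = net_utility(show_value=60,  ticket_cost=20,  travel_hours=0.5)  # 35
broadway_no_train   = net_utility(show_value=180, ticket_cost=100, travel_hours=6.0)  # 20
broadway_fast_train = net_utility(show_value=180, ticket_cost=100, travel_hours=2.0)  # 60

# Before the train the local theatre wins (35 > 20); after it, Broadway wins (60 > 35).
# Her preferences haven't changed; the technology has simply lowered the cost of one option.
print(local_theatre, broadway_no_train, broadway_fast_train)
```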

Morrow has a nuanced understanding of how technology changes the costs of action. The benefits of technology need not be widely dispersed. They could reduce costs for some people and raise them for others. He uses an example from Langdon Winner (a well-known theorist of technology) to illustrate the point. Winner looked at the effects of tomato-harvesting machines on large and small farmers and found that they mainly benefitted the large farmers. They could afford them and thereby harvest far more tomatoes than before. This increased supply and thereby reduced the price per tomato to the producer. This was still a net benefit for the large farmer, but a significant loss for the small farmer. They now had to harvest more tomatoes, with their more limited technologies, in order to achieve the same income.

Now we come to the nub of the argument against value-neutrality. The argument is that technology, by reducing costs, can make certain options more attractive to people with minimally decent preferences. These actions, by themselves, may not be morally problematic, but in the aggregate they could have very bad consequences (it’s interesting that at this point Morrow switches to focusing purely on bad consequences). He gives two examples of this:

Collective action problems: Human society is beset by collective action problems, i.e. scenarios in which individuals can choose to ‘cooperate’ or ‘defect’ on their fellow citizens, and in which the individual benefits of defection outweigh the individual benefits of cooperation. A classic example of a collective action problem is overfishing. The population of fish in a given area is a self-sustaining common resource, something that can be shared fruitfully among all the local fishermen if they fish a limited quota each year. If they ‘overfish’, the population may collapse, thereby depriving them of the common resource. The problem is that it can be difficult to enforce a quota system (to ensure cooperation), and individual fishermen are nearly always incentivised to overfish themselves. Technology can exacerbate this by reducing the costs of overfishing. It is, after all, relatively difficult to overfish if you are simply relying on a fishing rod. Modern industrial fishing technology makes it much easier to dredge the ocean floor and scrape up most of the available fish. Thus, modern fishing technology is not value-neutral because it exacerbates the collective action problem.
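
Here is a toy version of the fisherman's calculation, purely to illustrate how cheaper overfishing changes the individual incentive. Every number (stock size, shares, effort costs) is invented, and the model is far cruder than any real fishery.

```python
# A toy model of the overfishing problem. All numbers are invented; the point is
# only that cheaper overfishing tilts each individual fisherman towards defection,
# even though universal defection leaves everyone worse off.

def individual_payoff(overfish, others_overfishing, overfishing_effort_cost, n_fishers=10):
    """Payoff to one fisherman, given his choice and how many of the others overfish.

    Each overfisher takes a double share of the catch, but every overfisher also
    degrades the common stock for everyone.
    """
    total_overfishers = others_overfishing + (1 if overfish else 0)
    stock = 100.0 - 8.0 * total_overfishers          # the common resource shrinks
    share = stock / n_fishers
    catch = 2 * share if overfish else share
    return catch - (overfishing_effort_cost if overfish else 0.0)

for cost in (10.0, 1.0):  # 10.0 ~ rod and line, 1.0 ~ industrial trawler
    restrain = individual_payoff(False, others_overfishing=0, overfishing_effort_cost=cost)
    overfish = individual_payoff(True, others_overfishing=0, overfishing_effort_cost=cost)
    print(f"effort cost {cost}: restrain={restrain:.1f}, overfish={overfish:.1f}")
# effort cost 10.0: restrain=10.0, overfish=8.4   (not worth overfishing)
# effort cost 1.0:  restrain=10.0, overfish=17.4  (overfishing pays)

# If everyone defects, the stock collapses and each fisherman does worse (3.0)
# than under universal restraint (10.0):
print(individual_payoff(True, others_overfishing=9, overfishing_effort_cost=1.0))
```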

Delayed gratification problems: Many of us face decision problems in which we must choose between short-term and long-term rewards. Do we use the money we just earned to buy ice-cream or do we save for our retirements? Do we sacrifice our Saturday afternoons to learning a new musical instrument, or do we watch the latest series on Netflix instead? Oftentimes the long-term reward greatly outweighs the short-term reward, but due to a quirk of human reasoning we tend to discount this long-term value and favour the short-term rewards. This can have bad consequences for us individually (if we evaluate our lives across their entire span) and collectively (because it erodes social capital if nobody in society is thinking about the long-term). Morrow argues that technology can make it more difficult to prioritise long-term rewards by lowering the cost of instant gratification. I suspect many of us have an intimate knowledge of the problem to which Morrow is alluding. I know I have often lost days of work that would have been valuable in the long-term because I have been attracted to the short-term rewards of social media and video-streaming.
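
And here is an equally crude sketch of the instant-gratification problem, using a simple hyperbolic discount function. Again, every figure is invented; the only point is that when technology lowers the effort cost of the immediate reward, the discounted long-term option starts to lose out.

```python
# A crude sketch of the delayed-gratification problem, using hyperbolic discounting.
# All figures are invented; the point is that lowering the effort cost of the
# immediate reward can flip the choice against the long-term one.

def present_value(reward, delay_days, k=0.1):
    """Hyperbolic discounting: delayed rewards shrink in present value."""
    return reward / (1 + k * delay_days)

long_term = present_value(reward=100.0, delay_days=90)  # e.g. a finished project: 10.0

for effort_cost in (8.0, 0.5):  # 8.0 ~ a trip into town, 0.5 ~ one tap on a streaming app
    instant = present_value(reward=12.0, delay_days=0) - effort_cost
    winner = "instant" if instant > long_term else "long-term"
    print(f"effort cost {effort_cost}: instant={instant:.1f}, long-term={long_term:.1f} -> {winner}")
# effort cost 8.0: instant=4.0,  long-term=10.0 -> long-term
# effort cost 0.5: instant=11.5, long-term=10.0 -> instant
```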

Morrow gives more examples of both problems in his paper. He also argues that the problems interact, suggesting that the allure of instant gratification can exacerbate collective action problems.


3. Criticisms and Conclusions
So is this an effective critique of value neutrality? Perhaps. The problems to which it alludes are certainly real, and the basic premise underlying the argument — that technology reduces the cost of certain options — is plausible (perhaps even a truism). But there is one major objection to this narrative: isn't it still human viciousness that does the damage, even in the case of collective action problems and delayed gratification?

Morrow rejects this objection by arguing that it is only right to call the human actors vicious if the preferences and choices they make are condemnable in and of themselves. He argues that the preferences that give rise to the problems he highlights are not, by themselves, morally condemnable; it is only the aggregate effect that is morally condemnable. Morality can only demand so much from us, and it is part and parcel of the human condition to be imbued with these preferences and quirks. We are not entitled to assume a population of moral and rational saints when creating new technologies, or when trying to critique their value-neutrality.

I think there is something to this, but I also think that it is much harder to draw the line between preferences and choices that are morally condemnable and those that are not. I discussed this once before when I looked at Ted Poston’s article “Social Evil”. The problem for me is that knowledge plays a crucial role in moral evaluation. If an individual fisherman knows that his actions contribute to the problem of overfishing (and if he knows about the structure of the collective action problem), it is difficult, in my view, to say that he does not deserve some moral censure if he chooses to overfish. Likewise, given what I know about human motivation and the tradeoff between instant and delayed gratification, I think I would share some of the blame if I spend my entire afternoon streaming the latest series on Netflix instead of doing something more important. That said, this has to be moderated, and a few occasional lapses could certainly be tolerated.

Finally, let me just point out that if technology is not value-neutral, it stands to reason that its non-neutrality can work in both directions. All of Morrow’s examples involve technology biasing us toward the bad. But surely technology can also bias us toward the good? Technology can reduce the costs of surveillance and monitoring, which makes it easier to enforce cooperative agreements, and prevent collective action problems (I owe this point to Miles Brundage). This may have other negative effects, but it can mitigate some problems. Similarly, technology can reduce the costs of vital goods and services (medicines, food etc.) thereby making it easier to distribute them more widely. If we don’t share all the blame for the bad effects of technology, then surely we don’t share all the credit for its good effects?