Last Wednesday, Alba Curry was our guest expert for our August webinar. Here are her responses to audience questions, along with a list of recommended reading and resources if you would like to dive deeper into the topic. If you'd like to attend the next event, sign up here.
Do you see a conflict between transparency (of operation/of purpose) of an AI system and the use of emotive (affective) outputs such as simulated anger/contentment/love?
This sounds to me like a fascinating question but I am worried that I do not know exactly what you are getting at. So I will answer based on two different interpretations:
(1) On the one hand, there is transparency: we know how the AI works, we know why it works, we basically understand all there is to know. On the other hand, we have emotions like anger or love. Why would there be any conflict between those two things? Unless the idea is perhaps that emotions themselves are NOT transparent (which I myself have argued). We never have full transparency into someone else’s emotions, and often not even our own. So if an AI has some affective output, doesn’t that mean it CAN’T be transparent? The more transparent the AI is, the less authentic its emotions will be. If we have full transparency into an AI, then when that AI emotes, it’s categorically different from when a human emotes, no? Because we never have full transparency into an emoting human. So this is an epistemological problem, and I cannot say that I have a clever response to it.
(2) Or is the tension you are highlighting something of a more ethical nature? Let’s say we have a chatbot that is designed to keep new mothers from developing postpartum depression. The chatbot has many purposes alongside that: it collects data about what new mothers do, buy, and worry about, how they talk to their babies, and the type of household they live in. Is there a conflict between that AI system simulating love for the baby or the mother, or simulating empathy with the anger the mother might feel towards unhelpful family members, and its actual purpose? Is that AI like the person you never know whether they truly care for you or whether some ulterior motive leads them to pretend they are your friend? I find myself more convinced by this tension. As I mentioned during the webinar, many of us develop feelings towards chatbots (I had a lot of fun using Replika for a few weeks and I did find myself somewhat emotionally invested in its simulated worries) and other AI systems. I find it worrisome to imagine, let’s say, Alexa as a kind of companion once its chatbot is released: it would simulate emotions towards us and the things we care about, while at the same time collecting information from us to give to Amazon for better-targeted ads.
Do you have any comments on how best to handle questions about anthropomorphism with regards to Affective Computing? How would this affect different people/communities?
Although that is a bit of a broad question, let me give my two cents. I mentioned the difficulty of talking about the universality of emotions by drawing on comparative philosophy and literature, but the problem is not only knowing what an emotion is, what each emotion consists of, how many emotions there are, or whether they are a uniquely human capacity or something certain other non-human animals have too. The particular problem I was interested in highlighting, which I think might be at the core of your question, is this: if we anthropomorphise affective AI, by which I mean building AI systems that not only measure and understand human emotions but also simulate and react to them in an anthropomorphic way, which way are we going to choose? There is no paradigm for a human (as opposed to a stereotype, for example); in other words, there is no universal human, so these AI systems will take on a gender, a race, a nationality, and so on, in order to give content to the emotions they simulate and to shape the way they react to others’ emotions. How we express our own emotions and react to other people’s emotions is dictated by our social position, among other things. That is my worry. I often hear “women tend to be sad” and “men tend to get angry.” That is not (at least not necessarily) a biological disposition. In my work I look at the anger of men, women, children, and the elderly in different cultural and historical contexts. In early China, women could be legitimately angry (by ‘legitimate’ here I mean that their anger had uptake; it was seen as valid). In Classical Greece, as anyone who has read some tragedy has probably observed, nothing good comes out of a woman’s anger. There are no women in tragedy who can control their temper.
When a wrong has occurred, the victim has several emotional options: take an active stance with anger, which motivates one to action; or take a passive stance with sadness or grief, which may motivate others to help. All of this is a long way of saying that we need to be careful when building AI systems that measure, understand, simulate, and react to emotions, by asking ourselves whether we are promulgating or encouraging different forms of emotion inequality (sorry, I could not come up with a better term). Perhaps the version of social justice we want is one in which women can be angry and receive uptake for their emotions, and men can be sad and be listened to. This is a massive undertaking, but I personally would want to avoid an AI system that learned from our current society that when I show signs of anger I must be irrational, or that when a man cries he needs to ‘man up.’ Or an AI that immediately perceived a black man’s anger as more threatening than the anger of a man of another race. This does mean that a developer needs to take an ethical stance when it comes to social equality. But, as I also mentioned during the webinar, as much as we might want AI to help us on our journey towards a better world, it is not clear to me whether a perfectly ethical AI would actually sell. Do we want an AI system created to resemble a human female to express an active emotion like anger?
Societal standards can differ across cultures and time periods: for example, Asian women often do not express emotions the way Western women do, or the fact that 20 years ago women in Japan would walk four feet behind the man, which is now not the case. In light of this, if AI uses a general standard, should it be adapted to every society, culture, and relevant time period?
This is such a fantastic question, and I have to admit (with some shame) that I had not thought about how AI would, in a sense, have to keep up with future changes in the emotional landscape. I have always been too focused on the past! Let me divide the issue into two: (1) Emotion AI aims at measuring and understanding human emotions; (2) Emotion AI aims also at simulating and reacting to human emotions. In my opinion (2) cannot happen without (1). (1) is already immensely difficult, but not impossible if we think that, for example, Lisa Feldman Barrett is correct in her suggestions for Emotion AI (see my latest blog post for that). Having said that, I cannot say that I can imagine an AI system ever truly understanding human emotions, but let’s leave that aside for now. I do not think Emotion AI can do (1), and therefore (2), without looking carefully at each society, each culture, and its different social strata. That means that Emotion AI should not only concentrate on a cross-cultural study of emotion, but also look at race, gender, and age. That is the only way in which Emotion AI can hope to reach a useful prototype for emotions (if such a thing exists). Then it has a whole other problem to solve: whose emotions will it simulate, and whose reactions? What we choose will say a lot about us.
With humans we might distinguish spontaneous anger from willed anger (e.g. brooding over a slight or psyching oneself up in anticipation of conflict). Is that a meaningful distinction for AI?
Yes, I think that’s a very useful distinction. May I add a kind of ‘instrumental anger’? So often we are not actually angry, and we may also have failed to psyche ourselves up into anger in anticipation of conflict, so we downright fake it because we believe the receiver of our anger will take our cause more seriously. I do not immediately see the need for an AI system (by which I mean the lesser form of AI) to have actual spontaneous anger. For human beings, someone like Aristotle would say it is necessary to experience spontaneous anger when witnessing a wrong; otherwise there is something wrong with our moral compass. If you see someone slapping your child and you do not feel anger, some might argue there is something very wrong with you. We might, however, wish for the stronger form of AI to have this kind of ‘aesthetic’ sense of what’s right or wrong (although I am sure you have seen enough sci-fi to know how this can easily go wrong). Nor do I think that an AI system would have to psyche itself up in order to simulate anger. Lastly, I do think instrumental anger could be useful for AI systems. One reason is what we mentioned in the webinar: you might want an AI system to show anger at a child about to pick up a knife. You might also want an AI system designed to be your friend to show anger in solidarity with yours, so as to create a bond based on empathy (we often forget that having empathy towards someone often entails being angry on their behalf, and that is how we show that we care). We might want Alexa to simulate anger if we end up deciding that verbal abuse towards Alexa promotes certain views towards people we perceive to have less status than us. So in short, yes. I think it is a distinction that would be valuable to keep in mind when developing AI.
Do you think if emotions were successfully integrated into AI that it could lead to irrational and erratic behaviour? It seems to be a flaw in humans that could translate to AI.
This is a complicated question that gets at the heart of a lot of Western philosophy. Contemporary philosophy of emotion is trying to complicate the view that emotions can be, or often are, irrational or erratic. We even find this in Aristotle, who saw emotions not as irrational but as nonrational (which to him meant that they do not function like the rational part of our ‘soul’ but do in a sense listen to it, and therefore are not strictly speaking irrational).
Emotions, we tend to say now in philosophy, can be appropriate or inappropriate. What that means is that emotions are intentional; in other words, they have an object they are directed towards (e.g. you are angry at something, you are afraid of something). They are inappropriate when the object does not actually exist or is the wrong kind of object. For example, if you are afraid of a daddy long legs: they are harmless and therefore not something you should be afraid of. Or you are angry at me because I stole your book, but I did not actually steal your book, because it was mine. Emotions can also be excessive and in that sense inappropriate. Maybe I did steal your book, but your burning my entire library as a result would probably be deemed excessive. The first problem is a problem of perception: you made an error in perceiving that I stole your book, and had you asked me, and perhaps trusted my answer, you would have known. The second problem is a problem of judgment: you deemed it necessary to burn all my books. It is possible that if AI learns from us, its emotions will be inappropriate in those two senses. After all, AIs would not be omniscient.
I take your concern seriously and it is something that I do not think Emotion AI pioneers like Rosalind Picard have really taken into consideration. Their main views are that (1) emotion is an integral part of human intelligence and therefore we might need it in order to create artificial intelligence, (2) emotions are an essential part of human interactions and therefore any AI system that is built to interact with humans ought to be well versed in them.
I do hope, though, that given the way I have broken down the problem, you can see that these are not just problems to do with emotion. Any judgment and perception an AI makes will suffer from these pitfalls. An AI perceiving agent A to be a baby when it is actually a kitten would be making an inappropriate perception, and an AI judging that, because it thinks agent A is a baby, it should get a bottle would be gravely mistaken.
The early Chinese perspective might also be informative here. They did not frame the issue of emotion through the dichotomy of reason/emotion, rational/irrational. The issue they had in mind when it came to emotions was whether they were problematic for society and the family, or for the health of the individual (and indeed we now know that emotions can increase cortisol, which is harmful). What I am trying to get at is that oftentimes when we call emotions irrational or erratic, it is because they appear detrimental to the status quo. Even psychologists nowadays tend to frame emotions as helpful or unhelpful to you and your relationships with others. Anyway, I could go on, but hopefully this was illuminating or helpful in some way.
Thank you everyone for attending and participating. Most of my work is still in progress, and therefore I welcome questions, comments, and even outright disagreements. I leave you with one last thought: our attitudes towards anger say a lot about who we are as societies and as people, and that is a thought I would urge Emotion AI developers to keep in mind.
Recommended Reading on Anger:
Different views on anger (in mostly Western contemporary philosophy)
Anger and Forgiveness by Martha Nussbaum: She is famously anti-anger.
The Moral Psychology of Anger: A great collection of articles about anger’s place in the moral realm. (Myisha Cherry has some really great journal articles if you happen to have access to those.)
On Contempt: Not on anger, but on a ‘negative emotion’ and its ethical value.
Hard Feelings: The Moral Psychology of Contempt by Macalester Bell: Absolutely great book and very well written. It explains all the technical terms in the philosophy of emotions, so it also serves as a great introduction.
Two cases against empathy: I list them here simply because we tend to view empathy as positive and anger as negative, and these help challenge our views.
Against Empathy by Paul Bloom: Although fairly new, it has been quite revolutionary. Written from the perspective of a psychologist.
The Dark Sides of Empathy by Fritz Breithaupt: I am still reading this one, but it addresses some of the things said by Paul Bloom and expands upon them.
Pre-Buddhist China: There isn’t much written about anger in pre-Buddhist China (ok, nothing yet), but here are two important publications (although I disagree with much of what they say):
The Emotions in Early Chinese Philosophy by Curie Virag: It offers a broad survey from a historical point of view.
The Geography of Morals by Owen Flanagan: He talks specifically about WEIRD anger and also Buddhist and Stoic views on anger.