An Interview with Rosalind W. Picard: Adding Emotion to Computing

December 12, 2014

Rosalind W. Picard took a risk when she started telling colleagues that technology needed to be able to use and interpret emotions.

“’Emotion’ was not a word or topic anybody serious wanted to be associated with back when I first encountered its important role,” she said. She was trying to develop computers that could see and hear like humans when she realized that emotions influence what people see and hear or how they choose language or action—and that was being left out of computing.

For being a pioneer in the field of affective computing, Professor Picard of the Massachusetts Institute of Technology Media Lab received Sigma Xi's 2014 Walston Chubb Award for Innovation. She became a Sigma Xi member in 1986.

Heather Thorstensen, Sigma Xi’s manager of communications, spoke with Picard about her work October 8 during a public Google Hangout. The conversation covered how Picard started to think about giving technology emotional abilities, what affective computing is today, and how it has applications, or potential applications, in areas such as personalized medicine, autism, epilepsy, education, and more.

There are some Internet connection issues during the Google Hangout, but they do not last for the entire video.

Transcript

Heather Thorstensen: Hi everybody, welcome to our Google Hangout. Today we’re talking with Professor Rosalind Picard. She is the director and founder of the Affective Computing Research Group at the Massachusetts Institute of Technology Media Lab. She wrote a book called Affective Computing, which first came out in 1997, that helped start a new field of research relating to the emotional ability of computers.

Professor Picard is the winner of Sigma Xi’s 2014 Walston Chubb Award for Innovation and she will accept her award and speak at Sigma Xi’s Annual Meeting, November 7, in Glendale, Arizona. You can find more information about that meeting at Sigma Xi’s website, at www.sigmaxi.org.

We’re going to be live tweeting during this Google Hangout from Sigma Xi’s Twitter feed, @SigmaXiSociety. We also have the ability for you to submit questions: if you’d like to ask a question today, you can type it in on the right side of your screen and I will pass it on.

Professor Picard, thank you for joining us on the Google Hangout today.

Rosalind Picard: Thank you, it’s a pleasure to be here.

HT: I wanted to start off by asking you to explain affective computing: what it is and what some of its potential applications are.

RP: In the beginning the applications were mostly focused on interactions that are very obviously with another entity that is social. But since then we have also developed a lot of applications that involve understanding emotion in non-social interactions, such as how emotion influences cognition, how emotion interacts with seizures ... We’re looking at emotion for people who have a difficult time understanding it or communicating it with others, people with autism or people who are non-speaking. And we’re also looking at the core role that emotion plays in health and well-being, where we understand how it influences whether you feel like taking care of yourself, motivation, attention, and also how it affects sleep and how sleep affects the emotional shaping of memories. So our technology that has enabled us to start to give computers some skills—emotional intelligence—has also given computers emotional abilities that are now useful in a very broad set of applications.

HT: I read in your book, Affective Computing, that when you started thinking about researching emotions, it’s something that you wanted to stay away from because you’re a woman in science and you didn’t want to be tied to being thought of as an emotional, female type of researcher. And I wondered what was the tipping point for you that made you think that this deserved more research attention?

RP: Absolutely. “Emotion” was not a word or topic anybody serious wanted to be associated with back when I first encountered its important role. The tipping point was actually a series of things. One of which was that I was reading about synesthesia, the phenomenon whereby some people, when they experience one perceptual input, simultaneously experience another that’s quite connected to it. For example, one man, Richard Cytowic, wrote about tasting chicken soup and saying it didn’t have enough points on it. He felt points in the palm of his hand. He felt shapes that went with tastes. And that’s a very unusual kind of synesthesia. The most common kind is colored letter synesthesia. And I was trying to build computers that worked like our perceptual brains, that could see, that could hear. And I was curious how this vastly complex process of seeing and hearing and understanding what’s going on in the world around us happened in such a small package here in our heads.

And so as I was reading about these unusual perceptual experiences and neurological studies of them, I ran into this one and was intrigued, because Richard Cytowic wrote about how enhanced perception was not happening in the cortex that we studied so much with EEG, the visual cortex or auditory cortex, but in parts of the brain that we didn’t really even think were part of intelligence. It was happening down in what he called the limbic system ... So I thought, “The limbic system, that’s the home of memory, attention, and emotion.” And memory is important and attention is important and I was not interested in emotion, but I thought, “I need to learn more about the limbic system.”

So I started digging into it more and I started to see that not only were its functions vital for intelligent perception, but the emotional part of it was influencing what we saw, what we heard, how we made decisions, how we chose language, how we chose action, everything that we were trying to do to make computers intelligent … was influenced by emotion. And we had left that out of computing completely. So I was really troubled by these findings. I thought, “Oh gosh, I don’t want to actually have to bring this up in front of my colleagues.” So I did a whole lot more reading, a whole lot more digging, until I was really convinced that it was essential. And then I started to go out with the news that people need to start looking at emotion.

HT: When you were giving your description about affective computing when we started, your Internet connection went in and out so I want you to repeat a bit about why we want computers to have emotional intelligence.

RP: When computers interact with people, it depends on what they’re doing. There may be some [instances] where they don’t need any emotion in the interaction. If it’s just sorting your files, you don’t need emotion in that. However, if it’s interacting with people face-to-face in any kind of way that involves language or interaction that could be perceived as social, then it’s got to have emotional intelligence. And the reason for that is that people expect that the system will see things like: Are things going well? Are things not going well? And learn from that. And just perceiving that one bit, is it good, is it bad, is it perceiving the way that we intended, is it not? That is vital for any learning system and it’s vital for any intelligent system to adapt. So if it can’t see if what it’s doing is pleasing or displeasing, if it can’t hear that you’re annoyed or you’re delighted, then it’s not going to be able to intelligently adapt and it’s not going to be perceived as intelligent.
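That single bit of affective feedback is, in learning terms, a reward signal. As a purely illustrative sketch (not anything from Picard's lab; the action names and the simulated user below are invented), a system that can only sense pleased versus displeased can still adapt its behavior:

```python
import random

# Illustrative only: an agent adapts using a single affect bit per interaction
# ("pleased" = 1, "displeased" = 0). Actions and the simulated user are invented.
ACTIONS = ["verbose_reply", "terse_reply"]

def simulated_user(action: str) -> int:
    # This imaginary user is pleased by terse replies about 90% of the time.
    prefers_terse = action == "terse_reply"
    return 1 if prefers_terse == (random.random() < 0.9) else 0

def adapt(rounds: int = 500, epsilon: float = 0.1) -> dict:
    wins = {a: 0 for a in ACTIONS}
    tries = {a: 0 for a in ACTIONS}
    for _ in range(rounds):
        explore = random.random() < epsilon
        action = (random.choice(ACTIONS) if explore
                  else max(ACTIONS, key=lambda a: wins[a] / tries[a] if tries[a] else 0.5))
        reward = simulated_user(action)      # the one affect bit
        tries[action] += 1
        wins[action] += reward
    return {a: round(wins[a] / max(tries[a], 1), 2) for a in ACTIONS}

print(adapt())  # the agent settles on the behavior the user likes
```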

HT: When people think about giving computers emotional intelligence, I think that they probably think about science fiction movies and the ones that can easily interact with you. But I’m sure that there are so many challenges that go into that, and I’m wondering if you can explain what kinds of challenges this field of affective computing is overcoming to make sure it is actually successful and these computers really have this emotional intelligence.

RP: Yeah, there are a lot of challenges. We have been inspired a lot by science fiction. Our students and faculty get together from time to time and watch some of the classic scifi movies, which are usually dystopic. The computers’ emotions run amok. HAL was the most emotional character in the movie 2001: [A Space Odyssey]. We were hard-pressed to find a single facial expression on any of the human actors in that whole movie; we found one. Otherwise the only emotion expressed in the movie was by HAL … It’s not enough to recognize that somebody’s face looks angry or pleased. We need to also understand it in a context and in a situation [in order to know] how to respond intelligently … When do they talk, and when do they start looking forward and you should stop talking? When do you shift from taking action A to taking action B? These are usually assessed by a variety of things that we see and hear that relate to the emotion. So we need to recognize not just the emotion but what’s going on in the situation. What are the goals of the situation? There are some applications where the goal is to really make the person angry and to see how resilient they are at staying calm and cool when these situations try to drive them crazy. There are others that will try to make the customer have fun and be happy. There are others where they’re not trying to influence emotion, they just want to understand what your experience is: were you skeptical of this information, were you delighted by this information, were you confused by this information? Now, if you’re sitting there and you are confused and your brow is furrowing and you’re not resolving it, then probably they’re going to lose you. So these are challenges that we face by situating these problems in real-world contexts and trying to understand not just what to measure but how to respond intelligently to what is being measured and communicated.

HT: We have a question from someone who is watching who asks: Are there some situations where users find attempts by computers to mimic emotions to be off-putting?

RP: Sure. Yeah, I think again we have to go to the situation. If it’s a game and it’s a mimicry game, where you can try to be the one who doesn’t laugh first or something, then that’s fine. If you are trying to do serious work and your computer is sitting there mimicking something you’re doing, that’s really irritating and annoying; that’s emotionally unintelligent for it to be doing that right then. So this goes back to the emotional intelligence. It needs to know something about the situation and the goals in order to know what emotion and related behaviors are important to do or not to do. And not doing them can be even more important than doing them.

HT: I also read in your book, Affective Computing, that perhaps by giving computers emotional ability it will enhance our own sense of humanness. But I know that some people already think that we’re losing our sense of humanness with being able to interact socially with one another because of technology. And I’m wondering what you think about that balance.

RP: Yeah, great question. Actually, we have a whole new initiative at the Media Lab that I’m co-leading. [There was a guy at a boot camp] and during the break he saw all the people gather around, and instead of interacting with each other they were all, you know, hunched over their devices in shallow breathing. He said that he was one of the main developers of iOS and he was despondent; he felt badly that he had developed this. He felt badly that something he worked so hard on to make people attracted to was causing people to no longer engage with each other face to face. And I think this is a real problem and we’re starting to recognize that when we develop technology, we need to take into account more than what’s cool or what’s possible, what’s sticky or what gets the most views or likes or whatever. We really need to think about what is human well-being and how do we build technology that serves that greater purpose, that greater well-being? We know emotion is a key part of it. Positive emotion is a part of it but being happy all the time is not required for great well-being. In fact, that would be suspicious, because a range of negative and positive emotions is important for healthy functioning. So how do we think about what is desired there? And we know that relationships are important, so one of the things that we’ve worked on is technology that helps people better understand the emotions of themselves and of others so that they can learn how to better succeed in face-to-face human relationships.

HT: And you’re also the co-founder of Affectiva and that has a tool that marketers can use to measure facial expressions through a webcam and they’re using it to see how people are responding to advertising and branding. And I’m wondering where you think that this will take the marketing industry?

RP: Let me just say a little bit about the history of that, too. The product is called Affdex; people can go online with a webcam and try it. You can do a demo where you see the results of other people, or you can do a demo where you opt in and turn on the camera. Well, you opt in and say, “I give my permission for you to turn on the camera” … It watches you watching it … We did not decide we were just going to do something for market research. We decided we were going to build a better system that could work in face-to-face interaction for people on the autism spectrum. And in order to get the technology to work really accurately, we needed more data, because machine learning accuracy goes up with more data. We couldn’t afford to pay all these people to come in and act natural, and they weren’t acting natural. So we needed to figure out how to get data online, because that’s where you can get lots of people. So by going into the marketing, we actually got customers to pay for people to go online and give us their data. If you go use this application online, you can choose whether or not you want to share your data, and if you share it, you might show up in a computer vision or machine learning publication where we’re actually using your data to build a more accurate machine learning system, in order to help not just market researchers but a lot of people out there who have difficulty reading facial expressions and need a tool that helps them do that in real time.
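Her point that “machine learning accuracy goes up with more data” is the familiar learning-curve effect. A minimal sketch of that trend, using scikit-learn’s bundled digits dataset purely as a stand-in for facial-expression data (an assumption of convenience):

```python
# Sketch of the learning-curve effect: test accuracy rises as the training
# set grows. The digits dataset stands in for facial-expression data here.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=2000).fit(X_train[:n], y_train[:n])
    print(f"{n:5d} training examples -> test accuracy {model.score(X_test, y_test):.3f}")
```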

HT: So by looking at the webcam on their screen, market researchers are taking that and they’re saying “that person looks like they’re not interested” and they’re kind of making an inference about what their facial expressions mean?

RP: Right. They’re finding out if their ads are boring and they should stop running them. They’re finding out if their jokes are not funny and maybe they should re-edit the video. Teachers are finding out if they’re boring and people are completely going off screen or off task. We’re also finding out things. We found that confusion actually is the emotion that most predicts learning. If the student shows a little bit of confusion while they’re trying to engage with the material, then usually that’s associated with them showing more learning gains. Now, not confusion the whole time, because then it turns into frustration and the person throwing up their hands and going off task, but confusion that stays with engagement and resolves and ideally turns into delight, like, “Oh, I get it.” So we can read those things on the face, and if you’re doing an online lecture or some promo video or whatever it is, you can get that feedback and find out if what you thought would work is working.
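The distinction she draws, confusion that resolves versus confusion that persists into frustration, suggests a simple trajectory rule. The sketch below is hypothetical: it assumes a per-second confusion probability from some expression classifier, and the threshold and time window are invented, not Affectiva's values:

```python
# Hypothetical rule separating productive confusion from frustration, given a
# per-second confusion probability from an expression classifier. Thresholds
# are invented for illustration.
def label_session(confusion, threshold=0.5, max_unresolved_s=120):
    longest = run = 0
    for p in confusion:                        # longest confused stretch
        run = run + 1 if p >= threshold else 0
        longest = max(longest, run)
    if longest == 0:
        return "never confused (possibly disengaged)"
    if longest <= max_unresolved_s:
        return "confusion resolved: consistent with learning gains"
    return "prolonged confusion: likely frustration, worth intervening"

print(label_session([0.1] * 30 + [0.7] * 40 + [0.1] * 30))  # resolves
print(label_session([0.8] * 300))                           # stuck
```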

HT: We have another question from an audience member who asks: Did your initial efforts to research emotion as it relates to artificial intelligence meet with resistance? And if so, was there a moment later on when you realized that other researchers were beginning to catch on to its importance?

RP: Yeah, oh yeah—definitely met with resistance. Surprisingly, there were some people who were enormously supportive, like Peter Hart, who was one of the founding fathers of pattern recognition, and it meant a lot for somebody so respected and serious to be supportive. There were other people; I remember going to one of the computer vision conferences that I go to, and they rejected my paper that year. I said, “Well, it’s pattern recognition of emotion” and they’re like, “Well, there’s not enough computer vision in there,” and my nose was a little out of joint that I was rejected. But honestly, I learned people are uncomfortable with emotion. They didn’t think that it could be serious science. And they really didn’t know anything about it. And I heard them say stuff like, “She used to work on respectable stuff. What happened?” Well, later the exact same people came to me, had started working on it, and were asking for my data. So they just needed time to learn about it, you know? They really didn’t understand what they were rejecting, and once they realized that this is a hard problem, there’s real science, it’s really important, you can handle it as rigorously as any other problem that they tackled, and the complexity is really impressive if you like hard problems, then they started to treat it with respect. But I did have to have a real thick skin in the beginning. There was a lot of rejection.

HT: For researchers who might be interested in the crowdsourcing technique that you’ve used, inviting the public to come online and do the facial recognition, do you have any advice or tips for those thinking about collecting data in a similar way?

RP: I’m a big fan of crowdsourcing. In fact, one of my students just presented really cool work called Panoply, Rob Morris’s doctoral work. He actually crowdsourced cognitive behavioral therapy by taking negative thoughts people have and using human crowdworkers to help people rethink their negative thoughts and convert them to a more positive frame. The psychiatrist would say, “Gee, you can’t use crowds for that, they’re not very good at that.” So he figured out a way so people can learn it, learn how to get better, give feedback, and get kudos if they did well, and it was a huge, huge success, very exciting. That’s a very creative way to think about the crowd.

We’ve also used the crowd, as I mentioned, to get facial expressions and get feedback. I think there are huge possibilities these days thanks to very pioneering [opportunities] like Amazon Mechanical Turk and others that help us get that. Now that said, there are a lot of tips to do it well. I recommend Rob Morris’s PhD thesis, fresh off the press here at MIT, and he’ll hopefully be sending out some papers soon describing more about the findings. There are really clever ways to get the crowd to feel good and want to do the right thing and be very altruistic and helpful and honest and ethical, and there are ways where some crowdworkers just want to make a buck and they’ll just punch whatever button they have to to get through the screen and you get garbage data. So there are ways to bias them to get really great results and we worked hard on that, so take a look at that, too, if you want to know more about our work.

HT: I also wanted to talk a little bit about Empatica, of which you are the co-founder and chief scientist. It sells medical-quality wearable sensors that monitor bodily signals like heart rate, skin [conductance], and stress, and I wondered what are the key uses for this technology? Is that something you’re mostly marketing to other researchers at this point?

RP: Yeah, right now today researchers can go online and buy sensors from Empatica, and that is a product right now, this is the one I’m wearing right now. It’s a product designed for researchers and for some sophisticated users to learn about themselves. It’s not a consumer-facing product today. That said, the data that we measure is the skin conductance, which is the electrical changes measured non-invasively from the surface of the skin that go up when you sweat, but they also go up even when you’re not feeling sweaty, when you just have something exciting happening. It’s innervated by the sympathetic nervous system, which drives what is also known as the fight-or-flight response. So things that make you afraid or that are threatening, or you’ve got to give a big talk, your boss is coming in, you’ve got an important meeting or there’s a cute guy or girl coming by, or whatever it is—things that wake you up and make you a little bit excited, positive or negative, can make this response go up. Anticipation tends to make it go up and uncertainty makes it go up. Kinds of stress that involve uncertainty and high-stakes situations make it go up. Physical activity that makes you sweat also makes it go up. So we provide that information.
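A common first step with the skin conductance signal she describes is to count the transient rises (skin conductance responses) riding on top of the slow baseline. Here is a minimal sketch using NumPy and SciPy; the 4 Hz sampling rate, the 8-second baseline window, and the amplitude floor are all assumptions for illustration, not Empatica's actual processing:

```python
# Minimal sketch: count skin conductance responses (SCRs) in an EDA trace.
# Sampling rate, baseline window, and amplitude floor are illustrative guesses.
import numpy as np
from scipy.signal import find_peaks

def count_scrs(eda_us, fs=4.0):
    win = int(8 * fs)                                   # 8 s moving average
    tonic = np.convolve(eda_us, np.ones(win) / win, mode="same")
    phasic = eda_us - tonic                             # fast component
    peaks, _ = find_peaks(phasic, height=0.05, distance=int(fs))
    return len(peaks)

# Synthetic 10-minute trace: slow drift plus three bursts of arousal.
t = np.arange(0, 600, 0.25)
eda = 2 + 0.001 * t + sum(0.5 * np.exp(-((t - c) ** 2) / 20) for c in (100, 300, 450))
print(count_scrs(eda), "responses detected")
```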

We also provide PPG, photoplethysmogram, … [for a] signal that gives you heart rate changes and how much constriction there is in the blood vessels in the extremities related to that blood flow. We also measure activity, which is what you mostly get in the commercial wearables these days, and that gives you information about sleep and steps. And we measure temperature; that’s important for context. I tend to use that a lot for telling where I was during my day. Temperature almost always changes when you leave one room and go into another, if you get into a taxi, you go outdoors, you get in the car. I might know that my meeting started at 12:00 and ran to 1:00, but I see the temperature didn’t change until 12:10, which tells me I was 10 minutes late for the meeting, and that helps show the truth about where I was, when.
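Her room-change trick amounts to simple change-point flagging on the temperature channel. A hypothetical sketch; the sampling rate, minute averaging, and the 0.5 °C jump threshold are invented for illustration:

```python
# Hypothetical sketch of "temperature tells me when I changed rooms": flag
# minutes where the average temperature jumps by more than a threshold.
import numpy as np

def transitions(temp_c, samples_per_min=4, jump_c=0.5):
    n = len(temp_c) // samples_per_min * samples_per_min
    per_min = np.asarray(temp_c[:n]).reshape(-1, samples_per_min).mean(axis=1)
    return [i for i in range(1, len(per_min))
            if abs(per_min[i] - per_min[i - 1]) > jump_c]  # minute indices

# A 22 C meeting room until minute 70, then a warmer 24.5 C hallway.
trace = np.r_[np.full(70 * 4, 22.0), np.full(50 * 4, 24.5)]
print(transitions(trace))  # -> [70]
```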

So we take all that data and we use machine learning pattern analysis to help interpret it in different situations with a focus today on several medical situations, especially epilepsy, depression, PTSD, and sleep.

HT: Can you talk about how those things help with something like epilepsy? How would that connect to epilepsy?

RP: One of the surprises in our work came one day when we had a version that was in a sweatband, and a student came to me and said, “Hey, my little brother has autism and he’s non-speaking. Could I borrow one of your skin conductance sensors over Christmas break to see what’s stressing him out?” Because we had learned from working with a bunch of kids on the autism spectrum that outwardly they might look very calm, even sort of in their own world, while inwardly their stress signals could be absolutely through the roof, and so they were being misunderstood. People would say, “He looks like he needs more stimulation. Let’s pep him up.” But inwardly, we’re reading how his heart rate is through the roof and his skin conductance is through the roof, and that kid needs to be left alone and needs some help calming down. So the student came to me and said, “Could I borrow this and try to figure out what’s really going on with my little brother?” And I said, “Sure, in fact here, take two, and take a soldering iron, it’s probably going to break,” because it was one of our hand-built lab versions back then.

So he puts two sensors on his little brother, one on each wrist, and I’m back in my office here at MIT looking at the data like, “Hmm, that day looks typical,” “Hmm, that day looks typical,” and I go to the next day and I’m like, “Wow!” It looked like a sensor was broken; the signal was so big. In fact, it looked like both sensors were broken, because one side was huge and the other side hardly had a little blip, and I thought, what’s going on here? And furthermore, the data looked fine before and after that weird incident.

And so I’m an electrical engineer by training and I started running a bunch of tests with the sensors trying to figure out what could have caused this and finally I could not explain it with anything that was a bug in the sensor and I called up the student at home and I’m like, “Hey how’s your Christmas break going? Sorry to bug you at home. Any idea what happened to your little brother?” and I gave him the time and the date and I thought there’s no chance he’s going to know what happened. Well, he says, “I’ll check the diary.” So I’m just sitting here kind of like, “Dear God, what are the odds that he wrote something down?” I’m praying that there will be something. And he comes back and says the date and the time and I said “yes” and he said, “That was 20 minutes before he had a grand mal seizure.” And I thought, “Wow, that’s really interesting.”

Well, then [there were studies in] Boston where they had simultaneous electroencephalogram recordings and our sensors and heart rate and ECG. So EEG [Picard points to her head], ECG [she points to her heart], EDA [she points to her wrist], electrodermal activity. And usually the EDA was not firing 20 minutes before a seizure; usually it was firing when the seizures started. As for the student at home: most diaries probably have inaccurate timing, and we’ll never know if his brother’s response really was 20 minutes before, because we didn’t have an EEG on him.

But we are finding that we have a response that is a very good seizure detector, and so we built an automated version of the software that could run on a wearable device, and Empatica is planning to come out with a device that will allow people to get their seizures detected. If you have a response in advance, it will show you that; we don’t expect most people will with what we currently detect. However, some of the other information we’re getting in there right now may be able to give some people an advance warning. There’s another piece of information it gives that’s very exciting. We now know what regions of the brain are generating the skin conductance … and when those regions are activated they can turn off breathing. So we find, for the most serious and dangerous seizures, a bigger response on the wrist, and so we can now alert people to the fact that there’s not only a seizure going on, but you should check on this person, and this is a very mild seizure or this is a potentially really dangerous one, and you need to take the following series of steps to give them a better chance to survive it.
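Purely as an illustration of the pattern she recounts, a huge rise on one wrist while the other stays near baseline, a toy rule might look like the following. Empatica's real detector is a validated model combining multiple channels and is not described in this interview; the baseline window and 10x surge factor below are invented placeholders:

```python
# Toy illustration only: flag a large, one-sided EDA surge of the kind Picard
# describes. The baseline window and 10x surge factor are invented; real
# seizure detection uses validated models, not this rule.
import numpy as np

def asymmetric_surge(left, right, baseline_s=300, fs=4.0, surge_factor=10.0):
    n = int(baseline_s * fs)
    lb, rb = left[:n].mean(), right[:n].mean()        # resting baselines
    l_hi = left[n:].max() > surge_factor * max(lb, 1e-6)
    r_hi = right[n:].max() > surge_factor * max(rb, 1e-6)
    return l_hi != r_hi      # huge rise on one wrist, near-flat on the other

rng = np.random.default_rng(0)
left = 0.5 + 0.01 * rng.standard_normal(2400)         # 10 min at 4 Hz
right = 0.5 + 0.01 * rng.standard_normal(2400)
left[1400:1600] += 8.0                                # simulated unilateral surge
print(asymmetric_surge(left, right))                  # -> True
```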

HT: Somebody watching is asking: Are the medical profession and caregiving areas key impact areas for your research? It sounds like you’re working with [the healthcare industry]. Are those professions really becoming interested in what you’re doing, and what kind of applications can you imagine for them?

RP: Yes. We have huge interest from medical areas and it’s wonderful to see. I think the medical areas realize that so much of medicine is based upon a very small piece of your life when you show up in the doctor’s office and you say, “Oh, yeah, I have this pain that comes and goes” versus could we measure continuously in daily life what it’s like for you? How much is that pain happening? How is it affecting your sleep? How is your messed up sleep affecting your perception of pain? How is your sleep, your activity, your stress affecting your likelihood of becoming depressed? Could we prevent depression? “Your activity is declining and your sleep is really messed up and your stress is really out of whack.” … So instead of waiting until you’re really sick, could you measure things that prevent it? And we don’t know the answer to those big, exciting questions but we now have emotion-sensing technology to measure pieces. So we can start to measure the impact and association of these signs and so forth. People will [be asked for] permission to share actions with other people … So we’re honoring people’s feelings, respecting their privacy, letting them choose what they want to share or not.

HT: Do you think that people are becoming more comfortable with this kind of technology now that there are consumer products out there that are already tracking some things that people do as far as their activity?

RP: Yeah, definitely. I think people are surprisingly hopping on the bandwagon, so to speak, with quantified self and starting to learn about themselves. That said, there are still some hard problems: most people think it’s interesting for about a week or two and then they’re bored, right? It’s like exercise equipment. It’s not really designed to be used long-term; I know from the Sears repairman who’s come to fix ours many times. He’s like, “You guys actually use this every day. It wasn’t designed to actually be used as much as you use it.” So we have to think about how we add real value to people’s lives with this, right? And there is a goal of 10,000 steps by the end of the day or whatever, but there’s a lot more to it than that. It’s really valuable for people to see—“Holy cow, look at my commute and how stressed out I got. Do I really want to waste my heart health on a commute? Maybe I should rethink the commute.” Or other aspects of your life that the data can now show you objectively are affecting your heart, your respiration, your autonomic stress. We’re not telling you what to do. We’re still learning what to do, what even makes sense. But together with the information and various medical experts, we’re trying to figure out how we truly help each person figure out what works for them. And it’s wonderfully and fundamentally different from the way medicine has been in the past, where you just take an average from a group of people and throw out all the outliers. “Here’s sort of the way they are and here’s what we recommend for them.” And I don’t know about you, but I find that pretty unsatisfying. What if I’m the outlier that got thrown out, or what if there are actually five clumps up there and what they’re saying works for the average doesn’t actually work for four of the five clumps? What we all want is what helps you. What are the optimal settings for you? And how do they change over time as your demands in life change? So emotionally smart technology helps us get a better handle on what that is.

HT: I know that you’re wearing a sensor today and I wonder what kind of things you’ve learned about yourself from wearing that.

RP: I’m wearing our E3 today for researchers measuring my skin conductance, blood volume pulse, and surrounding temperature. I’ve learned a lot. Gee, I’m not sure I want to tell these things [laughs]. I’ve learned some things that stress me out quite a bit that I hadn’t realized. They turn out to be the more personal things, like things that have to do with my family can cause a much bigger response than giving a keynote in front of a hundred CEOs. Something that is personally significant that my child says can cause a much bigger response than some big famous person coming through the lab. People respond to what matters most to them. It can be very individual, what affects you. It can be fun to share with others and see what makes their data go up or not. I have one example I’ve shared in the past where my peak skin conductance response over a day at Six Flags was not riding the biggest, highest roller coaster, although that was a huge peak and I love roller coasters. It was not riding this horrible ride that I don’t want to talk about that makes me sick. Why do kids want to go on this ride? It was just awful. One of these things that spins you around and swings you and I had to do it twice to be a good mom. That had a big peak that just kept going for a while. But the biggest peak of the day actually was in the morning getting out the door for fear the whole event was going to be called off and some other things that were threatening it. To me it was more important that the event happen on that particular day, because it had been called off once before and it was a special meaningful event for my son’s birthday party, than my body being tossed around by some extreme ride.

Similarly, we’ve looked at games; a lot of emotions happen in games. People are very facially expressive. Their physiology’s going up, up, up. Their character is charging and the enemy’s coming and the emotions are building; they’re clenching their jaw and they’re doing all this stuff. Emotions are high, and their character gets killed, and more emotion. But the biggest emotions we find are not any of those. They’re when the controllers stop working. It’s when something in the real world that’s real, that really matters to them, goes wrong. The make-believe world can be very emotional, but the real world usually trumps it. Our bodies know what’s real and what matters, and it’s very individual, but the signals we measure show much bigger responses to things that really matter to people. We see it. In the lab, we show pictures of emotional things and you get little peaks, but then this person leaves the lab and their boyfriend calls them and it’s like, ahh, real emotion. And that’s where we see that the real world matters the most.

HT: Someone who’s watching has asked: It sounds like people in your lab might come from a lot of research backgrounds. Can you comment on the type of research environment that seems to work best for your particular research?

RP: Yes, we have people from lots of backgrounds. I’ve had people from computer science, psychology, neuroscience, human-computer interaction, English, physics, mechanical engineering, electrical engineering, lots and lots of different fields. People who are MDs have applied to be part of our group. We usually admit people who don’t already have a doctorate; we want people who start out a little more green. But I’d say the things that are most valuable are curiosity, passion, hard work, wanting to learn—a real hunger to learn—and an open-mindedness to whatever the data are going to say. Maybe you really don’t want the data to say this, but they say this. You have to go where the data take you. And be willing to learn. I’m constantly looking for people who can be humble and willing to set aside their pet theory and learn what we’re going to discover, because it’s really about discovering, finding out something you don’t already know. So lots of backgrounds, lots of open minds, a great spirit of working together. People need to listen to each other and learn from each other and share ideas and work as a team, and that way we get a lot further.

HT: Another person has asked: Could your current research with crowdsourcing to fix negative emotions be used to combat online trolling?

RP: Interesting, great idea! Maybe. Happy to talk more about that if you have ideas. Yeah, mail us. The person who would probably give you the fastest response on that is my student Rob Morris. You can get his info on our Affective Computing web pages and our papers.

HT: I wonder what you see as the future of affective computing and where you think the field is heading?

RP: Yeah, it’s interesting. It has gone from kind of a crazy idea that people were embarrassed that I was doing to now having its own IEEE journal, that’s the Institute of Electrical and Electronics Engineers. It has an international society, it has an international conference, it has a lot of momentum now, and it’s going in a lot of interesting areas. I hope that it will go into a lot of areas. We already see it in marketing. We’re seeing it take off in medicine. We’re seeing it in education, we’re seeing it in human–computer interaction. We’re seeing it going into automobiles and into different service uses, you know, helping people learn how to serve people better, and doing a lot with autism and human interaction as well. What I hope is that wherever it goes, it will remember that the reason it was formed was to show respect for human feelings, to recognize that they’re valuable. This is a very important part of being human. Never just treat emotions like numbers, and never treat people like numbers, but recognize that this is a really important part of who we are, and always, always, however the technology is used, show respect for those feelings that people have.

HT: There seem to be so many different parts of affective computing, and so many different applications and uses for it. I wonder which is your favorite, and what you find most interesting and most enjoy working on.

RP: One of my absolute favorites has been working with people on the autism spectrum. They’re fascinating. They’re wonderful. They’re so interesting and I love how their minds work. And we learn so much from interacting with them. Plus, they’re just so honest and forthright, in my experience. [One said,] “Roz, I think affective computing is a bad idea.” And I said, “Really? Tell me more. Why do you think it’s a bad idea?” and she said, “Well, you know, I don’t like looking at faces. Faces and eyeballs freak me out. And if you’re going to make computers more emotional, more like people, then I’m not going to like them as much. I want my computer to be like a computer and not like a person.” And I’m like, “Well that’s really important feedback.”

We’re not trying to make computers look more like people. They don’t have to look more like people. In fact, we could set up a computer–human interaction where you don’t have to look at eyeballs or a face, but if I want to see one, I could. In fact, we could have a face-to-face interaction, mediated by computer, where you see the parts you want to see and I see the parts I want to see, and we don’t have to see what we would see in a real face-to-face situation; we could tune it to our own liking, so the computer could start to be smart about making that interaction easier. It could block my eyes so you don’t have to look at eyeballs. It could allow me to read stuff. Maybe you don’t send facial expressions well, some people don’t know how to make facial expressions, and if you wanted to learn how, it could help you—it could suggest them, it could overlay them, it could make you look better than you actually do, if you wanted to, and have yourself bootstrap a little bit and learn that. These are things that are possible. Now, that doesn’t make them all desirable. There’s what we can build and there’s what we should build. So we need to think about which things are actually good to build in which situations. But these things are all possible now.

We should not just think that everything that we could do should be inflicted on everybody. We are staunchly opt-in. We would not want to use any of this stuff in ways that people didn’t opt in to, ask for, and find benefit from. So these are all things that I feel passionate about and excited to be a part of. And when they can also help people, like when the person with autism comes back to us and says, “This is really cool, this really helps me, I’m better understood now, now they believe me when I say that I really felt this,” it’s kind of sad they didn’t believe them before when they just said it, but if our data helps them really believe it now, then great. And now the person has more credibility, and they should have had that credibility. We feel good that our technology is helping people be better understood, and so autism is one of my favorite applications so far.

HT: Another person watching says: Thanks for talking about your work on epilepsy. It sounds as if intelligent devices are helping to complete a communication circle between physicians and patients. As these devices improve, how do you envision life might change for the chronically ill?

RP: I would love to see the chronically ill be better understood in terms of their emotional experience and what’s influencing it. We’ve actually done work on measuring pain, and it’s still a huge problem area. We don’t have many easy solutions. But we do suspect that stress makes it worse. Not sleeping well makes it worse. Worse pain leads to worse sleep, and there’s kind of a vicious cycle. To an extent, we can help people pinpoint and better understand what’s actually going on, and especially help non-speaking people be better understood. Many of us think we understand our feelings, but then when we see the data it’s like, “Oh wow, yeah, I guess I wasn’t really paying attention to that.” It’s kind of these eye-opening moments of truth with the data that show you, “Gee, I thought I was exercising 30 minutes, but my data shows that it was actually 30 minutes punctuated by six breaks, maybe only a total of 15 minutes.” So we start to get the facts, and some people are uncomfortable with that.

Again, we’re opt-in, we’re not going to inflict them on anybody. But if you want them, you have them and they make it easy to share what’s going on with the doctor or a spouse or a friend who can start to see that you’re really trying, you really are suffering, or you really are having rotten sleep and stress really is a factor and maybe you don’t know what to do about it, or maybe you do. But you can start to make real progress.

I remember one woman: before an event she was pacing, and when she paced, her skin conductance response dropped, but she didn’t know this at the time. And her friend sitting nearby said, “Stop pacing! You’re driving me crazy. That’s not helping.” So she stopped pacing, believing him rather than trusting her own body. She was on the autism spectrum, and she started flapping and rocking, these kinds of repetitive movements that were her backup to try to comfort herself. Then she goes off to this event where she’s giving a speech. It was very stressful for her, and there was a huge skin conductance peak, and then afterwards we see it going down. Afterwards, she looks at her data and shares it with her male friend. And he looks at the timestamp, and he looks at it going down and then going up and her big peak during the talk, and he says, “I’m not going to tell you to stop pacing anymore.” The next morning I saw the two of them before another event, walking back and forth. Her data showed that the pacing was associated with her calming herself. And when he saw that, he knew to back off and let her do what helped her.

HT: That’s so interesting because we’re talking about affective computing as computers interacting better with people but it sounds like this is technology that helps people interact better with other people.

RP: Yes, ultimately it’s about technology that helps us better understand and communicate emotion, whether it’s in a human–computer interaction or a human–human interaction or a human–human mediated by a computer. If we can use the technology to help us better understand that emotion and better communicate it and be better understood then that’s a great value.

HT: We have another question that asks: Does your research have potential applications for professions where workers might suffer fatigue from repetitive activities, like air traffic controllers?

RP: Interesting. Gosh, you reminded me of a study we did a long time ago, with Carson Reynolds and Jack Dennerlein of the Harvard School of Public Health: a pressure mouse that measured how hard you squeezed it when you were more stressed. Gee, I probably have this mouse in my office here. Yes, I do. [Gets up to get the mouse.] Here was our original pressure mouse. When you look at it, it has all these little pressure sensors on it, and when you were more stressed and you squeezed it, it could make what you were drawing, or the line trace, thicker and redder. And we found that when we did things that irritated people online, they squeezed the mouse harder, like when we used pull-down menus with bad usability, or told them they had to use some bad formatting, like separating ages with commas or whatever. And Jack Dennerlein, who studies repetitive stress injuries and wrist injuries, what we thought was from too much computer use, found that the bigger factor was not how much repetition was in your movement, but how stressed you were. And indeed, when people were stressed and they squeezed the mouse harder, Jack, through his implanted electromyogram sensor, measured that the muscles that were most activated by the stress were the same muscles that caused the carpal tunnel and the RSI [repetitive stress injury]-type problems. This suggested that relaxing and causing less frustration to people on the computer might benefit those wrist problems a lot of people have using computers. So that was a nice example of how affective technology could give a better insight, leading us to treat not how many times you’re typing or moving, but the stress that might be exacerbating the condition instead.
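As a hypothetical sketch of the mapping she describes, harder squeeze to a thicker, redder trace, where the raw pressure range and the style encoding are invented for illustration:

```python
# Hypothetical sketch of the pressure-mouse feedback Picard describes:
# harder squeeze -> thicker, redder line. The 0..1023 raw range is assumed.
def stroke_style(pressure, p_max=1023):
    level = min(max(pressure / p_max, 0.0), 1.0)   # normalize squeeze force
    width_px = 1.0 + 9.0 * level                   # 1 px relaxed, 10 px squeezed
    red = int(255 * level)
    return width_px, f"#{red:02x}0000"             # black fading to pure red

for p in (0, 300, 1023):
    print(p, stroke_style(p))
```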

There seem to be a cognitive load issue, an overload issue, and a fatigue issue, and we’ve done some measurements of things that change and go up with cognitive load and go down with disengagement, although we have not recently looked specifically at air traffic controllers. But I know there has been some work looking at their facial expressions and signs of interest or engagement or backing off, looking fatigued and disengaged, and that is possible today with technology out of our lab.

HT: Another question is: Your research is really fascinating. I can imagine a lot of future researchers being inspired by your work. Who inspired you early in your career?

RP: Just about everybody who’s done really great work. I don’t have a couple key mentors. I wish I did. I think, being a woman, it has been kind of hard to find them. I have been very inspired by several of the people in the MIT Media Lab who have been real pioneers, showing me that it was OK to do things that made me uncomfortable, that it was OK to take risks.

I remember Jerry Wiesner, who our building was named after. He was one of the former presidents of MIT and advisor to, I think, five presidents of the United States. I invited him out to lunch one day with some trepidation, because we were on a project together, at least he was the big name on the project. I was the little junior, unknown person. And I said, “We’re on this project together, can we go out to eat and you give me some advice?” And the advice he gave me has been so helpful. He said the most important thing to do was to take risks, was to really take risks. And I realized a lot of what I was doing was stuff that I knew how to do, I knew it could succeed, maybe I didn’t know exactly how to solve it or what the outcome would be but I was on solid ground, I was pretty sure it would succeed. When affective stuff came along I thought, “Oh golly, this is a huge risk. This is like, I could ruin my whole career being associated with this.” So I had Jerry’s words fresh in my mind … I was trying to learn about this emotion stuff and see if it actually led to anything valuable. It was a big risk. I felt very inspired by what he said and inspired by the example of so many people in the Media Lab who were also taking risks and weren’t afraid of being laughed at if they did something that didn’t work—just curious, passionate, trying to make the world a better place and learning whatever it took to move things in that direction. 

Excerpts from this interview were published in the Sigma Xi Today section of the January-February 2015 issue of American Scientist.


More About Sigma Xi: Sigma Xi, The Scientific Research Honor Society is the world’s largest multidisciplinary honor society for scientists and engineers. Its mission is to enhance the health of the research enterprise, foster integrity in science and engineering, and promote the public understanding of science for the purpose of improving the human condition. Sigma Xi chapters can be found at colleges and universities, government laboratories, and industry research centers around the world. More than 200 Nobel Prize winners have been members. The Society is based in Research Triangle Park, North Carolina. www.sigmaxi.org. On Twitter: @SigmaXiSociety
