In this episode Byron and Alessandro discuss AI, social signals, and the implications of a computer that can read and respond to emotions.
Alessandro Vinciarelli has a Ph.D. in applied mathematics from the University of Bern and is currently a professor in the School of Computing Science at the University of Glasgow.
Byron Reese: This is Voices in AI brought to you by GigaOm. I'm Byron Reese. Today our guest is Alessandro Vinciarelli. He is a full professor at the University of Glasgow. He holds a Ph.D. in applied mathematics from the University of Bern. Welcome to the show, Alessandro.
Alessandro Vinciarelli: Welcome. Good morning.
Tell me a little bit about the kind of work you do in artificial intelligence.
I work in a particular domain called social signal processing, which is the branch of artificial intelligence that deals with social and psychological phenomena. We can think of the goal of this particular part of the domain as trying to read the minds of people, and through this to interact with people in the same way as people do with one another.
That is like picking up on the subtle social cues that people naturally display, and teaching machines to do that?
Exactly. At the core of this domain there is what we call social signals: the nonverbal behavioral cues that people naturally exchange during their social interactions. We talk here about, for example, facial expressions, spontaneous gestures, posture, and, in a broad sense, the way of speaking – not what people say, but how they say it.
The core idea is that, basically, we can see facial expressions with our eyes and hear the way people speak with our ears… and so it is also possible to sense these nonverbal behavioral cues with common sensors – cameras, microphones, and so on. Through automatic analysis of these signals and the application of artificial intelligence approaches, we can map the data we extract from images, audio recordings and so on into social cues and their meaning for the people involved in an interaction.
I guess implicit in that is an assumption that there's a commonness of social cues across the whole human race? Is that the case?
Yes. Let's say social signals are the point where nature meets nurture. What does that mean? It means that, in the end, it's something intimately related to our body, to our evolution, to our very natural being. In this sense, we all have at our disposal the same expressive means: we all have the same way of speaking, the same voice, the same phonetic apparatus. The face is the same for everybody; we have the same disposition of muscles with which to produce a facial expression. The body is the same for everybody. So, from the way we talk to our bodies, it is the same for all people around the world.
However, at the same time as we are a part of society, part of a context, we somewhat learn from the others to express specific meaning, like for example a friendly attitude or a hostile attitude or happiness and so on, in a way that somewhat matches the others.
To give an example of how this can work: when I moved to the U.K. – I'm originally from Italy – and started to teach at this university, a teaching inspector came to see me and told me, “Well, Alessandro, you have to move your arms a little bit less, because you sound very aggressive. You look very aggressive to the students.” You see, in Italy it is quite normal to move the hands a lot, especially when we communicate in front of an audience. Here in the U.K., however – and everybody around the world uses their arms – I have to do it in a more moderate way, in a more, let's say, British way, in order not to sound aggressive. So, you see, gestures communicate all over the world. However, the accepted intensity changes from one place to the other.
What are some of the practical applications of what you're working on?
Well, it is quite an exciting time for the community working on these types of topics. If we look at the history of this particular branch of artificial intelligence, we can see that roughly the early 2000s were the very pioneering years. Then the community established itself, more or less between the late 2000s and three or four years ago, when the technology started to work pretty well. And now we are at the point where we start seeing applications of these technologies – initially developed at the research level, in the laboratories – in the real world.
To give an idea, think of today's personal assistants that can understand not only what we say and what we ask, but also how we express our request. Think of the many animated characters that people can interact with – virtual agents, social robots and so on. They are slowly entering into reality and interacting with people like people do – through gestures, through facial expressions and so on. We see more and more companies that are involved and active in these types of domains. For example, we have systems that manage to recognize the emotions of people through sensors that can be worn like a watch on the wrist.
We have very interesting systems. I collaborate in particular with a company called Neurodata Lab that analyzes the content of multimedia material, trying to get an idea of its emotional content. That can be useful in any type of service around video on demand. There is a major push toward more human computer interfaces – or, more generally, human/machine interfaces – that can figure out how we feel in order to intervene and interact appropriately with us. These are a few major examples.
So, there's voice, which I guess you could use over a telephone to determine some emotional state. And there's facial expressions. And there are other physical expressions. Are there other categories beyond those three that bifurcate or break up the world when you're thinking of different kinds of signals?
Yes, somewhat. The very fact that we are alive and have a body somewhat forces us to have nonverbal behavioral cues, as they are called – to communicate through our body. Even if you try not to communicate, that itself becomes somewhat of a cue, a form of communication. And there are so many nonverbal behavioral cues that psychologists group them into five fundamental classes.
One is whatever happens with the head. Facial expressions, we've mentioned, but there are also movements of the head – shaking, nodding and so on. Then we have posture. Right now we are talking into a microphone, but, for example, when you talk to people, you tend to face them. You can talk to them without facing them, but the impression would be totally different.
Then we have gestures. When we talk about gestures, we talk about the spontaneous movements we make – not the OK gesture with the thumb, not pointing to something; those have a pretty specific meaning. For example, self-touching typically communicates some kind of discomfort. Rather, gestures are the spontaneous movements we make when we speak: from a cognitive point of view, speaking and gesturing form a single bimodal unit, so they get lumped together.
Then we have the way of speaking, as I mentioned – not what we say, but how we say it: the sound of the voice, and so on. Then there is appearance, everything we can do in order to change our appearance. So, for example, the attractiveness of the person, but also the kind of clothes you wear, the type of ornaments you have, and so on.
And the last one is the organization of space. For example, in a company, the more important you are, the bigger your office is. So space, from that point of view, communicates a form of social verticality. Similarly, we modulate our distances with respect to other people not only in physical terms but also in social terms: the closer a person is to us from a social point of view, the closer we let them come from a physical point of view.
So, these are the five wide categories of social signals that psychologists fundamentally recognize as the most important.
Well, as you go through them, I guess I can see how AI would be used. They're all forms of data that could be measured. So, presumably you can train an artificial intelligence on them.
That is exactly the core idea of the domain and of the application of artificial intelligence to these types of problems. The point is that to communicate with others, to interact with others, we have to manifest our inner state through our behavior – through what we do – because we cannot imagine communicating something that is not observable. Whatever is observable, meaning accessible to our senses, is also accessible to artificial sensors. Once you can measure, once you can extract data about something, that is where artificial intelligence comes into play. At the point where you can extract data, and the data can be automatically analyzed, you can automatically infer information about the social and psychological phenomena taking place from the data you managed to capture.
As I sit here taking it in while you explain it, I can think of 100 positive uses of this technology and 100 abuses of it. So, just starting with the positive ones, what are some additional use cases that you think about? Say I'm on tech support, and the AI can tell I'm getting really frustrated, so it elevates me to another level. Would that be an acceptable use?
Yes, definitely. Let's say the important thing is that there is a common ground with the use of artificial intelligence: as you deal with data about people, you have to properly protect that data. So, for example, the idea of an application that recognizes your level of stress or frustration – as long as it benefits the person, that can be OK. But if it happens, for example, in a company, it can also become a way to monitor the performance of the people. So, there is always a little bit of a balance between having applications that work well, that perform well – but we also have to be aware that, particularly in this case, we capture and mine data that has to do with people, and in particular with aspects of people that in principle we cannot even observe from outside. So, the use of this data is very delicate, because we have to ensure that the people observed are properly protected.
And this is even more true, for example, once we move to other potentially positive applications. At this moment there is a major effort to analyze the behavior of people who might have mental health issues. One big issue in psychiatry is that fundamentally everything is based on observation: most of the activity of psychiatrists is based on observing people's behavior during particular tests, during the interaction with the interviewer and so on.
These technologies can help a lot to figure out the exact symptoms a person has, so they can support doctors in diagnosing mental health issues, or the cognitive degeneration issues related, for example, to age – and aging in a lot of developed countries is a big issue. This is exciting because artificial intelligence is a bit like a microscope that helps us to see, in our observations, patterns and relationships between patterns that we cannot get with the naked eye.
So, we obtain technology that works and fundamental insights about the phenomena we observe, but at the same time we provide the people that run this technology with enormous power, because they can penetrate more and more deeply into the life – and, in the case of the technology we're talking about today, the inner life – of people. So, from this point of view we always have this careful balance between going deeper into the technology and ensuring a safe and positive use of it.
Would it be possible to build a lie detector?
Well, this is an interesting question. It is very interesting [because of] the different points of view between Europe and the U.S. about this type of technology. In the U.S., for example, it is something accepted, recognized as something that can be used – even potentially in a court, in that case to decide whether a person is telling the truth or not, with consequences that can mean certain people going to jail.
In Europe, for example, this has been banned, so research on this topic is not going to be funded by the European funding agencies. Now, the fundamental issue is that trying to see whether a person lies fundamentally means seeing whether lying is an action that leaves physical traces in the behavior of a person. All the research shows that most of the cues – if not all the cues – that seem to be associated with lying can be associated with a number of other phenomena. For example, a person might show hesitation in answering a question – a typical cue associated with lying – not because he's lying, but because maybe he's a bit nervous or not fully comfortable. This ambiguity makes certain countries say, “Well, we don't want to do research on that, because it is an instrument that is not sufficiently reliable to decide whether a person lies or not.” Others say, “OK, we are aware of this, but still it is an opportunity that can provide evidence that, even if it is not conclusive, at least can support the decision in one direction rather than the other.”
So, you're saying it could potentially be an unreliable lie detector. It could be a lie detector that works sometimes.
So, let me ask a different kind of [question]… You gave your five kinds of social signals, and I've always wondered about this: say I were a credit card vendor and I had a form online where people could apply for my credit card. It asks, “What is your income?” And I write $50,000, and then I think, “No, that's probably not enough,” and I put $100,000… “Uh, no, that may be too much, I'll put $75,000.” So I'm in this box, and I keep changing it by a lot. Then it comes to my Social Security number, which I should know by heart, but I type it really slowly, like I'm copying it off something. So, in one case I'm probably making something up. If all you see is the submitted form, you don't know that. But if you're watching the person fill out the form online and you see they keep changing the income, or they're typing the Social Security number a certain way, aren't those social signals that a person may be being untruthful? And do those social signals have a name?
Yes. Well, this is an interesting problem, because, as you can see, when I mentioned the list of social signals I always mentioned signals that you typically display in face-to-face interactions. The reason is that our way of communicating has been shaped through evolution, and in evolutionary times there was no communication other than face-to-face. So, our entire cognition, our entire brain, is fundamentally designed around face-to-face communication. Nowadays we communicate more and more through technological interfaces that, in a sense, force us to leave our body out of the scene. So, in a sense, we have to figure out body language – the popular name for nonverbal communication – without the body.
And the typing behavior… The way people type into an interface can be a trace of social and psychological phenomena. So, potentially, what you mention is true. I'm not aware of work done to figure out whether a person is lying, but I can tell you that I have seen work about children learning to type words into a computer, through interfaces that can detect the speed of typing. You can see very clearly that whenever the children are confronted with a word they find particularly difficult – while they are learning the correct grammar, the correct orthography – they slow down their typing, they hesitate more, or they use backspace to correct a lot. In that case, for example, the way the children type gives you an idea of what they find difficult in the text they are typing. So, it's not exactly the same example, but you see how typing gives an indication.
In my laboratory, we are studying how people type in social chats – platforms where it is like a conversation, but you do it by typing. That is something very common on many company sites as a form of customer care: rather than calling an 0800 number, you can basically chat by typing with an operator who can give you information. One thing we have discovered, which is quite interesting, is that men and women tend to type in different ways – different speed, different tendency to use punctuation, different tendencies to correct misspellings and so on. What is interesting about this – beyond the various applications, including the one you mentioned about possibly detecting lies when people type in a certain way – is that apparently our brain still forces us to use social signals. Social and psychological phenomena leave traces in behavior even when the communication takes place through an interface that does not feel natural.
We are at the very beginning of this type of understanding. It is only now that the community is trying to figure out our global way of communicating. The use of social signals we mentioned at the beginning starts from our brain, and our brain basically gets squeezed through these new interfaces where we do not have our natural apparatus for communication. So, we move toward this analysis of body language without the body. And the example you mentioned is definitely in the direction of my work.
So, let's take another hypothetical case. Let's talk about a robot that provides companionship to an elderly person. Further, let's assume this elderly person knows this is just a robot and that it's programmed to respond to nonverbal signals it gets. The elderly person who purchases the robot is fully informed about the technology. Is that a real use case that you might see on the market at some point? You know, it can tell when you're depressed, and so it tells a joke. When you're happy, it mimics that back. Is that something you see hitting the market?
Yes. This is something a lot of people are working on – exactly like this. And, according to the forecasts and predictions of the main analysts, it is a scenario we're going to see in the next 10 to 20 years. There is a very interesting point to mention here. Elderly people – or whoever uses a social robot – know rationally, consciously, that it is a machine and not a person. However, our brain is always at work. We have these two layers of the brain: one that is conscious, where we know and think about what we do, and then a number of unconscious processes going on. These processes do not really manage to make the distinction between a mechanical object – a robot that mimics the behavior of an animal or of a person – and a real person or a real animal. So, from this point of view, these objects are extremely effective, because rationally we know that they are objects, but they stimulate, they activate, the same processes that become active whenever we interact with another living being.
We already have on the market a large number of robots that sometimes cost very little, in the range of $100-$200, and they have a degree of interactivity that has very positive aspects. One example is particularly interesting: there is a robot called Paro that is a kind of seal – one of these animals that swims through the water. It has fur and is something you can stroke; you can touch it with your hands and it reacts with simple movements. It has been shown, in particular in the case of elderly people, to be very beneficial from the point of view of how they feel. This is a very physiological, mechanical, psychological reaction we have toward anything we unconsciously perceive to be alive and to have a certain type of appearance.
So, among analysts there is substantial agreement that in the next 10 to 20 years social robots are going to become a common type of object, like today's cellular phones or smartphones. They are going to be particularly important in the types of scenarios where, for many reasons, total assistance is not possible. So, we talk about the elderly having companion technologies, and [it could be] very important in the case of education, providing some form of intelligent tutoring – emotionally driven tutoring – to the largest possible number of children. It could be a great support for teachers, making education much more effective for a larger number of people.
So you're familiar I assume with the work of Weizenbaum and ELIZA?
Let's set that up for the listener, because Weizenbaum would say that even if people were fully informed of it, it's still a bad technology. So, the setup of this is: there was a computer scientist, an AI person, back in the '60s – [Joseph] Weizenbaum, an MIT professor – who wrote a program called ELIZA. ELIZA was a chatbot. You would tell it your problems, and it was incredibly simplistic. You would say, “I'm having a bad day.” It would say, “Why are you having a bad day?” “I'm having a bad day because of my mother.” And it would say, “Why is your mother causing you to have a bad day?” It's really simple. But what Weizenbaum found was that people poured their hearts out to ELIZA even though they knew it was just a program, and he kind of turned on ELIZA and on artificial intelligence. He said that when the computer says, “I understand,” it's just a lie, because there's no I, and there's nothing that understands anything. And he became very nervous about using machines in a way that people would empathize with them. He would, I assume, deeply object to giving machines the ability to do what he would call manipulating people – manipulating our emotions by faking certain emotional reactions. What are your thoughts on that?
This is one of the very deep ethical problems, ethical issues, our community faces. ELIZA was just the beginning. Nowadays we really have evidence – there have been experiments in using, for example, virtual agents to deliver some kinds of therapeutic processes to people affected by depression, post-traumatic stress disorder and so on. What has been observed, which is very interesting, is that not only do people tend to react to these objects just as if they were human… and this happens exactly because our brain tricks us, because it is unable, at a deeper level, to distinguish between living people and something that simply looks like it is alive.
But it also turns out that people tend to disclose much more about their inner life to a machine than they do to another human being. This is very interesting because, for people, disclosing things they feel or are particularly disturbed about is something that really helps them find release. So, potentially, these technologies can help a lot. And the reason why people disclose more is that, knowing it is a machine, they don't feel judged. They don't have the feeling that someone will know [too much] about [them] and so on.
So, the real point is what happens with the data that gets collected during the process. Clearly, if this data is recorded, stored and used for purposes different from the benefit of the patients, that opens several scenarios that are quite worrying from an ethical point of view. If, on the other hand, we ensure that these technologies do not record anything, do not store any memory or data of the interaction, then they can potentially be very positive. So, in general, when I talk about these issues, I tend to think that the danger and the problems never really come from machines. They come from people.
This is really about [disseminating] as much awareness as possible of how these machines work, what they can do, and the full mechanism of the management of the data these machines collect… and of the possibility of not collecting the data, not storing the data, so that we do not give any power to the people managing these technologies [that can be used] against us.
But fundamentally, I say that the problem here is not the technology; it really lies in ourselves – in us, the people who develop and use this technology – to set up a proper context, a proper framework, to avoid the dangers that can come from it.
So, Radiolab is a popular podcast, and I remember an episode of theirs I listened to… and they took a simple animal called a Furby. It's a toy.
And the thing about the Furby is that if you turn it upside down it says, “I'm scared,” in a plaintive voice, and children who heard it would want to right it because they felt bad for the Furby. So even though they knew the Furby didn't feel pain, like you said, we're so wired toward empathy that knowing doesn't make us immune. I wonder if it works in reverse… If you make a robot that looks like a person, has a human voice – let's say it is a companion and it doesn't record or save any data – and then one day it breaks and you throw it away, you just throw it on the trash heap or recycle it… could that in any way have a negative effect on the whole concept of human rights? If you teach people it's OK to take something that looks and speaks and acts like a human and treat it in a way you would never treat a person, does that somehow lessen… I'm thinking specifically of a case in Japan with a robot they let loose in a mall: they found that little kids would try to get in its way and block it, and then they would hit it. Later, the kids were asked, “Do you think the robot was upset?” Eighty percent of them said yes. So, I wonder if, in making things that have human voices and human names, we aren't somehow lessening what it means to actually be a human and have human rights. Do you have any thoughts on that?
Yes. This is one of the most interesting and most open questions. As I mentioned earlier, this technology is going to become something common in our everyday life. At the moment there is no exact answer, because for [observations] like the one you mentioned – about the children basically mistreating the robot in a public space – we have a lot of anecdotal evidence of what can happen, but we do not really have a serious study of it.
I think the closest thing I can imagine, for people of my age… I remember, when I was a teenager, video games starting to become a very popular form of entertainment, accessible to everybody. A lot of video games included some violence: you are shooting people, you are shooting at living beings. There was an enormous debate about whether this would make the following generation particularly violent or prone to mistreat others. Now, after 25-30 years, we observe that this is not really the case. And if there is any increase in the violence of society, I think it would be hard to attribute it to the dissemination of these types of video games.
My hope is that we are going to observe the same type of phenomenon – that having objects that look like people, look like living beings, but that at the same time we can dispose of, will not transform us into people who dispose of other people in the same way. However, this is definitely an open question, and machines are becoming more and more realistic and – what is important – better and better designed to activate those cognitive processes we activate when we interact with others, because that is the trick. At this point, we can only hope that the effects you mentioned are not going to take place and that we keep clearly distinguishing between dealing with living beings – people, animals or whatever else – and machines.
You know, when I think of other applications of this technology, the most commercial one imaginable is advertising. Is that something that people are doing? And is it something people should do – is it inherently ethical? I mean, on the one hand, you could say reading somebody's emotional state and then showing them advertising that's consistent with that… there's nothing wrong with that at all. And other people might say, well, maybe there is. First of all, is it being done? And second, should it be done?
Yes. What is interesting here is that social signals are already used a lot in advertising. Think, for example, of how famous people or very beautiful people – and we go back to appearance as a social signal – are used to convince us to buy certain products. At the moment there is no use yet of technologies that look like an agent, like a robot, to convince people to buy certain things rather than others. However, we know that some commercial platforms – think of Facebook, Google and so on – somewhat analyze our emotions (Facebook in particular), and the content we access depends on that. Based on particular emotions or particular social cues – meaning the manifestations of our inner state in the way we use these technologies – they somewhat decide what type of content and what type of advertising we receive.
If you remember, very recently there was the case that made a lot of noise about Cambridge Analytica, a company based in the U.K., that somewhat manipulated the content people were receiving toward a certain type of emotion and oriented voting patterns in one direction rather than the other. So, it is already being done – not through agents that work in the way we have mentioned, but in our consumption of online material and in the way we interact with other people on social media.
Once again, I think it is a matter of… it's not the technology, it's the people that use it that make the difference between an ethical and a non-ethical use. On the one hand, it can be used to help people make good decisions – oriented toward healthy food, toward more ecological consumption of energy – behaviors that might be more desirable from the point of view of ethical expectations.
But at the same time, it might be oriented [toward urging you to] vote in a certain direction rather than another without really being aware of it, or toward influencing consumption that is not particularly good for your health or for the environment, and so on. Once again, the technology itself is not ethical or unethical; technology opens up the possibility for doing good and for doing bad. The point is really how we as humans, as a society, in politics, etc., decide what we allow and what we do not allow. For sure, though, it is happening already. There have already been important cases, and as for understanding how to send us the right advertising at the right moment – that is something that is going to happen soon, if it is not happening already.
Where do you get the data to train your models? Where do you get a whole lot of people doing a whole lot of expressions?
Yes. In general, we always start with laboratory data: we invite volunteers to participate in our experiments. Funny enough, in most cases it's students of the university, so most studies in the literature are about university students. This is, in general, the material we use to start building the models, because we can control the conditions, we know exactly what we get, and it helps us to gain insight. Then there is an enormous amount of material nowadays online, on the web. Think of the major repositories, like YouTube, [which is] often accompanied by descriptions. We can use those descriptions to figure out when there are people laughing, crying, being happy, being unhappy, and so on. It is material that is quite chaotic, because you really find many types of effects [online]. However, one of the interesting things about artificial intelligence approaches is that they manage to dilute the noise in the actual data, so that type of material can be used to train models that then appear to work pretty well in applications.
I remember reading a long time ago that it was posited that people's eyes dilate when they look at something desirable. So, in magazine ads they would darken people's pupils. Have you ever heard that little anecdotal piece of trivia?
Yes, the study of how pupils dilate goes back a long time. These were studies made by Daniel Kahneman, who received the Nobel Prize in economics because he opened up the big field of behavioral economics. And, yes, it is true that this is a very observable physiological reaction we have, even though we must pay attention, because it is quite ambiguous.
Once again, you see, both our social cues and our physiological signals are always ambiguous. Nature has designed communication to be ambiguous because we always need to negotiate the meaning. In a sense, we always need to protect what happens inside us. So, pupils dilate when you see something desirable. Pupils also dilate when there is less light. If the environment gets darker, you need to dilate the pupils to absorb more light from the environment. So, it's always very difficult to figure out whether a change in the size of the pupils is related to what you see or simply to changing lighting. When you're in a laboratory, in a controlled environment, you can trust that observation 100 percent. But, of course, when you're out in the real world, you never know exactly where it comes from.
So, you're a university professor. Do you apply this technology in the commercial world? Are there any initiatives there you can talk about?
Yes. At this moment, after the very pioneering stage, there are a lot of companies around the world… many of them founded by people who have studied with those most active in these types of domains. In my particular case, I have a collaboration with a company called Neurodata Lab, which focuses in particular on the analysis of multimedia material as a form of data mining, trying to figure out exactly what the emotional content of the data is in order to provide this content to people who might be interested in particular types of emotions.
Another company I collaborate with, for example, is Klewel. It is a company that records oral presentations, and we have a project for the automatic analysis of presentation performance. So, is a person speaking in a way that sounds interesting, that engages the audience or not?
In other cases, there has been a lot of work… Probably one of the very first companies was called Affectiva, which was a spinoff of the MIT Media Lab. It analyzes the emotions of people through different types of sensors. You can have sensors that you carry on you that measure your physiological signals and figure out your emotional state. Or you can install it on your computer, and through the webcam it looks at your face, and through the analysis of the facial expression it tries to figure out what your emotion is.
And then there is the big area of social robotics. I collaborate with a company called SoftBank Robotics that produces some very famous, very popular robotic platforms like the “NAO” and the “Pepper” that interact with people. They are very nice. They look a bit like children, and they have a nice appearance. We are studying, for example, how to shape the gestures of these robots so that they can quickly engage with people in public spaces, which are typically very noisy, very chaotic… This is suggested by the way animals communicate, where in noisy environments gestures or movements are used rather than acoustic signals. We are applying the same types of ideas to robots in noisy shopping centers so they can attract the attention of visitors, give directions about where to find certain things, or simply direct crowds from one part of the shopping mall to another.
So, these are a few practical examples, but you can find many others. There is, for example, a very interesting company called audEERING that analyzes speech in order to detect a wide range of different characteristics, from mental issues and depression to the amount of alcohol consumed in the last hours and whether that amount is large enough to compromise your ability to speak, meaning a person has been drinking excessively. [It can also] try to figure out the particular accent of a person, [so the company can] produce a personalized service or adapt a particular service to a particular case. It can analyze the emotion a person is expressing through her or his voice so that an artificial agent can react correctly.
So, these are the best-known, most practical examples, covering the various communication channels and types of social signals we have been mentioning.
Well, that is all very fascinating. It looks like we're out of time, and I want to thank you for sharing some of your work with us. I think you're entirely right. This is something we're going to see a whole lot more of in the future. And we also have a lot of unanswered questions about it, but it's fascinating to be sure. So, thank you for joining us.
Thanks to you. Thank you very much.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.