Episode 54: A Conversation with Ahmad Abdulkader

In this episode, Byron and Ahmad talk about the brain, learning, and education as well as privacy and AI policy.


Guest

Ahmad Abdulkader is the CTO of Voicera. Before that he was the technical lead of Facebook's DeepText, an AI text understanding engine. Prior to that he developed OCR engines, machine learning systems, and computer vision systems at Google.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Ahmad Abdulkader. He is the CTO of Voicera. Before that he was the lead architect for Facebook's applied AI efforts, producing DeepText, a text understanding engine. Prior to that he worked at Google building OCR engines, machine learning systems, and computer vision systems. He holds a Bachelor of Science in Electrical Engineering from Cairo University and a Master's in Computer Science from the University of Washington. Welcome to the show.

Ahmad Abdulkader: Thank you, thanks Byron, thanks for having me.

I always like to start out by just asking people to define artificial intelligence because I have never had two people define it the same way before.

Yeah, I can imagine. I am not aware of a formal definition. To me, AI is the ability of machines to perform cognitive tasks that humans can do, or rather learn to do, and eventually to do them in a seamless way.

Is the calculator therefore artificial intelligence?

No, the calculator is not performing a cognitive task. By a cognitive task I mean vision, speech understanding, understanding text, and such. In fact, the brain is actually lousy at multiplying two six-digit numbers, which is what the calculator is good at. But the calculator is really bad at doing a cognitive task.

I see. Well actually, that is a really interesting definition, because you're defining it not by some abstract notion of what it means to be intelligent, but by a really kind of narrow set of skills, such that once something can do those, it's an AI. Do I understand you correctly?

Right, right. I have a sort of yardstick, a set of tasks a human can do in a seamless, easy way without even knowing how they do it, and we want to have machines mimic that to some degree. It's a very specific set of tasks, some of them more important than others, and so far we haven't been able to build machines that get even close to human beings on these tasks.

Help me understand how you are seeing the world that way, and I don't want to get caught up on definitions, but this is really interesting.

Right.

So, if a computer couldn’t read, couldn’t recognize objects, and couldn’t do all those things you just said, but let’s say it was creative and it could write novels. Is that an AI?

First of all, this is hypothetical. I wouldn't call it AI, so it goes back to the definition of intelligence: there's the natural intelligence that humans exhibit, and then there is the artificial intelligence that machines will attempt to exhibit. The most important of these tasks, the ones we actually use almost every second of the day, are vision, speech understanding, or language understanding, and creativity is one of them. So if a machine were to do that, I would say it performed a subset of AI, but it hasn't exhibited the behavior to show that it's good at the most important ones, being vision, speech and such.

When you say vision and speech are the most important ones: nobody's ever really looked at the problem this way, so I really want to understand how you're saying that, because it would seem to me those aren't really the most important by a long shot. I mean, if I had an AI that could diagnose any disease, tell us how to generate unlimited energy, fix all the environmental woes, tell us how to do faster-than-light travel, feed the hungry, and alleviate poverty, but it couldn't tell a tuna fish from a Land Rover, I would say that's pretty important. I would take that hands down over what you're calling the more important stuff.

I think "important" is an overloaded word. I think you're talking about utility, right? You're imagining a hypothetical situation where we're able to build computers that will do the diagnosis or solve poverty and stuff like that. These would be way more useful for us, or that's what we think, or that's the hypothesis. But to do the tasks that you're talking about most probably implies that you have, to a great degree, solved vision. It's hard to imagine that you would be doing diagnosis without actually solving vision. These are the basic tasks that humans can do, and that we see babies or children learn as they grow up. So perhaps the things you talked about would have much more utility for us, but if you were to define importance as the basic skills that you could build upon, I would say vision would be the most important one, and language understanding perhaps the second most important. I think doing well on these basic cognitive skills would enable us to solve the problems that you're talking about.

Well that's interesting, because in a way what you said earlier is, "humans aren't any good at multiplying two six-digit numbers," and that's why we made calculators. And maybe computers aren't any good at understanding natural language, because it's just too nebulous and too organic and it evolves, and every word I'm using you're hearing in a different sense, and it may be that it's just not a terribly useful thing to pursue. If you could do one thing, are you really playing to the computer's strengths by working on that particular question?

Right. I think our thinking is clouded by what we know, or what is commonly known, as computing. There's this Von Neumann architecture, I'm not sure whether you've heard this term before, the computing paradigm that we've been using since the invention of computers early on, and brains do not follow that paradigm. There's some kind of distributed parallel processing that goes on. In fact, neural nets early on were referred to as PDP architectures, parallel distributed processing architectures. So it is computation, but in a very different paradigm, and it doesn't mean that we cannot make machines do computation this way. Everything that we've done until recently, or the majority of it, has been more of a Von Neumann architecture, and it has taken a change in the paradigm. By the way, we can achieve artificial intelligence not necessarily in the way brains do it. It doesn't mean that we cannot be inspired by learning how the brain works, but in no way does it have to be done in exactly the same manner. So when I think calculators can do this and that, it's because they have been built to do those things with a certain computation paradigm. It is obviously not the adequate one for performing these kinds of cognitive tasks, and the current trends in AI are trying to change that, even though they are implemented using hardware that was originally built for the Von Neumann architecture.
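To make that contrast concrete, here is a minimal, purely illustrative sketch (toy numbers, nothing from any system discussed here): the same weighted sum computed the sequential, one-instruction-at-a-time way, and then expressed as one parallel, distributed operation over many simple units, which is the PDP-style view of the same computation.

```python
# Toy contrast between the two paradigms mentioned above: a sequential,
# one-operation-at-a-time loop (the Von Neumann style) versus the same
# computation expressed as one parallel operation over many simple units.
# Illustrative only; real hardware differs.
import numpy as np

inputs = np.array([0.2, -0.5, 0.9, 0.1])
weights = np.array([1.0, 2.0, -1.0, 0.5])

# Von Neumann style: step through memory one operation at a time.
total = 0.0
for i in range(len(inputs)):
    total += inputs[i] * weights[i]

# PDP style: every "unit" does its small multiply at once, then combine.
parallel_total = np.sum(inputs * weights)

assert abs(total - parallel_total) < 1e-12
print(total)
```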

So, if somebody were to ask you to describe the state of the art, where would you say we are, in the sense in which you're describing AI?

We're nowhere close. We're very far from achieving that sort of behavior, but we're making good strides. And it's kind of obvious to me that we're going to achieve these tasks, or build these machines that will help us do cognitive tasks, not in the way that brains do it; it's going to be different. It's going to utilize some of the strengths that computers have compared to brains.

Well, are you familiar with the European Human Brain Project?

No, not...

It's a multi-billion-dollar project, it's been funded, it's European, and it really is trying to build one. It says: we know of one intelligent thing in the universe, and that's the human brain, so why don't we model how that works? You're saying you think, generally speaking, that's a fool's errand?

I wouldn't go that far, but if I were to guess, the machines that are going to be really useful and help us with these cognitive tasks are not necessarily going to look like brains. Look at flight: do airplanes have wings that flap in the air? Do cars look like horses to you? It's okay to be inspired by these things, but they don't have to really look like their biological equivalents.

How do you think it is that humans do transfer learning so well, while we have such difficulty getting machines to do it? Because I assume that's really at the core of the kinds of problems you grapple with: humans can, with a small training set, generalize and apply knowledge across vastly different domains that don't even look like they have anything in common. And yet we're still at the phase with computers, the way we do AI now, where you can solve a problem, but then don't ask it to do anything else. So why do you think it is that we're so good at that?

Yeah, if you look at the latest strides and latest trends, we're so focused on supervised learning. We need lots of labeled data to perform these sorts of cognitive tasks, and it's very clear that humans do not do that. I don't much like the term "unsupervised learning," but I use it for lack of a better one, and I understand there are lots of folks trying to coin a different term for it. The focus in the past ten or twenty years has not been on unsupervised learning, but there's a lot of that going on: you don't show your kids a million dogs and tell them this is a dog, you show them a few dogs, and then they see the structure in the data. Dogs are not random beings; there is something similar about them that maybe we cannot explicitly state, but there's some structure there that the brain does a very good job of capturing and abstracting, and only then do you need just a very few examples to learn what you're trying to learn. I think over the next few years the next big challenge for artificial intelligence is unsupervised learning, and strides are being made. You can see things like unsupervised versions of CIFAR-10 in computer vision. The next version of ImageNet is going to be smaller, or maybe the same size, as a supervised set, with a much bigger unsupervised set. I think it is critical, if we're to make reasonable strides in AI, to make significant improvements in unsupervised learning, and even in how we frame it.
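As a toy illustration of that point, here is a hedged sketch (hypothetical data and labels, not any production system): an unsupervised step discovers the cluster structure in unlabeled points, after which a single labeled example per concept is enough to name the clusters, the "few dogs" idea in miniature.

```python
# Minimal sketch: unlabeled structure plus a handful of labels.
# Hypothetical toy example, not any system discussed in the episode.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data with latent structure: two clusters ("dogs" vs "not dogs").
a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))
b = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(500, 2))
X = np.vstack([a, b])

# Unsupervised step: a tiny k-means capturing the structure without labels.
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in range(2)])

# Supervised step: one labeled example per concept is enough to name clusters.
labeled_points = np.array([[0.1, -0.2], [3.9, 4.2]])
labels = ["not dog", "dog"]
cluster_names = {}
for point, name in zip(labeled_points, labels):
    k = int(np.argmin(((centers - point) ** 2).sum(-1)))
    cluster_names[k] = name

# Classify a new point using structure learned without labels.
new_point = np.array([4.1, 3.8])
k = int(np.argmin(((centers - new_point) ** 2).sum(-1)))
print(cluster_names[k])  # -> "dog"
```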

So I get your point about how we don't fly the same way birds fly; we figured out a different way to do it. But do you believe we will be able to develop strong AI without understanding how it is that we are intelligent? Because we don't really understand even how our own brains work. Is cracking that riddle part of strong AI?

It's not necessarily a condition, no, but we will definitely be inspired by it. We are discovering things about the human brain that actually help us build better artificial intelligence models. I don't think we have to fully understand how the brain works to achieve that, but definitely discoveries like this enlighten us big time. We've known, for example, that the number of feedback connections in the brain is way greater than the number of feedforward connections. Scientists have known that for a long time, and do our models actually exhibit that? Not many of them; only RNNs do, really, and RNNs have been around for a long time, but they have only gotten attention recently, because only recently have we had the compute power and the data to show that they actually add value. For a long time, most of our neural networks did not have any feedback connections. Also, as you know, the current models work in discrete time. The brain is not like that; there's no concept of discrete time, it's a continuous thing. It's analog. Maybe our models need to be analog. So it's things like that, things we discover about the brain, that can help us deliver better models. But we don't necessarily have to crack it completely open, no pun intended, to mimic its behavior.
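A minimal sketch of the feedback idea he describes, in toy form (random weights, no claim about biological realism): a plain feedforward layer maps only the current input, while an RNN step also feeds its own previous state back in, so information carries across time steps.

```python
# Feedforward vs. recurrent: the RNN step has a feedback connection
# (hidden -> hidden) that a plain feedforward layer lacks.
# Toy weights only; illustrative, not a model of the brain.
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(8, 4))   # input -> hidden
W_fb = rng.normal(size=(8, 8))   # hidden -> hidden: the feedback connection

def feedforward(x):
    # Output depends only on the current input.
    return np.tanh(W_in @ x)

def rnn_step(x, h_prev):
    # Output depends on the current input AND the previous state.
    return np.tanh(W_in @ x + W_fb @ h_prev)

h = np.zeros(8)
for t in range(5):
    x_t = rng.normal(size=4)
    h = rnn_step(x_t, h)  # state carries information across time steps
print(h.shape)  # (8,)
```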

You know, we have this nebulous term, "the mind," and a consensus definition of what that is would be: all the stuff the brain does that it doesn't seem like a three-pound organ should be able to do. Your pancreas can't write a poem, or at least mine can't, and yet your brain can. Where do you think that comes from? Do you think it's some kind of emergence? How is it that these hundred billion neurons, with incalculable connections between them, come to do what they do? Do you have any theory on that, or is that not even germane to the discussion? Because the examples you were giving, which were all good, were about architecture, but I'm talking about things like how we don't even know how to encode a memory, or read a memory, or anything like that. So is any of that even remotely necessary to figuring out how to make strong AI?

I wouldn't go as far as necessary, but it's really good to know. I think there are two things. I mentioned one of them, which is the continuous time, the analog nature of it, and that's a fundamental difference from what we're doing; the amount of information content you have in an analog system is way more than in a digital, discrete, quantized-time signal. That's orders of magnitude of increase in the information content, the size of the information that you can actually store. Also, human memory is really weird; we fill in lots of the gaps. We're able to abstract things, which is exhibited in associative memory: once I start to tell you a story, you somehow fill in the rest of the details yourself from your past experiences, and you don't really need to store all of the details. So there's a very, very efficient way of storing things: memories, pictures, our notions of certain concepts and people. It's stored in a very, very efficient way, something that we're not doing in our chips today. So I think these two aspects are among the most critical that we need to think about and try to incorporate into the models that we're going to invent in the future.

So, not only do humans have this brain that does these things we don't really understand, and this mind that manifests additional abilities, but we're also conscious; in other words, we experience the world as opposed to just sensing it. One can imagine a computer that can certainly sense the world, but it doesn't necessarily experience it; there's no self. Do you think that notion of consciousness, of there being an experiential entity, something that is experiencing, is necessary for strong AI in your book?

I'm hesitant to say, I'm not sure, but my sense is no, not necessarily. Consciousness is a fairly subjective concept that we came up with, right? It's not clear to me what it means mathematically, for example, or in a more objective manner. This is how we reason about it ourselves, this is our own language, but I don't think it has to be there. In my mind it's more sci-fi than a real thing.

Yeah, I guess the difference is, if you stub your toe and you drop your iPhone, there is a difference between what your iPhone experiences and what you experience. You felt pain; the iPhone presumably didn't. And is there a chance that that difference, your ability to perceive the world and make sense of it and order it, that all the magic you can somehow muster, comes from that simple trick that you actually experience the world? Or do you think that's just science fiction?

I think it's science fiction. I mean, this is how we perceive it, but that doesn't preclude building systems that I like to call multi-sensory. There are a few of these, though I haven't seen many, actually: systems we've built that incorporate multiple senses, vision and speech and you name it. And maybe that thing we call consciousness is going to emerge when we build systems like that in a serious way, systems that have these senses, with very, very different sources, interact and be incorporated at the same time into AI systems. Once we learn how to do that, maybe this concept that we like to call consciousness can emerge, be modelled objectively in a better way, and then we can understand it, yeah.

Well, one of the things I'm curious about is the description I read at the beginning of the show, which I actually pulled from your website, and it describes DeepText as "a text understanding engine." Do you think that computers understand anything, or is that just a convenient way to speak?

It's somewhere in between; it's because we are using a certain language, these are the terms that we have. When we say understanding, it means that we can abstract things, we can model them with a limited amount of information, and we can predict, in a reasonable way, either the future, or missing parts, or the behavior of a human that we can't see. You can call this understanding. This is how we express it to ourselves. It is how well we can predict gaps.
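As a toy version of "understanding as predicting gaps" (a hypothetical bigram example, not DeepText's actual method): count which word follows which in a tiny corpus, then fill a masked position with the most likely continuation.

```python
# Toy illustration of "understanding as predicting gaps": a bigram model
# trained on a tiny corpus fills in a masked word. Hypothetical example.
from collections import Counter, defaultdict

corpus = (
    "the meeting starts at noon . the meeting ends at one . "
    "the meeting starts at nine"
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def fill_gap(prev_word):
    # Predict the most likely word to fill the gap after prev_word.
    candidates = follows[prev_word]
    return candidates.most_common(1)[0][0] if candidates else None

print(fill_gap("meeting"))  # -> "starts" (seen most often after "meeting")
```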

So, to wrap up on this highly theoretical thread: the range of people's opinions on when we'll have a general intelligence runs from five years to five hundred, in my experience; my own little shorthand is just having people on the show, asking them, and talking to people and whatnot. Where would you be on that continuum?

I am on the higher end of things. I don’t know about five hundred but definitely not five years.

So...

We are nowhere close -

Right. So, people who have that belief generally feel a few other things, and I am just curious if this is true of you. Most people who believe it is far out also believe there is not some single discovery that is going to unlock general intelligence for us, like "aha, eureka, we have it," but that it's going to be a bunch of hard work, figuring out how to do a hundred things that are all kind of hard. Would you agree with that?

I would agree, although even if it's going to be incremental, there are going to be discoveries which are quantum leaps; unsupervised learning, for example, is one of them.

Well that would be the second...

As far as we know.

Right, that we're eventually going to be able to make a learner that you just point at the internet, and it just takes it all in and figures everything out from that.

Yeah, essentially. Some supervision, but it will be minimal, and lots and lots of structured data. The problem with the word "unsupervised" is that it's not really unsupervised; we are kind of cheating, it is supervised in a sense. These things are not random, but there is no explicit teacher, nobody is explicitly teaching us. We discover this on our own.

So, there are a number of different discussions about AI happening at any one time in society, and one of the big ones going on right now is around automation, and the fear that these technologies are going to automate away so many jobs that there aren't enough jobs left in the future. Other people counter and say we've had a lot of automation in the past and that's never happened. What is your thought on the impact of these technologies on employment?

It's inarguable, of course; it will definitely replace jobs, and we're seeing it, but there are going to be new types of jobs. We are going to direct our intellect and abilities to solving other problems, to doing other things; that happened in the past and it will continue to happen. But there is a responsibility on the government and the like to change the education system and adapt to the changes, which hasn't been happening at the right speed, and to train people and provide training programs for them to cope with that. My favorite cartoon is an old picture of two people, I think from the twenties or something like that, on a train reading the newspaper, and at the bottom it says, "look at how new technology is tearing the fabric of our society; the newspaper is causing people not to talk to each other." So it has always happened; maybe it's happening at a faster pace now, and we'll deal with it, but it definitely requires some attention and conscious effort. So I'm not scared about work. There are going to be new skills and new tasks to be done and we will find them as we go. But there has to be a concerted effort to adapt the education system in particular to cope with these changes.

How would you change the education system?

Well, clearly memorization is not it. What's the point in memorizing lots of equations, lots of events, when we have Google and others, right? So we have to focus more on creativity. Any repetitive job will eventually be done by a machine, even if it requires some intellect, like diagnosing cancer, or looking at the x-ray or the MRI; there's a little bit of intellect in it, but it's repetitive. So non-repetitive intellectual jobs, and the skill sets required to do them, are what education should focus on. I think the education system that we have was built by the great empires to create clones to fill the jobs that those empires needed, and we're way past that now. Most repetitive jobs, machines can do, and now, recently, with machine learning, even ones that require some cognitive task. So it's time to focus on the skills that are required for the remaining set of tasks, and that is creativity...

Fair enough. There's also a vigorous debate about the use of this technology in warfare, specifically building weapon systems that can make autonomous kill decisions, and the debate goes: on the one hand, they'll do it better than people, because they don't get tired, they don't make mistakes, they think faster, and all of that. On the other hand, people say, "you're building the Terminator!" Without even asking how you feel about it, what do you think is going to happen? How do you think that's going to shake out? Are we going to end up with killer robots? Is it just kind of inevitable?

No, this is sci-fi really. It saddens me to see some of the resources...

Wait just a second, I don't know that it's sci-fi. A landmine is an artificial intelligence terminator device, right? It's a device that kills anything that weighs over sixty pounds. A drone is an AI terminator; a smart bomb is an AI terminator. The question is...

I guess yeah.

They're getting smarter and smarter, right? And so they exist; that's not science fiction. The Koreans already built a weapon system for twenty-four million dollars that can kill somebody four miles away, and it broadcasts a warning beforehand, "you are about to get shot," or something like that. The only reason they later put in that a human has to authorize it is that the customers were asking for it. But the technology to identify a person in a restricted area and kill them, there's nothing particularly tricky about that, right? You've said that AI is going to automate repetitive tasks. Killing people can be as repetitive as anything, right?

Well, it has to be shaped by humans, and I'm most worried about humans. Humans have caused a lot of damage over history with very low tech, so we don't need this technology to create disasters. And that is the core of the problem. Of course, we're making weapons more lethal and we're making them easier to use, but it's just like any tool we have: it can be used for good or bad. The problem is humans and the broken system that we have today. Why are these things happening? We're focusing on the wrong thing when we say the technology is the cause; it is not really the cause. Of course the capabilities will increase, and there have to be safeguards against that. There have to be regulations, there has to be oversight, but I don't see the technology as the cause of the issue. It's just a tool that's going to be used for good or bad.

I hear you. It is a tool used for good or bad. I think the fear is that these weapons increase asymmetry. A nuclear weapon was always a big deterrent because it was incredibly hard to build and expensive to acquire. A computer virus may not be the same; it may be as simple as copying a sophisticated AI from one flash drive to another, for $3. So the idea that all of a sudden you empower a radical few to do vastly more than they could have done fifty years ago, I think that's really the concern.

It is. But the disasters that we've had in human history were caused by leaders of great countries. So again, I think we're focusing on the wrong facet of the problem when we say that. I'm not denying that they are more lethal; I'm denying that such a phenomenon is going to happen. Let's focus on the root causes of the existence of such problems in the first place, and the perceived necessity to go to war. We're evading these kinds of questions when we only worry about the tools that people have. Why are there conflicts in the first place? I think these are the more important questions to try to answer.

Fair enough. So, one more dystopian setup, and then let's put on our rose-colored glasses and look at all the good that might come from all of this. In the past, you may have thought that some government is recording my calls, or can track me with my phone, or keeps track of everything I post, or whatever it is you think is going on. But you could always say, "yeah, well, there are seven and a half billion people in the world; nobody can really single me out, I'm lost in this enormous amount of data." And now all of a sudden, with artificial intelligence, and not even science fiction but the kind of stuff you're building, every phone call can not only be recorded, but it can be, in your words, "understood." I saw that AIs are now as good as humans at lip-reading, so any camera that doesn't have a microphone can read the lips of everybody. Every document, every single thing, can all of a sudden be understood. Other than your private thoughts, is that really the only privacy you have left? Because in the end, so much of your life echoes on in some digital way that AI can chew up and spit out. Are we seeing a shift in what it means to have privacy, or is privacy just an outdated notion?

Yeah, it's between the last two statements. I think we just have to accept the fact that there's just more of us being exposed. I still think that the amount of information being shared and created every day is very daunting to actually search, and from what I've seen of the technology being developed for that, I think it's a little pathetic; it's very hard to search through this information. But perhaps this will be solved someday. So we have to get used to the fact that there's just more of us being exposed these days. It's a change, and maybe you can argue that it's been happening all along, gradually, but there's been a steep increase in it recently. So yeah, it's just something that we have to get used to, and of course there have to be regulations, there has to be oversight of these things and more scrutiny, but it will happen, and we just have to get used to it.

Well, let me throw out one more on the dystopian side. Anybody who listens to the show knows I'm incredibly optimistic about these technologies, but I don't think you get all of the benefits of the optimism for free: the price of liberty is eternal vigilance, and the price of an optimistic future is being aware of the risks. I worry about two other things, and one is IoT devices being fundamentally non-upgradeable. Every day, literally millions of new things get plugged into the internet that have vulnerabilities by definition, and they cannot be patched. Is that concerning to you?

You mean...

You know, every camera, every internet toaster, every refrigerator, every car, everything that's connected. Most of them don't have updatable firmware, right? They have hardcoded chips in them that we can't update, we can't patch, and so if anyone ever figures out how to exploit them, that can't be fixed.

Yeah, I was just thinking, because you and I know that they have actually been exploited; there are reports that these vulnerabilities were found by certain entities and were not reported, because they were a way to snoop on people. Yeah, I get it, yes, this will happen. It's going to be just a fact of life that you could very well be monitored without knowing it, and we just have to get used to that. And at the same time people will become more clever; it's sort of a war. People are also going to come out on the other side with ways to be more private and keep more secrets, and the warfare will continue without any side winning. Every side is going to get better.

Well, I hope it's that simple. I definitely believe the whole "each side gets better and better," but I worry about, like I said, legacy systems that can't be upgraded. And it isn't so much that I'm worried somebody is going to turn on my camera and see me walk through the living room in my underwear; I'm more thinking that every convection oven that's connected to the internet gets put in cleaning mode for twenty hours until it catches fire, or something. I don't know, that's the thing, right? And then the last one is: do you believe that our infrastructure in the U.S. is inherently brittle? Do you think that our power grid is vulnerable, and that other systems like it are so antiquated that they are highly vulnerable to attack? Is that a concern?

It is a concern, definitely, and I think there has to be more attention at the government level; it's kind of shocking to see what the government has not done. The government of France recently commissioned a member of parliament, who happens to be a mathematician with a Fields Medal, to come up with an AI strategy. I haven't seen anything like that in the U.S. Perhaps President Obama had some AI and big data advisers, but in general, even without AI, our infrastructure is outdated. It needs major improvements, or re-architecture if you will, and I don't think it's going to be as expensive as people think. But more importantly, AI is important now; there has to be an AI policy. What are the implications, as you point out, for government agencies, for different kinds of crimes, cybercrimes and all of that, Bitcoin, all of that? There are just tons of things to think about, and I haven't seen anything close to that. I haven't seen anybody talk about this; it's disappointing.

The trick involved in it, political will aside, is that, as my very first question suggested, there's not even agreement on what AI is, so it's an inherently hard thing. It's as nebulous a term as "high tech," so it's like saying we need a high-tech policy. It's just really hard to think: what does that look like?

Sure, but I don't think we need to agree 100% on that. It's okay; there is a least common denominator, so let's agree on that. It's a phenomenon that is affecting everybody's life in one way or the other, and it's going to affect each one of us more. We just can't ignore it, we have to be better prepared for it, and I don't feel we need to agree precisely on the definition to take such a step, to be honest.

So, we've gone through all of the jobs and the war and the infrastructure and all of that, so now take a minute and tell me, in your most optimistic view of the future, what's it going to look like, what are we getting for taking all this additional risk, what is going to be awesome about the future of artificial intelligence?

That most mundane tasks will be done by machines in a better, more efficient way. The world is going to become more egalitarian as a result. People are going to employ their intellect and capabilities in doing things that give them satisfaction, rather than things they have to do to earn money. Things are going to be more transparent. We are going to know more about what's going on around us, we're going to get closer to each other and know more about each other's cultures, languages, ideologies, and such. I think in the limit, and these are my expectations and hopes, the idea of borders and such will come to seem ridiculous, something that has exhausted its utility, and we're going to come across something else that brings us together and helps us manage the resources that we have on this planet together. It's really over-optimistic, and maybe it's going to take some time, but this is the ultimate result I hope for.

So, starting with just a couple of those: the world will be more egalitarian. That's maybe a hard case to make, because one thing about technology is that it makes it easier than ever to make a billion dollars. More billionaires now are self-made people than ever before, and technology allows you to touch so many people's lives that the ability to make vast sums of money seems to be going up. I wouldn't have been surprised if you had said, "we'll solve poverty, extreme poverty will vanish," but the idea that somehow the rich won't continue to get ever richer, I'm curious as to your thinking behind that.

I mean, there are going to be rich people, but I think the income gaps are going to become less and less. There are more opportunities; it's going to be more of a meritocracy. Right now, if you're born in the wrong place or the wrong time or whatever, it's harder for you to make it; it's less of a meritocracy. Technology is going to change that: if you're really smart, if you have an idea, you'll have more opportunities, it will take a shorter period of time to try things out, and there's less risk. There are fewer things that are cast in stone; people are more creative, and they have the tools to experiment more. From this angle you'll see more people make it, and yes, some people will definitely make more money, but there are going to be more of them. That's what I'm talking about.

So when you're not thinking about all of this and the problems of the world, and their solutions and all of that, you're the CTO at Voicera. Tell us two things: how did that come to be on your personal journey, how did you end up there, and then tell us a little bit about Voicera's mission and what excited you about that.

Absolutely. As you saw from my background, I worked at various big companies, all big companies, and I learned a lot and was mentored by great people, but I felt I could be much more impactful, much more efficient, in a smaller setup. So the idea that I would leave and try to do something small was always in my mind; in fact, I think I should have done it a little earlier. I always thought that AI, and I actually like to call it machine learning more than AI, can have a significant impact on people's productivity. That's something I'm very passionate about, and voice remains the only modality in the workspace that is hardly digitized. We've digitized everything, documents, e-mail, but voice remains.

We all hate meetings; we walk out of meetings without anything tangible, and we're not sure who said what and what we agreed on. And it just amazes me that this space is still open. There are some attempts here and there. One of the reasons is that speech recognition is, among machine learning problems, one of the hardest, and that causes hesitation about delving into it.

Before I left Facebook I spent a year, year and a half, thinking about ways for AI to improve people's productivity in the enterprise, and I shared a lot of this thinking with my co-founder Omar [Tawakol], who was running a company called BlueKai, which was bought by Oracle. We used to meet a lot and talk about these things, and did a little bit of skunkworks on the side. We were initially thinking about e-mail, but very soon we discovered that the likes of Google and Microsoft, who essentially own e-mail, can very easily build anything we can build; whereas with meetings, we didn't see much attention being paid. And we felt there's a lot, given the current state of the art of the technology, that can be done to make meetings much more efficient and much more useful than they are today.

So yeah, we started officially in January 2017, I joined in February 2017, and we've built an amazing product since then. The beta is out, so you can use it now for free. We came up with a persona called Eva (for "enterprise voice assistant"), and we have a male equivalent called Evo. Eva joins the meeting; it's as simple as just sending her an invite. Eva figures out a way to join, where the least common denominator is telephony: she will just call in if she needs to, and Eva records the meeting. During the meeting you can interact with Eva using voice commands, instructing her to take notes, to remind you of something, or to schedule a meeting, and we're working on expanding these tasks. After the meeting you can search for when John said the word "enterprise," for example. If you didn't attend the meeting, you will get meeting insights that tell you what the meeting was all about: the salient terms that were uttered in the meeting, sort of an x-ray or a skeleton of the meeting, with the terms and the categories of topics that were discussed. The holy grail of what we are trying to build is a meeting summary, five or six sentences that you can look at to get a sense of what the meeting was all about. At the end of the meeting you get highly accurate transcriptions of the highlights of the meeting, whether they are action items or decisions or something like that. We have over one thousand users using us right now, on a free basis, and we'll be rolling out paid services pretty soon. We've learned a lot from their interactions and feedback, we've improved the product quite a bit since then, and we have tons of features coming that are going to make it much more appealing.
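As a rough sketch of two of the features just described, searching for when a speaker said a word and surfacing salient terms (toy data and logic only, not Voicera's actual implementation):

```python
# Toy transcript search and salient-term extraction; hypothetical example.
from collections import Counter

transcript = [
    (12.4, "John", "our enterprise customers want transcripts"),
    (47.9, "Mary", "the enterprise beta ships next week"),
    (83.2, "John", "we should review the action items"),
]

def find_mentions(word, speaker=None):
    # Return (timestamp, speaker) pairs where the word was uttered.
    return [(t, s) for t, s, text in transcript
            if word in text.split() and (speaker is None or s == speaker)]

def salient_terms(top_n=3, stopwords={"the", "our", "next", "we", "should"}):
    # Crude salience: most frequent non-stopword terms in the meeting.
    counts = Counter(w for _, _, text in transcript
                     for w in text.split() if w not in stopwords)
    return counts.most_common(top_n)

print(find_mentions("enterprise", speaker="John"))  # -> [(12.4, 'John')]
print(salient_terms())
```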

I'm curious why you named it Voicera, how you chose the name and why you refer to it as her?

Yeah, it's probably my bad. We had to personalize it, as I pointed out, as a male and a female. Eva was chosen because it's an acronym, as I pointed out, for enterprise voice assistant, so it's more appealing, but we also have the Evo persona; you can also refer to him as a male.

No, but I mean, I wasn't talking about the gender in particular; why would it have a gender at all? Do you think that there's any potential risk in that, as it were?

Right, yes, and we have had people comment on that as well. Having said that, being male or female, having a gender, is something that perhaps people, especially in a professional setup, relate to more, because there's the presence of somebody taking notes in a meeting, a human being taking notes in a meeting, so maybe it's more personable; that is sort of the argument. But as I said, both genders are available; just like a GPS, you can choose to have a lady give you directions or a gentleman.

And so, entice us with your vision. Is your vision that we don't have as many meetings and the few we do have are going to be more actionable and so forth, is that the goal?

Yes, exactly. I was telling my folks over here, "wait a minute, we'll make people need to meet less." That's absolutely the... I wouldn't say goal, but a consequence: your meetings will become much more efficient, they will become shorter, and we will not need to meet as much.

Well, that you left out of your list of things about how awesome the future is going to be: you won't have to have as many meetings. So, Ahmad, if people want to keep up with you and what you're doing, do you have a blog, or how can they follow you and keep up with all of the developments?

I think LinkedIn. I've been asked to get into the habit of blogging, but my LinkedIn posts that talk about the latest of what we do here are the best way to keep up with that.

Well, actually, I think it's a great goal, and I certainly fully accept the premise that meetings are inefficient, or at least the ones I'm involved in are. I could be the reason they are inefficient; I'll underline that. But nonetheless, thank you so much for taking time out of your schedule and going on that journey with me through all these different things and telling us a bit about what you're doing. I wish you the best of luck.

Thank you very much Byron, it has been a pleasure.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.