Episode 56: A Conversation with Babak Hodjat

In this episode, Byron and Babak talk about genetic algorithms, cyber agriculture, and sentience.


Guest

Babak Hodjat is the founder and CEO of Sentient Technologies. He holds a PhD in the study of machine intelligence.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Babak Hodjat. He is the founder and CEO of Sentient Technologies, and he holds a PhD in the study of machine intelligence. Welcome to the show, Babak.

Babak Hodjat: Great to be here, thank you.

Let's start off with my normal intro question, which is, what is artificial intelligence?

Yes, what a question. Well, we know what 'artificial' is; I think the crux of this question is really, "What is intelligence?"

Well, actually, no, there are two different senses in which it's artificial. One is that it's not really intelligence, the way artificial turf isn't really grass: it just looks like intelligence, but it isn't really. And the other is: oh no, it's really intelligent, it just happens to be something we made.

Yeah, it's the latter definition, I think, that is the consensus. I'm saying this partly because there was a movement to call it machine intelligence, and there were other names for it as well, but with "artificial intelligence" the emphasis is certainly on the fact that, as humans, we've been able to construct something that gives us a sense of intelligence. The main question then is, "What is this thing called intelligence?" And depending on how you answer that question, actual manifestations of AI have differed through the years.

There was a period in which AI was considered: If it tricks you into believing that it is intelligent, then it's intelligent. So, if that's the definition, then everything is fair game. You can cram this system with a whole bunch of rules, and back then we called them expert systems, and when you interact with these rule sets that are quite rigid, it might give you a sense of intelligence.

Then there was a movement around actually building intelligent systems through machine learning, mimicking how nature creates intelligence. Neural networks and genetic algorithms were some of the approaches, amongst many others that were proposed, including reinforcement learning in its early form, but they would not scale. The problem was that they did show some very interesting properties of intelligence, namely learning, but they didn't quite scale, for a number of reasons: partly because we didn't quite have the algorithms down yet, partly because the algorithms could not make use of scalable compute, and because compute and memory storage were expensive.

Then we switched to a redefinition in which we said, "Well, intelligence is about these smaller problem areas." That was the mid-to-late '90s, when there was more interest in agenthood and agent-based, agent-oriented systems, where the agent was tasked with solving a simplified environment. Intelligence was reduced to: if we were given a limited set of tools to interact with the world, and our world was much simpler than it is right now, how would we operate? That was the definition of intelligence, and those were agent-based systems.

We've kind of swung back to machine learning based systems, partly because there have been some breakthroughs in the past, I would say 10-15 years, in neural networks in learning how to scale this technology, and an awesome rebranding of neural networks—calling them deep learning—the field has flourished on the back of that. Of course it doesn't hurt that we have cheap compute and storage and lots and lots of data to feed these systems.

You know, one of the earlier things you said is that we try to mimic how nature creates intelligence, and you listed three examples: neural nets, genetic algorithms (how we evolve things), and reinforcement learning. I would probably agree with evolutionary algorithms, but do you really think... I've always thought neural nets, as you said, don't really act like neurons. It's a convenient metaphor, I guess, but do you consider neural nets to be really derived from biology, or are they just an analogy from biology?

Well, it was very much inspired by biology, very much so. I mean, the models we had of how we thought neurons, the synapses between neurons, and the chemistry of the brain operate fueled this field, absolutely. But these are very simplified versions of what the brain actually does, and every day we learn more about how brain cells operate. I was just reading an article yesterday about how RNA can capture memory, and how the basal ganglia also have a learning type of function; it's not just the prefrontal cortex. There's a lot of complexity and depth in how the brain operates that is completely lost when you simplify it. So we're absolutely inspired by it, definitely, but this is not a model of the brain by any stretch of the imagination.

So, we'll get off the definitions here in just a second, but you gave a history of how we think about it. But when you boil it down, what do you think intelligence is?

For practical reasons, I define intelligence as the facets of human and biological intelligence that we can capture, model, and make use of. Typically these manifest themselves as systems in which we describe the problem and expect the system to come up with a solution. So there's some level of learning and abstraction that we expect from these systems that we don't expect from, for example, programmed or engineered systems.

But I struggled just now, as you can see; I used three or four sentences just to get the meaning across. It's a very slippery meaning, partly because intelligence describes an emergent behavior of many parts, be it in nature or in complex systems, the brain being one of the most complex that we know. And because it's an emergent behavior, it's very difficult to pinpoint an exact definition for it. We're reduced to defining it based on its manifestations and how it operates and interacts with the world.

But the programs we have right now are programmed and engineered, and they're deterministic; they're as dead as fried chicken, right? I was just reading about fixed animal behavior: there are these geese that, if they find one of the eggs has fallen out of the nest, have this way of moving it back into the nest, and if you pull the egg out, they just keep doing it as if the egg were there, still moving their head. I mean, they are running a program like we run programs. So where do you see emergence happening in the systems we have today?

Well, in an evolutionary system, that very behavior was an emergent behavior that, in some way or other, through generations, helped the geese survive. In the brain itself, or in our neural-network-based deep learning systems, the behavior is in many cases a black box; it is not engineered. We give the system the input, we give it the expected output, and it makes abstractions that capture the behavior. We then throw it input that it's never seen before, with the expectation that its behavior will key off of the abstractions it has made and will be within the realm of what we expect.

I'll give you an example. One of the key triggers for the latest boom of interest in AI was a paper that Google Brain published, I think now three years ago, where they said, "We've trained these deep networks and they can identify cats in images captured from YouTube." On the surface: big deal, right? So what, you've been able to do some pattern recognition and you know what a cat is. But when you actually read the paper, the fact is that nobody labelled the data as to whether an image has a cat in it or not. This was auto-encoding, which means that what they asked the system to do is take an image and compress it in a lossy way, so losing information in that compression, then decompress it, and the fitness function, the loss function, was: how close is the decompressed version of the image to the original image?

That's how they trained this thing. After the fact, they said, "Let's look at the nodes in this deep network and see what it has identified, what it has found in there," and they realized that, oh, it can distinguish between images that have cats in them and images that don't, therefore it has abstracted the concept of a cat. Nobody taught it the concept of a cat, but it had abstracted that out. Then they thought, wow, this is interesting; there are a lot of people, and women particularly, in YouTube images. Let's see if it has understood that, and lo and behold, it had. So it's an after-the-fact interrogation of the model to see whether or not it has actually identified a concept, and it had.
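
To make that setup concrete, here is a minimal autoencoder sketch in PyTorch. The tiny dense network, the 64x64 image size, and the random stand-in data are illustrative assumptions, not Google Brain's actual architecture; what matters is that the training signal is pure reconstruction error, with no labels anywhere.

```python
import torch
import torch.nn as nn

# Minimal dense autoencoder: compress each image to a small code (lossy),
# decompress it, and score how close the reconstruction is to the original.
class AutoEncoder(nn.Module):
    def __init__(self, n_pixels=64 * 64, code_size=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU(),
                                     nn.Linear(256, code_size))
        self.decoder = nn.Sequential(nn.Linear(code_size, 256), nn.ReLU(),
                                     nn.Linear(256, n_pixels))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # reconstruction error; note: no labels anywhere

frames = torch.rand(128, 64 * 64)  # random stand-in for YouTube frames
for step in range(100):
    reconstruction = model(frames)
    loss = loss_fn(reconstruction, frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Only after training do you probe hidden units to see which concepts
# (cats, faces) individual units respond to, as described above.
```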

And then they got all excited and thought, maybe it has identified cars as well; there are a lot of YouTube videos with cars in them. Well, to their disappointment, it had not. Which brings us to some of the weaknesses of this approach: we do not know what it's learning and what it's not learning, so we're still in very controlled environments. We know how to build these systems and use them, but in some cases they're difficult to understand. But to your original question: is this something mechanical that we just programmed into the system? No, and that's the fascinating thing about these learning-based systems: the learning is what they do. That's why it's so exciting that AlphaGo beat the world champion in Go, and that AlphaZero was later able to supersede the programmed, tree-search-based systems for playing chess. The basis for what DeepMind did was machine learning, at least in large part, combined with tree search, so it's machine learning that's exceeding the state of the art set by humans.

If you took that system of Google's that you were describing so eloquently, and you ran it again and gave it the exact same images in the exact same order, it would come up with the exact same conclusion, right? And if you did that a million times, it would come up with the exact same conclusion every time. The system itself is as mechanistic as a wind-up clock. Just because you can wind a clock up and it does all these permutations within permutations, and wow, you can see the whole galaxy spinning around on its axis, doesn't mean there's anything going on other than "wind it up and watch it go."

Well yes, we're getting into philosophy here a little bit.

Well let me ask you a different question then.

Well just to finish up on that though, I think the same question could be asked about the brain too. So then you're in this question of whether we have intentionality or not and so forth, and I think you'll get lost in the philosophy of it. At the end of the day what's important is, is it manifesting behavior that is familiar to us as being intelligent?

One more philosophy question and then I'll move on. Computers can only do math, right? That's what they do really well, they do math really fast. Do you personally think that all intelligence is reducible to math?

I personally do believe that: that all intelligence can be reduced to numbers. There is a school of thought that does not believe that, which I think is very much based in our need to apply some sort of mystique to the operation of the brain. There's some science behind that as well, where they believe, for example, that quantum fluctuations drive the way creativity takes place in the brain and some of the decisions that we make. In computers, by contrast, we have to make use of random functions, which are not truly random; they're still functions that behave randomly, but they're not truly random. So there are some distinctions made at that level. Again, I think from a practical standpoint, that distinction is really not material.
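
That distinction between "behaves randomly" and "truly random" is easy to make concrete. Here is a minimal sketch of a linear congruential generator, one of the simplest pseudo-random functions (the constants are the common Numerical Recipes values); re-seeding it reproduces exactly the same stream.

```python
from itertools import islice

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A minimal linear congruential generator."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # floats in [0, 1) that "behave randomly"

print(list(islice(lcg(42), 3)))
print(list(islice(lcg(42), 3)))  # identical output: deterministic, not truly random
```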

I can't help but notice on my display that the conference room you're calling in from is probably named Gödel. Is there also an Escher and a Bach, or is that a reference to some other aspect of Gödel's life?

One of the things I really like about him is his incompleteness theorem, which Turing also worked on, and Turing is the father of many things, not just computing, but also neural networks and genetic algorithms. It's fascinating how these guys thought. On the surface, the groundbreaking paper Turing wrote is about incompleteness, but the device he invented just to make his proof work, what is now called the Turing machine, is the basis for computing and for how we think about what's computable and what's not. Clearly, in his later work, he believed that intelligence is computable. So going back to the source: even though there is this incompleteness result that he also contributed to, he did believe that intelligence is computable.

Right, but of course there are problems Turing machines can't solve, right? Like the halting problem. Even that device doesn't... there's not necessarily an algorithm for everything, is there?

The halting problem is the way... well, the Turing machine can compute anything computable, right?

Right, but not everything is computable.

Yes, exactly, so that's the definition...
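
For readers who want the reasoning behind that exchange, here is the standard diagonalization argument as a hypothetical Python sketch. The `halts` oracle is deliberately a stub; the whole point of the argument is that it cannot actually be implemented.

```python
# Hypothetical sketch of Turing's diagonalization argument. The `halts`
# oracle below is a stub on purpose: the argument shows no real
# implementation can exist.
def halts(program, argument):
    """Assumed oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no total implementation can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts for program(program).
    if halts(program, program):
        while True:  # oracle says it halts, so loop forever
            pass
    return  # oracle says it loops, so halt immediately

# Asking about contrary(contrary) contradicts the oracle either way it
# answers, so no general `halts` function exists: some well-posed
# questions are simply not computable.
```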

So you founded a company called Sentient Technologies?

Yes.

I assume 'sentient' is one of those interesting words that's often misused, like in science fiction. Sentience means the ability to sense something, so you didn't start a company called Sapient Technologies, you started one called Sentient Technologies. Tell me about naming it, and then tell me about its mission.

We started off being called Genetic Finance, actually, which is not a very sexy name. When we started, about ten and a half or eleven years ago, the company was about using scaled AI, evolutionary computation, to trade in the stock market, hence the name. When we decided we would like to spin off the trading piece, the hedge fund, and use the technology itself in other areas as well, that was when we needed a better name, and I personally liked the sound of Sentient as a name.

Sentient.ai is a really cool combination. My fascination with this word is its older meaning, how it was used in the 1800s, back when the popular consensus among scientists was that logic could in fact model everything. This is before the whole incompleteness theorem and so forth, and back then 'sentience' referred to everything else the brain can do. So if you take away the logic part of thinking, then what you're left with, the delta of that, is what makes you a sentient being, and that's what's interesting. To me that's, again, a tip of the hat to emergence, which is what makes AI fascinating. Those of us who work in AI work for those moments when the AI actually surprises us, when it has learned something or made an abstraction that we were not expecting.

And so what is the mission of Sentient Technologies?

It's to make a better world through AI. We do believe that AI has ubiquitous applicability, and we're very practically minded about it: we build products using AI. We don't have a professional services arm, we don't have a consulting business; we actually build products. Building products means that each product has to stand on its own merit, not on the fact that it has this magical AI behind it, so the AI needs to provide the differentiation that allows the product to disrupt a certain market. We believe we've done that with trading and Sentient Investment Management, which we've spun off.

And we're doing that in full-funnel digital marketing. We're building products there that allow you to adaptively optimize your website, your mobile experience, the whole journey: from the top of the funnel, which is your ads, the budget management, and the audience selection, through the look and feel, all the way to the conversion, to email re-marketing and so forth. That's the journey a user goes through, and we believe AI can orchestrate that whole process and improve it.

We've used the core AI, which is called LEAF (Learning Evolutionary AI Framework), in other areas as well, ranging from cybersecurity to cyber agriculture to healthcare, more in research projects and collaborations with academia. More recently, we've also been exceeding the state of the art on AI benchmarks, by a process I tongue-in-cheek call "cheating." If you have the state-of-the-art deep network for, say, the Omniglot benchmark, or the state-of-the-art recurrent deep network, like an LSTM, for the language-modeling benchmark, just take that and evolve it. Evolving not just the hyperparameters but also the construction, the engineering of that deep network, will improve it. And because you're using evolutionary computation, you can improve it against several different objectives, not just one: you can make it better on performance, you can make it smaller, you can make it faster, a number of different things. That's where we're doing a lot of research and some collaboration that's quite fascinating.
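
To give a flavor of what evolving the construction of a network means, here is a minimal multi-objective sketch. It is not Sentient's LEAF: the genome fields, the mutation scheme, and the stand-in error and size objectives are all invented for illustration, and in practice each candidate would be scored by actually training it on the benchmark.

```python
import random

# Toy "genome": a network configuration that evolution can mutate.
def random_genome():
    return {"layers": random.randint(1, 8),
            "width": random.choice([32, 64, 128, 256]),
            "lr": 10 ** random.uniform(-4, -1)}

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(child))
    if key == "layers":
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    elif key == "width":
        child["width"] = random.choice([32, 64, 128, 256])
    else:
        child["lr"] = 10 ** random.uniform(-4, -1)
    return child

# Invented stand-in objectives; in reality each candidate network would be
# trained and measured (benchmark error, parameter count, latency, ...).
def objectives(genome):
    error = abs(genome["layers"] - 4) * 0.05 + abs(genome["lr"] - 0.01)
    size = genome["layers"] * genome["width"]
    return (error, size)  # both to be minimized

def dominates(a, b):
    """True if objective vector a is no worse than b everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

population = [random_genome() for _ in range(20)]
for generation in range(30):
    population += [mutate(random.choice(population)) for _ in range(20)]
    scored = [(objectives(g), g) for g in population]
    # Multi-objective selection: keep the Pareto-non-dominated candidates.
    front = [g for s, g in scored if not any(dominates(t, s) for t, _ in scored)]
    population = front[:20] if front else [random_genome() for _ in range(20)]

print(sorted(objectives(g) for g in population)[:5])
```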

Well, your website has a bold claim; it proclaims, when you get there, "We built the world's most powerful distributed AI platform."

Yep.

Justify that claim and tell us about that platform.

At its peak, we harvested compute capacity from third parties: idle cycles. You can think of it as SETI@home or Folding@home. We harvested this from different compute sources, from idle cycles in data centers all the way to internet cafes and game centers in Asia. At its peak we had a single AI system running on about 2 million CPUs. We also trained and evolved deep networks on more than 4,000 GPU machines spread across many sites. That's, I think, one of the largest, if not the largest, distributed...

So give us a success story: tell us about a problem you applied this platform to, the solution, and how much better it was.

Sure. Obviously we've done this in trading, but I can't share too much there. In digital marketing, though, I can certainly share. The state of the art for improving a website design, for example, is A/B testing: you get a bunch of designers together and they come up with a new design, design B, and then you redirect some of your traffic to design B while you're still running your incumbent design A. You do that for a while, until you have statistical significance, and if B is better than A, you pick B.

Unfortunately, the ratio quoted a lot in the industry is that only 1 out of 7 of these A/B tests results in B being better than A. So what we did was evolve, not just a single page, but a funnel of pages that constitute the user journey on a website, and allow many degrees of freedom. [If] you want to change the header, the image in the background, the placement of various widgets, the call-to-action, anything you want, the designer now has full artistic freedom on the various pages.

We then adapt and evolve that against the live traffic coming to the site. In one case I can tell you about, one of our customers, ABOVE Media, ran this for 8 weeks, and the search space size, which is moderate to small by our own standards, was 380,000 different possible designs, as opposed to 2 in A/B testing. After only 8 weeks, they were able to increase their conversion rate by, I think, on the order of 46% or 47%. So that's a success story right there.
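
The details of that engagement aren't spelled out here, so the sketch below is invented for illustration: a small genetic loop where each design picks one option per page element and "fitness" is the (here, simulated) conversion rate against traffic, in contrast to A/B testing's two fixed variants.

```python
import random

# Invented design space: each page element has a few options, so the search
# space is the product of the choices (here 4 * 5 * 3 * 4 * 5 = 1,200 designs).
OPTIONS = {"header": 4, "hero_image": 5, "cta_text": 3, "layout": 4, "color": 5}

def random_design():
    return {k: random.randrange(n) for k, n in OPTIONS.items()}

# Simulated visitor; in production, fitness is the live conversion rate.
HIDDEN_BEST = {k: 0 for k in OPTIONS}  # the (unknown) best choice per element
def converts(design):
    matches = sum(design[k] == HIDDEN_BEST[k] for k in OPTIONS)
    return random.random() < 0.02 + 0.03 * matches  # base rate plus lift

def conversion_rate(design, visitors=2000):
    return sum(converts(design) for _ in range(visitors)) / visitors

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in OPTIONS}

population = [random_design() for _ in range(12)]
for generation in range(15):
    ranked = sorted(population, key=conversion_rate, reverse=True)
    parents = ranked[:4]  # the designs that converted best on this traffic
    population = parents + [crossover(random.choice(parents), random.choice(parents))
                            for _ in range(8)]

print(sorted(population, key=conversion_rate, reverse=True)[0])
```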

Now, a system like that will continuously adapt and improve, so you can run it in campaign mode or just permanently, and that's the level of disruption we're bringing to just this one small case, which is A/B testing, improving your website or mobile design funnel. Just let this thing run permanently and adapt to the traffic that's coming in, the various users that are using your system, the changes you might make, the new products you might now want to sell, whatever. That's fine; we'll just adapt to it and constantly improve.

So, you said a minute ago that you don't have a professional services department—that you make products and then people use those products, but these sound like specific applications you're doing for specific customers?

No, that's the funny thing; it's good that you bring it up. It's just a couple of lines of JavaScript that our customers include at the top of their webpage. Everything is hosted by them, and then there's a WYSIWYG editor that accesses their website and puts wireframes around it. Our customers use it themselves: the designers at our customers, or their agencies, are the ones actually using this. We set them up and they're off and running; we're not doing professional services for them.

Got you, so what's that product called?

It's called Sentient Ascend, and you can go to ascend.ai, which is the website that talks all about [it]; we have some case studies there and so forth.

So what would be the thing you're maybe not working on, because you don't have a product developed yet, but that you would love to bring all of your learning and experience to bear on? Give me a big meaty world problem.

World problem, okay. So, I want to talk about cyber agriculture, which is a world problem, I think. Growing food, growing plants, in controlled environments is now very much doable, and there are many different companies and research groups in academia working on this. The one we worked with is Open Agriculture at the MIT Media Lab. We did a little research project with them, but I would really like to see this be industrialized and out there.

As a startup, it's hard for us to be doing too many things at the same time, but hopefully someday we'll get to it. Here's the problem statement: we can now take all of the context away from growing things. You can grow things in these container-sized greenhouses, although 'greenhouse' is a misnomer, because there is no light coming in from the outside; there's no interaction with the outside world whatsoever, other than a power cable coming into the thing. It's hydroponic, so you're growing things essentially in water, and you have many actuators and sensors that allow you to make changes to the environment in which the plants are growing, ranging from the minerals in the water to the humidity in the air, the amount of light, the spectrum of light you're shining at what time, and so forth.

You can even control the time at which you inject stressors into the water. These are half-dead bacteria that prompt the plant to produce more volatiles to defend itself, and by virtue of that it becomes more tasty. So you have a lot of control over this environment. But here's the problem: we don't know how to grow things once you take away the context. Humans have been growing things for thousands of years now, but if you take away the context, the latitude, the longitude, the weather, the soil type, we don't know how to grow things. We don't know what recipe actually works; in fact, we're lucky to be able to grow anything at all in that kind of environment and not have it die. So finding a recipe that allows a plant to even survive in an environment like that, let alone thrive: that is a problem for AI.

And if you actually had systems that were energy efficient and able to grow things, GMO or non-GMO, in these sorts of environments anywhere in the world, then not only are you tackling world hunger, you're also tackling global warming, because you're not transporting all this food and these plants around, and siloing them, and putting ozone on them, and so forth. The implications are huge. So what we did is we actually did this for basil. We modelled how basil grows, and against that model, using evolutionary computation in our core LEAF technology, we evolved recipes for growing basil. Basil is "fast" in its growth: it's 6 to 8 weeks from seedling to full plant. And this whole process, of creating a model, forming a hypothesis as to what recipe for growing the plant will maximize its fresh weight and taste, trying that recipe out in the actual greenhouse, and then bringing the results back in to refine the model, is called black-box optimization.
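
That loop is easy to sketch. Everything below is an invented stand-in: the recipe variables, the k-nearest-neighbor surrogate model, and the simulated greenhouse (whose optimum deliberately sits at 24 hours of light, echoing the finding described next). A real evaluation takes weeks of actual growing, which is exactly why the surrogate model matters.

```python
import random

# Invented recipe space: hours of light per 24h, and a nutrient dose (a.u.).
def random_recipe():
    return (random.uniform(6.0, 24.0), random.uniform(0.0, 10.0))

# Simulated "greenhouse": stands in for weeks of actually growing a plant.
def grow(recipe):
    light, nutrients = recipe
    return -0.05 * (light - 24.0) ** 2 - (nutrients - 6.0) ** 2 + random.gauss(0, 0.2)

def predict(recipe, observations, k=3):
    """k-nearest-neighbor surrogate model over past (recipe, yield) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(observations, key=lambda obs: dist(obs[0], recipe))[:k]
    return sum(y for _, y in nearest) / len(nearest)

# Seed the model with a few random trials, then loop: hypothesize the best
# recipe under the surrogate, run the real experiment, refine the model.
observations = [(r, grow(r)) for r in (random_recipe() for _ in range(5))]
for trial in range(20):
    candidates = [random_recipe() for _ in range(200)]
    best_guess = max(candidates, key=lambda r: predict(r, observations))
    observations.append((best_guess, grow(best_guess)))  # feed the result back

print(max(observations, key=lambda obs: obs[1]))  # best recipe found so far
```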

We ran this process for a few months, and we had some very fascinating results. One we kind of knew already; it's like the AI reinventing the wheel: the larger the plant, the less tasty. We know that tiny tomatoes are tastier and juicier. But here's what we didn't know: everybody thought that basil needs at least 6 hours of sleep every 24 hours, in other words at least 6 hours of darkness in every 24. But this was a variable in the model, and the AI very quickly found that if you shine light on the basil 24/7 during a certain period of its growth, it thrives. And nobody knew that. So the AI was not only able to replicate known, reasonable results, it was actually able to advance the science of agriculture in this particular case.

You said there's no consideration of latitude and longitude, but presumably the temperature of the environment around the container affects the inside of the container, doesn't it?

No, that's also completely controlled. The temperature is controlled, everything; there are so many degrees of freedom in this thing, it's just crazy. And for every actuator I think they have two sensors, plus multiple cameras in there taking photos. There's a wealth of data produced every time you grow something in there, and that's where AI thrives.

So, tell me again, who is heading that up, or where can somebody learn more about it?

Of course, on our website you can look at our research blog, and there are entries about our cyber agriculture work, which is the AI side of what we do. There's a paper, coming out soon if not published yet (I think we have it on arXiv already), on the scientific side of what we've done there. And if you go to OpenAg, through the MIT Media Lab website, or just search for OpenAg online, it's fascinating what those guys are doing. In fact, everything they do, from the specs for what they call 'food computers,' these containers, to the actual recipes we helped produce, is open and available on their blog.

So, you said you're an optimist about this technology and its power to make the world better, and I'm in that camp as well, but I'm going to give you four topics on the other side of that column, four problem areas, and I want you to pick one of them and just talk us through your perspective on it.

One of them is the effect of AI on privacy: once you have good voice recognition and image recognition for every camera and every phone call, we're no longer lost in an anonymous pool of data; everything can be logged and mapped and modelled and so forth. The second is the effect on warfare: obviously, machines making kill decisions and lowering the political cost of warfare and all of that. The third is the ability of AI to attack infrastructure, to bring down electrical grids and whatnot. And the fourth is, broadly speaking, security: the ability of people, maybe non-state actors, to attack banks, steal things, bust open encryption, and all the rest.

So again, that's privacy, warfare, infrastructure and security. Which of those keeps you awake at night?

As an optimist...

Well feel free to tell us why we shouldn't worry about those too, that would be great.

Oh, absolutely we should, I think. In all of those, first of all, 'technology' is interchangeable with 'AI.' I don't think we should simply say these are problems that come with AI; these are problems that come with the misuse of technology as a whole. That's the way I would look at it.

Absolutely, but every time you talk about AI, you keep using words like "understand" and "learn," and those aren't words I use to describe the sprinkler system in my yard, or any other piece of technology. You're talking about a technology to which you're applying these human-like adjectives about its mental capabilities, and that's like a phase change, like the difference between 33-degree water and 31-degree water: something materially different happens. So to just say, well, these are the same problems we've always had and AI doesn't really change them... do you really think that? Because if you think that, then AI's power to transform the world for good is equally impaired.

No, I think, again, the same analogy holds. Technology is inherently neutral. AI is inherently neutral, and it depends on how we delegate responsibility to it, if that can even be done, and how we regulate it and how we use it, or abuse it. That is the question, right? I mean, nuclear power versus nuclear bombs is a great example of that: no AI there, but very, very scary. So if there's abuse, it's abuse of an inherently neutral system; technology is inherently neutral.

But let me say this: the risk is really that we become complacent and abdicate our responsibility toward technology. I think that's the main risk. We've seen manifestations of that, beyond even the four categories you just mentioned, like in the election that happened in 2016. We're in the US, so we see that first and foremost, but it has happened in a number of other countries as well. That is an abuse of technology, not necessarily through AI, but by virtue of the end users abdicating their responsibility in the use of technology, and being reactive to whatever is thrown at them as if it were the absolute truth.

We have a choice, for example, when it comes to news: either go back to a world where we get our news late, and through only limited sources, or live in a world in which many, many news sources are available to us. In the latter case, if we really feel it's our right to go there, it increases our responsibility for how we select from the sources available to us, and how we make decisions with respect to them, versus being reactive to whatever is fed to us as the absolute truth.

I'm using this as an analogy; I think it holds for a lot of what we do. If we want self-driving cars to reduce deaths on the road (and they will do that; very clearly injuries and deaths and accidents are going to be reduced significantly), are we prepared to abdicate responsibility to those who build these systems, as far as the life-and-death decisions that have to be made? Or are we going to come up with a framework for how, ethically, to make these sorts of decisions? These are questions that are not being asked and not being answered, and we're just kind of sitting there saying, "Well, technology is so complex, we'll just let it do whatever it wants to us." It doesn't help that we're also benefitting from technology in a very free manner, right? Facebook is free, much of what you do on your cellphone is free...

It's only free if you value your time at zero, but go ahead...

Exactly. No, I mean, that is true, but because it's free, you also kind of feel like whatever comes at you, you'll just take, and you'll be reactive to it rather than proactive. I think that's...

Well, you know, I'm with you on all of that. The price of liberty is eternal vigilance, and I'm with you that technology is neutral: [like with] metallurgy, you can build ploughs or you can build swords. But you have to say, or maybe you don't, that technology, with its basic ability to multiply human effort, means that as technology advances, you get more asymmetry. Now you could have, with genetic engineering, a small lab of people make a deadlier kind of smallpox and bring it back.

Whereas in 1804, you might have had somebody dump a goat in a well and poison the water. Technology increases our ability to affect the lives of a billion people, for better or worse. So all the people in the world fact-checking what they read doesn't stop a small band of bad actors from using these technologies to inflict enormous amounts of damage on the world. Is that an intractable problem, where we just have to say, "Yeah, that's the price of the time in which we live," or "the genie's out of the bottle," or whatever metaphor you want to use?

Right, but I think: what are our options here? Right? I don't think we can turn back the clock and go back...

So the genie's out of the bottle, yeah?

The genie is out of the bottle, and okay, if that's the case, it means, again, that there's more responsibility on our shoulders: to have the institutions that can control these things [do so], and to have a level of regulation, control, and direction around them. At the end of the day, it comes back to our tools. Our tools are what define us as humans and allow us to change the world, and if the world is going in a direction we're not happy [about], it's the tools that will help us correct that, AI being part of that toolset, science being part of that toolset. The last thing we want to do is just let it go off and organically do the harm that it's doing.

When you hear high-profile individuals, and you know who they all are, say that "we're going to make an AGI, and then it's going to become a superintelligence, and then it's going to turn on us and we're all doomed," what's your reaction to that?

I'm very skeptical, just simply because I know the state of the art in AI, and I think we're still...

You know, I'm with you on that, as is virtually every single guest on this show, and they're all people who are either practitioners or at universities; they're deep in it, and they all say the same thing, as do I. But my question is: what's the disconnect? Why are seemingly very intelligent people [worried] about that?

You know, if you look at the profile of these folks, they're really not... with all due respect to them, I don't think they're actually that involved in AI at this level. But that aside, I think this all started with Superintelligence, Nick Bostrom's book, perpetuating this idea. It's easy to predict the distant future; it's very hard to predict the nearer-term future, and it's hard to place a point in time at which something will happen.

Whether or not we have the capability to build an autonomous system that manifests all the facets of intelligence that we have, and is more capable than we are: yeah, I think that's a logical extension of where we are. Is it going to be human-like? No, I really don't think it is, because the brain is a very, very specific configuration, evolved over thousands of years for humans, that is next to impossible to replicate.

So it will disappoint as far as being human-like. But whether or not it will have the capacity to supersede us in decision-making and so forth: maybe. How far are we from that? Many, many decades, or even centuries. I think we're very far from that with our kind of technology, and therefore I think we should be worrying about more immediate things than AGI. The amount of money and thought and worry that's going into [AGI] is completely disproportionate, and completely fueled by our science-fiction conception of where AI can go, versus where it is right now.

Yeah, I mean, there's a phrase for it: "reasoning from fictional evidence," which is, I think, what...

Yeah I believe that...

We're all susceptible to it, myself included. You can't watch all those stories, and I don't think it's a conspiracy. I mean, I'm not going to pay $9.50 to go see [a movie where] everything's perfectly normal in the future and nothing bad is going to... you know, you want something...

But then, just on that topic real quick: I don't know if you saw Ex Machina...

Yeah.

The joke I make about that one is: this genius guy builds these androids that are completely human-like, but he forgot to put an off-switch on them.

I know, just one word, like a safe word, right? Don't you think they would have put in one big red button that just...

Yeah exactly, come on man...

...shuts the whole thing down. It's like in Star Trek, too: they seem to have forgotten how to make a fuse. One thing hits the ship and it all blows up, because they don't have a fuse that gets flipped, a circuit breaker that's like, "Ahh, I tripped the breakers, all the lights went out." It just escalates until the whole ship explodes.

Exactly.

So, I used to be really annoyed by dystopian movies, because I have to go see them all; everybody asks me about them. Then I read, it may have been Frank Herbert who said it, that sometimes the purpose of science fiction is to keep the future from happening. And to your earlier comment that we just have to be vigilant: I think maybe that's the point of the dystopias, to remind us that these things can be misused.

I would agree.

So let's flip back to optimism, let's put our rose-colored glasses back on, and let's talk about the bottom billion: the people in this world who seem to be the most intractable problem we all face, and frankly, it is not to our credit that we live in a world of plenty and yet there are a billion people who live on... half the world still lives on $3 a day or less.

How do you think these technologies, which increase productivity in the developed world where they're applied, help the bottom billion? Are we helped by the fact that everybody has a smartphone now, and that it will be a conduit to these technologies? How do you see somebody in a really, really disadvantaged place, a Marie Curie or a Leonardo da Vinci born into abject poverty, being able to achieve their potential in the world of the future?

Absolutely, that human potential is huge and untapped, and I think technology is reaching out, maybe not yet to the folks living in that abject poverty, below $3 a day, but it is trickling down. The access to technology, the access to education, is just amazing: what you can learn just by watching YouTube videos, or by taking online courses. When it comes to AI, you can take Andrew Ng's machine learning course anywhere in the world using a single cellphone, and these technologies are becoming cheaper and more accessible.

The internet itself is reaching out, and it's to the benefit of a lot of these corporations to make it available on a wider and wider scale. So there's the educational impact, and there's the job impact: as a productivity tool, these systems can be used by folks, mom-and-pop shops, that have an idea and want to do something; they're in a better place to do that with things like micro-loans and micro-financing that are augmented by these sorts of technologies, communication-based systems, online payments, and so on; the list goes on. And then you have the other side of this, which is how technologies such as AI can help improve the standard of living of these folks.

I talked about agriculture; we didn't have time to talk about healthcare, but that's there too. If you're optimizing a system for price, for accuracy, for minimizing the risk of an intervention, for example, you can take that and use it in a lot of these areas, including healthcare and agriculture, and you can only improve on what's there. Hopefully someday we will use these in the social sciences as well, to promote democracy and the rule of law, and help pull these societies up to what we know now. We've seen this happen time and again: these societies, these countries, hit a certain stage, and from there on, if they follow the blueprint, they move out into the second world and first world and become economic giants.

It's just not happening yet, primarily, I think, because of lack of education, lack of reaching these folks. We've hit a lot of bumps in the road in the past few years, but things like the Arab Spring, the awakening fueled by what's called the Twitter revolution or the Facebook revolution and so forth: that's very real, and it's still there. It gets beaten down, and that's just the nature of the beast. It's two steps forward, one step back, and I think the optimist in you and me can see that there is a path here, and technology can definitely help us get to that next level.

All right, well, that's a great place to leave it; I see we're at the top of the hour. If people want to follow you or your company, can you give us a couple of URLs or Twitter handles and the rest?

Sure. The company's website is www.sentient.ai. As I said, our digital marketing product is at www.ascend.ai, and a lot of the scientific papers and breakthroughs and so forth are also reachable through our website. My own Twitter handle is @babakatwork, and the sentient.ai Twitter handle is also out there. I welcome comments, and I welcome people connecting with me on LinkedIn; again, babakatwork is the username. It's been great talking to you; these were some really interesting topics we covered here.

All right, well thanks for being on the show.

Pleasure.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.