In this episode, Byron speaks with Andrew Busey about the nature of intelligence and how we differentiate between artificial intelligence and 'real' intelligence.
Andrew Busey is a serial entrepreneur with a focus on building products. He created the first web-based chat systems, the first chat-with-customer-service products, and many other early e-commerce, social, and gaming platforms. He most recently founded Conversable to make it easy for big brands to build experiences on Facebook Messenger, Twitter, Alexa, Google Assistant, and other next-generation conversational platforms. He has 26 patents and one novel, Accidental Gods. He has a computer science degree from Duke University and an MBA from the Wharton School at the University of Pennsylvania.
Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Andrew Busey. He is a serial entrepreneur with a focus on building products. He created the first web-based chat systems, the first chat-with-a-customer-service-rep option, and many other early e-commerce, social and gaming platforms. He most recently founded Conversable to make it easy for big brands to build experiences on Facebook Messenger, Twitter, Alexa, Google Assistant and other next-generation conversational platforms. He has 26 patents and one novel, Accidental Gods. He has a computer science degree from Duke University and an MBA from the Wharton School at the University of Pennsylvania. Welcome to the show, Andrew.
Andrew Busey: Thanks. I'm excited to chat.
Well, I've read your book, Accidental Gods. Of course we know each other in real life, but I did read the book. I'd love to start by talking about it before we get into AI. Tell me the whole premise of the book and why you wrote it, because I think it's fascinating.
The book is about my views on AI in some ways. I started thinking a lot about both sides: from a philosophical point of view, there's all sorts of things to think about in religion and where we come from and why; and then there's also the converging point of view of 'What is intelligence and how does it exist?' So the book is really about what would happen if there were things that we created that were like us, that had an intelligence and sentience and awareness but weren't aware of us. How would that play out? That was the premise of the book, which conveys a lot of my views on certain areas of artificial intelligence as well, but also where we came from. Since writing that book, there's been a lot more broad conversation about simulation theory and whether we're living in a simulation, and those types of things dovetail with a lot of the book as well.
Because you wrote that a while ago.
Yeah, I self-published it in 2014. I think I wrote it mostly in like 2009.
So where do you think we came from?
I think, statistically speaking, there's a high probability we're living in a simulation of some sort -- mostly on the theory that at some point in the future we'll be able to build a simulation that's roughly as complex as we are, so…
Well that's always the argument. But to just flip it around: that all begins with the assumption that consciousness is a programmable property and the fact that we experience the world is programmable and that we are basically machines ourselves. Doesn't that assume a certain view, because a character in a video game right now doesn't experience the game, correct?
Certainly not as we would understand it.
And yet we experience the world. So why is that not proof that we're not in the simulation?
Well, I think we're still early in the process. I mean, we've only really been thinking about hard problems for, best case, 6,000 years, worst case a few thousand less -- it depends on when you date a lot of this stuff in Egypt starting. That's not very much time on the grand scale of our understanding of the universe, which is also still somewhat constrained.
And so our advancement of technology at the level we're talking about right now -- computers and games and simulations that have that type of complexity, or any remote semblance of it -- has really only been happening for less than 100 years, and more like 30 or 40, maybe 50 at the most. So to think that we've even touched the beginnings of what can be done with technology…
But the basic assumption there is that an iPhone and a person are the same type of thing -- maybe an orders-of-magnitude difference in scale, but the same basic computational device in a way. Is that true?
At a very simplified level I might agree with that. I think that humans are computational and brains are effectively a type of computer. I think they're much more complex, and we obviously don't really understand how the brain works either. So it could turn out that there are things we just don't understand yet that exist in the brain that are a big part of what gives us consciousness and self-awareness, which I think are sort of the defining traits -- at least as you describe them -- of seeing and understanding the world and having a sense of place in it. I think that's a pretty interesting way of viewing the world, and I think it's going to be a while before we really understand how the brain is creating that and what that really means.
It could turn out that in the brain there's some type of quantum computation, for example, at the neuron level that we just don't really understand. I don't think that's necessarily what it'll be. It could be that because neurons are not as binary as they're represented in a neural network, the brain is more adaptable in different ways than we really understand. Those could all be different types of computational machinery that we just haven't figured out yet.
Just take neural networks, for example. The original neural network designs were created in, I guess, the late '50s, and they were discarded because they didn't really do anything -- mostly because they couldn't perform at an efficiency level that delivered any value. Then in the beginning of the 2000s people started trying to run neural networks again, on new CPUs and then new GPUs, and they were like, 'holy crap,' these things do some pretty amazing things if you put them on the right tasks and run them on computational systems that are designed for that type of processing -- GPUs are much better at linear algebra than CPUs.
It turns out that you can build even better, more specialized hardware for processing tensors, which is a lot of what Google is doing with TPUs and what Nvidia is doing with its GPUs. Those things make these neural networks orders and orders of magnitude faster. They allow more complex forms of neural networks to be created. They allow things like backpropagation to work at scale, which really helps make neural network training much better. Those things just weren't possible when the neural network idea was conceived, and now, because computation has advanced enough that these mathematical functions can run orders and orders of magnitude faster, we're seeing all sorts of new ways to use them. That's what's really causing the machine learning explosion we're seeing right now. And I think that's just the tip of the iceberg.
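The point here -- that a neural network's core workload is linear algebra, exactly what GPUs and TPUs parallelize -- can be sketched in a few lines of plain Python. This is a minimal illustration, not any production framework; the two-layer network, its weights, and the input are arbitrary made-up values:

```python
def matmul(A, B):
    """Naive matrix multiply: the core workload that GPUs/TPUs parallelize."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def relu(M):
    """Elementwise nonlinearity; without it the layers collapse into one matrix."""
    return [[max(0.0, x) for x in row] for row in M]

# A tiny two-layer network with made-up weights. Training (backpropagation)
# would adjust these weights using the same kinds of matrix products.
W1 = [[0.5, -0.2],
      [0.1,  0.4]]
W2 = [[1.0],
      [-1.0]]

x = [[1.0, 2.0]]               # one input example
hidden = relu(matmul(x, W1))   # hidden-layer activations
output = matmul(hidden, W2)    # network output
```

A GPU or TPU runs exactly these multiply-accumulate loops across thousands of lanes at once, which is where the orders-of-magnitude speedup comes from.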
Well, I'll just ask one more question about consciousness and then let's move on. Right now, would you agree that we don't really have a way to think scientifically about the idea that matter can experience the universe? My hammer doesn't experience the nail. And yet the idea that inanimate matter can have a first-person experience just seems so implausible, and it sounds like what you're saying is: somewhere in the brain we're going to figure out how that happens, even though we don't really have any way to understand it right now. Isn't that kind of a punt on the question -- like the argument that says 'when we understand the brain, all will be made clear'?
Yes, I'm happy to deviate from punting, so… To unpack what you said, I would argue that computers are not inanimate matter in the same sense that the hammer is, right? Things are passing through a computer, and it uses time. It maybe doesn't understand time, but it uses time as a core function of its mechanics, just like the human mind.
So there's a lot of things for example we don't understand about time. And I think time is probably pretty critical to consciousness, because you can't really understand yourself. You can’t build predictive models of what you're going to do and how you're going to act if you don't have an understanding that something's going to happen in the future. You can't learn things if you don't understand things that happened in the past.
And so things like time are happening in the physical world, and there are things in our brains -- chemicals and electrical signals and all sorts of other stuff -- that are analyzing the data our sensory organs, like our eyes and skin and nose and ears, are detecting, and they're accumulating lots of data. Computers can do big parts of that. What your eyes see gets processed, sent to your brain and your visual cortex, and then you do something with it. So I think we lose our deep understanding when we start to get at why we're doing certain things, and you can diverge onto a lot of conversational paths there, like, 'Do we have free will? Is the universe just ticking along and we're just riding it, or do we just process it in a different way that makes us feel like we have free will and choice?'
I think those are unknown questions and we just don't have the data. But I do think that comparing the brain to a hammer and calling it an inanimate object is not a fair comparison -- not even in your iPhone-and-human-brain example. Your iPhone is not an inanimate object. It's not necessarily smart and self-aware and analyzing the universe around it, but if you applied a kindergartner's level of observation to it, it might look that way. I might get a notification at some point that says, "Hey, Duke is about to play a basketball game. You might want to watch that on ESPN." Well, that seems like it's aware of things around it, right?
Even though we understand that that's just programming and that's just pulling data from all sorts of data sources, it does look like it absorbs some data feed somewhere that said Duke's game is coming. It knows that I want to watch Duke games and it notified me of that fact. So in some ways, that's not that different than a lot of things that humans do.
Fair enough. You mentioned Duke. When did you graduate from college?
Way to make me feel old: 1993.
So you and I are both basically sitting on 50; you must be about 48?
Back in 1993 when you were at Duke and you were studying computer science, what was the skinny on artificial intelligence?
I almost went to UPenn undergrad in an AI curriculum that was computer science and psychology. I think that at the time, AI was much different. People thought you could program things to make decisions using expert systems that were more complex, but dichotomous-tree kinds of things. So there was stuff like LISP, and it never really delivered. I think it wasn't the same as the situation we have now, which has changed a lot because of, basically, computational speed, data and networks. None of those things were really that amazing when I graduated. So there was the Internet, but when I graduated there was no Yahoo, there was no Google, there was no internet browser...
Mosaic came out in March of ‘93. So you know you're right.
I was in fact the product manager for Mosaic and Spyglass. That was my first real job, so I was in Champaign, Illinois, which is where the commercialization of the Internet happened. Mosaic was developed by Marc Andreessen and a bunch of other people at the National Center for Supercomputing Applications at the University of Illinois, as a way of basically creating a more interesting and amazing client to access the World Wide Web that had been created at CERN. A lot of it was about adding graphics and imagery, which made it much more compelling to people. So that was a pretty big leap forward for getting people to use computers and networks.
Yeah and if you think about it, what's always interesting to me about the Internet is you've just implied it's kind of big and dumb. All it is is computers communicating on a common protocol. And yet think about what it did to society in 25 years.
It created $25 trillion in wealth, a million businesses; it transformed media, politics, so many things. How do you compare the impact of narrow AI -- just what we know how to do now, just machine learning? Is it going to have an effect equal to that, or massively more? How big of a deal do you think it will be -- again, given no big breakthroughs, just plain old what we know how to do now?
Just machine learning based computer vision will change the world.
We'll talk about that...
In ways that I think people just can't conceive.
I want you to run with that. But you're right because 25 years ago we all looked at the Web and none of us thought Uber. None of us thought Etsy. None of us thought eBay, Open Source, Wikipedia, Google. Eventually, you do because it gradually dawns on you, but you're like one of those forward-thinking people I know, so you tell me what computer vision alone is going to do to the world.
So I think computer vision alone will change things, even if you apply it just to human faces. Imagine that any camera anywhere in the world is connected to the Internet. Given what we understand of databases and the availability of information just from Facebook and Google, it's not that hard, in my view anyway, to imagine a world where you're immediately identified anytime a camera sees you, and I think that technology essentially exists today. The databases are not quite there yet, and the computer vision is not quite there, but it's clear that it's going to get there.
It's not a wild leap, but people are still in a little bit of denial that it's going to happen, because it fundamentally changes privacy. It fundamentally changes everything. Politics is an easy example. Imagine you go to a political fundraiser, or any event having to do with politics, where inevitably someone's trying to get money from someone. If you could wear a pair of glasses that were like Google Glass but looked cool and didn't look like they had a giant camera on them -- which, again, is technologically not that hard; just look at Snapchat's Spectacles, something less obtrusive, more like regular sunglasses, that can show things on the lens -- you could scan an entire room, suck in every face, quickly check a database, identify everyone, and pull every piece of public information on each of those people. You could probably see the net worth of 90% of them, because a lot of it is on the internet already, especially if they're public figures or have ever been on an S-1 or any sort of SEC filing.
Lots of data that exists in the world today is terrifying, if people actually realized how easy it is to get to -- within three steps. Once there's a real incentive to pull that data together and deliver it in cohesive ways, like tying it to facial recognition, it's pretty scary.
And so to me there are going to be all sorts of things, both good and bad, that come out of that, which I think people just haven't really imagined yet. There are probably a bunch of amazingly cool use cases -- Uber- or Google- or Amazon-type things -- and lots of companies will get created that we just haven't thought of yet. But I think the combination of instant facial recognition by basically any camera, tied to the range of data sources that exist on the Internet today, is going to create a bunch of unintended consequences: both positive opportunities and some probably pretty negative things that most people just don't imagine.
Well, let's talk about that a little bit. You're right. I agree wholeheartedly that privacy has always been guaranteed, in a way, because there are just so many people doing so many things, making so many phone calls, that there's just no way to watch everybody. But with cameras able to read lips as well as a human can, not only can those cameras recognize everybody, like you said, but they can essentially listen to what everybody says. They can 'listen' to every cell phone conversation via voice-to-text. They can read every email. And it doesn't even take nefarious tools to put all that together; it's the same tools you use to look for cures for cancer, right?
Let's just talk about the government aspect of it for a moment. Whether in the United States or other parts of the world, do you think a government surveillance state is the inevitable outcome of that -- because once governments are able to watch everyone, they will? Or do you think that's the kind of thing we just head off with legislation, customs, mores and the rest?
I think that'll be a sliding scale, and in the United States it will fundamentally shake people's views of what freedom means. In China it's probably inevitable. China is pretty advanced in all these areas of AI, and obviously the government there is a little more aggressive on that front. I think it's going to be one of the places where this is the most impactful, and I'm sure it's harder to predict the outcomes.
There are places like the UK which already have pretty comprehensive coverage with cameras. CCTV gives the UK government a pretty comprehensive degree of visual coverage right now that the United States doesn't have, and probably won't have for a while, if ever. Many of the problems we have in the US -- like people complaining that our internet is not as fast as it is in Korea or wherever else -- are caused by the fact that the US is just big and spread out and doesn't have the coherent full-stack government that China has.
But in the UK, where it's smaller and CCTV has been part of the culture and the systems for a pretty long time, it's already there, right? So it's not a big step to pump all that data into something that recognizes every face and matches it up with a name, so that in a couple of years you can type in somebody's name and immediately see the last CCTV camera they passed, know exactly where they are, and track them in real time across multiple CCTVs. That gives you all sorts of capabilities that are very powerful if used in the right way, to stop crime and terrorism and things like that, but that also have huge potential for abuse.
Yeah, it's kind of interesting. I had a guest on the show, you might know him: Jared from Argo Design here in town. He said that he thinks we will return to something like the Victorian era where you live in this little village with three hundred people and you know everything about everybody because of gossip, and it's a small town and everybody sees everybody come and go. But you were polite enough never to mention what you knew to them.
So you had a bifurcated set of knowledge: your public knowledge, and then this other layer where you kind of knew all the dirt on everybody. So do you think we should just kind of... (and we'll just talk about the US) give up on privacy, or not?
Because, on the other hand, whenever I go through a tollbooth, I know it takes a photo of my license plate, right? In theory it could start tracking everywhere I went, and that license plate system could be hooked up to databases of deadbeat dads and outstanding warrants and everything else, and they could just keep police there all the time to go after people as they pass through. It would be a disincentive to use the toll road, I assume. But we haven't really done those things. We don't have traffic cameras at every corner in this country. Is that just because we have a knee-jerk aversion to it, or is the technology just not there yet?
For whatever reason -- I don't understand it and I haven't done the research -- there's a much higher degree of CCTV penetration in the UK than here. I think partially it's harder to do it in a broad way here. The United States has a lot of accidental checks and balances in the combination of city, state and federal government. It's hard to get your act together to do any of those things. If the federal government tried to put up cameras on every corner, the states would react negatively and cities would react negatively. Cities do put up traffic cams to look for people running red lights and things like that, and they're able to get away with that as a safety feature; on toll roads it's a convenience feature.
I think people have really not fully thought through the downstream effects of that, and so the fight against them hasn't really started. But if they started just randomly adding CCTV-level coverage everywhere, there would be a pretty significant up-in-arms kind of reaction. Partially that's because, this much later in the process, there's a much higher level of sensitivity around privacy, especially in the United States. I do disagree with the Victorian-era model, that there's going to be a little village of 300 people. I think there'll be villages of 300 people that are off the grid.
I think what he meant is that because of social media we know everybody's dirt. It will develop a new morality. You know if somebody...
I understand that. I just don't agree. I actually don't know, maybe he just doesn't use the internet as much as me or he never uses Reddit, but that doesn't seem to be the morality that's involved. It seems to me...
And maybe it's different when it's real people. The thing that I thought was going to happen for a pretty long time, and just hasn't happened yet, is that some industrious real estate developer will eventually find some random place -- maybe a university town like Waco, or Champaign-Urbana in Illinois, somewhere that's a smaller city than Austin or Dallas -- and build a 5,000-unit apartment complex that doesn't have any windows. It just has fast internet, it'll be dirt cheap, and lots of people will go live there and mostly just live in their rooms doing things online.
I don't think we're quite to the point where that's viable from a financial point of view, both in terms of building in that style and because it's just not as easy as we imagine for an average person to make money on the Internet. But I think it's moving in a direction where we'll start to see more and more opportunities to do that in the future. I also think we'll see movement toward things like, maybe not true basic income, but things like that over the next few…
That's probably a longer-horizon thing and a side effect of machine learning and AI, but where people are sort of disenfranchised, and so they're making enough that they live in an apartment-ish type of place that's maybe not in a major hub but gives them Internet and everything else, and people just disappear into games and other online things. That's going to be a major change...
Is that a prison?
Not really, mostly because I don't view it as bad. If you imagine somebody sitting around playing games online all the time, and that's really all they ever did, some people would view that as a depressing and sad thing. But I've grown up playing games, I love playing games, I've built game companies, so I don't view gaming as negative.
And I think we're going to see things like esports create huge opportunities for people who play games to make money online. Not just the LeBron James of Fortnite -- someone like Ninja, who makes millions and millions of dollars a year; there will obviously be people like that -- but lots of opportunities for esports coaches and all sorts of other roles that arise as games become more mainstream and accepted, and as they become a path out of a jobless kind of reality.
I don't know for sure this will happen, and this is maybe a little dystopian, but say AI somehow magically solves a bunch of problems. Let's say we could 100% solve autonomous driving and delivery, so we no longer need truck drivers, and we somehow solved autonomous restaurants so we don't need to hire people there -- so we took away all the tellers, drivers and restaurant people in the United States, and maybe all the retail people. Then half the earning population suddenly doesn't have jobs because of machine learning and AI-related things.
Again, I'm not saying that's going to happen, but even if it's 20%, those people are going to have to find something to do, and they're going to have to get re-educated and move somewhere else, assuming there are new opportunities and that they're willing to be re-educated. Even if they're not, we need to find things for people to do and new opportunities. And based on what I see in the gaming world on a regular basis -- which is maybe different from what's reported -- there are people making a reasonable amount of money.
I occasionally hire people to play certain games with, because it's just easier than spending hours looking for random people if I can find a pseudo-professional who's really good and can help me through stuff. A lot of those people are veterans. I played a lot of games with a guy who was amazingly good and fun to play with, a veteran between jobs who was just trying to make a few bucks.
There are a lot of people in the gaming community who think that type of thing is hugely negative. I disagree. As it becomes more structurally integrated into gaming and esports, it becomes more socially accepted. To me it's sort of like: if I pay to go to a week-long basketball camp with Steph Curry, why is that bad? I'm not a professional basketball player, but maybe I like to play basketball, and it's a dream for me, or desirable, to play with someone who's amazing. So all those things are evolving, and they're moving in a direction that I think is under the cover of what most people think about and see on a regular basis.
Well, I want to put a pin in the jobs one... actually, let's talk about that for just a minute. If you went back to '93 when Mosaic came out, and you went to people and said, "Hey, in 25 years this thing is going to have billions of users. What's it going to do to employment?" there were plenty of people who would have said, "Well, the stockbrokers are gone, the newspapers are gone, the Yellow Pages are gone, the travel agents are gone." They would've been right about everything, but nobody gets all the new stuff. So I'll ask you a very direct question: has the Internet created or destroyed more jobs?
I don't actually know the answer to that. I'm sure there is an answer.
Well we've maintained full employment throughout the internet era, so it can't have been dramatically one way or the other.
Yeah, I think it's probably pretty neutral. But something like Amazon has a lot of downstream job creation around delivery systems and logistics, things that create lots of jobs. Those are somewhat threatened by true autonomous vehicles -- delivery, and Uber, which is a giant job creator right now. If there are truly autonomous cars, whether from Uber or Tesla or GM or Waymo or something we don't know about…
Then cost of delivering goes way down and then all kinds of new businesses take advantage of this autonomous country.
I think the half-life of a job is 50 years: 50% of all jobs vanish every half-century. And if I were to show you 250 years of unemployment data for this country and said, "Andrew, find where electricity came out, find where the assembly line came out, show me where on that chart we replaced all animal power with steam power," you can't see the line move even a little bit. And the assembly line?
I have this theory, and you're a smart guy, so tell me what's wrong with it: all technologies that increase productivity increase wages, because all increases in wages come from people being able to use technology to do more. And so there's an infinite number of jobs, because the minute you empower people to do more, people do more, and they're more productive. The nail gun didn't end the need for carpenters, forklifts didn't end the need for warehouse workers -- it never once happened. So why would this one be different?
Yeah I mean that's always the question, right? This time it's different. And I think that the thing that's different this time -- and I don't have a particularly strong view on what's actually gonna happen, I think there are plenty of places where jobs could be created. I think growing wages is different than job creation, so it's also possible that there are lots of places where new jobs could be created that are service jobs that maybe suck up some of the wages from the people getting higher wages or wealth creation or whatever. But I think that we are entering an era where things might be different, right?
So at a high level, words like machine learning and AI carry a lot of marketing and nebulous meaning. But one distinction in my mind: AI -- AGI, true autonomy -- is probably a long way away, and if we actually achieve it, it's a fundamental civilization changer. Machine learning, I think, has a potentially big impact on society, but doesn't totally refactor our civilization as we know it. Machine learning does have that word 'learning' in it, and it's true in the sense that a real machine learning system is absorbing tons of data and doing something with that data in, hopefully, a reasonably smart way -- whether that's identifying who it saw on a video camera or knowing that it's supposed to turn right.
It's those vertical AI use cases -- not general intelligence, not crazy pop-culture AI -- machine learning use cases like autonomous cars and robot pizza delivery and drones that bring stuff to your house and dark warehouses that can pick and pack everything. Those eliminate huge swaths of jobs that are, I think, harder to replace. Assembly lines made things easier: they increased productivity, they didn't replace workers.
Some of the machine learning use cases actively replace people with machines, which has not been done before. When you move from horse-drawn carriages to trucks, the truck carries more things but still needs a human. When you take the next step -- it's an autonomous truck, and there's a robot that loads it from a dark warehouse and a robot that unloads it at a local distribution center, before another robot puts it into an autonomous car that takes it to the house of the person who ordered it -- you've removed humans from everywhere in the loop. Sure, there are going to be jobs created in the companies that make the robots and the autonomous trucks, and there are going to be people who build these warehouses, so in the short term the job hit probably won't look as big, but I think there's some risk that long-term it's a significant change.
I think the biggest problem with the Internet is you don't know what's true and what's not. Is there gonna be a solution for that, or is there not really objective truth?
Ah man that's a total…I think for some things there's objective truth. I don't think for everything there's objective truth. I think there's a lot of conversations that are more philosophical and moral and ethical that don't necessarily have objective truth.
Well, maybe I overstated it. There's this whole thing about: did this happen or didn't it -- this whole "fake news" phrase and idea. Do you think there's a technical solution to that that could work? Is the web able to be self-correcting? Is it able to turn things green that are true and red that are false, or anything like that? Or is there always so much viewpoint, qualification, proviso?
Take a simple statement like, "smoking kills you." Then you say, well, it doesn't kill everybody, it kills some people -- like, everything's qualified. Or is it not? Is there a solution for a web where everybody can say anything and it all looks equal in worth and merit?
I think it's a very complex problem that has as much to do with humanity as it does with the web. A simple example: religion is not objectively true. It's probably not objectively false either, but it's certainly not objectively true. Whereas "Is Trump in Argentina right now?" has an objectively true answer -- yes, I think, because it's the G20. So there are objectively true statements about reality, both as it exists right now and as it's recorded. Like, it's objectively true that someone said something if it was recorded, right? Which opens up a whole other can of worms, like, well, could somebody have tampered…
I think we will see a time when there are systems that, in real time, can look at somebody talking at a podium and tell you whether they're lying, or contradicting previous statements, or whether there's a high degree of debate about what they said, right? So is cancer caused by cigarettes? Is that an objective truth? I don't know. But is it statistically strongly true? Probably yes. So you could show that with a footnote. I think we'll start to see that happen.
I think there's a lot of error in this area. For me, I look at a lot of the political conversations as kind of equivalent to religion in some ways: people take sides and vigorously defend their side, and they don't always have facts or reason or rationale behind those choices. In those scenarios, much of the conversation is not easily definable as objective truth. But there are definitely things that are definable as objectively false, especially if you said something in the past and then contradict it, or you say something that's just factually wrong. For example, if you were to deny the Holocaust, I'm pretty sure that's objectively a lie, and that's provable. So I think we're going to see some evolution there.
The problem is that, at the end of the day, right now there's no financial incentive for anyone to do that, because it doesn't really help you. If you're CNN or The New York Times or The Washington Post, you may have a high level of journalistic standards and integrity, but you still have some headline writer writing headlines that, even in the most prestigious news publications, I would view as at best misleading, because they're designed to get you to click and read a story. Now, if you read the story, I think they do make a huge effort to tell the truth and make sure it's verified. But it's still hard for the reader to fully verify everything if it's coming from a confidential source or whatever. I don't have any vetting data on that. A computer can't vet it either, because it doesn't know -- unless the computer knows how to read these articles. It's a really hard problem, I guess.
I have a little trick I use, which I'll pass along because I've never found it to be false, and that is: any headline that's expressed as a question, the answer is always no. It sounds like a joke, but if the answer were yes, it would be phrased as a statement. You know: "Is something in your water killing you?" The answer is no. Otherwise the headline would be something like "Your Water Is Killing You." And so I have found lots of headlines phrased as questions.
That might be true. I actually haven't thought about it that much, but I also feel like the question mark causes a FOMO event in your brain that makes you want to click, because you feel like you have to know the answer even though it's a trick.
Your insights on how technology will change social institutions I always find fascinating. So I'm going to rattle off three or four institutions; you can pick one and tell me some way technology is going to transform it. I'm going to go with: religion, the family, government, education. How will technology change one of those?
My hope would be that it changes education, because I just think there are huge opportunities to make education better. There are a lot of companies like Coursera and Udacity and Pluralsight -- a ton of money being spent on the commercial edge of education -- and it's bleeding into traditional education, like coding boot camps. They're a good way to find good engineers: people who started down a different path and took the initiative to restart their careers and spend a couple of months, or maybe a year depending on the program, fully immersed in learning to code in a way that's very different from college and designed for professionals. It's like a job to go through the process. It's not easy. And then you come out the other side and you're probably as good as a graduate from a lot of places.
And as those systems become better, more online, more accessible -- like, you can get a master's in computer science from the University of Illinois or Georgia Tech, and several other pretty prominent schools are fully online now through Coursera and edX -- these are really good. I've taken certification classes on AI through some of those programs, and Stanford, MIT… these are the best, most respected educational institutions in the world, and you can basically take their classes online. A lot of them also publish their class content freely.
That makes everything accessible, and I think that's going to have a huge impact not just in the United States but across the world, for people in places like Africa and India. India obviously has a really good college education system, but it's super competitive. So opening these things up to other people and making them accessible is very powerful. And all these platforms are collecting huge amounts of data about how people learn, how people complete these classes, what drives them. Gamifying education in some way -- even though gamification can have some negative components to it, too -- gets people through the process.
I think gamification and these models of teaching -- using the psychology of humans that we've learned especially in gaming and in this next generation of education -- are going to help people learn things faster and better, and that's going to translate into people using those same techniques on the job. And I think that will slowly trickle down into K-12 education. I don't think it's there yet; K-12 education, at least in the United States, hasn't changed very much in a long time. I went to a public high school in the middle of nowhere in Virginia that wasn't great, and I was able to go to a really good college thanks to my parents, but most people who went to that high school didn't get what I got out of it, and that was because of my parents.
But I think that as technology becomes more accessible and more available, anyone who really wants to make the effort -- and maybe you don't know that you need to make the effort in the early days of your K-12 career, but hopefully you're exposed to things, you pick them up, you engage with something that's interesting to you -- is able to learn a lot in a way that's just not available right now in a traditional classroom environment. So I think that's going to be a game changer, and that's probably true for a lot of other things.
Just popping all over the map here… the Turing test. Can a computer, a chatbot, trick you into thinking it's a person? A: do you think the techniques we know how to do now, namely machine learning, are sufficient to solve that problem? And B: do you think it means anything if we do solve it?
I have strong opinions on the Turing test. I think the Turing test is a marketing thing more than… it's kind of by accident. I think it was a good idea when it was developed; Turing and a lot of his contemporaries came up with a lot of really interesting ideas. Superintelligence and the intelligence explosion came out of philosophical thinking about AI and the idea that computers could think. That was a really important step, but the Turing test has become more of a marketing thing recently. It's really hard to construct a true Turing test right now.
I can tell you everything I've ever tried, every system I go to, I ask: “What's bigger: a nickel or the sun?” And nothing's ever been able to answer that question. Do you think those kinds of simple common sense things are within...?
So if it's a data-driven question like that I think we're basically there and...
But no, that's the thing. I did this article on GigaOm where I had an Amazon Alexa and a Google Assistant, and I documented all these questions where, when you asked them, they gave you completely different answers -- things that seemingly should have been the same. How many minutes are in a year? Who designed the American flag? And the like. You know why they would answer "how many minutes in a year" differently?
Because they're looking at different calendars or different years?
One of them was using a 365-day calendar year and the other was using 365.25 days. And you know why they answered differently on the flag? Because one of them just said Betsy Ross and the other said Robert Heft, and Robert Heft is the guy who did the 50-star configuration. So it's like, all questions are inherently ambiguous (not all, that's an overstatement).
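The two figures behind that discrepancy are easy to check directly -- a quick sketch in plain Python, just showing why both assistants' answers are internally defensible:

```python
# "How many minutes are in a year?" has (at least) two defensible answers,
# depending on which definition of "year" the assistant picked.
minutes_calendar = 365 * 24 * 60    # common 365-day calendar year
minutes_average = 365.25 * 24 * 60  # average year, counting leap days

print(minutes_calendar)  # 525600
print(minutes_average)   # 525960.0
```

Neither assistant is wrong; they just resolved the ambiguity differently without telling the user.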
Most questions are ambiguous. A human is like, "What do you mean? The current flag or the last one?" So when I say, "What's bigger, a nickel or the sun?" -- first of all, it doesn't even know if I'm saying S-U-N or S-O-N. And it doesn't know that nickel is a metal that also happens to be a coin. A human knows what I'm asking because they're both round; a human knows I'm probably not talking about a person and a lump of metal. All of that second-level inference that a human does quickly -- I haven't ever found a single chatbot that can do it. You go to any of them, you ask any of the desktop assistants, and none of them will even try to answer it.
Yeah, it's because right now the way those assistants are built is programmatic. They're not really designed to answer those types of questions. They'll try to answer certain types of questions and maintain the appearance of intelligence, but they're basically a new type of programming language for designing these interfaces. So if you're talking to Google Assistant or Siri or Alexa, you're talking to an NLP engine that attempts to parse what you're saying, understand the intent, extract a bunch of data out of the semantic way you phrased it, identify the right system to feed that to, and then extract the answer. That leaves a lot of room for error, right?
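A minimal sketch of that parse-intent-extract-route pipeline -- the intent names and patterns here are hypothetical toys (real assistants use trained NLP models, not hand-written regexes), but the shape of the flow is the same:

```python
import re

# Toy intent table: each intent is a pattern with named slots to extract.
# These intents and patterns are invented for illustration only.
INTENTS = {
    "book_flight": re.compile(r"book .*flight to (?P<city>\w+)", re.I),
    "weather":     re.compile(r"weather in (?P<city>\w+)", re.I),
}

def route(utterance):
    """Match the utterance to an intent, pull out slot data, and dispatch."""
    for intent, pattern in INTENTS.items():
        m = pattern.search(utterance)
        if m:
            return intent, m.groupdict()
    # Anything outside the programmed intents falls through --
    # which is exactly why "What's bigger, a nickel or the sun?" fails.
    return "fallback", {}

print(route("Please book a flight to Austin"))
print(route("What's bigger, a nickel or the sun?"))
```

The open-ended question never matches an intent, so the assistant either punts or guesses -- the "room for error" in the pipeline.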
NLP is a different problem. NLP to me is somewhat like computer vision -- different, but not that different, right? If I point a camera at a wall and there's a person standing in front of the wall, it's relatively easy to spot the person. If there's a bunch of numbers written on the wall, it can probably figure out that there are numbers there and what they are, depending on how complicated it is. But if a fly crosses the wall, it probably isn't going to know what that is if it hasn't ever been taught, right? These systems are data-driven, but the data is usually constrained, because building a fully open-ended data-gathering system is expensive and hard. I think the cutting edge of a lot of this is in things like deep reinforcement learning, which is what DeepMind and OpenAI are doing: okay, now I can teach this computer to play Atari video games.
In that scenario, it's learning really complex things; it's adapting and building its own system of understanding and weighting, and it's playing millions and millions of games to learn this. And it can do the same thing with Go and chess and things like that. That's why chess was the old-school AI thing, then it was Go, and now it's Atari video games -- and OpenAI is even working on Dota, a very complex modern five-versus-five game, against top players. If you look at each of those games, think of the simplest game, Tic-Tac-Toe. Tic-Tac-Toe is its own universe. It has only nine spots in its universe, it's two-dimensional, and it has only two entities: Os and Xs. There's a very small set of rules about the placement of the Os and Xs, and you take turns placing them. So it's not that hard to encapsulate that entire universe in code.
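To make "its own universe" concrete, here's a brute-force sketch that enumerates every board position reachable through legal play -- the whole game fits in a few lines and a few thousand states:

```python
# Enumerate every reachable Tic-Tac-Toe position, illustrating how
# small this game's "universe" is compared to chess or Go.
LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(b):
    """True if either player has completed a line."""
    return any(b[i] == b[j] == b[k] != ' ' for i, j, k in LINES)

def count_states():
    seen = set()

    def play(board, player):
        key = tuple(board)
        if key in seen:
            return          # already expanded this position
        seen.add(key)
        if winner(board) or ' ' not in board:
            return          # game over: nothing branches from here
        for i in range(9):
            if board[i] == ' ':
                board[i] = player
                play(board, 'O' if player == 'X' else 'X')
                board[i] = ' '

    play([' '] * 9, 'X')    # X moves first
    return len(seen)

print(count_states())  # 5478 legal positions, counting the empty board
```

Those 5,478 positions are the entire universe; chess and Go explode far beyond anything enumerable this way, which is why they needed learned evaluation instead of brute force.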
It's harder to do in chess because there's so much variability; there's an escalating complexity to these games. These systems use a mix of learned analysis and a bunch of data to predict. It used to be brute-force prediction of who's ahead; now it's a much more complex algorithm. Go is harder still than a chess engine purely built on looking moves ahead, because there is so much variability -- you have to construct strategies. And that's why, when you read the DeepMind AlphaGo stuff…
The human grandmasters are like, "Man, I've never seen a strategy like that," and that's because the computer has been playing against itself for so many more games than a human could ever play. Those systems are very specialized, but they're very complex, and they're able to predict and make choices that are hard to understand from a human point of view. And they get more complicated as you move up the scale. That's why Atari games came first.
And now it's Dota, which is a multiplayer online battle arena game like League of Legends -- basically five versus five players. There are automated creatures on each side fighting each other, there are towers that shoot things, there are positions, and there's a bunch of developed strategies around each player's role. These OpenAI bots are beating top-tier humans, and there's "fog of war" in these games, and all sorts of complexity -- you have to team up. So it starts to get more and more like reality, but we're still nowhere close to actual reality, because the variability in reality is different from the variability in a game.
But games are the best representation of a subset of reality that I think we have, and that's why they're very popular for this more general AI research. And to tie it back to Accidental Gods: part of what drove me to write that was the belief that if we see an artificial sentience or a self-aware artificial entity in the foreseeable future, the most likely place it will arise is some game-like environment -- whether that's a game that exists today or somebody creating a really physics-accurate small subsection of the world where they try to create entities that learn everything and do things in that environment. Those things may become self-aware or sentient in ways we just don't understand, because they're not designed to interact with humanity.
I think we're further off from building things that can interact with and understand the complexities of English, which is a really hard language with a lot of ambiguity. On the pure language side, I think we're more likely to see the rise of more sophisticated vertical chatbots, where your intelligent agent will be like, "Oh, you're asking to book an airline flight; I'm going to talk to my friend the airline bot, who understands everything about booking airlines." But if you ask it the circumference of the Sun, it's gonna have no idea what you're talking about. Does that make sense?
It does. So we're running out of time here. Every time I see you, you seem like a really realistic person -- very level, not prone to giddiness or extremism or anything like that. So the question I want to ask is: in the end, do you think you're a pessimist or an optimist about the future? Do you think, on balance, human nature and technology are going to work to make the world better -- define that as less misery and suffering, more opportunity for more people? Or are you more of a dystopian who thinks we're going to have a world where these technologies are used by the powerful to oppress the weak and the rich to oppress the poor and all the rest?
I think my views are maybe different from other people's in that sense. It's funny, because I posted a poll on Twitter where I described a situation and then asked, "Do you think this is dystopian or utopian?" And there was an interesting split of answers. It's because I'm working on a new sci-fi novel that starts on Earth and quickly moves out into the broader universe. It's set at a hypothetical point in Earth's future where we've developed an artificial superintelligence that arose out of an AGI project, and that superintelligence has basically taken over the military, the government and everything else, so everyone's provided for. You can kind of do whatever you want; there's not really money per se anymore, though some people get perks that not everyone gets. The only thing that really gets you big rewards is being a superstar at something like movies or games or sports or music, because that's what people still really want. If you want to lose yourself in drugs, you can; if you want to lose yourself in games or VR, you can. Or you can take a job -- the main character starts out as a librarian. She understands it's not a job that really means anything, and she doesn't really know why she's doing it, but she just wants an anchor for herself, and she gets a few perks because she does a job that has some relevance.
Some people are like, "That's amazing -- if I don't have to work anymore, I can just sit around and play games all the time with my friends, or disappear into a wildly amazing Ready Player One-style VR, or just hang out with my friends and drink all day and watch sports." Other people are like, "That would be horrible." It's interesting to hear people's responses when you frame something that way. For a majority of people, those types of models are somewhat utopian: "I don't have to work anymore, I can do whatever I want, it's fun, I get fed, I can go out and drink if I want, I can play games if I want" -- and everything is clean, and there are robots to take care of everything.
There's a scene in the book where a guy is drunk, or on drugs or something, and he passes out in a park, and a robot just picks him up and carries him away. Without any context you might think, maybe they're going to go kill that guy -- but instead the robot just takes him back to his house, cleans him up, and leaves him to wake up the next day with a hangover. Or maybe not a hangover; there are probably drugs to take care of that. But is that utopia or dystopia?
I don't know. I think there are a bunch of scenarios under which there's something utopian-ish for most people. Some people would be frustrated by it -- they'd view it as "I'm being controlled by computers." Other people would be like, "I have total freedom; I don't have to think about all this dumb stuff that I don't like." Depending on who you ask, you get different answers. And I think there's also a scenario where AGI or ASI or whatever does develop that way and we end up with an authoritarian government that's very abusive of its power and impedes a lot of the progress toward a better life for the majority in exchange for power for the minority. I just don't know which way it goes, or if there are in-betweens.
All right, Andrew, what's the best way for people to follow you?
On Twitter I'm just @AndrewBusey.
Alrighty, thanks a bunch for your time. That was a fascinating hour.
Great, thanks a lot.