Episode 38: A Conversation with Carolina Galleguillos

In this episode, Byron and Carolina discuss computer vision, machine learning, biology, and more.


Guest

Carolina Galleguillos built computer vision and machine learning classification systems at SET.tv, which was acquired by Conversant in 2014. She obtained a Ph.D. in Computer Science from UC San Diego, where she did research in object recognition in images.

Transcript

Byron Reese: This is Voices in AI brought to you by Gigaom, I’m Byron Reese. Today our guest is Carolina Galleguillos. She’s an expert in machine learning and computer vision. She did her undergrad work in Chile and has a master’s and PhD in Computer Science from UC San Diego. She’s presently a machine learning engineer at Thumbtack. Welcome to the show.

Carolina Galleguillos: Thank you. Thank you for having me.

So, let’s start at the very beginning with definitions. What exactly is “artificial” about artificial intelligence?

Well, I read somewhere that artificial intelligence is basically trying to make machines think, which is very “sci-fi,” I think, but what I’m trying to say here is we’re trying to automate a lot of different tasks that humans do. We have done that before, in the Industrial Revolution, but now we’re trying to do it with computers and with interfaces that look more human-like. We also have robots with computers inside. I think that’s the artificial part. As for the intelligence, we’ll see how intelligent these machines become in time.

Alan Turing asked the question, “Can a machine think?” Do you think a machine can think, or will a machine be able to think?

I think we’re really far from that. The brain is a really, really complex thing. I think that we can approximate the thinking of a machine to be able to follow certain rules, or learn patterns that seem more like common sense, but at the end of the day, it won’t think autonomously, I think. We’re really far from that. 

I want to get into computer vision here in just a minute.

Yes.

But I’m really fascinated by this, because that’s a pretty low bar. If you say it’s using machines to do things people do, then a calculator is an artificial intelligence in that view. Would you agree with that?

Well, not really, because a calculator is just executing commands.

But isn’t that what a computer program does?

Yeah, it does. But I would say that in machine learning, you don’t need to program those rules. The program will infer the rules by seeing data. So you’re not explicitly writing down the rules in that program and that’s what makes it different from a calculator.
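
[Editor’s note: a minimal sketch of the distinction Carolina draws, using Python and scikit-learn. The spam-filter example and its toy data are invented for illustration: the first function is a hand-written rule, like a calculator executing commands; the second program infers its rule from labeled examples.]

```python
# A calculator-style program: the rule is written down explicitly by a person.
def looks_like_spam(num_links, has_greeting):
    return num_links > 3 and not has_greeting

# A machine learning program: the rule is inferred from labeled examples.
from sklearn.tree import DecisionTreeClassifier

X = [[5, 0], [1, 1], [7, 0], [0, 1]]  # toy features: [num_links, has_greeting]
y = [1, 0, 1, 0]                      # labels: 1 = spam, 0 = not spam

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[4, 0]]))        # a rule nobody wrote down by hand
```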

Humans do something really kind of cool. You show a human an object, like a little statue of some figure, and then you show them a hundred pictures, and they can tell what that figure is—even if it’s upside down, if it’s underwater, if the lighting changes, if they only see half of it. We’re really far away from being able to do that with a machine, correct?

Well, it depends; I think it always depends. We can do very well now in certain conditions, but we are far from—I’m not saying super far—doing it when you don’t have all the information, I would say.

How do humans do that? Is it that we’re really good at transfer learning or what do you think we’re doing?

Well, yes, transfer learning, but also a lot of it is about context. I think that the brain is able to store so many different connections—millions and millions of connections, it has so much experience—and that information goes into recognizing objects. It’s very implicit. A person cannot recognize something they’ve never seen before, but if that person has context about what it should be, they would be able to find it. So I think that’s the main point.

If I took you into a museum gallery, and there was a giant wall with two hundred paintings on it—they’re all well-known paintings, they’re all realistic and all of that—and I hang one of them upside down, a human notices that pretty quickly. But a computer doesn’t. A computer uses the same kind of laborious algorithm to try to figure out which painting is upside down, but a human just spots it right away. What do you think is going on there?

I think that what’s going on is probably the fact that we have context about what we usually face. We usually see paintings that are straight, that point up, so we are really quick to identify when things are not the way we expect them to be. A computer doesn’t have that knowledge, so it starts from a blank slate.

What would giving them that kind of context look like? I mean, if I just said, “Here’s 100,000 paintings and they’re all right side up. Now, quick, glance at that wall and tell me which one’s upside down,” the computer wouldn’t necessarily be able to do it right away, would it? What kind of context do we have that they don’t—what paintings look like right side up, or what reality looks like right side up? What do you think?

Well, if there are objects in that painting, the computer will probably also be able to say that it’s upside down. Now, if it’s a very modern piece, I don’t think a human could figure out if it’s upside down either. I think that’s the key to the problem. If it’s basically a bunch of colors, I wouldn’t be able to say it’s upside down. But if it is a painting of a lady, the face of a woman, I would be very quick to spot that the painting is upside down. And I think a computer could do that too, because you can train a computer to identify faces. When that face is upside down, it would be able to say that, too.

It’s interesting, because if you were an artist who drew fantastic landscapes of science fiction worlds, and you showed people different ones, somebody could point at one and say that’s not very realistic, but that one is. But in reality, of course, they’re alien planets. It’s because we have a really deep knowledge about things like the shapes of biological forms and the effects of gravity—just this really intuitive sense of what “looks right” and what doesn’t. What are the steps you go through to get a computer to have that kind of natural understanding of reality?

That’s a good question. I think, as part of recognizing objects—let’s say that’s our main task—we try to also give more information about how these objects are presented in reality. So, you can have algorithms that encode the spatial information of objects: usually you’re going to find the sky above, the grass down below, and usually you won’t find a car on top of a building, and all that. So you can actually train an algorithm to surface those patterns, and then, when you show it something that is different, it’s going to make those assumptions, and one of the outcomes is that it might not recognize objects correctly, because those objects are not in the context that the algorithm was trained on.
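
[Editor’s note: a toy sketch of the kind of contextual re-scoring Carolina describes, in plain Python. The labels, prior probabilities, and detections are all invented for illustration; real systems learn these spatial priors from data rather than hard-coding them.]

```python
# Toy contextual prior: how plausible is it that object A appears above object B?
# These numbers are invented for illustration.
ABOVE_PRIOR = {
    ("sky", "grass"): 0.99,
    ("grass", "sky"): 0.01,
    ("car", "building"): 0.05,   # cars rarely sit on top of buildings
    ("building", "car"): 0.80,
}

def rescore(detections):
    """detections: list of (label, confidence, y_center) from some detector.
    Down-weight detections whose spatial arrangement violates the prior."""
    rescored = []
    for label, conf, y in detections:
        penalty = 1.0
        for other, _, other_y in detections:
            if other != label and y < other_y:  # smaller y = higher in the image
                penalty *= ABOVE_PRIOR.get((label, other), 1.0)
        rescored.append((label, conf * penalty))
    return rescored

# The "car above a building" hypothesis loses most of its confidence.
print(rescore([("car", 0.9, 50), ("building", 0.95, 300)]))
```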

And do you think that’s what humans are doing, that we just have this knowledge base? Because I could totally imagine somebody looking at these alien landscape paintings and saying, “That one doesn’t look right,” and then they say, “Well, why doesn’t it look right?” and it’s like, “I don’t know. It just doesn’t look right.” Is there some much deeper level at which humans are able to understand things, one that wouldn’t necessarily be able to be explicitly programmed, or is that not the case?

I think that there’s a belief in machine learning, and now especially with deep learning, that if you have enough data, say millions and millions of examples, those patterns will surface. You don’t have to explicitly put them there. And then those images—let’s say we’re doing computer vision—will encode those rules, like, the relative sizes of a car and a person are mostly going to be consistent, even though you see them at different distances.

I know that as humans, because we have so much experience and information, we can make those claims when we see something that seems odd. At the same time, we can have algorithms that—if you have enough data to surface those patterns—could also spot that. I think it’s happening more and more in areas like medicine, where you want to find cancer. They’re trying to leverage those algorithms to be able to detect those anomalies.

How much do you think about biology and how humans recognize things when you’re considering how to build a system that recognizes things? Are they close analogs or is it just trivia that we both happen to recognize things but we’re going to do it in such radically different ways? There’s not really much we can learn from the brain?

This is a very hot topic, I’d say, in the community. There’s definitely a lot of machine learning that is inspired by the brain, or by biology. And so, they’re trying to build architectures that simulate the way that the brain works, or how the eyes would process information. I think they do that in order to understand how the brain works, and also to go the other way around and create algorithms that emulate the brain, though I think that would be extremely hard to do.

When I build machine learning systems, either computer vision or just generic machine learning systems, I usually am not inspired by biology, because I’m usually trying to focus on very specific tasks. And if I were to be inspired by the brain, I would have to take a lot of different things into account in my algorithm, which often just needs to do something very smart but very focused, whereas the brain actually tries to take into account a lot of different inputs. So that’s how I usually approach the work I do.

Humans have a lot of cognitive biases, ways in which our brains don’t work quite right. Our brains appear to have these bugs in them. For instance, we over-see patterns, right? I guess we overfit. You can look up at a cloud and see a dog. And the thesis goes that, a long time ago, it was far better to mistake a rock for a bear and run away than to mistake the bear for a rock and get eaten.

Do you think that when we build computer systems that can recognize objects, are they going to have our cognitive biases because we’re coding them? Or, are they going to have their own ones that we can’t really predict? Or, will they be kind of free of bias because they’re just trained off the data?

I think it depends. Basically, I think it depends on how you are going to build that system. If you do it by being inspired by the brain, you might actually put your own bias into it, because you might say, well, this is a rock and this is a bear, and bears and rocks show up together on certain occasions. Now, if you let the data speak for itself, by showing examples through algorithms, then the machine, or the computer, will just make its own judgment about that, without any bias. You can always bias the data as well, but that’s a different problem. Let’s say we take all the images in the world where all the objects appear; then we usually will pick up very general patterns, and if rocks usually look like bears, then it might make those mistakes pretty easily.

I guess the challenge is that every photograph is an editorial decision of what to photograph and so every photograph reflects a human’s bias. So even if you had a hundred million photos, you’re still instantiating some kind of bias. 

So, people have this ability…we change focus. You look at something, and then a bear walks in the room, and you’re like, “Oh, my gosh! A bear walked in the room!” and then somebody yells, “Fire! Fire!” and you turn over to see where the fire is. So we’re always switching from thing to thing, and that seems to be a feature associated with our consciousness, that it allows us to switch. Does the fact that the computer is not embodied, it doesn’t have a form, and it doesn’t have consciousness, is that an inherent limitation to what it’s going to be able to see and recognize?

Yes. I think so. I mean, once again, if the computer doesn’t have any extra sensors, it wouldn’t even realize what’s going on, apart from the task that it’s actually executing. But let’s say that computer has a camera, it also has a tactile device, and many other things, then you’re starting to enable a little bit more context to that computer, or that program. I mean, if those events occur once in a while, then it would be able to react, or say something about it.

If you think about it, photographs that we use generally are the visible spectrum of light that humans are able to see, but that’s just a tiny fraction. Are there pattern recognition or image recognition efforts underway that are using a full spectrum of light and color? So they show infrared, ultraviolet…

Yes. Definitely. Yes.

Can you give an example of that? I find that fascinating.

Well, a very good example is self-driving cars. They have infrared cameras. They could potentially give you an idea that “there is a body there, there is something there that is inanimate,” so you don’t hit it when you’re driving. And it’s definitely not just photographs: for MRIs, all medical imaging, basically you use all the information that you can get.

Our audience is broadly familiar with machine learning, but can you talk a little more specifically about how it works? Conceptually it’s, “Here’s a million photos of a cat, learn what a cat looks like, and try to find the cat in these photos,” but peel that onion back one layer. How does that actually work, especially in a neural net situation?

Yeah, you basically tell the computer, “there’s a cat in here.” For every single image, you’ll say, “there’s a cat in here.” Sometimes you even label the contours of the cat, or maybe just a rectangle around it, to differentiate the actual foreground from the background. At the first level, the computer is going to do very low-level operations, which means it’s going to start finding edges, connected components, all at a very granular level. So it starts finding patterns at that level; that’s the first stage. And the deeper the network is—let’s say it’s a neural network—the higher-level these patterns get; they build on each other more and more. So the representation of a cat starts from very low-level features, and you start getting things like paws, and ears, and eyes, until you actually get a full cat at the end of the layers of the neural network. That’s what the layers of the neural network encode. So when you have a new picture where there’s not a cat—maybe there is a person—it’s going to try to find those patterns. And it’s going to be able to say, “Well, in this area of this image, there’s no cat, because I don’t see all those patterns coming up the way I see them when a cat is present.”
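
[Editor’s note: a minimal sketch of the layered architecture Carolina describes, written in PyTorch. The layer sizes and the “cat vs. no cat” task are invented for illustration; a real detector would be deeper and trained on labeled images, and the training loop is omitted here.]

```python
import torch
import torch.nn as nn

# Early layers respond to low-level patterns (edges, textures); deeper layers
# combine them into parts (paws, ears) and eventually whole objects.
cat_detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # edges, textures
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # simple shapes
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # object parts
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 2),                                        # "cat" vs. "no cat"
)

scores = cat_detector(torch.randn(1, 3, 64, 64))  # one fake 64x64 RGB image
print(scores.shape)  # torch.Size([1, 2])
```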

And what are the inherent limitations of that approach, or is that the be-all and end-all of image recognition? Give it enough images, it will figure out how to recognize them almost flawlessly?

There are always limitations. There are objects that are easier to recognize than others. We have now made amazing progress, even recognizing different types of dogs and different types of cats, which is amazing, but there are always constraints: lighting conditions, the quality of the image, things like that. You know, some dogs can look like cats, so we can’t do anything about that. We always have constraints. I think that algorithms are not perfect, but depending on what we’re trying to use them for, they can get very accurate.

The same techniques are used not just for training on images, but for making credit decisions or hiring decisions or identifying illnesses—it’s all basically the same approach, correct?

Yes.

What do you think of the efforts being considered in Europe that dictate you have a right to know why the algorithm suggested what it did? How do you reconcile that? For instance, you have denied a person’s mortgage application, and that person says, “Why?” and then you say, “The neural net said so.” And, of course, that person wants to know, “Well, why did it say so?” And it’s like, “Well, that’s a pretty hard question to answer.” How do you solve that, or how do you balance that? Because as we get better with neural nets, they’re only going to get more obfuscated and convoluted and nuanced, right?

I think the harder the problem, say in the case of computer vision, the harder it is to say what triggers a certain outcome. But luckily, you can still come up with algorithms that are simpler to train, but also simpler in terms of figuring out the main features that are triggering certain outcomes. And then you’ll be able to say to that person, “If you pay your credit cards, then your score will improve and we’ll be able to give you a mortgage.”

I think that’s the trade-off, right? I think it’s always task-dependent. There is a lot of hype around deep learning and neural networks. Sometimes you just need somewhat simpler algorithms. They are still very accurate, but they can actually give you insights about your prediction, and about the data that you are looking at, and then you can actually build a better product. If your aim is to be extremely complete, or to try to solve a task that is very difficult, then you’re going to have to deal with the fact that there are a lot of things you won’t know about the data, and also about why the outcome of that algorithm came about.
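
[Editor’s note: a toy sketch of the trade-off Carolina describes: a simpler, interpretable model whose learned weights can be read off and turned into an explanation for the applicant. The feature names and data here are invented, using scikit-learn.]

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features: [on_time_payment_rate, debt_to_income]
X = [[0.95, 0.20], [0.40, 0.60], [0.90, 0.35], [0.30, 0.70]]
y = [1, 0, 1, 0]  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

# Unlike a deep net, the learned weights explain the decision directly:
for name, w in zip(["on_time_payment_rate", "debt_to_income"], model.coef_[0]):
    print(f"{name}: {w:+.2f}")
# A positive weight on on_time_payment_rate supports advice like
# "pay your credit cards on time and your odds improve."
```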

Pedro Domingos wrote a book called The Master Algorithm where he said there are five different tribes, where he kind of divides all of that up. You have your symbolists, and you have your Bayesians, and so forth; and he posits that there must exist a master algorithm, a single general-purpose algorithm that would be able to solve a wide range of problems, in theory, all problems that are solvable that way. Do you think such a thing exists?

I don’t think it exists now. Given the fact that deep learning has been extremely useful across different types of tasks—going from computer vision, to even, like, music or signal processing, and things like that—there might be an algorithm that can help with a lot of different tasks, like a master algorithm, if you want to call it that. But it will always be modified in some way to fit the actual problem that you have. Because these algorithms are very complex, sometimes you actually need to know why the outcome is the outcome that you’re getting. So, yes, I think that algorithm might exist at some point; I don’t think it exists now. Deep learning, maybe, is one of the frameworks—because it’s not an algorithm, it’s more like a framework, or an architecture—that is helping us make accurate predictions in different areas. But we want to know why, because, at the end of the day, it will affect a lot of different people.

One argument for the existence of such a thing—and that it may not be very much code—is human DNA, which is, of course, the instruction set to build a general intelligence. And the part of the DNA that makes you different from creatures that aren’t intelligent is very tiny. And so the argument goes that somehow a very little bit of code gives you the instructions to build us, and we’re intelligent, so there may be an analog in the computer world: that it’s just a small amount of code that can build something as versatile as a human. What do you think of that analogy?

Yeah, that’s mind blowing. That would be really cool if that happened, but at the same time, very scary. I never really thought about that before.

Do you think we’re going to build an artificial general intelligence? Will we build a computer as smart and versatile as a human?

This is a very personal answer: humans are social beings, and the only way that this could happen is if we’re alone and we need something like a human to be with us. Hopefully, we’re very far from that future, but in the actual present, I don’t think that’s something we aim to do.

I think it’s also more about figuring out humanity itself: understanding why we came to be the way we are, why people are violent, why people are peaceful, why people are happy, or why people are sad. And that might be the best way of understanding it: basically, reconstructing a human brain, and maybe extending that brain to have arms and become a robot. But I don’t think that would be the actual goal. It’s more like a way to understand humanity.

I also don’t think it would be a way of executing tasks. We always see in sci-fi movies that robots do things that humans don’t want to do, but they wouldn’t have to be humanoids. I was looking to buy a Roomba yesterday, and that could possibly be a robot that cleans; it’s trying to do something that I don’t want to do, but I don’t consider it artificial intelligence or a smart being. So I think it is in some ways possible, but I don’t think the end goal is to build something like a human.

Certainly not for a lot of things, but in some parts of the world, that is a real, widespread goal. The idea being that in places where you have aging populations and lonely people, you want robot companions that have faces you recognize, and that display emotions and can listen to your stories, chuckle at your jokes, recognize your jokes, all of that. What are your thoughts on that? In those cases, some people are trying to build artificial humans, aren’t they?

But they won’t be complete humans, right? They will be machines that are very good at solving certain tasks: recognizing your voice, recognizing that you’re telling a joke, or being able to say things that make you feel better. But I don’t think that they are artificial humans, because that’s a very complex thing. That robot helping a senior person, so that person isn’t alone, won’t be able to do the many other, much more complex tasks that a human can do.

I think it’s all about being very specific, to solve very specific tasks. And I think robots in Japan are doing that. I mean, we have smart assistants, right? And they are very good at understanding what you’re trying to say so they can execute a command, but I don’t think of them as another “human” that is trying to understand me, or actually know who I am.

I don’t know if you saw that Tom Hanks movie years ago called Cast Away, but his only companion is this volleyball that he names Wilson, and then there’s a point where Wilson is floating off and he’s like, “Wilson!” And he risks his life to save Wilson. And then you look at how attached people get to their pets and their animals. And so, you can imagine, if you just kind of straight-line that graph, how people might feel toward robots that really do look and act human. It’s undeniable that people will develop strong emotions for them.

Yes, I agree with that.

So, it’s interesting, you’re talking about these digital systems, and some vendors choose to name them. Apple has Siri, Amazon has Alexa, Microsoft has Cortana, but Google, interestingly, doesn’t personify theirs. It’s the Google Assistant. Why do you think, and not necessarily those specific four cases, but why do you think sometimes they’re personified and sometimes they aren’t? What does that say about us, or them, or how we want to interact with them, or something like that?

That is very interesting, because sometimes when I have my son with me I’ll ask Alexa to play some music. Having a name makes it feel like it’s part of your family, and probably my son will wonder who this Alexa person is that I’m always asking to play ’80s pop music. But it definitely makes you feel that it’s not awkward, that your interaction with the machine is smooth and not just an execution, right? It’s part of your environment.

I think that’s what these companies are going for when they use a real name; it’s not a very common person’s name, but still a name that you could say. Alexa probably has a female voice because that’s, sort of, the gender they’re aiming to represent. With respect to Google, I think maybe they want to see it in a more task-driven way. I don’t know. It could be many things.

I think I read that Alexa may have come from—in addition to having the hard “x,” which makes it sound distinctive—an oblique reference to The Library of Alexandria, way back in ancient times. 

I’m always saying, “Alexa, set me a five-minute timer”—which, luckily, it didn’t hear me say just now—but when the timer goes off, I go, “Alexa, stop,” and it feels rude to me. I don’t talk to people that way, and, therefore, it’s jarring to me. So, in a way, I prefer not having it personified, because then I don’t have that conflict. But per what you just said about your child: they may not grow up having any of those sorts of mixed feelings about these things. What do you think?

Yeah, I think that’s true. Sometimes I feel the same way you do when I say “stop”; it feels like a very commanding way to speak. With the next generation, you have iPads, and computers are almost an old thing; it’s all about new interfaces. It’s definitely going to shape the way that people communicate with machines and products. It’s very hard for me to know how that’s going to be, but it’s going to be very natural—the way interactions will happen with websites, products, gadgets, things like that.

I think that the fact that Google is still the “Google Assistant” also has to do with the fact that when you’re in a conversation, people don’t say Google a lot, right? So then you won’t trigger those devices to be listening all the time, which is another problem. But yeah, it’s very interesting. I always think about how the next generation is going to behave or how the experience is going to be for them, growing up with these devices.

The funny thing is, of course, because Google has become a verb, you could imagine a future in a hundred years or two hundred years when the company no longer exists, but we still use the word, and people are like, “I wonder why we say ‘google’? Where did that come from?” 

This is, in a sense, a personal question, but do you think a computer could ever feel anything? So, for example, you could put a sensor on a computer that detects temperature, and you can program the computer to play a wav file of a person screaming if it ever hits five hundred degrees, but that’s a different thing than actually feeling pain. Do you think it’s possible for a machine to feel, or is that something that’s purely biological, or purely related to life, or some other aspect of us?

I think that for a human to be able to feel something, that aspect of humanity, is such a complex thing. We know from biology that it’s mostly our nerves perceiving pain. They’re perceiving things and then sending that signal to the brain, and the brain is trying to interpret that information into something. 

If you want to be very analytical about it, then you could possibly have a computer that feels pain, like you said: something that can give input to the computer, and then it goes through the processor, and the processor will infer a rule and say, “this is pain.” But I don’t think they can do it the way that we, as humans, perceive it. It’s such a complex thing.

But in the end, we have a self. We have a self that experiences the world.

Yes.

Can a computer have a self, and can a computer, therefore, experience the world, as opposed to just coldly sensing it?

I think it’s really hard, unless you can build a computer with cells and things that are more common to a human, which would be a really interesting thing. Personally, I don’t think that is possible, because even pain, like we’re talking about, is very different for everyone, because it’s mostly shaped by experience, right? A computer can store a lot of information, but there’s much more to it than the signal; the way that data is interpreted is what makes humans so interesting.

Humans, we have brains, and our brains do all these things, but we also have a theory of something called a “mind,” as in, “are you out of your mind?” And I guess we think of the mind as all the stuff that we don’t really understand how just a bunch of neurons can do, like creativity, emotions, and all of that. In that movie I, Robot, when Spooner, the Will Smith character, is talking to Sonny, the robot, he says, “Can you paint a painting? Can you write a symphony?” And of course, Sonny says, “Well, can you?” But the point being that all of those are things we associate with the “mind.” Do you think computers will ever be able to write beautiful symphonies, and bestselling novels, and blockbuster movies, and all of that? And if so, is machine learning a path to that? How would you ever get enough movie scripts, or even books or stories, to train it?

That’s interesting. I actually read that there is a movie whose script was written by a machine learning algorithm, and they made a movie out of it. Now, is it good? I don’t know. So, it’s definitely possible. I think that, per se, computers cannot be creative. In my experience, they’re basically looking at patterns of things that people find funny, or exciting, or that make them feel things.

You can say, “This song is very pleasing because it’s very slow and romantic and relaxing,” and then a computer could just take all those songs that are tagged that way and come up with a new song that has those specific patterns, that make that song relaxing, or pleasing, right? And you could say, “Yes, they are being creative,” because they created something new from something old, from patterns and previous examples. So, in that case, it’s happening already, or a lot of people are trying to make it happen. 

Now, you could also argue that artists are the same way. They have their idols, and they somehow are going to try to take those things they like from their heroes, and incorporate them in their own work, and then they become creative, and they have their own creations, or their own art. A computer can actually do the same process. 

I think humans are able to capture even more than a computer could ever capture, because a human is doing something for other humans, so they can actually understand the things that move people or make people feel sad or happy. Computers could also just catch the patterns that for certain people, for certain data that they have, produces those emotions but they will never feel those emotions like humans do.

There’s a lot of fear wrapped up in artificial intelligence and machine learning with regard to automation. There are three broad beliefs. One is that we’re soon going to enter a period where there are people without enough education or training to do certain jobs, and you’re going to have a kind of permanent Great Depression of twenty to twenty-five percent unemployment. Another group of people believes that eventually machines can do everything a person can do, and we’re all out of work. And then there’s a group of people who say, look, every time we get new technology, even things as fundamental as electricity and steam, and even when we replaced animals with machines, unemployment never goes up. People just use these new technologies to increase their productivity, and therefore their standard of living. Which of those three camps, or a fourth one, do you find yourself sympathetic to?

I think definitely the third one. I agree. A really good example: my dad studied technical drafting for architecture, and then there were computer programs that did that, and he didn’t have a job. He did it by hand, and then computers could do it easily. But then he decided that he was really good at sales, and that’s where his career started to develop. You know, you need to be personable, you need to be able to talk to people, engage them, sell them things, right?

I think that, in general, we are going to make people develop new skills they never thought they had. We are going to make them more efficient. For example, at Thumbtack, we’re empowering professionals to do the things that they’re really good at, and me, personally, I’m helping them, through machine learning, to optimize processes so they can be focused on just the things they love doing.

I don’t really like the fact that people say that AI, or machine learning, will take people’s jobs. I think we have to see it as a new wave of optimized processes that will actually give us more time to spend with our families, or develop skills that we always thought would be interesting, or do the things that we love and make a job out of them. We can support our families by doing the things that we love, instead of being stuck in an office doing things that are super automatic, where you don’t put your heart, or even your mind, into it. Let’s leave those things to machines to automate, and let’s do something that makes our lives better.

You mentioned Thumbtack, where you head up machine learning. Can you tell us a little bit about that? What excited you about that? What the mission of the company is, for people who aren’t familiar with it, and where you’re at in your life cycle?

So, Thumbtack is a marketplace where people like you and I can go and find a pro who’s going to do the right project for us. What’s really exciting is that you don’t have to go to a listing, and call different places, and ask them, “Are you interested?” When you go to Thumbtack, you put in a request, and only the pros, who are super qualified, that are interested will contact you back with a quote, and with information to tell you, “I’m ready to help you get your project done.” And that’s it.

It’s amazing that it’s 2017 and finding a plumber to fix your toilet, or even a DJ because you’re getting married, all those things are still so hard. What I really like about working at Thumbtack is that we are making that super easy for our customers. And we are empowering pros to be good at what they do, so they don’t have to worry about putting out flyers, or putting up a website, and spending all that time on marketing and all those things; instead, they can help people with their projects and build their business.

It’s such a complex problem, but at the same time, it has such a good outcome for everyone, which is one of the things that attracted me. And also the fact that we’re a startup, and startups are always a hard road, because we’re trying to disrupt a market that’s been untouched forever. I think that’s a super challenging problem as well, and being part of that is actually super exciting.

It’s true. It wasn’t that long ago when you needed a plumber, and you opened up the Yellow Pages, and you just saw how they were able to put a number of As in front of their names, AAA Plumbing, AAAA Plumbing, but that was how we figured things out. 

So, tell me a kind of machine learning challenge, a real day-to-day one that you have to deal with; what data do you use to solve what problem in that as you outlined it?

There are many different things. For some things, like automating tasks, machine learning can make our team more productive. For example, making sure we can curate content. We get a lot of photos and reviews and things like that from our customers, and also content from our professionals, and we want to make sure that we’re showing all the things that are good for our customers, or surfacing information that is very relevant for them when they’re looking to hire a professional.

There are also things like using information in our marketplace to enhance the experience of our users when they come to Thumbtack, and being able to recommend another category to them. Say they put in a request for a DJ; if they are having a party, they might also want a cleaning person the next day, right? Things like that. Machine learning has really helped there, to be able to use a lot of the data that we get from our marketplace and make our product better.
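
[Editor’s note: a toy sketch of the kind of cross-category recommendation Carolina describes, based on which categories co-occur in past requests. The request histories here are invented, and a production system would use far richer signals than raw co-occurrence counts.]

```python
from collections import Counter
from itertools import permutations

# Hypothetical history: categories requested together by the same customer.
histories = [
    ["dj", "house cleaning"],
    ["dj", "catering", "house cleaning"],
    ["plumbing"],
    ["dj", "catering"],
]

# Count how often each ordered pair of categories appears together.
co_counts = Counter()
for h in histories:
    for a, b in permutations(set(h), 2):
        co_counts[(a, b)] += 1

def recommend(category, k=2):
    """Rank other categories by how often they co-occur with `category`."""
    scored = [(b, n) for (a, b), n in co_counts.items() if a == category]
    return [b for b, _ in sorted(scored, key=lambda x: -x[1])[:k]]

print(recommend("dj"))  # e.g. ['house cleaning', 'catering']
```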

All right. We’re nearing the end. I do have two questions for you. Do you enjoy any science fiction—like books or movies or any of that—and, if so, is there anything you’ve seen that you look at and think, yes, I could see the future unfolding that way, yes, that could really happen?

Yes, I definitely like science fiction. Foundation is one of the books that I really like. 

Of course. That’s one that has resisted being made into a movie, although I hear there’s one in the works, but it’s such a big project.

Yeah, I enjoy any type of science fiction, in general. I think it’s so interesting how humans see the future, right? It’s so creative. At the same time, I don’t particularly agree with any of those movies, and things like that. There are a lot of movies in Hollywood, too, where computers or robots become bad and they kill people. 

I don’t think that’s the future we’ll see with machine learning. I think we’ll be able to disrupt a lot of areas, and the one I’m most excited about is medicine, because it can really change the game for humanity by being able to accurately diagnose people with very few resources. In so many places in the world there are no doctors; to be able to take a picture, or send a sample of something, and have algorithms that can help doctors get to a diagnosis quickly, that’s going to change the way the world is today.

Gene Roddenberry, the creator of Star Trek, said, “In the future, there would be no hunger, and there would be no greed, and all the children would know how to read.” What do you think of that? Or, a broader question, because you are in the vanguard of this technology, you’re building these technologies that everybody reads about.  Are you optimistic about the future? How do you think it’s all going to turn out?

Actually, it feels like a renaissance in some ways. After a renaissance, some big shift in culture, there are always these new creative things happening. In the past, there were painters who revolutionized art by coming up with new ways of being creative, of painting. So, my view of the future is that, yes, a lot of the basic needs of humans might be satisfied, which is great. Mortality probably is going to be very low. But there is also the opportunity for us to have enough time to be creative again, and think about new ways of living. Because we have that foundation, people will be able to think long-term, and be wilder about new ideas. I think that’s mostly how I see it.

That’s a great place to end it. I want to thank you so much for taking the time. It was a fascinating hour. Have a good day.

Sure. Thank you. Thank you for having me.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.