In this episode Byron speaks with Sameer Maskey of Fusemachines about the development of machine learning, languages and AI capabilities.
Sameer Maskey is the Founder of Fusemachines, Inc. and serves as its Chief Executive Officer. Sameer has more than 15 years of experience in natural language processing, machine learning, and data science. He currently teaches Data Science and Technology Entrepreneurship at Columbia University. He has published more than 20 peer-reviewed articles and has served as a session chair, program committee member, and review committee member at many international conferences. Sameer holds a Ph.D., M.Phil. and M.S. in Computer Science from Columbia University.
Byron Reese: This is Voices in AI brought to you by GigaOm and I'm Byron Reese. Today my guest is Sameer Maskey. He is the founder and CEO of Fusemachines and he's an adjunct assistant professor at Columbia. He holds an undergraduate degree in Math and Physics from Bates College and a PhD in Computer Science from Columbia University as well. Welcome to the show, Sameer.
Sameer Maskey: Thanks Byron, glad to be here.
Can you recall the first time you ever heard the term ‘artificial intelligence’ or has it always just been kind of a fixture of your life?
It's always been a fixture of my life. But the first time I heard about it in the way it is understood in today's world of what AI is, was in my first year of undergrad, when I was thinking of building talking machines. That was my dream, building a machine that can sort of converse with you. And in doing that research I happened to run into several books on AI, and particularly a book called Voice and Speech Synthesis, and that's how my journey in AI came to fruition.
So a conversational AI, it sounds like something that I mean I assume early on you heard about the Turing Test and thought ‘I wonder how you would build a device that could pass that.’ Is that fair to say?
Yeah, I'd heard about the Turing Test, but my interest stemmed from being able to build a machine that could just talk, read a book and then talk with you about it. And I was particularly interested in being able to build the machine in Nepal. So I grew up in Nepal and I was always interested in building machines that can talk in Nepali. So more than the Turing Test, it was just this notion of 'can we build a machine that can talk in Nepali and converse with you?'
Would that require a general intelligence, or are we not anywhere near needing a general intelligence? For it to be able to read a book and then have a conversation with you about The Great Gatsby or whatever, would that require general intelligence?
Being able to build a machine that can read a book and then just talk about it would require, I guess, what is being termed artificial general intelligence. That raises many other kinds of questions about what AGI is and in what ways it's different from AI. But we are still quite a long way from being able to build a machine that can just read a novel or a history book and then be able to sit down with you and discuss it. I think we are quite far away from it, even though there's a lot of research being done from a conversational AI perspective.
Yeah I mean the minute a computer can learn something, you can just point it at the Internet and say "go learn everything" right?
Exactly. And we're not there, at all.
Pedro Domingos wrote a book called The Master Algorithm. He said he believes there is some uber algorithm we haven't yet discovered which accounts for intelligence in all of its variants, and part of the reason he believes that is, we're made with shockingly little code, our DNA. And the amount of that code which is different from a chimp's, say, may only be six or seven megabytes. That tiny bit of code doesn't have intelligence, obviously, but it knows how to build intelligence. So is it possible that... do you think that that level of artificial intelligence, whether you want to call it AGI or not, but that level of AI, do you think that might be a really simple thing that's right in front of us and we can't see it? Or do you think it's going to be a long hard slog to finally get there, and it'll be a piece at a time?
To answer that question, and to be able to say maybe there is this Master Algorithm that's just not discovered yet, I think it's hard to make any claim towards it, because we as human beings, even neurologists and neuroscientists and so forth, don't fully understand how all the pieces of cognition work. Like how my four-and-a-half-year-old kid is able to learn from a couple of different words, put them together and start having conversations. So I think we don't even understand how human brains work. I get a little nervous when people claim or suggest there's this one master algorithm that's just yet to be discovered.
We have this one trick that is working now, where we take a bunch of data about the past, we study it with computers and we look for patterns, and we use those patterns to predict the future. And that's kind of what we do. I mean, that's machine learning in a nutshell. And it's hard for me, for instance, to see how that will ever write The Great Gatsby, let alone read it and understand it. How could it ever be creative? But maybe it can be.
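[As an aside for readers: the "learn patterns from past data, predict the future" loop Byron describes can be sketched in a few lines. This is a minimal, hypothetical illustration in pure Python, fitting a least-squares line to some made-up past observations and extrapolating one step; a real system would use a library such as scikit-learn and far richer models.]

```python
def fit_line(xs, ys):
    """Learn a pattern from past data: the least-squares line through (xs, ys).

    Returns (slope, intercept) of the best-fit line.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "The past": a made-up series of daily observations.
days = [1, 2, 3, 4, 5]
sales = [10, 12, 14, 16, 18]

slope, intercept = fit_line(days, sales)

# "The future": apply the learned pattern to an unseen day.
prediction = slope * 6 + intercept
print(prediction)  # 20.0
```

The sketch makes Byron's point concrete: the model only extends patterns found in its historical data, which is exactly why this kind of extrapolation is hard to connect to open-ended creativity.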
Through one lens, we're not that far along with AI, and why do you think it's turning out to be so hard? I guess that's my question. Why is AI so hard? We're intelligent, and we can kind of reflect on our own intelligence and figure out how we learn things, but we have this brute-force way of just cramming a bunch of data down the machine's throat, and then it can spot spam email or route you through traffic and nothing else. So why is AI turning out to be so hard?
Because I think the machinery that's been built over many, many years, as AI has evolved to the point it's at right now, is, like you pointed out, still a lot of systems looking at a lot of historical data, building models that figure out patterns in it and making predictions from it, and it requires a lot of data. And one of the reasons deep learning is working very well is that there's so much data right now.
We haven't figured out how, with a very little bit of data, you can generalize the patterns to be able to do things. And that piece, how to build a machine that can generalize its decision-making process based on just a few pieces of information... we haven't figured that out. And until we figure that out, it is still going to be very hard to make AGI, or a system that can just write The Great Gatsby. And I don't know how long it will be until we figure that part out.
A lot of times people think that a general intelligence is just an evolutionary product of narrow intelligence. We get narrow, and then... first it can play Go, and then it can play all games, all strategy games. And then it can do this and that, and it gets better and better, and then one day it's general.
Is it possible that what we know how to do now has absolutely nothing to do with general intelligence? Like we haven't even started working on that problem, it's a completely different problem. All we're able to do is make things that can fake intelligence, but we don't know how to make anything that's really intelligent. Or do you think we are on a path that's going to just get better and better and better until one day we have something that can make coffee and play Go and compose sonnets?
There is some new research being done on AGI, but the path right now, where we train more and more data on bigger and bigger architectures and sort of simulate or fake intelligence, I don't think that would lead to solutions that have general intelligence the way we are talking about. It is still a very similar model to what we've been using before, and that was invented a long time ago.
These models are much more popular right now because they can do more, with more data and more compute power and so forth. So when a system is able to drive a car based on computer vision and the neural net and learning behind it, it simulates intelligence. But it's probably not really the way we describe human intelligence, such that it can write books and write poetry. So are we on the path to AGI? I don't think the current evolution of the way the machinery is done is going to lead you to AGI. There are probably some fundamentally new ways of exploring things that are required, in how the problem is framed, to think about how general intelligence works.
Well, I just want to ask you one more question about AGI, and then I won't bring it up anymore. Can you construct an argument, whether or not you believe it, that AGI is impossible, that it cannot be done with computers, without necessarily appealing to the soul or something supernatural? A lot of people would say 'We can't build it because people aren't machines.' But without that, is there an argument that it may not be possible?
Is there an argument that it may not be possible? That's a hard question to answer. I mean, if you look at it from a very, very fundamental perspective of how the brain works, it's signals turning on and turning off, and that's basically what passes from neuron to neuron as well. So if you were to think from the perspective of literally how the signals get turned on and turned off and get passed on to another bunch of neurons, you could potentially think that maybe we could build a machine that [is] similar to human brains. So from that perspective I want to answer saying, "Maybe there is a way to build a general intelligence system, using math."
Well, I guess there are those who might argue that the function of a neuron isn't understood, and that it could be as complicated as a supercomputer. There's something called the OpenWorm project, where people have spent 20 years trying to model the 302 neurons of this worm in a computer, just 302 neurons, and get something that behaves like that worm. And they don't even know if that's possible to do. So it raises the question of 'Well, what about humans? If we don't even know how a few neurons interact to produce intelligence, how would we ever do a hundred billion?'
You know the power requirements of the big computers: 20 to 30 million watts, versus your 20-watt brain. So I don't know if we're gonna brute-force our way there; there may not be enough power. We have to cleverly get there.
Yeah, I mean, that's what I was saying: basically there are some similarities in the very fundamental units of the things that come together to build intelligence. But from the perspective of how machine learning has evolved, can that lead to general intelligence? I don't think so. Not, at least, the way it's being done.
The fascinating thing is that my guests on this show are divided almost 50/50 on that question, and I find that really interesting. I don't say it in a snarky way. We're in a brand new science. It's expected that we don't know. I mean, we're like living in the age of Newton: 'oh my gosh, look what we can do, and we still don't even know the guideposts and the boundary markers.' So it's not surprising. But I think it's really interesting that there seems to be a group of people who believe that the techniques we already know will get us there, and then people who say 'No, it's going to be something we haven't quite figured out.'
I think I fall in the camp of: we haven't really figured out how to build generally intelligent machines.
So you are the founder and CEO of Fusemachines. Tell us what is Fusemachines and why did you found it?
So I started Fusemachines a couple of years ago with the notion that talent is everywhere but opportunities are not. Having taught at Columbia University for several years, I saw that a lot of engineers in developing countries, and in underserved communities in the US as well, didn't get the right kind of tools. If they can get the right kind of content, the right kind of professors to learn machine learning, then they can be equally good. So we had this notion of 'can we democratize AI, and can we do it by educating talented engineers around the world and in the US from underserved communities?' And that's what we started to do. And that's what Fusemachines is about. We find talented engineers and then train them in machine learning.
And where are you in that mission? What are some metrics of what you've done so far?
Sure. So basically we run what we call the Fusemachines AI Fellowship programs. Our first location was in Nepal, and after Nepal we ran it in the Dominican Republic, then New York, then Rwanda and Burkina Faso, and we're adding several cities to it. So at this point there are hundreds and hundreds of engineers being trained through our platform on machine learning. Every time we open a cohort, we get thousands and thousands of applications, and we select a cohort of anywhere between 25 and 100 engineers at a time.
What about the part of the cohort that doesn't get chosen? Is there a self-guided [version], or is it strictly this mentoring model that you do?
It is strictly a mentoring model. It's basically a mix of online and on-site training. So we were using the Columbia MicroMasters program on AI as one of the pieces of content, as one of the programs. We're also using our own online program on the platform. But one of the problems with all the online platforms, especially the MOOCs, is that the completion rates [are] really low. Everybody signs up, and then after four weeks they sort of forget about it. So what we did was we created a model where there is an on-site component to the online component, where all the students come to an office or to a classroom once or twice a week. They do assignments together, they do projects together, and they also get taught a little bit.
So that's the difference, and that slight modification of the model has created amazing results on the completion rates. So we at this point have stuck to the model, which is: they come in and they learn online, and they also learn on site. The engineers who don't get into the program get to reapply again, but because of the on-site component, we cannot take everybody.
I have this theory that if we just stop developing, making any advances in AI, we have like 10 years of work to do. AI is used in so few places. There's all this data and we only apply it in these few instances.
You know the story of the Google engineer that made the cucumber-sorting machine, I suppose you've heard? I think he used Kubernetes and a Linux box and an Arduino, and got it to sort cucumbers based on these four factors. And so you think: if there's a cucumber use case, there are millions and millions of use cases.
So I think about that, that we actually need a whole lot more talent. And then I think: more and more people are studying AI than ever, and you have Andrew Ng's course, you have Coursera courses online where you can learn some of this. So do you think we're training enough people? Are enough people entering AI that we're gonna be able to apply it in all these areas it needs to be applied in? Or is there always going to be more to do with it than we have people to do it?
I think, at least for several years forward, there are not going to be enough people to apply AI to all the things everybody would like to apply AI to. But one caveat I would like to point out: a lot of engineers who learn AI, who learn how to use tools like scikit-learn, Keras or TensorFlow and whatnot by taking shortcuts, like watching a few videos online and then doing some tutorials, can actually build simple applications to sort a cucumber or something else. But the one thing that I see among a lot of those engineers is that a lot of them fundamentally don't understand how the algorithms work.
Now, one may argue that they don't need to understand how the algorithms work if they can just apply them, but I believe that to build really, really good systems, you need to understand how the algorithms work. And to understand how the algorithms work, well, pretty much all machine learning right now is math, you know, and different kinds of statistics. So at least in terms of engineers who understand all the way down to the fundamentals of the math, for quite a few years we will not have enough people to build AI systems for every use case where it could be applied.
Anybody who reads my writing [knows] I'm an optimistic guy about technology, about the future. I think we're going to use technology to solve a lot of seemingly intractable problems of humanity. I think we're going to feed the hungry and eliminate poverty and eliminate disease and lengthen lives. Nobody's ever called me a pessimist, but do you worry about the misuses of the technology with regards to privacy? Because it used to be we all had privacy mainly because there were so many people, you just couldn't listen to every phone call, you couldn't watch every person. But with AI you can.
With AI you can 'voice to text' every conversation, you can do facial recognition and map everybody, you can look for patterns, and the very same tools we build to identify tumors can identify disloyal citizens or something like that. You don't actually have to go into science fiction to find examples of this, as you know. What are your thoughts on that?
It's a real issue and it's one of the scary sides of AI, which is being able to use AI to pretty much track people everywhere they go, track every online event that gets triggered by whatever application and potentially could be used for surveillance and so forth. So that's something I do worry about on how AI could be misused.
I worry less about having Terminators walking the streets of New York City anytime soon. But even the current version of facial recognition systems could be applied to mass surveillance: actually put on buildings, or put with small weapons on top of drones, as shown in one of the videos, I think, about killer robots, a video created to show what is possible and how it could be misused. That's a scary side of AI that I do worry about quite a bit.
And then on the other side, what do you hope the technology does for the planet? What's the optimistic view? You've got those two guys sitting on your shoulder and one of them is like ‘uggh’, you know. What's the other one saying right now?
So the optimistic side is saying, you know, the same innovative technology, if it's not used to do mass surveillance for example, could be used in many applications, like delivery of medicine using drones. In fact at Fusemachines, that's something we built last year, because in Nepal there are not a lot of roads, and being able to deliver medicine high in the hills is quite hard and important.
So we built a drone with a facial recognition system that would go in front of a house to drop off medicine. So there are all these applications of machine learning and AI systems that could improve humanity and solve a lot of humanity's problems. And not just in medicine. It could be applied in agriculture as well, where it could help find all the big pieces of land where the plants may be dying and then quickly do things to resolve that. So the optimistic side says, you know, there are a lot of good use cases of AI. And there are still a lot of hard problems that AI could help solve, that could improve lives for a lot of people in the world today.
Alright well let's leave it there, we're at the bottom of the hour. How can people keep up with what you do, how can they follow you and how can they get involved with your company?
They can find information on our website: www.Fusemachines.com. If they want more info or updates from us, they can also follow our Twitter handle @fusemachines. My Twitter handle is @sameermaskey. And if they want to e-mail us: email@example.com.
Alrighty some fascinating stuff, thanks for being on the show.
Thanks a lot.