Episode 44: A Conversation with Gaurav Kataria

In this episode, Byron and Gaurav discuss machine learning, jobs, and security.



Gaurav Kataria leads product and data science at Entelo. Prior to this role, he spent nearly a decade at Google Cloud, and before Google he was a senior leader at Booz Allen Hamilton and co-founder of a tech startup. He holds a master's degree and a PhD in computer security from Carnegie Mellon University and a bachelor's in electrical and computer engineering from the Indian Institute of Technology.


Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Gaurav Kataria. He is the VP of Product over at Entelo. He is also a guest lecturer at Stanford. Up until last month, he was the head of data science and growth at Google Cloud. He holds a Ph.D. in computer security risk management from Carnegie Mellon University. Welcome to the show Gaurav!

Gaurav Kataria: Hi Byron, thank you for inviting me. This is wonderful. I really appreciate being on your show and having this opportunity to talk to your listeners.

So let’s start with definitions. What is artificial intelligence?

Artificial intelligence, as the word suggests, starts with "artificial," and at this stage we are in the mode of creating an impression of intelligence; that's why we call it artificial. What artificial intelligence does is learn from past patterns. You keep showing patterns to the machine, to a computer, and it will start to understand those patterns. It can say, "Every time this happens, I need to switch off the light; every time this happens, I need to open the door," and things of that nature. So you can train the machine to spot these patterns and then take action based on them. A lot of this is being discussed right now in the context of self-driving cars. When you're developing an artificial intelligence technology, you need a lot of training for that technology, so that it can learn the patterns across a very diverse and broad set of circumstances and build a more complete picture of what to expect in the future. Then, whenever it sees the same pattern again, it knows from its past what to do, and it will do that.
Artificial intelligence is not built…sorry, go ahead.

So, that definition, or the way you are thinking about it, seems to preclude other methodologies that in the past would have been considered AI. It precludes expert systems, which aren't trained on datasets. It precludes classic AI, where you try to build a model. Your definition is really a definition of machine learning. Is that true? Do you see the two as synonymous?