Episode 109: A Conversation with Frank Holland

Byron speaks with Frank Holland about the nature of intelligence and the ways in which we define, cultivate and attempt to mimic it.


Guest

Frank Holland joins Apttus as its Chief Executive Officer (CEO). He previously served as Corporate Vice President at Microsoft, where he built and led global teams for more than 20 years. His extensive experience in the software space spans decades and includes a track record of operational and strategic accomplishments, board memberships and sterling leadership. He excels at building high-performing teams. As CEO, Frank is responsible for directing the company’s growth at scale and capitalizing on the vast Middle Office market opportunity.

Holland's long history of operational excellence and innovative execution provides a foundation for Apttus' continued growth and industry leadership. Prior to joining Apttus, he led the growth of Microsoft's Dynamics CRM and ERP products, directing their global sales force in delivering a portfolio of on-premises and cloud offerings. Additionally, he directed key aspects of Microsoft's business expansion and competitive positioning efforts, such as the customer-facing due diligence process associated with the $26 billion acquisition of LinkedIn in 2016. Before taking on the role of CVP of Dynamics, Holland ran Microsoft's $4 billion advertising sales business. He was also Corporate Vice President of Microsoft's Operations, focusing on order-to-cash functions and field operations.

Frank holds a B.S. in Operations Research and Industrial Engineering from Cornell University.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today, we have a good guest for you. His name is Frank Holland. For the last seven months, as of this recording, he has been the CEO of Apttus, which we’ll hear more about. Before that, he had a long and accomplished tenure at Microsoft, where he was a corporate VP. He holds a BS in Operations Research and Industrial Engineering from Cornell.

Welcome to the show, Frank!

Frank Holland: Hey, it’s great to be on. Thanks a lot, Byron.

I always like to start off by just putting up signposts. What is intelligence? If you don’t like that, what is artificial about artificial intelligence?

That is exploring the esoteric, isn't it? I think you could safely define intelligence as the ability to reason. What makes it so different from any other kind of rule-based algorithm, or logic applied to certain scenarios, is that you're able not only to work your way through a problem in an organized way, but to do it with broad pattern-matching and the ability to recognize trends as you go, which makes it so hard to grapple with when you want to apply it to any sort of compute-type environment.

That’s where I think that the state of the art is right now, as people make the transition from the philosophical question: ‘What is intelligence?’ to the idea of turning it into something that’s probably saleable.

I’m with you on that, but I always like to ask the question because I’m really wondering, is AI mimicking intelligence? Is it feigning intelligence the way artificial turf isn’t really grass, just trying to look like grass, or do you actually think it’s intelligent?

I don't actually have a horse in this race. I don't have a strong opinion, I don't even have a weak opinion for that matter, but I'm curious because to me it speaks to the limits of the systems we're building. Are they actually smart, or are we just figuring out a kludgy way to imitate intelligence that's going to cap out pretty quickly?

Well, I certainly think it started there. The idea that you could build rule engines that could take logical trees and break them down into sequential processing parameters is something we've been doing. I remember working on those in my early days, even before, I think, the term "artificial intelligence" got coined. What we were working with was really just the very early beginnings of machine learning.

Is it synthetic? Yeah, I think it is to a degree. It almost has to be, to put it into an environment where you can reliably repeat it using the sorts of instructions we give to a compute platform. The notion, though, that you can take neural nets and other sister technologies around that, and apply them to a hard, thorny problem where we may not even know how to ask the question, could be the beginnings of what makes it truly intelligent. I think that where the human isn't able to keep up, not just in terms of scale, that's an easy one to understand, but in terms of knowing how to approach a problem, or, like I said, knowing what question to ask, is where you could start to see some real intelligence coming out of the sorts of compute problems we're throwing at the world right now.

We have figured out this trick, and I don't mean that term pejoratively, like a magic trick, this trick of machine learning, and it's a pretty simple idea. It says: take a bunch of data about the past, study it, look for patterns, and make projections into the future from them. You can imagine the range of problems that's really good for: it works well when the past and the future are very similar. What does a cat look like? It's not that different from day to day.

Probably won’t change overnight, you’re right.

No, but something like a cellphone might.

Yeah, true.
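To make that pattern-matching trick concrete, here is a minimal sketch. The toy cat/not-cat data, the features, and the choice of scikit-learn are illustrative assumptions, not anything discussed in the episode:

```python
# Minimal sketch of "study the past, find patterns, project into the future".
# Toy data and features are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Historical examples: [weight_kg, has_whiskers] -> 1 if cat, 0 if not.
past_examples = [[4.0, 1], [3.5, 1], [5.0, 1], [25.0, 0], [30.0, 0], [2.0, 0]]
past_labels = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(past_examples, past_labels)   # "study the past, look for patterns"

new_case = [[4.2, 1]]                   # something we haven't seen before
print(model.predict(new_case))          # "project into the future"
```

The projection is only as good as the resemblance between past and future, which is exactly the limitation the cat-versus-cellphone exchange points at.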

My question to you is this, and we're going to get to your company and the challenges of simulated speech, but before we get there: do you think the Turing-test kind of problem, where you ask the computer a question and it answers in a compelling way, actually works that way? If you had an infinitely large corpus of writing, could you predict the next thing I'm going to say?

I don't know that the state of the technology is at a point where it could do that. The human mind is so complex that the next words out of your mouth are probably not even known by you, depending on a whole variety of variables it can't really incorporate...

Rutabaga! Who saw that coming?

Yeah, exactly. I bet you didn’t even, a second ago.

No, no. I was like, what is the word I’m least expecting?

Yeah that’s right.

I don't even know what a rutabaga is, quite frankly, but go ahead. I'm sorry! What can a computer do along those lines? What are the boundaries of "well, we can do that now"?

It feels like where we are today is in terms of practical applications, and that's what I'm all about. I'm not a data scientist. I don't hold any kind of credentials in the computer science area, so I wouldn't be a good one to ask what the technology affords you to do. I only think about the practical applications of what other people have built and where we're able to move that into the broad B2B context.

The sorts of things, though, that you're able to do in a meaningful, scaled way in today's environment, where we tend to need an assistant or some sort of intelligent agent to help us out, are the trivial ones, or the ones that require a bunch of data crunching.

The kinds of problems I like to solve with AI these days are ones where you could repurpose a human far better if you were to take X, Y and Z off their hands. By doing that, you free up time for them to operate at a much higher, more strategic level than if you asked them to keep doing those sorts of lower value-add problems. I think those are the sorts of things where we see real opportunity in today's world.

Well, I tell you what. I'm only going to ask you one more question along these philosophical lines and then I'd love to hear the practical solutions you guys are working on. In fact, I'm going to ask you a segue question, which is this: I wrote an article about the Amazon device and the Google device, both of which are on my desk, and I can't say their names because they're right by me. You know what I'm talking about, right?

Yes.

I wrote an article that was – I would ask them questions, very straightforward questions, and they would give me different answers. Question one would be: how many minutes are in a year? That would seem to be a very knowable thing. Then the second one was who –

It seems like there’s only one answer.

Correct! The second one was: who designed the American flag? Again, just a matter of fact. The third one was: who's buried in Grant's tomb? That was one of them. I had 20 of these questions where they gave me different answers. For the first one, how many minutes are in a year, one multiplied 365 days by the number of minutes in a day, and the other multiplied by 365.24 days, a solar year rather than a calendar year, and gave me that answer. With the flag, one said Betsy Ross and one said Robert Heft. I was like, who's that? Robert Heft's the guy who designed the 50-star configuration. Then the third one is a little bit of a trick question, because nobody's buried in the tomb. You entomb people in tombs, you don't bury them. One of them got it and one of them didn't.
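For reference, the two readings of the first question work out as follows; this is just a quick check of the arithmetic, not either assistant's actual method:

```python
# Two readings of "how many minutes are in a year?"
MINUTES_PER_DAY = 24 * 60                   # 1,440

calendar_year = 365 * MINUTES_PER_DAY       # 525,600 minutes
solar_year = 365.24 * MINUTES_PER_DAY       # ~525,945.6 minutes

print(calendar_year, solar_year)
```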

In at least the first two questions, the system didn’t have the wherewithal to say, “Did you mean a calendar year or solar year?” or “Do you mean the current flag or the original flag?” That’s why these systems always seem very brittle to me.

Tell me about Apttus. Go ahead and bring everybody up to speed on Apttus and what you’re trying to do there. Is that a challenge you guys wrestle with? That inherent ambiguity that a human can be like, ‘wait a minute, you mean…’ versus a computer, which, like you said at the very beginning, goes down the path and gives you an answer.

Yeah, I think it is a really good segue question, Byron. The reason, as you point out, that you get different answers is that on one level you want real precision, and a computer is only as precise as the person who built it; an app is only as good at determining the right kind of answer as the individual who designed it.

As a result, you could also factor in, "Well, what do most people mean when they ask how many minutes are in a year?" I bet very few people would say, "Well, I mean a solar year, of course." Do you really even have to ask the clarifying question, or can you just make an assumption around what the broad populace is talking about? Do you really draw a distinction between burying and entombing? Probably with that question, because it's meant to be a trick one.

At Apttus, what we try to do is identify specific, practical scenarios where you can get the scale and leverage in the middle office space that I alluded to a little bit earlier. We build what we call quote-to-cash software. It manages the ecosystem between CRM systems and ERP systems, which neither does really well, and you have to build real fit-for-purpose applications to adroitly handle the use case.

Take being able to build a large shopping cart in a B2B environment with very complex products. Most big enterprises, which are the ones we tend to do business with, Fortune 1000 and so on, are looking to build efficiencies into their demand chain by taking advantage of ecommerce capabilities, even though that's thought of today as a B2C-type application. As you get into those sorts of scenarios, you want to be able to take a sales contract from a rep out in the field and address term changes with a customer on the fly, so that you get not only a good deal in place, with a good price and delivery dates you think you can live up to, but one that is also de-risked from a legal perspective.

What we try to do is build a database of the terms that are most highly regarded by the legal team and then offer them as suggestions to the sales rep, so that the rep can seamlessly integrate those into a suggestion or an edit window with the buyer in front of them, as opposed to having to wait days for the turnaround and redline process to happen. That's one example of where we're taking the segue question that you asked and putting it in context, so there is as little opportunity for ambiguity as possible. I suppose it's probably always there, but we try to remove it and abstract it at a level where you are so embedded into the business process that you're not worried about it.
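As a rough illustration of the term-suggestion flow Frank describes, here is a minimal hypothetical sketch; the clause list, the ranking field, and the function are invented for illustration and are not Apttus' actual implementation:

```python
# Hypothetical sketch: suggest legal-approved clauses to a sales rep
# during a live negotiation, instead of waiting on a redline cycle.
APPROVED_CLAUSES = [
    {"topic": "payment terms", "text": "Net 45 from invoice date.", "legal_rank": 1},
    {"topic": "payment terms", "text": "Net 60 from invoice date.", "legal_rank": 2},
    {"topic": "liability", "text": "Liability capped at 12 months of fees.", "legal_rank": 1},
]

def suggest_clauses(requested_topic: str, limit: int = 2) -> list[str]:
    """Return the most highly regarded pre-approved clauses for a topic."""
    matches = [c for c in APPROVED_CLAUSES if c["topic"] == requested_topic]
    matches.sort(key=lambda c: c["legal_rank"])  # lower rank = preferred by legal
    return [c["text"] for c in matches[:limit]]

# Example: the buyer asks to change payment terms on the spot.
print(suggest_clauses("payment terms"))
```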

I think another example could be that you're trying to leverage your salespeople so that they're working on the hardest and most lucrative types of opportunities for a customer. We've got customers who are asking us to take the best and brightest salespeople and their ideas about how they might configure a deal, and put those in the hands of their least-experienced and newest reps, so that those reps can benefit from the years and years of cycles the veterans have put in to compile their vast knowledge base. You offload some of that learning into the compute environment so that a new rep can get up and ramped as quickly as possible.

Those are the sorts of scenarios I'm seeing our customers adapt to. The way we solve, or address I guess, the problems we talked about, the flag issue or the minutes in a year, is that they get compartmentalized; because everything is in context, you remove some of that error margin, or maybe comprehension margin, that you might encounter otherwise.

That's absolutely true: the narrower your domain space, the better the systems are. If it were a vertical application just about minutes in different things, it would have picked that up. The way you're describing Apttus, it sounds very abstract. Can you give me some real use cases: problem here, solution here, and so forth?

You bet, yeah. One of our customers is using our MAX product, which is our AI product; we've given it a name, we market it separately and so on. What they've done is really interesting, in the sense that they've gone out and said to their reps, "Look, we've got the opportunity to go and consolidate an entire market, but we're doing it using contract language and an approval cycle that subjects us to really long cycle times in being able to generate a quote." When I say "quote," it's probably a well-known term, but I'll define it anyway: I mean a proposal from a seller to a buyer to purchase the seller's products.

In this particular instance it happens to be medical technology equipment that's very customized, and yet has enough of a pattern to it that you can actually spot trends around what sort of discounts lead to rapid uptake, because the system understands what the general price points are in the market for this kind of competitive technology. Yet it also has to be an attractive enough deal for the purchaser that they'd be likely to sign on the dotted line that day and give you a nice little revenue uptick.

I'm not able to disclose either the customer's name or the amount of benefit they got, just because they see this as a real competitive advantage right now versus the competitive space they operate within. But they've seen double- to triple-digit improvement in their ability to generate not only quick deal signings but also high-margin take-up, by finding where best-in-class has been done before and then repeating it at scale. That's just one example of where we see people take up our technologies.

I’m with you on that, and believe me, we at GigaOm here, we’re severely limited in the number of use cases about AI we can write about because so many people do see it as competitive advantage. Let’s just take that one, where you say there’s this sales process and there are people who do it very well and there are people who don’t do it well. What we’re going to do is study, programmatically, the people that do it very well and apply that learning to everybody. Is that fair? Did I get that right?

That’s right. Yeah that’s right.

Put some legs on that. What are the steps in that? Do you just take all the proposals the superstar did? Flesh that out, because that's really compelling. You used the phrase "triple digits," and I can do enough math to know that's a big deal. How does that happen?

What we do is study individual deal sets. Because we work in the "cash" part of quote-to-cash, we see all of the financial implications of any given deal: we know the data we send to the ERP system, we know how different vertical customers behave with the different proposals sent to them, and we can draw trends out of that. We can see that products A, B and C happen to be hot sellers in a particular configuration, and that that's where the market is headed, because we've seen so many reps doing deals like that. We also know that those deals tend to drive low-margin results, because at such high volume a rep thinks they can make their quota number on volume alone and not really worry about the implications for the bottom line.

What we've done is take the good-margin deals that are still attractive to those sellers and ask, "What is special about those? What actually made the buyer say 'yes,' as opposed to 'I want a lower price'?" It could be that we identified a particular term that was interesting to a customer; we'll throw that in the proposal. It could be that we identified a particular delivery style that they wanted to see. We could take a whole handful of different options that you could put around any product configuration and say those were the drivers.

Then, what's unique in the deal set where someone said "yes" at good margin, versus the times where we did good volume but bad-margin deals and the customer still said "yes"? What was common among those things that you could take and drive into a broad knowledge base of capabilities that a new rep could learn from, or an inexperienced one could get better with? Without going into the tech itself, that's how you isolate patterns that aren't repeated in all scenarios, but do pop up as the differentiator in one deal that really did work, or in a handful, enough to make a trend.
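A very rough sketch of that pattern-isolation idea: compare how often a deal attribute shows up in high-margin wins versus everything else, and surface the biggest differentiators. The deal records and fields below are invented; the real system presumably works over far richer quote-to-cash and ERP data:

```python
# Hypothetical sketch: which deal terms differentiate high-margin wins?
from collections import Counter

deals = [
    {"won": True,  "margin": 0.35, "terms": {"expedited delivery", "multi-year"}},
    {"won": True,  "margin": 0.32, "terms": {"expedited delivery"}},
    {"won": True,  "margin": 0.08, "terms": {"deep discount"}},
    {"won": True,  "margin": 0.05, "terms": {"deep discount", "multi-year"}},
    {"won": False, "margin": 0.30, "terms": {"standard delivery"}},
]

good = [d for d in deals if d["won"] and d["margin"] >= 0.20]  # high-margin wins
rest = [d for d in deals if d not in good]                     # everything else

def term_rate(group):
    """Fraction of deals in the group that include each term."""
    counts = Counter(t for d in group for t in d["terms"])
    return {t: counts[t] / len(group) for t in counts}

good_rate, rest_rate = term_rate(good), term_rate(rest)
# Terms that appear much more often in high-margin wins than elsewhere.
differentiators = {t: good_rate[t] - rest_rate.get(t, 0.0) for t in good_rate}
print(sorted(differentiators.items(), key=lambda kv: -kv[1]))
```

In practice this kind of simple differencing would be replaced by a proper model trained over many more deals, but the shape of the question is the same: which repeatable attributes separate the good-margin "yes" from the rest.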

Fair enough. That all makes sense to me if you have a gazillion deals, because AIs love to be trained on lots of data. But aren't most companies dealing in numbers more like thousands of deals? How do you overcome the sparsity of the dataset?

We don't run into sparsity all that often. In fact, we run into the opposite. In our industry, the state of the art for the number of products you can put into a shopping cart is in the mid-hundreds. This is a different discussion path, but by building out the capability to do that at 10,000 items as opposed to 500, you get more shots, you get more reps, and you learn more about the way customers respond to different types of approach, in this particular hypothetical example we're using.

You learn more about what appeals to customers and what tends to turn them off. What I've found, and this is not a Frank Holland truism, it's something that's been proven over the years, is that markets behave fairly efficiently. Yeah, there are outliers, but generally they'll trend toward an optimal configuration that actually makes sense for everyone. That's what we try to get to in the way we build our AI solutions.

What are the challenges that you’re facing right now? What is it like, okay, now our next challenge is blank, we need to be able to... something?

Yeah, right.

What’s keeping you scratching your head at night?

Well, I mean, as an industry we haven't really solved this whole concept of taking B2B online: allowing people in more traditional industries, whose products aren't as-a-service-type products, to make them as-a-service. It's really easy to say, "I want to sell you an airplane and, by the way, I'll sell you all the maintenance along with that and not even really charge you for the plane itself." Doing that in practice is incredibly difficult, because what you're doing is establishing a set of digital SLAs that were never part of the plane deal you might have struck with a commercial airline historically. It introduces a number of new, nuanced ways of having to represent your digital product, the subscription, with the SLAs that go against it.

That's all new contract language that people are having to contemplate as they consider moving into this as-a-service-type economy. You're seeing it across the board, Byron. It's happening in virtually every industry I can think of. That's what we spend a lot of time thinking about these days.
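To make the representation problem concrete, here is a minimal hypothetical sketch of what a subscription with digital SLAs attached might look like as data; the fields, metrics, and numbers are invented, not anything Apttus or an airline actually uses:

```python
# Hypothetical representation of an as-a-service deal: the product sold is the
# subscription plus the SLAs attached to it, not the physical asset itself.
from dataclasses import dataclass, field

@dataclass
class SLA:
    metric: str          # e.g. "fleet availability"
    target: float        # e.g. 0.98 means 98%
    penalty_pct: float   # credit owed if the target is missed

@dataclass
class Subscription:
    product: str
    monthly_fee: float
    term_months: int
    slas: list[SLA] = field(default_factory=list)

deal = Subscription(
    product="Aircraft maintenance as a service",
    monthly_fee=250_000.0,
    term_months=120,
    slas=[SLA("fleet availability", 0.98, 5.0),
          SLA("turnaround time (hours)", 48.0, 2.5)],
)
print(deal)
```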

Well, wonderful. I see we’re coming up on time here. The company is Apptus, A-P-P-T-U-S and I assume people can go there to get the latest and greatest. What about you, Frank, how can people keep up with what you’re doing? Do you blog or tweet or hire a skywriter on a frequent basis?

I haven’t taken up skywriting yet, maybe that’s a good idea! We should look into that; I’ll make a note! The company’s name is actually, and I made this mistake too, Byron, when I came on board. It only has one P. It’s A-P-T-T-U-S. You can find us at apttus.com. It’s actually my media day today, so I’ve just finished taping a series of blogs that we’ll publish every couple of weeks. I’d love to invite your audience to come and see a little bit more about what we’re up to here at Apttus by looking at those blogs.

Well, great, and I apologize for the misspelling.

Yeah, no problem.

Now that we’ve had a discussion about it, people are even more inclined to remember it!

Excellent!

All right, Frank, well I want to thank you so much for sharing with us a little bit of the fascinating work you’re doing, applying the technology of artificial intelligence to a very real set of business issues and I wish you the best of luck.

I appreciate the time and thanks for having me on, Byron.