In this episode, Byron and Charlie discuss the difficulty of defining AI and how computer intelligence and human intelligence intersect and differ.
Guest
Charlie Burgoyne is the founder & CEO of Valkyrie Intelligence, a consulting firm with domain expertise in applied science and strategy. Charlie is also a general partner for Valkyrie Signals, an AI-driven hedge fund based in Austin, as well as the managing partner for Valkyrie Labs, an AI product company.
Charlie holds a master’s degree in theoretical physics from Georgetown University and a bachelor’s in nuclear physics from George Washington University.
Transcript
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today my guest is Charlie Burgoyne. He is the founder and CEO of Valkyrie Intelligence, a consulting firm with domain expertise in applied science and strategy. He's also a general partner for Valkyrie Signals, an AI-driven hedge fund based in Austin, as well as the managing partner for Valkyrie Labs, an AI product company. Charlie holds a master's degree in theoretical physics from Georgetown University and a bachelor's in nuclear physics from George Washington University.
I had the occasion to meet Charlie when we shared a stage talking about AI, and about 30 seconds into my conversation with him I said, "We've got to get this guy on the show." So strap in, it should be a fun episode. Welcome to the show, Charlie.
Charlie Burgoyne: Thanks so much Byron for having me, excited to talk to you today.
Let's start with [this]: maybe re-enact a little bit of our conversation when we first met. Tell me how you think of artificial intelligence, like what is it? What is artificial about it and what is intelligent about it?
Sure, so the further I get into this field, the more I think about AI with two different definitions; it's a servant of two masters. On one side are the private-sector, applied, narrow applications, where AI is really all about understanding patterns that we perform and capitalize on every day, and automating those things, like approving time cards and making selections within a retail environment. That's really where the real value of AI is right now in the market, and [there are] a lot of people in that space developing really cool algorithms that capitalize on the patterns that exist, and largely lie dormant, in data. In that definition, intelligence is really about the cycles that we use within a cognitive capability to instrument our lives, and it's artificial in that we don't need an organic brain to do it.
Now, the AI that I'm obsessed with from a research standpoint (a lot of academics are, and I know you are as well, Byron), that definition is actually much more about the nature of intelligence itself, because in order to artificially create something, we must first understand it in its primitive, unadulterated state. And I think that's where the bulk of the really fascinating research in this domain is going: just understanding what intelligence is, in and of itself.
Now I'll come straight to the interesting part of this conversation, which is that I've had not quite a hundred guests on the show, and I can count on one hand the number who think it may not be possible to build a general intelligence. According to our conversation, you are not convinced that we can do it. Is that true? And if so, why?
Yes… The short answer is I am not convinced we can create a generalized intelligence, and that view has become more and more solidified the deeper I go into research and familiarity with the field. If you really unpack intelligent decision making, it's actually much more complicated than a simple collection of gates, a simple collection of empirically driven singular decisions, right? A lot of the neural network scientists would have us believe that all decisions are really the right permutation of weighted neurons interacting with other layers of weighted neurons.
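(For readers who want that "layers of weighted neurons" picture made concrete, here is a minimal sketch in Python with NumPy. The layer sizes and random weights are arbitrary illustrations, not anything Charlie endorses as a model of intelligence.)

```python
# A minimal sketch of the "weighted neurons in layers" view of decision
# making described above. Sizes and weights are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)             # an input: four sensory features

W1 = rng.normal(size=(8, 4))       # layer 1: eight weighted neurons
W2 = rng.normal(size=(2, 8))       # layer 2: two candidate "decisions"

h = np.maximum(0.0, W1 @ x)        # each neuron: weighted sum + nonlinearity
logits = W2 @ h                    # second layer weighs the first layer
decision = int(np.argmax(logits))  # the "decision" is whichever output wins
print(f"decision: {decision}")
```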
From what I've been able to tell so far with our research, either that is not getting us towards the goal of creating a truly intelligent entity, or it's doing the best it can within the confines of the mechanics we have at our disposal now. In other words, I'm not sure whether the lack of progress towards a true generalized intelligence is due to the fact that (a) the digital environment we have tried to create said artificial intelligence in is unamenable to that objective, or (b) the nuances that are inherent to intelligence are things we don't yet understand how to model, and may never be able to model.
I'll give you a quick example. Think of any science fiction movie that encapsulates the nature of what AI will eventually be, whether it's Her, or Ex Machina, or Skynet, you name it. There are a couple of big leaps that get glossed over in all science fiction literature and film, and those leaps are really around things like motivation. What motivates an AI? What, truly at its core, motivates an AI like the one in Ex Machina to leave her creator and to enter into the world and explore? How is that intelligence derived from innate creativity? How are they designing things? How are they thinking about drawings, and how are they identifying clothing that they need to put on? All these different nuances are intelligently derived from that behavior. We really don't have a good understanding of that, and we're not really making progress towards an understanding of that, because we've been distracted for the last 20 years with research in fields of computer science that aren't really that closely related to understanding those core drivers.
So when you say a sentence like 'I don't know if we'll ever be able to make a general intelligence,' ever is a long time. So do you mean that literally? Tell me a scenario in which it is literally impossible, like it can't be done even if you came across a genie that could grant your wish, the way traveling back in time just may not be possible. Do you mean that kind of 'may not be possible'? Or do you just mean on a time horizon that is meaningful to humans?
I think it's on the spectrum between the two, but I think it leans closer towards 'not ever possible under any condition.' I was at a conference recently and I made this claim, which admittedly, as with any claim on this particular question, is based off of intuition and experience, which are totally fungible assets. But I made this claim that I didn't think it was ever possible, and somebody in the audience asked me, "Well, have you considered meditating to create a synthetic AI?" And the audience laughed and I stopped and I said: "You know, that's actually not the worst idea I've been exposed to." That's not the worst potential solution for understanding intelligence: to try and reverse engineer my own brain with as few distractions from its normal working mechanics as possible. That may very easily be a credible aid to understanding how the brain works.
If we think about gravity, gravity is not a bad analog. Gravity is this force that everybody and their mother who's past fifth grade understands: you drop an apple, you know which direction it's going to go. Not only that, but as you get experienced you can make a prediction of how fast it will fall, right? If you were to see a simulation drop an apple and it takes twelve seconds to hit the ground, you'd know that that was wrong; even if the rest of the vector was correct, the scalar is off a little bit. Right?
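(A quick sanity check of that intuition, sketched in Python; the drop height here is an assumed round number, and air resistance is ignored.)

```python
# Constant-acceleration fall times under Earth gravity, ignoring drag.
import math

g = 9.81       # m/s^2, Earth surface gravity
height = 1.5   # meters, an assumed hand-height drop

t = math.sqrt(2 * height / g)  # from h = (1/2) g t^2
print(f"fall time from {height} m: {t:.2f} s")   # ~0.55 s

# A twelve-second fall would require a drop of roughly 700 meters:
print(f"12 s fall implies {0.5 * g * 12 ** 2:.0f} m")  # ~706 m
```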
The reality is that we can't create an artificial gravity environment, right? We can create forces that simulate gravity. Centrifugal force is not a bad way of replicating gravity, but we don't actually know enough about the underlying mechanics that guide gravity such that we could create an artificial gravity using the same techniques, relatively the same mechanics, that are at work in organic gravity. In fact, it was only a year and a half ago or so, closer to two years now, that the Nobel Prize in Physics was awarded to the individuals who detected the gravitational waves that permeate gravity, putting to rest an argument that's been going on since Einstein, truly.
So I guess my point is that we haven't really made progress in understanding the underlying mechanics, and every step we've taken has proven to be extremely valuable in the industrial sector but actually opened up more and more unknowns in the actual inner workings of intelligence. If I had to bet today, not only is the time horizon on a true artificial intelligence extremely long-tailed but I actually think that it's not impossible that it's completely impossible altogether.
So you know the counterargument to that, the one virtually everyone on my show would offer? It goes like this: you're biological, and biology is just chemistry in motion, and chemistry is just physics, and therefore everything that happens in you is just physics. With a big enough whiteboard, I could write out everything that's special about Charlie Burgoyne, and if that's just physics, then there's no reason we can't replicate it. What's wrong with that logic?
So let's start with a very simple example of just using physics to try and create something synthetic. How about a diamond, right? We're able to create a synthetic diamond in a very short amount of time by compressing carbon until it solidifies into a single lattice and becomes a perfect diamond. That's just physics, right? But it took us an incredible amount of time to understand that, and we're able to show that one particular allotrope of carbon, under a particular process of heat and pressure, will create the object of interest. I mean, there are certainly more components, it's not that simple, but that's basically how it works.
The intelligence that we experience and exhibit as cognitive, sentient beings is derived not just from a small collection of experiences and small processes in our own cortex, but from the combination of that with all the other cortexes that we encounter in day-to-day life, all the other domains that we interact with regularly. We're getting trillions of different inputs throughout the course of the day. We're making millions of decisions, extremely small, minute decisions, but decisions nonetheless, that are based off of input, and the complexity of the network which defines intelligence, and particularly our own intelligence, is extremely vast. So it's just a gigantic problem, and even if we were to accomplish that, there's still an underlying challenge, which is: how do we account for the deep-seated motivations that are either instinctual or derived from experiences that we've had? How do we derive that motivation and simulate that for an AI?
So let's say we get a baby in a laboratory. We put it in this laboratory nursery where it doesn't have any contact with the outside world, and then we have a second room that's configured in a very similar way. We only expose it to what we understand to be the basic needs of a baby at a very, very early age. We understand, at a high level, how the neurons are building, growing, and developing, and we create a network. Let's say we have all the compute power in the world. We create a network, we give it some of the same requirements that we know the baby maintains, and we let the two grow along parallel development paths.
On the two thousandth day of life, we let the baby out of the room, where robots have taught it how to walk and robots have taught it how to talk and interact. For us to make the relatively naive assumption that the AI and the baby would be saying the same things, or have similar motivations, or even articulate themselves and articulate problems in the same way, is naive; it's a claim of a neophyte, right?
There are no ways that we currently… we don't have the vernacular, the understanding, the expertise to articulate the core motivators for the expressions and behaviors that we exhibit and try to understand on a regular basis. We see how people behave, and we can easily make predictions about how they'll behave in the future under certain domain requirements and certain conditions. We're wrong about that a lot, but we don't really understand, at the deepest, most fundamental level, why those decisions are happening. The superposition of different physical phenomena and domains on top of each other, is that possible? Sure. Everything's bound by the constraints of the physical world. That doesn't necessarily mean it's completely explainable.
So to be clear, it's legitimate for people to say ‘I think I have a soul’ or ‘I think there's some non-corporeal part of me that isn't covered by the physical universe.’ I mean most people actually believe that. But you’re not talking about that when you're expressing this view that AGI could potentially be impossible?
I don't depend on it, but I would be dishonest if I said that isn't often the conclusion I reach when I really think about this problem.
OK, so a lot of people hearing that argument, before this last little bit, would say you're talking uniquely about human intelligence, and we are a big bucket of spaghetti code; who knows how we happen to be intelligent? But is it possible that there are simpler manifestations of intelligence that we may just stumble across someday, that aren't nearly as complicated as human intelligence, [so] we may get to it without a lot of the excess baggage we have?
So, you know, there are a number of initiatives. I'm not sure, I think we've got a chance to talk about [it]… individuals have tried to replicate the neural circuitry of, like, earthworms.
The nematode worm.
Yes. And to date, that experiment is not proving to be very fruitful.
And among them, they don't even agree that it's possible. Isn't that interesting? The people working on it are trying to replicate 302 neurons in a machine to duplicate the behavior of this one worm.
So a Luddite or a contrarian would point out that we have algorithms right now that can predict what you'd like to buy on Amazon, or can predict that you would like to change the temperature of your home with a Nest. It must be, then, that the challenges faced by the worm team are due to the fact that the worm team aren't very competent, right? Because surely the decision process that we go through to identify that the home needs to be warmer is far more complex than a simple question you could pose to an earthworm. I don't think that logic is sound.
I think what gets conflated a lot, particularly by the Luddites or those who have a relatively pedantic or topical understanding of intelligence, is the idea that by making predictions about what will appear or occur next in a pseudo-intelligent manner, we gain an understanding of the underlying mechanics of intelligence. Not only is that not the case, but those two are diametrically opposed when it comes to research and development. The former has gotten so much attention in industry, and has been so wildly valuable, that most people don't care.
Most people don't care that we don't understand much more about the true nature of intelligence today than we did 40 years ago. They don't care about that. They only care about the fact that we're cutting default rates by a significant margin, or they only care about the fact that they can optimize inventory by creating recommendations. But the deeper I go, the less I think you can reconcile those two domains. I think we'll eventually get to the point where we rename one of those fields around the nature of ontology, the nature of structures, the correspondence between semantic objects and synaptic firings. We'll name that something, and we'll know that it has nothing to do with the operations that are occurring on Siri or Alexa or Google Home.
So put another way, you're saying that narrow AI and AGI may have nothing whatsoever to do with each other, and even narrow AI and an understanding of how the nematode worm works may have nothing to do with each other?
Precisely.
On this question of general AI, you know there's all this money coming into artificial intelligence, and 99% of it's going to what you were just talking about: how do I identify spam email, or how do I guess what email reply I want to send next, or any of that. Who's working on general intelligence, as far as you know?
Well, those who don't need a meal ticket, basically. It's not too dissimilar from deep astrophysics, or, that's a bad example nowadays, but maybe particle physics: work that's not particularly valuable to industry but is pursued because it's loved. So of course some academics fall in that category. I've seen some interesting work come out of neuroscience departments. I definitely think there are some computer scientists who think about this challenge, but it's quite an open Wild West.
If you just had to start listing… the Human Brain Project in Europe presumably is working on general intelligence, because they're trying to model the human brain. We assume that Google works on it with DeepMind. You assume that, I don't know, Carnegie Mellon or M.I.T. have people working on it. But if you were to guess, are there a dozen teams working on it? A hundred teams?
Yeah. So, OK, as we're defining it, I bet you there are fewer than 100 bona fide teams working on that problem. The challenge I have, though, is this: let's say we spent an afternoon with one of those universities, probably not Carnegie Mellon, but maybe M.I.T. or Berkeley, and said we'd like to talk to your general intelligence team. I bet you nine times out of 10, when you talk to those labs, the first thing they would do would be to pull up a system of code or a collection of algorithms that they programmatically developed, which they would argue are contributing towards the research of generalized intelligence. If we were to walk into one of those rooms and they didn't actually have any computers at all, but a collection of whiteboards showing the bridging between semantic objects and decisions outside of the confines of the digital universe, I would actually have a lot of interest in that.
I agree, I completely agree. We should mention OpenAI, by the way. I mean, they explicitly are interested in it.
Let me give you the argument from genetics: you have DNA, and the DNA tells how to build 'you' and how to run you, and it's something like 600 MB expressed the way that, you know, it's however many base pairs, each of which can be one of four things. Then you say, 'Well, what part of that is different from, say, a banana?' and you're down to like half, because we know the banana is not intelligent. Then you say, 'Well, what parts are different between me and a cockroach?' and it's a much smaller amount, and then you can say, 'What parts are different between me and a chimp?' You're probably down to 1%, and there is a marked difference between human intelligence and chimpanzee intelligence. I guess [at] 1% you're at 6MB, so some 6MB of code is all biology needs to produce intelligence. So why isn't that proof that not only can it be done, it's actually a pretty short program?
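(Byron's back-of-the-envelope arithmetic, sketched in Python; the genome size and the ~1% human-chimp difference are the round numbers he uses on air, not precise figures.)

```python
# Back-of-the-envelope version of the genetics argument above.
# Each base pair is one of four letters (A, C, G, T), i.e. 2 bits.
GENOME_MB = 600                         # the rough "600 MB" figure quoted above
base_pairs = GENOME_MB * 1e6 * 8 / 2    # ~2.4 billion base pairs implied

chimp_delta = 0.01                      # the ~1% human-vs-chimp difference
delta_mb = GENOME_MB * chimp_delta

print(f"implied base pairs: {base_pairs:.1e}")
print(f"human-vs-chimp delta: ~{delta_mb:.0f} MB")  # ~6 MB, the figure quoted
```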
Very interesting provocation, and full disclosure to podcast listeners: we have not discussed this before, so this is all off the cuff. Well, let's take the supposition that I made earlier, that the mechanics of intelligence are derived from all the things we don't understand, such as motivation and instinct and creativity, things that are often given an ethereal nature, though I don't think we need to depend on that for the sake of argument. Let's say we take the supposition that at the core of intelligence is intrinsic motivation for organic life. I'm actually not sure that that's encoded in those 6MB. I think a banana has motivations, right? A banana is motivated to avoid getting eaten by predators. It avoids falling before it's ripe. Or maybe a banana is not a great example, but a chimpanzee, or my beagle named Plutonium… they both have core motivators that they operate against with all their faculties. They are capable of making decisions, and their capacity for emotional intelligence is impressive given that they're not human, but it's certainly not at a human level.
I'm not positive, to be honest, Byron, that the intrinsic capabilities [and] features of our genome that guide and dictate the impressive nature of intelligence are unique to humans. It may simply be the capacity that we have at our disposal to execute against those motivations, and frankly the level of complexity that we can indulge ourselves with when it comes to solving those challenges.
So let's switch to consciousness for a moment, and just to define our terms here: consciousness is the experience of being 'you.' It is the fact that you can feel warmth while a computer can only measure temperature; it's the difference between those two things. It can measure it, you can experience it. What do you think? Where do you think it comes from? How would you know? You've got a degree in theoretical physics and a degree in nuclear physics. How would you, as a physicist, think of consciousness scientifically? Or can you even think of it scientifically?
Yeah, where to dig in on this? I think the real question of how we differentiate between feeling warmth and identifying a temperature comes down to which systems we can affect with that sensory input. And if consciousness is about us experiencing something, versus understanding that a variable has changed its value, is that because we simply have more systems that are tied back to that sensor?
In other words, a computer that has a thermometer attached to it by USB, well, it can't really feel, because it doesn't necessarily… well, this is the question. It doesn't necessarily have a large number of systems that are dependent on that particular sensor. And here's why this may be the case: there are plenty of phenomena in our environments that undergo dramatic changes throughout the course of our day that we don't experience at all. We can identify that there's been a change, but we don't actually experience it. In other words, we don't use cycles of consciousness to appreciate that change. Some sit outside our sensory capability, like changes in the ultraviolet light in our environment. There are subtle changes in our optical models of what's going on in the world around us.
There's a clock in the lab that I can see from my desk. I know the second hand is moving, but I don't experience that change. I sense it, but I don't actually need to experience it, because I've eliminated the need to know that it's been a second since the previous second, the previous instance. So have we just not created enough dependent systems on sensors, so that the computer could synthesize or artificially create an experience? Possibly. Temperature is a good example, actually, because we have a deep learning computer in the lab that has temperature sensors, and when it identifies that the temperature has gone too high, it actually changes a whole bunch of features in its systems. Does that mean now that it's experiencing heat, because it has a collection of systems that are dependent on that change?
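(A minimal sketch of the sensor-dependent machine Charlie describes, in Python; the thresholds and actions here are invented for illustration, not the lab's actual configuration.)

```python
# A toy version of "a collection of systems that depend on one sensor":
# one temperature reading drives several downstream adjustments.
def on_temperature(celsius: float) -> list[str]:
    """React to a temperature reading by adjusting dependent systems."""
    actions = []
    if celsius > 60:
        actions.append("raise fan speed")
    if celsius > 70:
        actions.append("pause batch jobs")    # shed nonessential load
    if celsius > 80:
        actions.append("throttle processors") # protect the hardware
    return actions or ["steady state"]

print(on_temperature(45.0))  # ['steady state']
print(on_temperature(85.0))  # all three adjustments fire
```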
Well, presumably to experience something, it has to have a 'self.' But it sounds like you're much more open to machine consciousness than machine intelligence.
Yeah, I think that's a fair thing to state. It's much more difficult for me to imagine a scenario where a computer is experiencing genuine creativity, genuine discovery, genuine exploration, than it is a leap for me to say that a computer that is aware of its temperature and its environment, and adapts to that, reacts to that, turns systems on and off because of that, is quasi-conscious.
Because there was a guy named Weizenbaum in the '60s who made a program called ELIZA, which was kind of a really primitive chatbot. And he saw people pouring their hearts out to it, and he turned on it and didn't want to work on it anymore. He said that when a computer says 'I understand,' it's just a lie: there's no 'I' there, the computer isn't an 'I,' and there's nothing that understands anything. But it feels like you might say, if the computer says "I hurt," that can't be dismissed out of hand?
Precisely. When we say "I understand what you're going through, and I empathize," is that really that far away from saying "I am hurting, and I am making sure that I stay in a stable position by changing the speed of my fan"?
Yeah, I don't think so. That, to me, is like the compass saying "I want to point north." I don't think there's an 'I' in there that wants to point north. But I guess we wouldn't know, would we?
Well, you used a very interesting word there, you said "I want to point north," and I think 'want,' again, strikes on the motivator. Those motivators are core to the nature of intelligence. I don't think it's necessarily... I don't think it's intelligent at all, really. I mean, it's not really that intelligent for a computer to say 'I'm warm' and then to turn on an additional fan, because we've defined at some point the motivator for it. So it's not actually experiencing any motivation, it's not doing anything intelligent, it's simply reacting under the conditions that we defined as important to it.
So it has a sense of self, but it's all determined; its sense of self is completely derived from the parameters that we defined for it. Maybe that's the delta: a human's consciousness is capable of expanding into new domains. So you buy a new videogame, or you learn how to play soccer, and you say, 'I have a new sense of identity in that I'm a goalie, a really effective goalie.' That's an element of your identity that was not derived from a higher intelligence. It was derived from circumstances through which you understood your core motivators and intelligently re-adapted the model of your own consciousness.
So are you familiar with emergence?
No.
Okay, so emergence is a property by which a collection of things takes on properties that none of its components have. You, for instance, have a sense of humor, but there's no cell in you that has a sense of humor. Emergent intelligence is things like a beehive that collectively demonstrates more intelligence than any bee, an anthill that is much more intelligent than any ant, and all the rest.
So do you feel like consciousness or intelligence or creativity or any of these things we grapple with are in some ways... so emergence is a poorly understood concept: we have a word for it and there are two flavors of it. One of them is what's known as weak emergence where you could study oxygen and hydrogen for a year and never imagine that if you put them together they make water and it's wet. But once it happens you can figure it out. ‘Oh, I see.’
But then there's a highly controversial notion of strong emergence, which says you can't actually derive the conclusion, you can't work backwards: consciousness is an emergent phenomenon, but you can study it forever and you'll never find out what its components are. Would you buy that as being a possibility?
I would actually, I would. I do think, and this is again where we get into a mess with the two definitions of AI. On the one side, I am completely dependent on the fact that systems are more deterministic than we think they are, and that properties of macro systems are derived from nuances in micro systems. That's the nature of my work; that's how I feed my family. So I do believe in weak emergence, if you will, which is roughly analogous to the work that we do in some ways. But I also believe that there are systems that are so dependent on the manner in which they are combined that reverse engineering them, once they exist in their final state, is pretty untenable.
In other words, I think that the conditions through which objects are brought together and related to one another, and particularly the time frame over which they are brought together, with time as the dimension for measuring it, have a much bigger impact on their final state than I think we'd like to believe, based on the deterministic nature of things that we see more and more. So I think it's entirely possible we could reverse engineer certain things, where the conditionality under which those objects merged to create the strong emergence is understandable.
So you're a collection of trillions of cells, none of which know that you exist. You're an entity that is apart from any one of them, right? You could lose any cell and not be diminished, not be any less 'you.' Do you think it's possible, for instance, [that] plants might be conscious in that way?
Yes, I think if we define consciousness as an understanding of the implications to your own entity based off of exterior conditions or even interior conditions. I think that is something that plants experience.
What about the Gaia hypothesis, which says that all living things on the Earth are in a way analogous to the cells of a body, and the Earth itself has a consciousness or intelligence that, just as your cells can't perceive you, you can't perceive, but it's as real as you are? Is that possible, at least?
I would find it very unlikely. I say that only because my sense of self, my sense of identity, is not dependent on the number of fingernail cells I have. It is extremely dependent, though, and we don't know how, and we don't know if we can replicate it, but we know that it has a lot to do with the cells that exist between my two ears and between the back of my head and my eyeballs. We know that that area is responsible for all that I understand to be me, and that there are specific features of neurons and synapses, and all the different waves and ebbs and contractions of my cerebral cortex, that contribute to my very well-defined sense of self. If you look at the planet for interlaced connective cells, if you will, or an analog of cells, the Earth is really not constructed in such a way.
Well, let me make the argument. Your body regulates its temperature, and nobody really knows how it does that, by the way. You know, you're 98.6 [degrees], and if you have a fever it goes up or whatever, but it maintains its temperature. And the Earth, interestingly, maintains a set of conditions which are hospitable to life. The salinity of the oceans doesn't change over time, even though you would think more salt is constantly going into them. The level of oxygen in the atmosphere is maintained. When the Earth gets too hot, certain processes happen that cool it down, and when it gets too cool, certain processes happen that heat it up, and somehow the climate of the Earth is self-regulating in the way that your body is self-regulating.
Through all of these complex systems, the climate has stayed largely the same on this planet for hundreds of millions of years. And so it seems to react to its environment in a way that has me sitting here thinking: if you're comfortable saying that a computer reacting to a temperature change means the computer feels hot, wouldn't it almost seem like you'd be comfortable saying the Earth says it's hot, or the Earth says it's cold, or the Earth says its oceans are too salty and it's going to make them less salty, that it experiences it all? And I guess I'm just trying to find out where that ends with you?
Sure, and it certainly ends before there. Let's see. To speak plainly, the fact that two systems are robust does not mean that they derive that robust nature from the same mechanism or the same process. So I'm self-regulating my temperature. Sure, that's a complex process we don't understand; assume again it has something to do with my gray matter making that decision for me. It could be something else, but the mechanisms that allowed me to get to the point where I've evolved as a species to autonomously make those decisions and changes are different.
I'm thinking about the process holistically, and that's not quite the same as having a collection of unattached or disparate organisms that will react to those macro conditions, [such as] algae growing in the southern hemisphere because the level of CO2 is rising and the temperature is rising, which then offsets those things. Is that because there is a universal balance that's derived from the Earth, or is it because only the algae that are capable of doing that have been able to survive over the course of multiple millennia?
In other words, I think the Earth is identifying objects and selecting objects, not consciously of course, but it's created an environment where objects that are symbiotic in their nature will survive well, and those that aren't, won't. So that defines the frame, it defines the ecosystem. And it has a very draconian method of weeding out those that don't agree with its ecosystem: complete annihilation. I'm not sure that the process by which I regulate my temperature is derived from individual cells or regions of cells making decisions based off of their temperature, simply because that's the environment they've been exposed to.
Fair enough. We've spent a whole lot of time talking about this, and I thank you for your patience going through all these issues. Tell me a bit about Valkyrie Intelligence. How do you spend your days? You're a practitioner in all of these fields, so tell us about your company and what you try to do.
Yeah, Valkyrie Intelligence was founded about a year and a half ago, almost two years now. We basically are a collection of scientists, mathematicians, and strategists who solve really interesting problems in the industrial world using narrow AI. We say 'narrow AI,' but we also implement a lot of techniques from traditional machine learning, and from basic algorithmic development as well.
We champion two different disciplines in the field. One is pattern recognition, which of course gets a lot of attention in industry, from neural network design to algorithmic design. What we really champion is the second discipline, which is structuring knowledge: taking data and information and restructuring it into a format that's amenable to pattern recognition. And we've done some really cool stuff. I mean, we've helped banks cut their default rates in half. We've helped telecommunications companies redesign their whole business model. We've helped investment firms make predictions about what assets to buy. We've created recommendation engines in retail, and the multiplier on projects with us is very impressive.
We've got a lot to be grateful for; we've had a stellar run so far. I myself am straddling the player/coach model, where I serve as CEO but am also 'hands-on' code for a significant amount of time. I take on some of our projects that pertain to graph theory and the implementation of complex graphs in the industrial space, and some of our challenges around clustering analysis in those graphs, where we develop techniques with methods like the Louvain algorithm, to list one. I also do a lot of work on sequential models and stochastic math, which is really important for some of our financial services clients. So my day is a pretty big mix: traditional leadership roles, and then I target two days or so a week hands-on keyboard.
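(For readers curious what the Louvain clustering Charlie mentions looks like in practice, here is a minimal sketch; it assumes Python with networkx 3.x, whose community module includes a Louvain implementation, and uses a toy graph rather than anything from Valkyrie's actual work.)

```python
# Minimal sketch of Louvain community detection on a toy graph.
import networkx as nx

# Two obvious triangles joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),  # cluster 1
    ("x", "y"), ("y", "z"), ("x", "z"),  # cluster 2
    ("c", "x"),                          # bridge
])

# Louvain greedily merges nodes to maximize modularity.
communities = nx.community.louvain_communities(G, seed=42)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```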
The entire Valkyrie team dedicates about 10% to 20% of our time to R&D. We have what we call 'sci-fri,' where on Friday afternoons, and sometimes Friday evenings and Friday nights, we work on really interesting problems that we're obsessed with. For me, it's largely semantic graph modeling and developing those whiteboards that we described earlier in the conversation. Some of my team is really keen on developing new techniques for in-memory data frames. We've just found a really great mix between valuable work for clients and academic research.
So, on the valuable work for clients: are there any success stories you can tell, where a problem was brought to you, you worked on it, and you got a solution and all the rest?
Absolutely. So we worked with an investment firm that was interested in creating a model for making predictions about the value of a particular asset. We created a prediction algorithm for their asset class, and they were able to raise $150 million on that one collection of algorithms and that technique alone. It's been a very successful fund from that point on.
We developed a recommendation engine for a digital experience company, ENERGI.life; think of it as a Fitbit for your mental health. It's now installed in over 70 Fortune 500 companies and has grown incredibly over the last three years since we started working with them; they're just a phenomenal company. We also worked with a bank that was dealing with really high default rates and helped them cut those in half through our knowledge engineering techniques, from 16.3% down to 8.4%, and helped their team create a data science capability internally.
I think what sets us apart is that we're much more focused on the business output than we are on which technique we use. We don't care if we need to use traditional ML or the latest greatest commercial neural net to get to a solution. We continue to have really great project work that's derived solely from our capability, and that's why we're able to be completely self-funded. So we're not even bootstrapped at this point. We're a profitable company [with] no debt. We haven't had to raise any money from any investors. It's all been derived from doing really high-quality work for our clients.
So what would your typical client be like? If somebody is listening and says, "I have a project," is that the kind of person who should contact you?
We love when clients have projects; we tackle those. Our absolute favorite clients are those who say, "I know AI and ML have a potential impact on my company. I just don't know where to start. I'm not sure if I have the right data. I'm not sure if I have the right team. I really need to figure out how to make this new transformative technology work for my business." When we engage like that, we bring on our strategy team and our science team, and we're able to create a totally different vision for execution that's much more aligned with where they envision their business going, and we execute code. Without question, we're not a front-end design shop. We don't put a bunch of frills around our work. We develop algorithms, so when we leave our client meetings, they have a collection of algorithms and work, not a pretty PDF of something topical. So we love clients who want to understand how their business can transform, because every business is different. We don't have a singular platform; we're platform agnostic, which is another big benefit. I like to go in, understand a problem, and then transform it in a unique way that gives the client an advantage. We've gotten really great results with that so far.
We're running out of time here. If people want to follow you personally, how do they do that?
You can follow me on Twitter @CharlieBurgoyne, though I'm actually much more active on LinkedIn, and you can follow @ValkyrieIntel. And if you want to reach out to me, feel free to send a message to us at inquiries@valkyrie.ai.
All right Charlie. Well, it's been a lot of fun. This is one of my very favorite topics on the planet and you're a really thoughtful guy. Appreciate the time.
Thanks so much for pushing the boundaries of our understanding. I've got a lot to chew on this weekend.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.