Steve Shwartz

In this episode of the Judgment Call Podcast Steve and I talk about:

  • Will AI be an existential challenge to humanity anytime soon?
  • What progress has AI been making, and why has it sped up so much in the last 10 years?
  • Is AI already smarter than teenagers?
  • Is Twitter’s / Facebook’s AI evil?
  • Why are self-driving cars such bad drivers currently? Will self-driving cars be fully autonomous very soon?
  • Are we inside a simulation and could we create one easily?
  • Has AI already changed the job market to be more short-term?
  • Will AI increase the quality of life?
  • What opportunities are there for entrepreneurs right now?

Steve Shwartz began his AI career as a postdoctoral researcher in the Yale University Artificial Intelligence Lab. Starting in the 1980s, Steve was a founder or cofounder of several AI companies, one of which created Esperant, which was one of the leading business intelligence products of the 1990s. As the AI Winter of the 1990s set in, Steve transitioned into a career as a successful serial software entrepreneur and investor and created several companies that were either acquired or had public offerings.  He is the author of “Evil Robots, Killer Computers, and Other Myths:  The Truth About AI and the Future of Humanity” which will be published on February 9, 2021 by Fast Company Press and maintains a website www.AIPerspectives.com that contains a free, 400-page, online AI 101 textbook.

You can reach Steve on LinkedIn.

You can find the episode’s transcript here.

 

Hello everyone, welcome to another episode of Judgment Call, the podcast where I talk to risk takers, adventurers, travelers, entrepreneurs, and simply mind partners. My name is Torsten Jacob and I'm your host. This episode of Judgment Call is sponsored by Mighty Travels Premium. Full disclosure, this is my business. Mighty Travels Premium finds the travel deals that you really want, and it finds them as they happen. We scan through 450,000 offers every day to give you the best deals in economy, premium economy, business, and first class. We also make recommendations for four and five star hotels all over the planet when they are much cheaper than they usually are. Thousands of subscribers have saved more than 95% and have flown business class, lie-flat, transcontinental using our deals. In case you didn't know, Americans, Europeans, and many other nationalities can now travel to more than 80 destinations again. Give it a shot and try out Mighty Travels Premium for free for 30 days today. You can sign up at mightytravels.com slash mtp, or for everyone who's troubled by all these characters, go to mtp4u, that's just five characters, mtp4u.com. I'm very excited today to have Steve Shwartz as my guest on the Judgment Call podcast. Steve is an author, investor, and serial entrepreneur. Back in 1981, Steve cofounded one of the first AI companies, called Cognitive Systems. And Steve has taken up a strong interest in AI again, after putting the topic on the back burner a little bit in the 90s. Steve also wrote a free ebook, Artificial Intelligence 101, which is available on his website, AIPerspectives.com. And his upcoming book is Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity. Hi Steve, it's great to have you, how are you? Good, hi Torsten, thanks for having me. Absolutely, absolutely. That's a big topic that you have for your book. It is, yeah, I'm excited about it coming out. What is the main theme of the book? You know, I can see it from the title obviously, it sounds like you have a different view on AI than most people have. And I'd say the consensus right now is that AI has had enormous success in the last couple of years, and that we will see the same kind of world-changing success we've seen over the last five years again in the next five to 20 years. You see this slightly differently, correct? A little bit differently, yeah. Yeah, so AI has clearly made great strides from an engineering perspective. Siri answers our questions, Google Translate helps us talk to taxi drivers in foreign lands, our smartphones automatically identify the faces in our photos. And that progress, as you said, naturally leads people to wonder where it will all end. Will robots get so smart they turn us into pets? Tesla founder Elon Musk says AI is humanity's biggest existential threat and that it poses a fundamental risk to the existence of civilization. Similarly, the late renowned physicist Stephen Hawking said it could spell the end of the human race. I'm very frustrated with this kind of fear-inducing hype and the concomitant overstatement of AI capabilities by vendors, and that's what spurred me to write this book. In my book, I explain in simple terms to a mainstream audience why AI systems are not going to become intelligent enough to have the ability to exterminate us, turn us into pets, or even take all our jobs. That's good to hear.
You know, Sam Harris makes a similar point, and I listened to his TED talk a while ago. And I think the basic argument goes like this: right now we see a very small amount of intelligence in machines, and the question is obviously how we define intelligence, which I think we're going to get into. But whatever it is, it's definitely rising. Something is there; we probably had it in the 80s already, and I'm curious about that story, but now we see this sudden rise. And from Sam Harris's point of view, and I think Elon Musk holds a similar view, the question is not will machines take over, it is only a question of when. It probably won't be in our lifetimes, I think we can all agree on this. But is it in 200 years? Is it in 2,000 years? Is it in 200,000 years? A lot of people who work with AI and who are close to the topic, if you ask them, would all say, this is not even in the books right now, this is not something you have to worry about. But the further you zoom out, a lot of people say, well, it might not be in my lifetime, but it could be in my children's lifetimes, and it is something we should think about. It's a bit like the nuclear bomb. We knew the basic technology for this 100 years ago, but it took 30, 40 years to make it happen, and it would have been good to be prepared, morally as well as policy-wise. I think this is where most of the fear comes from. This fear is generated in order to generate the push to create policy. You know, is it really like the nuclear bomb, or is the better analogy time travel? In my view, even though we've done a great job creating things that work with AI, no one has any idea how to build common sense into computers. And that's what you really need to get intelligent computers. All the different AI technologies we have today are dead ends when it comes to that level of intelligent computers. So we really have to start from scratch. Even though we've built great things, we can't use any of those great things to build, you know, what some people call human-level intelligence and what some people call artificial general intelligence. And to me, that makes this more analogous to time travel. People have some crazy theories about how time travel might develop someday, but would you want to say that time travel is around the corner, or maybe in our children's lifetimes? I think I saw a headline about time travel. But maybe that wasn't real. It seemed like cold fusion, which pops up every 10 years and then turns out not to be real. But I saw a claim that on an atomic level time travel was suddenly possible, at least there was a paper published. Maybe it was bad measurement. Maybe let's dive into this first a little deeper. I read your book, the AI 101, and it's a lot about math, a lot of statistics. You break it down so that I think most readers can understand it; I think you do a fantastic job there. And I've been using a lot of AI tools myself in the last 12 months, and I think this was wonderful for me. Even knowing some of the details, I could see a lot more; I got a much wider, zoomed-out picture, and I thought that was fantastic. And what you realize, and I think we both know this, is that at the current stage it's kind of slightly better than a brute-force guessing mechanism. That's how most AI works.
And that's not the picture most people have in mind. But I think the results that this brute-force guessing generates sometimes just seem like magic. Yeah. Like when we see what it does, say, at Google. My children talk to Alexa about a certain topic, which is really creepy, and then two days later we get an ad about the exact same topic. And I'm like, whoa, this is pretty cool. Too creepy. Yeah. And it's kind of like magic to me. And that's what I wanted to get at: at what point do basic math and basic guessing algorithms start to look like magic? Like, say, we have some kind of technology which is basic but, introduced to someone 5,000 years ago, would look like magic. Yeah, I think that's right. But, you know, let's look at it from another perspective. So in the late 70s, when I was working on my PhD at Johns Hopkins in Baltimore, Maryland, I taught statistics at Towson University. And I taught students regression and classification. Regression being learning, from an input table, how to predict a numerical output, and classification being learning, from an input table, how to predict a category. So for example, I remember teaching, you know, if you had a big table of historical sale prices, and you had a column with how many rooms were in the house, and then the square footage and so forth, you could build a regression algorithm that would predict sale prices for houses that hadn't come on the market yet. It wouldn't be perfect, but today's systems aren't either. And, you know what, we didn't call it learning back then, we called it computing a function. And if anybody were to say, yeah, those functions have some intelligence in them, you'd say they're crazy, those are just stupid functions. But actually, every AI program today is just one of those stupid functions: it's either a regression function or a classification function. And the difference between now and 1978 is that with bigger computers and better algorithms, we can calculate much more complex functions that can do some pretty impressive things, but it's still pretty much taking a table of inputs and predicting a classification. So for example, facial recognition is just taking a table of images as input, each one labeled with the correct name, and then learning to classify an image with that correct name, so that if it sees another image of one of the people it has learned about, it can correctly predict that the image is person X. But that's all it can do. It can only do that one function. It can't do another visual task like distinguishing a dog from a cat, it can't translate language, it can't do anything else. And if you try to teach it to do something else, it forgets how to classify faces. Yeah, you had this example in your book, and I thought that was really interesting: when researchers showed the system an unusual image, say a monkey with a guitar, it would suddenly recognize that monkey as a certain person. And every human who looks at this, the first immediate thought is, that's not a person, we don't have to consider it for facial recognition. But computers are not able to make that distinction, unless you specifically tell them that's something they need to train for. Right.
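To make Steve's point concrete, here is a minimal sketch of the kind of "stupid function" he describes: a regression fitted to a small table of house attributes and sale prices. The library choice (scikit-learn) and the numbers are my own illustration, not anything from the episode or from Steve's book.

```python
# A regression "learned" from a tiny, made-up table of historical sales.
# Columns: [number of rooms, square footage]; target: sale price.
from sklearn.linear_model import LinearRegression

X = [[3, 1200], [4, 1800], [5, 2400], [2, 900], [4, 2000]]   # inputs (illustrative)
y = [250_000, 340_000, 460_000, 180_000, 390_000]            # sale prices (illustrative)

model = LinearRegression().fit(X, y)

# "Predict" a price for a house that hasn't come on the market yet.
print(model.predict([[3, 1500]]))
```

Facial recognition, in Steve's framing, is the same move with a classifier instead of a regressor: a table of images in, a label (a name) out.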
Yeah, so in that example, because there had never been a monkey with a guitar in the training data set, and wherever there was a guitar it was always associated with a human, it latched onto the guitar and said, oh, this is a picture of a human holding a guitar, not a monkey. Yeah, these algorithms are easy to fool. And I think the question is, will they ever get to some sense making? We see these headlines: just last week there was DeepMind's success in figuring out protein folding, though since then there have been mixed messages that this isn't as big a deal as it sounded and wasn't actually such a big step forward, but it's definitely a challenge we've had for 50 years. And there was also a lot of talk that Google Duplex, an AI that they trained, was apparently able to pass the Turing test. In computer science these problems have been around forever, and we didn't make any progress, and now it seems like every other day on Twitter there is something that has been around for 50 or 100 years that we're suddenly able to solve, or at least getting close to solving. That's got to count for something. It does. But what I would say is that every single one of these amazing achievements is because somebody has been able to figure out how to turn it into a classification problem, a statistical classification problem. And if we looked at what these systems are doing as computational statistics, as opposed to, quote, unquote, artificial intelligence, nobody would be worried that they're going to take over the world. People would just say, wow, statistics has really come a long way, but they wouldn't be worried about the Terminator or computers turning us into pets. Skynet, yeah, Skynet. I think there's a company called Skynet, I saw it the other day. And there's an AI-based magazine called Skynet Today. Yeah, so we're close. We're close. We're just a couple of years behind what the Terminator wants to tell us. But why do you think that is? Why do you think we're making this progress right now? If we didn't make it 50 years ago, is it just that the data sets or the data itself are getting better and there's more digitized data? Or is it something that happened just because Google needs it? I always feel like, because Google makes so much money off AI in advertising, they just throw billions at it, like public research would have 50 years ago, and all that money they throw at it bears some fruit over time. And we are kind of seeing in the open source community that a lot of it is actually coming from Google. Do you think that's the driver? You know, I think Google got into it after it had happened, and got into it in a big way. But I think the algorithms have evolved. Geoffrey Hinton pursued neural networks, and Yann LeCun, and Yoshua Bengio, and Juergen Schmidhuber. And it's the neural network algorithms that have enabled us to go beyond where we were in 1978 when I was teaching statistics, when we were mostly limited to regression and classification functions that were linear. Now, with neural networks, we can calculate classification functions in a massive number of dimensions, you know, hundreds, thousands, millions of dimensions, you can't even imagine them. These algorithms have just gotten so powerful that they can learn functions that are very, very complex. That's what enables them to do these things.
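As a rough illustration of the jump Steve describes, from the linear classification functions of 1978 to neural networks that can fit far more complex functions, here is a toy comparison on a synthetic two-dimensional dataset. The dataset, model sizes, and scikit-learn usage are my own sketch, chosen only to make the contrast visible.

```python
# Linear classifier vs. a small neural network on a deliberately non-linear dataset.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)  # curved class boundary

linear = LogisticRegression().fit(X, y)
neural = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)

print("linear model accuracy:", linear.score(X, y))  # limited by a straight-line boundary
print("neural net accuracy:  ", neural.score(X, y))  # can fit the curved boundary
```

Both are still classification functions learned from a table of inputs, which is exactly Steve's point; the neural network just learns a much more complex function.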
Now, of course, the advance in computing power is also a big enabler, and big data provides a lot more use cases to tackle. But it's mainly been computing power and algorithms that have improved since the early days of linear classification functions. Yeah. I mean, neural networks sound really fancy. That's one part that still escapes me a little bit, how they actually work. But the idea is that you kind of break down a very complex problem into less complicated ones, and that's how you handle these many, many dimensions. So how does it actually work, in layman's terms? I know you describe it in the book, but I couldn't really follow it, to be honest. Yeah. You know, one of the interesting things about neural networks is that it's really hard to figure out what's going on inside the network. And, just to segue onto another topic for a moment, this is the reason people are so concerned about discrimination in AI systems making decisions, because especially if they're neural network based, you don't know what's going on inside that network. You don't know how they're making decisions, and people are being affected by them: the systems that determine whether they can get bank loans, the facial recognition systems that try to determine if they're terrorists at the airport. But we don't know how they're really working inside. Yeah. So that's, I think, a big problem. With all of the AIs that I've been working with, that I wrote myself, there are two problems. One is you don't know how it arrives at a certain conclusion. That's a black box; I think that's part of the game. And the other big problem is you always need someone to validate the results. There are, you know, built-in validation steps that every training run comes with, but still you need a somewhat intelligent person, typically a human, a data scientist, who then evaluates the results and says, oh, this is what I expected, and this is how we are actually doing. So those two things definitely require some common sense that is rare in the world of AI. But I'm wondering, isn't that really something that will only be with humans, or isn't there a way that at some point you will have an AI that can simulate common sense? Like we kind of have in the Turing test, on a simple level. Why is common sense, and I noticed you have that in the book, something with way more dimensions, just a way more complicated problem? But if we came from a simple statistics function and now have a thousand dimensions in a neural network, wouldn't we be able, in ten years, to do something that has a bit of common sense? If you just describe it as a higher-dimensional function, or something where we're just looking for the right AI to throw at the problem. Yeah, yeah. So the problem with that: suppose you had all these little modules that could do things, and now you're trying to build a big module that will figure out which submodule to invoke. The problem is that that control module would need common sense. You're just pushing the problem off into the control module. The alternative is to write a conventional software program where you say, if x, then y; otherwise, if y, then z, and so forth. And we know from our history in the 1980s that we can't succeed in creating human-level intelligence using rule-based systems. Yeah. It's just too hard. So when we think about AI, we have this idea that AI is going to be a superhuman.
The most intelligent person we know. And we know it has been beating chess players consistently for the last 15 years. I always felt like chess players are the most intelligent people on the planet; I looked up to them and I still do. Maybe that's not a good idea, but that's kind of how it felt to me. Yeah. If we look at children: if I asked a three year old a common sense question, there might be certain things a three year old has, but in general it will be at a really limited level, and in between we have everyone from people with a high amount of common sense to people with none. And I always feel that AI has now arrived at the level of a teenager, maybe not a 16 year old, but maybe a 12 year old. And 10 years ago it was maybe at the level of a four year old. The example that has been made is that, for instance, GPT-3 is now as good as most teenagers are in school, in terms of the text and essays it produces and the computer code it can generate, which doesn't bode so well for teenagers, at least right now. Yeah, I don't agree with that. Let's take those things one at a time. Let's talk about essays. If you look at the essays GPT-3 generates, at least half the facts are wrong. So it doesn't really understand what it's saying. Yes, but that would also apply to most teenagers. When I created essays in high school, I researched them and they were fairly factual. I mean, they may not have been very interesting, but I was able to create factual essays, and I think most teenagers can do that. You know, getting back to children. Okay, let's take GPT-3 for a minute, go ahead. No, I'm just saying, what most fifth, sixth, or seventh graders would do is copy and paste from existing material, then maybe change the syntax a little so they're not as easily detected. That's how these things usually look. That's not unlike GPT-3, which is not an original work, though "original" is probably not the right term to use there; it always feels like something that comes from a source that already existed, because that's how this works. From what I understand, it kind of works like a translation AI, really focused on sequencing. And that's how it works when I look at my children. They understand, I don't know, 1% of the topic, then they look at material that's already out there, they recombine it, they rehash it, and they put it into the form of the essay, or the short essay, or whatever their test paper is about. But they get the facts right. Well, sometimes, yeah, depending on what the TDS says. And they have some idea of what it means. I mean, they understand the big picture ramifications, they have some idea what they're writing. Sure. But GPT-3 has no idea. And it'll get huge numbers of facts wrong. Yeah, but maybe it's a bit of an observer problem. We have something that presents us with a seemingly logical answer. It will take a bunch of people, maybe just a few minutes right now, but in a few years it will take them a few days, to figure out, oh, this is the correct answer, or no, that was actually just copied from somewhere else. The verification of the answer, I think, becomes the bigger problem with AIs as they progress. Yeah, although, you know, even without AI we have that problem with fake news. Sure.
And people don't bother to verify the facts. Yeah. Even though they're easily verifiable. Well, do you think this whole problem with fake news, which is a relatively recent phenomenon, at least at this scale, is driven by AI? In the sense that AI, and the way we perceive the news now, the way it directly interacts with our brain, has made us more vulnerable or more susceptible to this, so that AI is kind of already changing our brain. The AI that runs the algorithm for Facebook and Twitter and Google pushes things up, and we don't really care whether they're fake or not. It just generates an emotional reaction in us, and we've been more interested in that, at least for the last five years, than in getting the facts right, because a lot of people know how to validate facts, but nobody bothers with it. We focus on the first paragraph that gives us this emotional adrenaline shock. Yeah. No, I think that is a problem with AI, especially when you look at it: those algorithms are there to generate more engagement, so that people stay on the platform and click on the ads, so that platforms like Facebook make money from it. And so what the algorithms tend to do is show people more and more of the things they might be interested in, and it gets them interested in things, and unfortunately a lot of times that process pushes people into fringe groups, and once you start getting into those fringe groups, nobody checks facts. It's just spreading rumors, and you pass the rumors from one person to another, and it goes viral, and yeah, I think that's a big societal problem. Although I think it has more to do with the social networks than it does with AI. Certainly the AI algorithms contribute to it, but I think it would happen anyway. Yeah. I don't know what the way back is, because I feel when we lived in a non-social-media environment, we had a normal distribution of things that could happen to us. What I'm trying to say is, we feel driving a car can be dangerous, but nobody has hit me in the last 10 years, so it isn't that bad. So we had an intuitive feeling for the normal distribution of events in our life. But with social media and the way AI has digested all this (and I think when social media started it was the same, it was a reflection of the real world), since engagement is low for things you feel can never happen to you, things outside your sphere, AI has re-sorted the whole world into something that looks extremely scary. That's why we engage with it. It's scary, maybe in a good way or a bad way, and often I think the bad way is more accessible. So reading Twitter, we feel like we'll get hit by a car immediately. The normal distribution of experiences has been completely skewed by an AI, and nobody is able to verify this anymore. The engineers at Twitter or Facebook set this in motion, but it's kind of running on its own now, and I think this is what people feel is the danger of AI. You set this thing in motion and it becomes like a weapon of mass destruction, because even the engineers who built it, maybe they can shut it down, but then there's obviously a competitor who will come up. Once this AI is in motion, you can't stop it anymore. But is this AI, or is it Twitter? Well, if it's not Twitter, it would be TikTok. If it's not TikTok, it would be Facebook. Right, exactly.
It's interchangeable. Right, that's my point. It's the social networks, not the AI. If Twitter didn't use any AI, you'd get a lot of the same behavior. You know, AI might exacerbate it a little bit. I feel like it would reflect more how it used to be, like, say, the early days of Facebook, when it was more boring and there was more random stuff, I'd call it, on the side, but it reflected real life better, because 99% of what we experience is randomness, right? It's just boring randomness that we don't really worry about. We worry about the car crash; then we stop and we're like, well, we need to see this, because then we change our picture of reality. And I think what happens right now is that, obviously you can say that's social media, but someone would have used AI for this, maybe not now, maybe 10 years from now. The way it is changing our perception is that we constantly have to adjust our mental image of the world, all the time, because there's always a car crash. There's one in the morning when I open my Twitter, there's one in the evening, and then there's one for lunch. But in real life these things would only happen to me, I don't know, maybe once a year at most that I would actually see a crash. But why do you blame that on AI and not Twitter? I wouldn't say I blame it on AI, but I feel like AI has gotten so good at this. I mean, the engineers have done a great job. It has become this runaway problem that has already changed the lives of billions of people. And I don't think Twitter has used this technology for something evil, you can put it that way. Right, but suppose they didn't use any AI technology. They just used conventional computer software with if-then-else rules. They could still have done 90, 95% of what happened. You still have lists of people to follow. You still have algorithms that will show you the viral posts. You'll still be able to figure out things that you like. You know, AI sharpens the pencil a little bit and makes all these traditional software algorithms a little more pointed. But I don't see it as the AI that's causing these problems. I think they would be there if the social networks didn't use any AI whatsoever. Well, certainly at the margin. At some point it becomes a definition problem, because this is not a self-living being that has become conscious and is now going to take over the world. That's for sure. That's not what happened, how it happened, or what will happen any time soon. But there is something, maybe it's just scale, the sheer scale of it: you just need a bunch of computers, and a kid can now have an impact on the world's mind in a matter of days if you have the right AI. Maybe it's just the buzzword AI that we use too much. That's what I'm afraid of. Suppose instead of AI we started complaining about statistics. You know, statistics is taking over the world. Twitter is using statistics to push us into fringe groups. Look, it's using statistics. Nobody would get very excited. But when you say it's using AI to do that, now people start worrying, because of the idea of AI as the Terminator. Maybe that's a forest-for-the-trees issue. People who are close to AI and work with it every day think it's just an extension of the algorithm; you cannot really dispute that. But on the other hand, we see this progress that has made great strides. We heard two years ago that Tesla said we're going to have self-driving cars.
Google has way more cars on the ground in Phoenix that are supposedly self-driving. I don't know if sometimes there's just no driver in them or there's a safety driver, but at least they made that announcement. To most people, self-driving cars have seemed for 50 years like a problem that never got solved, and suddenly it's there. I mean, it's kind of exciting, and it's easy to extrapolate this into the next 10 years. Yeah, and it is interesting, because these self-driving car companies have done amazing work. I mean, I have a Tesla and I run it in autopilot. It probably does 90% of the driving. But if I let it do 100% of the driving, I'd have smashed that car about 100 times over. Did he get a chance for this? Yeah, but it is amazing what you can do with self-driving car technology, and that is a great use of AI. And we're seeing self-driving cars be practical today in some areas. I mean, today you can go to a corporate campus and see a self-driving shuttle go from point A to point B with no driver. The reason it can do that is that it's traveling at five miles an hour and the route never changes. So it knows everything, or the designers know everything, that might happen along the route, and there's no huge need for common sense decision making. And at five miles an hour, especially if it uses computer vision to make sure it doesn't run over pedestrians, there's very little chance of a major injury. Now you go up the scale a little bit, and we're starting to see delivery vehicles. In some cities they're allowing self-driving delivery robots; they look like little ice chests moving along the sidewalks. They use computer vision to avoid bumping into people, and they deliver things. Again, same idea. It's a little bit harder, because more things can happen and there are more terrain differences, but they don't have the ability to use common sense. So if they get into a bad situation, they probably just stop, and I imagine that happens quite a bit. And then the next step up are the self-driving taxis, which are being tested in cities like San Francisco by companies like Zoox and Aurora, and in Phoenix by Google's Waymo division. What's happening there is they're all being tested in very small areas where they have every stop sign, every fire hydrant, every work area, everything completely mapped out. And if any of that changes, the cars will get stuck or have an accident. Waymo's doing it in Phoenix with safety drivers; for the ones that don't have drivers, my understanding is that there's a remote driver who can drive the car with a joystick if necessary. But I'm very worried, and if you talk to people, and we may have had this conversation because you're from the San Francisco area, the self-driving taxis are often blocking traffic because they're so conservative, and people honk at them, and they go too slow. Was it you who was telling me that, or was that somebody else? Yeah, we talked about that last time, I think. The Waymo cars, we had tons of those, we still have tons of them in the city. And they drive like a grandfather, literally on his last day of driving before he gives up his driver's license. It's extremely cautious. San Francisco is a dense city with lots of pedestrian traffic, and there are people on skateboards, people on bikes, people on scooters, there's a lot of stuff going on.
Usually people watch out for each other, but the idea of the algorithm is safety first, and I think this is a big deal: how you prioritize speed, safety, all of those things that happen automatically once you are an experienced driver, experienced in the city of San Francisco. And what happens is, when people walk cautiously into the street because they want to cross but wouldn't actually go, most cars would just keep going; these cars stop, and they stop far away, and a lot of accidents are caused because they stop in a way that even student drivers wouldn't. They are extremely cautious. They get confused, as you say, by the littlest changes, and we have a lot of steep hills. They usually have human safety drivers, but it takes them a few seconds, or if they've dozed off, sometimes 20 to 30 seconds, to resolve this. So I see it day to day, but I do feel it's gotten way better. We see a lot more of those cars, and they still have the safety drivers. It could be that the drivers just put more effort in, but to me it seems like there's less and less of this annoyance. They're still around, but the driving is much smoother than I've ever seen before, and a lot of people call this the search function. You just need to map out the whole world, which Google has been doing the whole time, first with Google Maps and then with those Street View images. So if you assume you have a perfectly mapped environment, these cars might not look that bad anymore. And the question is, will they ever get to 100%? That might take a long time. But will they get to 99.9% in most situations, maybe not in a dense city like San Francisco, but in most cities in the world? You know what, I can see that happening very soon, maybe in the next 5 to 10 years, that outside a few cities with lots of pedestrian traffic those things will just do 99% of the driving, if not more. What good is 99% if you still have to have a safety driver? That's a good question. It still helps, like what we use in the Teslas on the freeway, but I don't know if we will ever get rid of the safety driver in the next 20 to 30 years. Yeah, that's kind of what I think. It's even worse when you look at consumer vehicles like Teslas. They don't have the advantage of knowing where every stop sign, fire hydrant, and work area is, because they can drive anywhere. So that makes the problem that much harder, and there is a much wider range of things that can happen: black ice, a ball bouncing into the street with a child following it, things people use their common sense to figure out, and cars can't do that. You can write a conventional software program that says, if you see a ball bouncing in the street, stop, because a child might follow. But that's very different from what a person does. People don't learn all these rules in driving school. When they encounter situations, they use their common sense. So if a car can't have common sense, it's hard to imagine how a consumer vehicle like a Tesla could ever really get to level 3, 4, or 5 driving capabilities. Level 3 meaning you no longer have to keep your hands on the wheel; you can watch a movie or read a book. One thing they definitely have going for them, and I think this has been part of the AI threat that was painted, is that once you have an AI that has learned something, learned it to a level that is sufficiently good, it's available to every single computer in the world that uses that model. Learning takes a long time and needs a lot of GPUs and big machines.
But once you've learned it and condensed it into a model, it's almost like instant knowledge. Maybe the word knowledge is not correct, but it's instant access: every machine in the world that wants to has instant access to this. What we do as humans, and you see this with children, they go through a phase where they're just children, and then a phase where they become more of an individual, but they all have to go through the same steps of learning, which seems incredibly inefficient. Maybe that's not true, but it seems incredibly inefficient. Why not just start at a much higher level? To me, that's where the whole driving debate comes in. If enough big companies work on this, maybe working together on certain open source parts of it, then it's less of an incremental advancement, it goes in big jumps: there's more mapping out there, there's better-trained AI, the bouncing-ball model has been incorporated, and suddenly every Tesla in the world has it with the next download. I think this is what everyone is so excited about: once you have a model that works, and we have models that work at least well enough for simpler things, it suddenly is in everyone's car, and from there it only gets better than a human driver. I think that's how the AI argument goes. It is. I agree with that. I've seen my Tesla get better and better at various things, because we get over-the-air updates once a month, just like you do on an iPhone. But it still doesn't have common sense. So if the car is driving by itself and there's a sharp curve on the highway, and I let the Tesla continue to drive at that speed, it would go right off the road. Yeah. Is that a training issue or a data issue, from your perspective? Well, how did I learn to slow down on curves? You had an accident, probably. Yeah, I mean, I might have almost had an accident, but it only took one, I can tell you that. And you could take that one situation, and I'm sure that Teslas will eventually know how to slow down going into curves, either because the engineers will write conventional if-then-else software, or because they'll train the system, they'll have a slow-down machine learning algorithm and train it on a lot of curves. But there are two things there. One, a person only requires one example to learn; to train the machine, you need thousands, tens of thousands, maybe hundreds of thousands of examples of those sharp curves. And then you have all the other things that happen to you when you're driving. If you talk to almost anybody, they'll tell you about driving situations that they think are one of a kind. Almost everybody has their stories. If there are four billion people in the world and each one has their own driving stories, how are you going to get all of those into a computer? Yeah, I mean, the problem you describe is real: the significance we assign to data from a singular event is not something that I think any AI has right now. You can obviously add it to the model, but the way it works for us is that we immediately feel we are in danger, and the level of significance jumps so high that we will never do it again, as you say. But isn't that just a data problem? If we knew the significance is so high that we should learn from one experience, it's not that machines can't do it, right?
They just can't see the relevance right now, which doesn't mean we couldn't find a relevance algorithm, so to speak. Well, suppose there are 10 billion of these unusual use cases out there in the world. Yeah. How do you identify all 10 billion? The same way humans do, right? Usually fear, an emotion, comes up with it, and it's burned into your memory, these life-altering events where you think you're going to die for a moment. I don't know exactly how it works, but I'm pretty sure it works in a similar fashion: before we even consciously know it, we know our life is in danger, and that's how we assign relevance. But how will a computer ever be able to... that's reasoning. How would you build that into a computer? There's no scientist today who has any idea how to do that. I agree, but we know it happens before we consciously know it, so reasoning cannot be it. Like when I was in an accident, I immediately had that sense of fear, this hyped-up adrenaline, before I could even understand whether I had injuries, what happened, who was at fault, whether a car ran into me or I ran into someone. I just knew this was life-altering, so I needed to be as awake as possible. That was definitely not reasoning. I mean, everything but reasoning, let's put it this way. I think we can build an algorithm that way. When you say it wasn't about reasoning, what did you do that wasn't reasoning? Well, I literally jumped out of the car in the middle of traffic, because I chased after the person in the truck that kind of dented the side of my car, and I ran after them in the middle of traffic. That was the opposite of reasoning. I was high on adrenaline, I was probably in shock. Well, I would argue it was reasoning. Maybe it was faulty reasoning, but there was a lot of reasoning going on there. I mean, just starting with how you get out of your car. You know you reach with your hand, you grasp the handle, you apply pressure and the door will open, then you know you have to push. You know all of these things. How do you know all these things? And then you know how to run. There's a lot of common sense knowledge and reasoning that you do to do all of that. And how would you get all that into a computer? Yeah, but that's triggered by that sense of heightened attention, and the heightened attention was there before I did any of this. Let's put it this way: it all happened in, I don't know, half a second, less than half a second. I barely have any memory of it, so I'm not really consciously aware of what I was thinking or what was going through my mind. Those are all protective mechanisms from the limbic brain. But if the limbic brain can do it, and animals could do it 300 million years ago, I think we can do it too. That doesn't solve the reasoning problem. It just solves this immediate problem of: I'm in danger, so I need to pay attention, and I push this higher in my learning priorities. Oh, I see what you're saying. Yeah. So I think that can be done. But obviously it's dangerous, because if you have a very small data set, you might learn incorrectly. And then you're back to the if-then problem. Right. Which has troubled computer science. Are you a fan of Ray Kurzweil? Do you believe in the singularity?
Do you think what he says makes sense? And do you think we will have this unlimited amount of computing power in just 20, 30 years from now? You know, I don't agree with that. So take a computer from 1980. I just read an amazing statistic that today's iPhones have as much computing power as a great supercomputer in 1985. So if you take an old-style computer from 1980, 1985, and you put a word processing program on it, the only thing it can do is word processing. If you take a really powerful computer today that's billions or trillions of times more powerful than that 1980 computer, and you put a word processing program on it, and that's all you put on it, the only thing it can do is word processing. Now, if you make a computer that's a trillion times more powerful than the ones today, and the only thing you put on that computer is a word processor, that's all it's going to be able to do. It's going to be able to do it really fast, but that's all it's going to be able to do. Well, yes, you're absolutely correct. But we've been putting other stuff than word processors on our iPhones, right? And I think the iPhone is less of the miracle here. It's the server part, the cloud, where a lot of the AI lifting happens, that basically crunches the huge data sets and comes up with patterns, especially unsupervised learning, you have that in your book. You have patterns that humans would not see right away, because there are too many dimensions, or we just can't really parse the big data set. But for AI these patterns become visible. And I think this is where the magic is, right? This is where the self-driving cars come from. They don't come from something that runs on the front end; it is really something that happens on those servers, and it's all a couple of Python modules and mostly Linux. This is where the magic happened in the last 10 years. And if you keep scaling this, just thinking linearly, because we're kind of stupid humans, we just think linearly, that could be pretty amazing already. And if you think, as Kurzweil says, that it doubles every 18 months, then, he says, that's why he calls it the singularity. He says it's going to be so amazing, and it's kind of a hopeful, optimistic viewpoint, it's going to be so amazing that we can't even look beyond it, it kind of solves all our problems. Obviously we can't verify that, but I think it's such a hopeful message. And it seems to make sense, just from the core statistics. I agree with your point that if you don't develop the software, if you don't develop the right mindset, it's not worth anything. If you just build big machines that kill us, like war machines, new weapons of war, then it's not good at all. But if we manage to use it for something useful, and my answer to this is usually entrepreneurship, because entrepreneurship can only live if you find someone who buys your stuff. If you just make things that nobody wants, then nobody's life is improved and you're not an entrepreneur. I mean, you try, but for successful entrepreneurs, and it often takes a lot of tries and a lot of things to learn, you eventually make the life of everyone who touches your solution better. That person makes a voluntary decision to improve their life. So let's go back for a minute to how self-driving cars work, because most of the AI in self-driving cars is a set of supervised learning algorithms.
So you've got a program that can recognize a stop sign. That's one supervised learning algorithm. It's been trained, it's in the car, the car knows how to use it, and there are about 50 of those, recognizing pedestrians and so on. There are other supervised learning programs that figure out what the trajectory of this person or this car will be a second from now. So there are all these little individual programs in there, and they're mostly connected by conventional if-then-else programming. I don't know of anything in a self-driving car that is unsupervised learning, where it's going out and figuring something out on its own. They're very specific classification algorithms connected by procedural code. Where the unsupervised learning techniques have had impressive results is in cases like GPT-3, where you give the computer a supervised task of predicting the next word, and you get some interesting results. How does a lane-changing algorithm work? I always thought that's unsupervised, that eventually it figured this out on its own. I'm not sure, but I can't imagine how you would do that in an unsupervised way. I would imagine it would either be a supervised learning algorithm or a reinforcement learning algorithm, where each situation is labeled with the correct response. Actually, I see what you're saying. You could look at lane changes as unsupervised learning in one respect, which is that in every situation you can take the correct label to be what the actual driver did. Say you're in a Tesla, and the system can look at, OK, did the person change lanes or not, and what preceded that. That's self-supervised learning, which is considered a form of unsupervised learning, but it's really supervised learning where the labels are provided by the environment. This labeling seems very tedious. The whole idea that we have to run through this labeling process seems less impressive than the new patterns we find from unsupervised learning. To be honest, I don't know the exact boundaries between the two. It seems like they keep shifting, because the results kind of shape how valid those results are; they're kind of like labels, in my mind at least. So when you come up with something like, say, clustering, and you have that in your book, that dimensions are being reduced, I found it really interesting that there are ways to overcome these hundreds of thousands of dimensions. Netflix had big issues with this, and then they came up with a way to move that into classes and it suddenly became way easier to handle. Maybe that's the technique for all the data going on in all the dimensions in self-driving cars, because there are so many inputs at any point in time, and all of them could be important, like the ball bouncing in front of the car, but there's no kid around, and maybe it's not a big deal to run over the ball. Yeah, I mean. Maybe I'm an idiot, maybe I mix up a lot of things here, but that's kind of what I feel. You know, I have a practical idea of how AI works for me, seen from a Python script and playing with some data frames, but on the higher implications I'm obviously not an expert at all.
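A small sketch of what "labels provided by the environment" could look like for lane changes, in the spirit of the self-supervised framing Steve describes: each logged driving situation becomes a training row, and the label is simply whether the human driver changed lanes next. The field names, features, and classifier below are hypothetical, not a description of how Tesla or anyone else actually does it.

```python
# Self-supervised-style lane-change labels: the "label" is what the driver actually did.
from sklearn.linear_model import LogisticRegression

# Hypothetical driving-log entries; a real system would have millions of these.
driving_log = [
    {"speed_mps": 29, "gap_ahead_m": 15, "gap_left_lane_m": 60, "closing_speed": 5, "changed_lane": 1},
    {"speed_mps": 31, "gap_ahead_m": 80, "gap_left_lane_m": 40, "closing_speed": 0, "changed_lane": 0},
    # ... many more logged situations ...
]

def features(row):
    # Summarize one moment of driving as a numeric vector.
    return [row["speed_mps"], row["gap_ahead_m"], row["gap_left_lane_m"], row["closing_speed"]]

X = [features(r) for r in driving_log]
y = [r["changed_lane"] for r in driving_log]   # label supplied by the environment

model = LogisticRegression().fit(X, y)
print(model.predict([[30, 20, 70, 4]]))        # would this situation lead to a lane change?
```

No human annotator is needed here, which is the appeal; but as Steve notes, it is still just a classification function learned from a table.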
Yeah, you know, one of the theories about how to get to human-level intelligence, and it's one that's pushed by some of the big names in AI like Yann LeCun and Yoshua Bengio, is that you can give a computer a task like learning to predict the next word in a text, with the idea that what's going on under the hood is that, in order to do that task, the computer is learning way more than a simple function: that it's learning knowledge about the world, learning how to reason about the world, and so forth. You know, to me, that's kind of wishful thinking. And I think the evidence is that it learns so many wrong facts, and there's a much easier explanation of GPT-3, which is that it's just piecing together words and phrases that it has encountered in the documents it was trained on. Yeah, that's probably true. I guess we will never know; that's the problem with the whole validation, right, we don't know what's inside, how it learned, how it came to a certain conclusion. That's kind of my favorite question to ask people: how did you come to this conclusion, what changed your worldview or your view on a specific question? With AIs you run against a wall; at least the current ones, even if they had language, couldn't tell you why they arrived at a certain conclusion. I feel that's very disappointing. Right. It's disappointing, and it's a big problem for society. Yeah, we don't know. We don't know how they shift priorities, and we don't know how they assign weights to competing priorities. Like for driving, everyone says we need to open source that algorithm: do I run over the pedestrian in front of me, or do I brake and probably kill everyone inside the car? And for that, do I have to count the number of passengers inside and the pedestrians, or do I go by age, if the pedestrian is really old? I mean, that gets infinitely complex. It does. It's probably more important to open source the data than the algorithms. I want to lead to a different but very related topic, and I keep asking a lot of people this because the reasoning is so interesting. There has been this debate for some time, since that paper came out about a decade ago, about whether we live in a simulation; it's kind of a trope now. Yeah. How do you feel about that? Do you think the premise of that paper makes sense, and do you agree with it? You know, there's really no way to tell. I mean, if we were in the Matrix, we wouldn't be able to know. That's why it's a thought experiment. Do you feel, instinctively, from what you've seen, that we could build a world, say 1,000 or 2,000 years from now, that would be indistinguishable from a real world, or on a higher level, that we could build a whole universe? Because eventually it would just be, you know, the size of an SD card, and then we create the universe and it expands. Could we create that, or is that something that will always be in the sphere of, say, a religious phenomenon? Yeah, God is out there, someone we can't describe, but we kind of put it in these brackets. What I'm getting at is this whole universe, and AI too.
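To make the "much easier explanation" of GPT-3 discussed above concrete, here is a toy next-word model that only counts which word follows which in its training text and stitches the most frequent continuations back together. It is vastly simpler than GPT-3 and is not how GPT-3 is built; it is only meant to illustrate the idea of piecing together sequences seen in the training data, using a made-up training sentence.

```python
# A toy "piece together what you've seen" next-word predictor (bigram counts).
from collections import Counter, defaultdict

training_text = "the car slows down on the curve the car stops at the stop sign".split()

next_word = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word[current][following] += 1            # count observed continuations

def continue_phrase(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in next_word:
            break
        word = next_word[word].most_common(1)[0][0]  # pick the most frequent follower
        out.append(word)
    return " ".join(out)

print(continue_phrase("the"))   # reproduces fragments of the training text
```

A model like this will happily produce fluent-looking fragments with no notion of whether they are factually right, which is the contrast Steve draws with a teenager who at least has some idea of what they are writing.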
Is it guided by someone with a positive message towards our future, towards the human future, or is it all random, there was never a simulation, and there's nobody steering this whole thing? Right. So I think there are two questions there. One is, are we inside a simulation? I don't think we can ever know, so I haven't spent a lot of time thinking about that. The other one, more interesting to me at least, is, could we ever create a simulation like the Matrix in the movie? Yeah. And, you know, Elon Musk is just starting to interface computers to the brain, but we're a long, long way from figuring out how to make the brain see a reality. I imagine it's possible someday. I just don't know. But do you feel this universe, the way humans have evolved, that someone is driving this car, someone is guiding us along the way? Might that be a spiritual figure, might that be aliens? Do you think there's something to it? Or is it, no, we might not be the only ones, but we're basically on our own in this universe. Yeah, it's an ancient philosophical question. You know, I think it was around 1967, Arthur Koestler wrote a very well known book called The Ghost in the Machine. And the idea was to bring forward that ancient philosophical debate about whether there's a mind that's separate from the physical body. Is the mind just the result of a collection of neurons, or is there something else? And I think Koestler's position was the materialist position, which is that it's just a collection of neurons. But just as an interesting aside, or at least it's interesting to me, I originally wanted to name my book There's No Ghost in the Machine, as a takeoff on Koestler's book applied to computers, which I thought was very clever, but my publisher wouldn't let me do it, because he said nobody searching for my book would ever find it. One more thing that a lot of people reading your book will be interested in. I think we had this debate during the build-up to the election: the whole question of where our jobs will be in the future if AI takes so many of them away. Self-driving cars are a big topic in this: even if they only get to 99%, that puts a lot of people who drive on the freeways out of their job. They might become city drivers, but there are way fewer city drivers needed compared to what the whole trucking business now requires. I assume we all agree that a lot of jobs will go away, and maybe it doesn't even have to be AI, it's just technological progress, and now finally, with COVID, it seems we are adopting a lot of progress much quicker. I think that's a good thing. But where, related to this, do you think the new opportunities will be, to replace the jobs that AI might take away? So first of all, I don't really believe AI is going to take a lot of jobs. I think conventional computer software has been taking a lot of jobs over the years, over the, what, 50- or 60-year history of computers. You've had word processors replace secretaries, tax prep software, internet travel sites displacing travel agents, e-commerce killing brick and mortar retail. Those are conventional technology.
And the history of automation is such that technology has always created more jobs than it has replaced, always. In 1776, farms employed 80% of the people, and now we produce more food with only 2% employed in agriculture. But people do a lot of other things. So the real question for me is whether AI is going to change that historical trend, and I don't really see it. I don't really see AI taking a lot of jobs. The jobs AI is going to take are the ones that can be characterized as classification tasks, with visual classification tasks being especially vulnerable: spotting terrorists in airports, reading MRIs, sorting parts in a factory. Voice recognition technology is impacting customer service jobs that involve following a script, because people can just say the words, the computer will recognize them, and it can follow the script itself. But I don't see these as having as big an impact as traditional computer software, and traditional computer software has for 60 years created more jobs than it's taken away. I don't see any reason why that wouldn't continue.

I'm definitely with you on that part: the process of moving from a low-productivity job to a higher-productivity job is still intact, nothing has changed. The trouble, though, is that we see the destruction immediately and it affects us emotionally first, while the build-up to whatever the new jobs are going to be is often not that clear. And there's always the anxiety that you might have been a winner in the old system but won't be a winner in the new system, and that anxiety is always felt as a negative emotion. I think that's what rattles people, especially now that we see such a high adoption rate of new technology. That wasn't the case 10 or 15 years ago, because a lot of the things that I personally, with my startups, put into the world in the late 90s and early 2000s weren't really adopted widely. You can say, oh, that's because the product was crap, but even as a technology it wasn't really adopted; video conferencing was technically possible in the early 2000s and it wasn't really picked up, and now suddenly it's on everyone's daily agenda. I think it's fantastic that we see this big adoption. To be honest, I'm not so sure how AI as a bracket, as a definition, as a field, will change that, and how many jobs we'll say, oh, this is computer software, this is advanced computer software, and this is AI, because they're so interchangeable at the margins. So I agree with you that maybe AI as a buzzword is extremely overhyped, but as often happens, it follows the same path of technological destruction. One thing I keep thinking, though, is that AI gives us better decision making. I think AI especially gives us a way to see patterns and to be ahead of potential customers' preferences before the customer even realizes he or she has those preferences. What I mean by that: I always felt that if AI takes so many basic decisions out of the equation, because it becomes cheaper than any human at looking at certain data, it's better to have AI do this, and eventually it's good enough for a lot of fields of data.
Humans can then focus on the next level of decision and can better anticipate customer demands. My example has always been: someone walks into a bar they've never been in, and the AI looks at them and says, oh, this is your favorite drink. That's how I feel AI is going to change the game.

Yeah, that is how it will change it. But is that just contributing to the same trajectory that computer software has been on for 60 years?

Oh, it's an extension of that, I fully agree. But I think, given COVID, it gives us a lot of momentum, and with that momentum we feel the destruction first. That's really affecting people's minds, together with social media.

It is. And to me, the big issue with jobs is that losing a job is one of the worst things that can happen to a person. If you lose 6% of the jobs every year but you create 8% new ones, economists would say that's great, but it's awful for the people who lost their jobs. As a society, I think we need to take more responsibility for retraining that 6% so they can take some of the jobs in the new 8%. We've never really done that; governments haven't taken the lead, and it's kind of been, the people who lose their jobs are the losers. I think that's unfortunate. Now we're getting into politics, but I would be very much in favor of some kind of tax on technology companies that went to support people who lose their jobs because of technology.

That's a good point. And one thing that... yeah, go ahead.

I think Bill Gates suggested that.

Yeah. One thing that always comes to mind is a book I read, I unfortunately forgot the title, whose main thesis is that while society as a whole has gotten more productive and better at extending the individual's lifespan, for the individual, freedom and life itself might not have gotten much better. The idea comes from thinking about the life of a typical hunter-gatherer, that's how far back the book went, and how life changed, at least at the beginning of that phase, in a more agricultural society. It wasn't the same kind of freedom for the individual; maybe the quality of life actually decreased. It's different for the next generation, which starts from a different base point, and as you add more generations it gets better, but for everyone in between, for that individual, it seemed like the quality of life decreased. I think we are at a similar junction right now. We had what we call the boomer generation, who came out of the Second World War and built wealth in the 70s, 80s, and 90s, with great productivity growth at least initially in the 60s, and technology took off and accelerated that. But what has happened in the last 40 or 50 years, and the specifics are debatable, is that stagnation set in, and we don't have as many opportunities as we seemingly had, especially for young people. What people suggest is, well, it's because software, and AI as a subfield of it, has gotten so good that you need way more experience to be better than the machines.
That's why we see so little opportunity for young people: they are not as good yet at making good decisions. CEOs have gotten older and older; they are now in their early 70s on average, something we've never seen before. Presidents are getting older and older. So we're at that juncture, with the promising world seemingly behind us. Maybe it comes back in 20 or 30 years, but for that generation that's kind of in between, the millennials and the generation just before them, it might not. Well, as you say, the jobs are being created, but they might not be as good in those jobs; the 6% who lost their jobs and the 8% who come in might be different people.

Yeah. I would say two things. First, just a comment about quality of life: nobody's going to convince me that my life was better before I had a remote control, when I had to walk up and manually turn the knob on my TV. But in terms of the newer generations, to a Gen X, Y, or Z person this is going to sound like an old boomer talking, but we always had the idea of paying your dues. Before you could get into a profession, you really had to go in, start at the ground level, work your butt off, and learn something. What some people are saying about the newer generations is that they're not willing to do that. It's not that the opportunities aren't there; they're just not willing to pay their dues.

Yeah, that's probably true. What I think the problem is, is that the millennials have seen that by the time the industry or company they invested a decade in pays off, by the time they are maturing into a more senior position, these companies are not around anymore. And that's not a false assessment; I think it's spot on. There are exceptions to the rule, and obviously Google is still around, but there's a lot of technological change. When you look at the 500 biggest companies in the US 20 years ago and now, there aren't a lot of companies left, and I think this is a worldwide phenomenon. There was this long-term investment, the boomer model, where you willingly torture yourself for a while, because you go in and learn everything from the ground up, but the company secures your well-being once you mature within the organization or make lateral moves. I think this model is gone, and the model we are moving toward is that you have one grand idea, literally one thought, in your whole life. You wait 55 years for it, then you make one app, make one call to your broker, whatever that decision is, and then that's it, you never have to work again. But you have to wait 55 years for it, and I think that's what's painful about that kind of purpose.

Yeah, it's an interesting perspective, but I agree the rules have changed. There's no more safety in working for a big company, starting at the bottom and working your way up. I think the concept still applies, though: you get in and you learn something. You learn how insurance works, or how whatever industry you're in works, and that makes you more valuable for the next company. You may not be paying your dues and working your way up in one company, but you can do it going from company to company, or even as an individual contributor. So my sense is that the opportunities are still there.
I think we have a big problem with wealth inequality that we're not addressing, and that's impacting the millennials, and I don't blame them for being upset about it. But to look at the world and say there's no way to get ahead, I think that's just a self-defeating attitude.

Yeah, I'm in Gen X, so I'm just at that border, but I agree with your assessment on wealth inequality; we're definitely at a turning point. And the solution, obviously, for me, and that's the big reason why I do this podcast, is figuring out where these opportunities are, and why we haven't created more opportunities in the last 20 years especially. I think the late 90s were the exception to the rule in the last 50 years. Why didn't that happen in a country as open to opportunity as the US, and how can we fix it going forward? My answer is obviously entrepreneurship as a way to change the world, driven by technology, because that's where the productivity boost comes from that makes us all better off and gives our children a better life. I don't know if you have specific ideas: even if AI is overhyped as a buzzword, where do you feel AI will really make an impact? What are the opportunities for entrepreneurs, say in the next five to 15 years, where something is bubbling up but it's not yet a hype machine we read about every day?

Yeah, so when I talk about AI, I talk about AI and not traditional computer software. Again, I think where AI is making a difference, and where it can make a difference, is in taking over those tasks that can be characterized as classification tasks. So that's where you have to look, but applying that technology is an interesting problem. Some companies say, okay, I'm going to go hire the best machine learning expert I can find, and they're going to bring AI into the company and transform it. Well, that's not an approach that's going to work. You can bring in the smartest machine learning person in the world and they're going to have no idea how your business works. If you want to bring AI into your company, you've got to somehow educate your product managers and your business decision makers on what's possible with AI, and everybody's got to put their heads together, ideally with a machine learning expert or a data science team, to figure out how to apply AI to your company.

Yeah, indeed. One thing that crossed my mind a while ago is something like a Boston Consulting Group, but just for AI; there are probably a ton of people who already do this, so it's not a new idea, but a way to maybe not charge $2,000 a day, maybe charge $1,000 a day. Finding that unique data set, I think that's what it's all about: finding that unique data set, trying a bunch of different learning models, and then validating them. This is relatively easy to do if the data can be found within a company, and it can have immediate effects. You can see results in terms of conclusions that maybe people know intuitively but have never really looked at. They suddenly realize, oh, only dentists from that area actually subscribe to our product, because it's only relevant there, or we have no competition there. I think there are a lot of insights, but it takes a while to find a data set whose granularity is fine enough and which is big enough, and then to validate whatever the learning outcome is. That's a big challenge.
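To make the "try a bunch of learning models and then validate them" idea a bit more concrete, here is a minimal sketch in Python using scikit-learn. The file name customers.csv, the columns profession, region, and firm_size, and the subscribed target are all hypothetical stand-ins for whatever data a company actually has; the point is only to show how a few off-the-shelf classifiers can be compared with cross-validation before anyone commits to one.

```python
# Minimal sketch: compare a few standard classifiers on a (hypothetical) customer table
# using 5-fold cross-validation. File name, columns, and target are illustrative only.
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical customer data: who the customers are and whether they subscribed.
df = pd.read_csv("customers.csv")
X = df[["profession", "region", "firm_size"]]
y = df["subscribed"]

# One-hot encode the categorical columns; pass the numeric column through unchanged.
preprocess = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["profession", "region"]),
    remainder="passthrough",
)

# A handful of standard models to try, as discussed above.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    pipeline = make_pipeline(preprocess, model)
    scores = cross_val_score(pipeline, X, y, cv=5)  # validate on held-out folds
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```

As the conversation notes, the hard part in practice is not this comparison loop but assembling a data set like this in the first place and having the business side explain what the columns actually mean.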
And as you say, you need that industry knowledge and you also need AI knowledge. I don't know what the best way to combine those is, as a startup.

Yeah, I just don't think a company is going to be successful just bringing in a consultant or a data science expert or a machine learning expert and turning them loose. I think they have to commit to learning what's possible with AI. Ultimately, the business has to say to that data scientist, look, we've got all this data about our customers, can't you figure something out about the customers that will help us sell more to them? Then you're at least pointing the data scientist in the right direction. The business team and the IT team know where that data is, they know what's good about it and what's bad about it; the data scientist is not going to be able to figure any of that out.

Yeah, it's definitely compartmentalized. I had a guest on the last podcast, and we talked about how AI could solve so many things in the medical field, in healthcare, but how difficult it is to get any data in a wide enough data set. Even most studies running at big hospitals have maybe 10,000 or 15,000 cancer screenings, and that's already a huge data set that's very expensive to get and to replicate in other studies where you want to run AI. I thought that's really sad, because you could have such an impact on people's life expectancy if you could apply that technology, but the data just isn't there in a way that you can easily access it. Hopefully that's going to change one day.

Yeah, it drives me crazy that we're such a privacy-focused society that we can't get all that data together. It would be so beneficial. I just get frustrated when I go to my doctor, who I've been seeing for 35 years, and when I ask him, well, what did that test show last year and two years ago and three years ago, he starts paging through his hand-scribbled notes: it was this, and then this, and what did I say it was two years ago? You don't even want to ask the question.

Yeah, I think the healthcare field has completely missed out, on software automation in the first place, which is only slowly making its way into the field. It's definitely not eating healthcare the way it does all the other fields, and AI just hasn't arrived there yet. People don't understand the value of that data. They think it's just random, there for one particular moment, and then it just flits away. They don't understand that each piece of data you gather has a meaning for the whole of humanity, so to speak, because if you have enough data, you could see the patterns of why some people get certain diseases and others don't.

Exactly. It's a real shame that we just can't seem to put that together.

So maybe an opportunity there.

Yeah, it's a big opportunity. That was one of Obama's cornerstone initiatives, to try to get all that data together to build the... what did he call it? I forget what he called it, but they weren't very successful with that.

Yeah, well, it's a very difficult field to be in. There are so many legal issues and so many gatekeepers in that field.
Yeah, you've got to find what they call the beachhead, that easy place where you can build your company and then go from there once you have enough recognition. Nobody has been really successful yet, but it's definitely happening. 23andMe is one of those companies that found their beachhead, probably with a lot of money from Google. They hover over this DNA test data, and so does Ancestry, and every country has a few companies that do this exact same kind of test, but I don't think they share any data, so the data set is still limited.

Right. Yep. Well, on that note, we found some opportunities in this. I really want to thank you, Steve, for coming on. That was really interesting. Thanks for sharing your thoughts with me.

It's been a great conversation, Torsten, and I really enjoyed it.
