Blaise Aguera y Arcas (Artificial Intelligence, Free Will, Consciousness, Religion)

In this episode of the Judgment Call Podcast Blaise Aguera y Arcas and I talk about:

  • What inspired Blaise to work so close to the ‘bleeding edge’ of innovation and technology for so many years
  • Is the missing ‘explainability’ of AI really a problem?
  • Why the complexity of language (and its efficiency) is underrated?
  • The impact of the ‘surveillance economy’ on customer perception – will it stay a ‘cat and mouse game’?
  • Why the protection of our privacy is “required for survival”?
  • How much of an advance will GPT-5 be? Will we become ‘data slaves’ to AI?
  • Is there room for ‘personal opinions’ with AIs? Will AI optimize for ‘survival’? Does humanity even ‘optimize for survival’?
  • Why ‘developed countries’ have such low rates of fertility compared to ‘developing countries’?
  • Is ‘utility’ as a concept really at the core of ‘How the world works’?
  • Should we fear AI overlords? Or should we embrace them?
  • Is ‘Free Will’ a useful illusion? Is empathy a necessary precursor to consciousness?
  • What is the relationship between religion and AI?

You may watch this episode on Youtube – The Judgment Call Podcast Episode #48 – Blaise Aguera y Arcas (Artificial Intelligence, Free Will and Consciousness plus Religion).

Blaise is a software engineer, software architect, and designer. His TED presentations have been rated some of TED’s ‘most jaw-dropping.’

After working at Microsoft for seven years, he now heads Cerebra, a Google Research organization.



Blaise Aguera y Arcas: I actually just wrote a novella on these topics, which I’ve just begun looking for a publisher for, but it’s very of this moment and about those last topics that you were raising. I can send it to you if you’d like. Awesome. Yeah, I’d love to talk about it then. So I think that’s a really good match. What is the core of the novel? Well, it’s short. It’s about an 80-page novella, and it’s quite dense and a bit unconventional. I mean, I’m not a singularity person, to be clear. I’m more than a little bit skeptical of a lot of the Kurzweil singularity sorts of things. But Ray works for your employer now, right? Oh yeah, he’s a colleague and he knows about my skepticism. But at the same time, I think he also has a couple of very valid points, one being that history is clearly exponentially speeding up. I mean, that’s obvious. And also, I think that brain uploading and so on is actually quite a long way off, if it will ever work. But we are certainly approaching a moment when artificial intelligences start to need to be taken very seriously, not just as machine learning models, but as real intelligences. And I think things like GPT-3 are a bit of a wake-up call in that regard. So yeah, the structure of the novella, it’s very heady. It uses Walter Benjamin and his ‘Theses on the Philosophy of History’ as a sort of frame. And it imagines that the moment we’re going through right now is sort of like an event horizon. And it has three sorts of chapters that are broken up into the present, the before times and the after times. The before-times chapters are in the form of documents or fragments that come from the past and that are actually used as part of the training of the ML. The present-tense parts are a narrative that takes place over about nine days. And nothing earth-shaking happens in that narrative.
It’s during COVID times, it’s almost autobiographical, and it’s very compressed. So not a lot happens, but you see the development of the AI. And then the after times is written in terms of iteration numbers, from the point of view of the AI. And there are mysteries about who is actually writing this thing: who is the reader, who is the writer, what’s the perspective from which it’s being written? So it’s a little bit of a meta novella, kind of like Nabokov’s Pale Fire or something like that, where there’s an unreliable narrator and you’re unsure until the end what’s going on. It sounds like great science fiction. I had a lot of fun writing it. I wrote it between my first and second COVID shots, a little bit of a fever dream.

Torsten Jacobi: Okay, okay. I might be able to do this. So you’ve spent about two decades now, as far as I know, working first at Microsoft and now at Google, on teams that are really at the core there, at the bleeding edge of technology. They are both technology companies, but you’ve chosen to lead the bleeding-edge teams, and I think now it’s AI; before it was maps and other topics. Why did this job choose you? How did you get into that?

Blaise Aguera y Arcas: Yeah, it’s a good question. I’ve been very, very lucky, very privileged to be in these very exciting times and places. And it’s a little bit of a long story. My training, such as it is, is actually in physics and computational neuroscience. And my wife is a computational neuroscientist; we’ve written a couple of papers together back in the day. Okay. And in a sense, I feel like the dawn of the computing age was very much interwoven with the dawn of computational neuroscience. This idea that computers are artificial brains is not a new thing. It was a core part of the entire concept of computing from the beginning, even to the point where the logic gate symbols, I think I talked about this in one of the TED talks, are actually derived from the symbols for pyramidal neurons in one of the key McCulloch and Pitts papers from the 1940s that draws an analogy between computing elements and neurons. And so that was very much present in the minds of Turing and von Neumann and the other early computing pioneers. So I’ve always had a feeling that although these were kind of twins separated at birth, they were going to reconverge, and I have been biding my time until they reconverge, working on other problems in the meanwhile. The problems I was working on in the teams I was leading at Microsoft had more to do with classical computer vision and machine vision. There are certainly some parallels between the teams that I led there and the teams that I’m leading at Google. I was there for about seven years, and I’ve been at Google now for about seven years as well, so it’s a bit less than 20 years, but I had a startup before that which Microsoft acquired. And classical computer vision is not very brain-like. The first TED talk was about Seadragon and Photosynth. Photosynth is a classical computer vision problem.
But toward the end of my time at Microsoft, two things changed. One was about the company and one was about the technological milieu. On the company side, I certainly don’t want to say anything negative about Microsoft. They were wonderful to me. I grew a tremendous amount at that company. But the company made a decision, partly based on its failure to break into the phone market. The failure of Windows Phone, I think, made it clear that the company was destined to turn back to its roots and become more of a B2B sort of company. And that’s a move that Satya Nadella has executed very effectively, which has made the stock price go up quite a bit. So it’s been good for the company, but it made it less the kind of company that I wanted to work at. For me, the most exciting problems and the greatest innovations are very much in, I hate to say consumer, but in things that affect people as opposed to companies, I suppose. So it’s become a little bit more of an IBM-style company since I left. And I saw that change coming, and that made me think about a change for myself as well. But the other thing was, this was in 2013, and it had become clear by 2013 that neural nets were back, really back with a vengeance. This was after some of the really groundbreaking new results from convolutional neural nets that showed that computer vision problems that had been intractable for decades, being able to recognize what kinds of objects are in a visual scene, for example, were finally getting solved, and solved in a fairly brain-like way. I don’t want to overstate the analogy between convolutional nets and visual cortex, but it is a visual-cortex-inspired architecture, and it’s certainly not a conventional computing sort of approach. There’s not a program being run; it is virtual neurons being activated in cascades. And it seemed to me, and I think to a lot of people at that time, that what we were calling deep learning was really rising.
And I felt that this was going to change everything. Google was the company that was really at the forefront, and still is, of that kind of work. And that made Google very appealing. But there was also something else that I was thinking at the time, and it was quite important, which is that Google is also a company that historically has done business by running massive online services. And I think it’s not a coincidence that they were also at the forefront of this new kind of AI, because this new kind of AI was very, very data hungry and requires massive amounts of training data and massive amounts of computation to train. They had giant data centers and they had giant amounts of data. So it made a kind of sense that they were at the forefront of it. And I don’t want to diminish Jeff Dean’s vision with respect to that. They had to have the right talent to recognize this, but that was one of the reasons that opportunity was there to seize. And I felt like we were facing two possible kinds of AI futures: one in which these giant neural nets are all run centrally as services, where there’s a small number of AIs, if you want to put it that way, serving everybody; and another in which it’s much more decentralized, and you and I have our personal AIs, and every company, every room in your house has an AI. It’s more like a society of decentralized AIs. And I really wanted to tip things in favor of the second alternative, decentralization, rather than the first. And I felt like if I went to Google and tried to push for that decentralized approach, my odds might not be great, because it was running so counter to the culture of the company as it had existed heretofore. On the other hand, if I could succeed in making that kind of change at Google, it would really matter.
And I thought I’d rather take my chances going someplace where I have high odds of not succeeding but where success would matter, than staying at a place where I’m a more known quantity, with higher odds of success, but where I’m not sure that anything I do is going to really change the future in the same way.

Torsten Jacobi: Yeah, I think you already answered a couple of my next five questions. But I think this was a wonderful way to see this through your own perspective, and the vision that you have for what you’re doing right now. And I think the unit is called Cerebra, right? Like Cerebra. Yes. Yes. And the things that you put out, like Federated Learning and Coral, enable running AI on a user’s device without syncing much, even working offline. And I think this is a wonderful way, how you describe it, how we can maybe change the way people look at AI as this behemoth, you know, like when we look at Westworld and, I think it was called Solomon, the massive AI that basically ran the world. I think everyone is very worried about that. Yeah, I mean, those anxieties go all the way back to Moloch, you know, in the 20s and 30s exactly. It’s an industrial revolution anxiety, really. One thing that I was curious about, and I think two questions immediately arose, and maybe you already partially answered one: how do you decide, given all these priorities and all these possibilities you have at Google, where you put your efforts and what you actually release? It’s like the David Hume problem, right? We have all these options, but what actually goes through your mind, and through the minds of your colleagues at Google: where do you actually put the resources, what do you want to release, what do you want to keep inside the company, and what do you want to push out there? And then the second question, maybe it’s a bit related. I think the problem with AI is that we don’t know what’s going on inside the box. We don’t know the reasoning of the AI. That’s a core problem right now that might change over time. But right now, that’s a big issue. So we have to constantly validate it. We are worried about biases.
And we don’t know, because we would have to basically go through a lot of data ourselves to see what’s going on, or run a different AI. But I feel, even if we federate it, and I like that approach, we download a standard model that might have all the biases attached. So we run it on different devices, but we’re just modifying an existing model. So we might download other people’s biases. And I’m not talking about racial biases necessarily, just decision biases that we are not aware of. And that seems as scary to me as having a central AI.

Blaise Aguera y Arcas: Yes. Yeah, these are very good questions. You’ve asked two giant ones. Let me try and take them one at a time. We have time. We have time. Excellent. So first of all, let me begin by taking the bias and fairness and ethics one, and then we can go back and address the question of what Google releases, how that works, and what I choose to have the team focus on, too. First of all, bias and explainability are different questions. Let’s begin with explainability for neural nets. Explainability is a charge that has been leveled quite a bit at neural networks, because they don’t look like code. You can’t really step through them in any meaningful way. Instead, you have these massive banks of filters, in the case of a convolutional net, for example. It’s just tons and tons of numbers, and that is the net. So how do you explain how it makes a decision about how to classify an object, or what to reply? It’s hard to make a legalistic sequence of deductions about how a certain decision came about. But I would point out a couple of things. One is that in complicated real-world software systems, if you ask, for example, before neural nets were involved in Google search, when it was all classical computing, how did a ranking decision get made? That answer would be extraordinarily complex as well. When you have millions and millions of lines of code that have accumulated in order to continually improve a system over generations of software developers, all working on bits and pieces of the thing, you also don’t end up with an explainable system. You end up with something where, in theory, you could dump a stack trace or whatever. But if you’ve ever seen a stack trace from, I don’t know, a crash on your computer or something, you know that it doesn’t look very explainable.
In order to debug the simplest sort of memory overrun or something like that, programmers might spend months digging through some particular stack trace and trying to reproduce it.

Torsten Jacobi: But with search problems there is always an expert who knows what to do. I have these problems when I do my own development, my own programming, and there’s always someone on Stack Overflow or on GitHub who knows how to fix it. But I feel with AI there is nobody left. There is not this one expert.

Blaise Aguera y Arcas: If it’s a programming error, then there’s generally somebody on Stack Overflow who has seen it before. But if we’re talking about a bunch of code that has been written over the years to make a judgment call, which is what we’re talking about now, we’re not talking about a programming error, we’re talking about judgment calls, then I think it really is much, much more complex. The kinds of questions we’d be asking ourselves are like: why did my business become the fifth search result as opposed to the third? I could answer the question in some pedantic way, well, this branch of code, then that one, and that one, but a satisfying explanation would be just as hard as if it were a neural net. And in fact, in those kinds of cases, I would say that neural nets are actually somewhat easier to explain than large classical code bases, because, unlike with a classical code base, you begin with a clear set of training data and a clear objective function that you’re trying to optimize. That is actually quite a compact formulation of the what and the why that you don’t get from the accumulation of the choices of thousands of engineers. I don’t want to… A very good argument. I’m not trying to dodge the explainability challenge, because I think it’s major, but I’m pointing out that we often have an idealized straw man that we think of as the explainable case, which is not generally there. We’re not starting from an explainable spot either. And then, to take this a little more meta, we think about humans and human decisions as being explainable. That’s the basis of the entire legal system. If a judge makes a decision, they have to be able to say why. That’s what the law is all about, the fair application of it, et cetera. A: the law is not fair, and there are many, many studies that show this very clearly. B: the narrative structures that we impose in order to explain our actions and our decisions.
Again, there’s a huge body of work in psychology and in legal theory, well, in legal theory less than I would like, but in psychology certainly, that shows that we’re very good at making stories. And those stories might rationalize a series of decisions and actions, but there’s not necessarily the causal relationship there that one might wish, to put it mildly. We’re very good storytellers. We’re very good modelers. We model each other socially. We model ourselves. That’s what self-consciousness is. But the idea that that model is the actual underlying thing is completely false. You have trillions, probably quadrillions, of synapses; I don’t know how many synapses are in your brain. And your model of yourself and of your decision-making processes has nothing to do with the detailed firing of all of your neurons; it’s not a sophisticated model of all of that. It’s a story that you tell yourself.

Torsten Jacobi: Yeah. This is a really good topic. I think we could spend an hour on this. We could. I love what you say, but I think there is something magical, and I want to maybe get back to this if we have time: this narrative as a way of not just explaining things, but also as a kind of compressed storage for really complex issues. We can encode things that are extremely complicated relatively simply. It’s like when you have a three-gigabyte file and you encode it into…

Blaise Aguera y Arcas: I agree. I agree entirely, Torsten. Language is super compact and powerful. And you can use it not only to reason things through, but also to make changes in your thinking process in a very compact way, unlike current machine learning models. You can say, for example: oh no, you’re in a country now where the screw tops on everything twist the other way, so it’s clockwise rather than counterclockwise. You just say that once, with those words, and you’ll do the right thing every time from then forward.

Torsten Jacobi: Yeah. Elon Musk is on this trajectory where he says language is so inefficient and the encoding doesn’t work. I think he’s missing the point that there’s a lot of learning in the layers below that, and it is very, very efficient: we don’t have to look at a lot of data. Exactly as in your example, one abstract message can actually change our model completely, which is amazing.

Blaise Aguera y Arcas: Yes, I’m entirely in your camp about this. I think language is enormously powerful, both as a way of learning and transmitting information and as a way of building up cultural information over time, which I think is most of what human intelligence is, by the way; I think it’s cultural, not individual. So I’m very much in the same camp. I disagree with Elon Musk strongly on this. And I think that the language models, I mean, you were asking earlier about GPT-3, I think that the progress we’re now making with language models is bringing us closer to a world in which you can have exactly that kind of discourse with machines. And that’s very important for explainability, as well as for efficiency of learning and all kinds of other things.

Torsten Jacobi: Yeah. Circling back for a moment, before we go into these deeper issues: when I looked at Coral, right, that’s one of the software packages you released, and I think it’s an open source release. Yes. I was really excited. And I thought, oh my gosh, I can just build crazy AI with it. But in the end, the models that come with it, and I can do my own models, I can train whatever I want, but the predefined models that are already available on the website right now, they’re really boring. They’re object recognition, right? They’re so basic. I felt like I was reading something from the 70s. So we talk about AI finally getting into the limelight, right? And we feel like it’s taking off. But then I look at Coral, which seems extremely powerful because it’s a federated model that you can run on each device, and I was expecting things like, I don’t know, cancer recognition or something really powerful, right? Right. And it wasn’t in the prepackaged models. That doesn’t mean you can’t do it with Coral. But I was kind of hoping there’s some science fiction in this. Why don’t we have the science fiction in our hands yet?

Blaise Aguera y Arcas: It’s a good question. And this speaks to your other question about what we release or don’t release and so on. Yeah. So first of all, there actually is a Coral demo for online cancer detection. I don’t think it’s a model that we have publicly released, and the reason is the kind of liability that comes with Google releasing a cancer recognition model. That’s a medical-grade thing that requires a level of studies and validation and regulation that, historically, the company has not been prepared to take on. That is changing with Google Health; we do now have collaborations going on with Google Health in these kinds of areas, but it’s a long, slow, arduous process. If you were to ask me whether all of that might be a little bit over-regulated, I think the answer is probably yes. There are reasons that we have heavy health regulations: to keep unsafe drugs and unsafe medical procedures from making their way out into the world, to avoid Tuskegee-experiment kinds of horrors. So there are good reasons for all of this. But it also means that innovation in that space can be very, very slow. It’s one of the reasons that I’m delighted the vaccines managed to happen so quickly despite all of this; I guess when we really care, we can fast-track things, but it’s hard. But more broadly, you’re saying: we have object recognition, very simple; speech recognition, very simple; person counting. This doesn’t seem very sci-fi, right? It seems like stuff from the 70s.
There were models in the 70s that did this kind of stuff, although none of them with anything like the quality that a deep neural net can achieve. So they’re doing old problems with much higher quality. But the reason we focused on those very workaday things for Coral specifically is that Coral is not really so much about being cutting edge with respect to what the AI is doing as about how it’s doing it. That project specifically is for solving problems like: if you want to put a sensor in a department of motor vehicles or something that says how long the line is, how long the queue is in front of the desk, and then have that go on a public website or something like this, then it would be nice to have a system for doing that that’s very simple and appliance-like, where all of the computation that turns the video into this integer, how many people are in the line, happens locally, in a way that doesn’t violate privacy. So it’s very workaday problems like queue length that are really at play here. That’s 90 percent of what clients of this kind of stuff want. And what we wanted to do was show that those things were possible to do without setting up surveillance systems that have all kinds of negative side effects. So it’s not the sort of cutting-edge research on neural net architectures or applications, but more: let’s take the things that everybody needs, that are common among many, many industries, and show a different way of doing those.
And now, obviously, there are researchers on my team and in various other parts of Google Research who work on much more sophisticated applications and architectures that do things that are really kind of shocking, and a little more science-fictiony than counting people or recognizing smiles. And most of that work gets published in very short order. There’s a huge number of papers that come out of Google Research, and many of them nowadays come out with code as well, so it’s reproducible and it’s part of the open research community. There are some checks and balances on what comes out. I mean, you mentioned GPT-3; the OpenAI team decided when they made that model that there were some dangers in releasing it, in that it could be weaponized in certain ways. So that’s the main thing we think about before releasing: are there risks? Are there dangers to making one of these things public? But by and large, we’re very open about what we publish.

Torsten Jacobi: Yeah, I think humanity owes Google, and I think we reward Google very nicely with this market cap. So I think it goes both ways, and there are really cheap loans, you know, at zero percent interest rates, that were meant for struggling airlines, and Google gets them anyway. So it goes both ways. There’s a lot of love currently. I think where the love is a little bit in doubt is the topic you just mentioned: the feeling that we have become fully surveilled. And that’s true. All this data is there. We used to not care about it, but there’s so much more data now. There are so many more sensors, and so much AI running, that it kind of reads our brains better than we can read them ourselves. So people are becoming a little bit concerned. And obviously, Google says, well, we need that data to sustain our business; we give you free services. And I think everyone is kind of okay with this initially. And then you realize, oh my gosh, there are like 2,000 data points that Google can read from you. And using those 2,000 data points, they know you exactly. There’s no doubt: when you get married, they know the date before you even propose. So it’s really scary how this tech works, because we are creatures of habit, we are social creatures. We behave more like other people than we believe we do. But we have this illusion of free will; we don’t think this should be the case. And I know Google does a lot of anonymization and plays around with giving people their privacy. But in the end, they need the data to make money. And there’s always a market for alternative data. I was just talking, a couple of episodes ago, about how many more startups are now coming up with alternative data. Around basically every sensor you can create a company. So you sell this data, and you will make money from it. Maybe not trillions, but a couple of million is always in the game.
And the question is, it’s not really a Google issue; I think everyone has that issue. But Google, because it’s bigger, gets more heat. How do you think this will play out? Will people eventually rebel against this surveillance industry that we’re in? Especially Facebook; I think Facebook is the worst offender right now. But everyone has the same problem. Do you think lines will actually get drawn? Because I feel like everyone who draws up these lines is 10 years behind. By the time these lines are drawn and say, okay, you can’t have this data, like what happened in the European Union, it’s about web data, and nobody cares about web data anymore. You just don’t care, because device data or sensor data is what people want, and those are potentially not even covered by GDPR. And then the reality is always 10, 15, 20 years ahead of what’s being regulated. So is this a cat and mouse game that will keep going on forever?

Blaise Aguera y Arcas: This is a great question. And it’s very close to my heart, because the concerns you’re raising are exactly the ones that brought me to Google and that kicked off all of the work of my team; they animate all of the work of my team. But before I dig in and answer in detail, I want to step back for a second and reset the critique a little bit, perhaps. I’ve read Shoshana Zuboff’s The Age of Surveillance Capitalism with a lot of interest, and many of the other books that raise these kinds of critiques. I’m friends with quite a few people outside Google who are very vocal advocates for privacy and very sharp critics of Google and companies like it. The Social Dilemma was probably the popularization of a lot of these ideas, about a year ago or so. The Social Dilemma has this recurring animation of a sort of puppet, an animatronic version of you that lives in the data center, that becomes so precise that you can be predicted completely, and that’s then the basis for a kind of futures market in your behavior. That is a terrifying vision, but I also want to temper it with the reality, which is that, yeah, I actually don’t believe that people are as unpredictable or unlike each other or individualistic as they believe. I mean, I’m a critic of individualism in multiple senses. I think a lot of where our intelligence really lies is social and societal, and not really individual at all. But at the same time, if we imagine that these models are all-seeing and all-powerful and understand all of our hopes and dreams and wishes better than we do, I think that is not the universe that really obtains inside these companies. It’s almost the opposite problem. I know, because there are a couple of teams within my own group, within Cerebra, that have done personalization models for other parts of the company.
It's not the kind of work that I generally have people on my team doing, for reasons we'll get into, but I do know how that sausage is made, and they're actually not that great. The problem with recommendation streams and the like is not that they are so prescient, that they know you so well they can anticipate your every interest, but rather that they're too simple and too reductive. Frankly, that's one of the reasons I believe we end up with a simplified discourse in a lot of social media, and with the polarization. That sort of polarization and simplification of the discourse comes from emergent behaviors: not just the ML systems themselves, but emergent behaviors that the ML systems are part of, which are highly reductive and which funnel people into a small number of modes rather than having a real model of Torsten and what might interest him. In some sense, a really good ML model would have a very different effect, I believe.

Torsten Jacobi: I fully agree. I think this is one of the least understood things about what has happened since 2015, since we were basically motivated by an engagement algorithm that Facebook invented, so to speak, and then rolled out publicly. My theory is that the deflation of likes has really led to this depression of the last five, six years. Mental depression, not necessarily economic; now we have the economic one too. I agree with that. And the incentive is always there to give you the thing that you're most likely to click on, right? So there's an anti-explore, pro-exploit sort of bias. It's terrible what it led to. I think it was well intended; if I had worked at Facebook at the time, I would have propagated it too and wanted to push it out. But there were unintended consequences. Yeah. So the engineers are not evil, I would say, but they also need some help from psychologists and people who think a little bit outside the box; still, they all want to make money.

So they do. Although the idea that psychologists have the answers, or ethicists have the answers, is also false, I think. True. I mean, none of us could have predicted it. Well, I'm sure there were some predictions that were accurate, even in the very early days, but I suspect they were drowned in the noise of many, many other predictions that didn't come to pass. Well, if you read Socrates, I think you would have made that prediction in a heartbeat, because you realize the 90% out there will have different opinions, and this five-second engagement is not the same as the five-hour or five-day engagement. If someone comes up with a way to measure what actually sticks, what stays in our mind instead of what we just click on in the first five seconds, I think that's the holy grail, if they can solve it. But I agree. Two modern thinkers who I think could probably have done a pretty good job of predicting it are Danny Kahneman and Amos Tversky, right? Kahneman and Tversky, with their fast-thinking, slow-thinking sort of framework. So yeah, I agree, I don't think it was completely unpredictable, but I also think a lot of these are emergent effects. If we think about, for example, the genocides in Myanmar, social media having been a major factor in the way those came about; that's Facebook and WhatsApp, I believe, and something extraordinarily evil came out of that. But the idea that there is a single actor that you can pin the blame on, I think, is a little bit off. Yes, it's an emergent phenomenon. You know, René Girard: that's how the mind works, right? We need a scapegoat, and then the scapegoat actually saves us. That's how mankind worked over such a long time, and it seemed to relieve us of that pressure, because with the scapegoat we can actually move on. So it moves around.
Who is that scapegoat? But I wanted to say: individually, it certainly is reductive. When I look at the code, I think, this is just random nonsense, so why would anyone worry about this? But from a consumer's perspective it looks different, right? Say Alexa listens to you, and then Amazon gives you predictions about cat food. But I don't have a cat, and yet I get cat food ads the next day, because I talked about cats and maybe wanted a cat. Or maybe it's a lucky hit, right? 99% of the time I feel the ads are not very well targeted, and the same ads come all the time on YouTube. But there is this one day where I think, oh man, this is really creepy. And then, as a person, I extrapolate from this one event that's statistically not relevant. That's where you pin the sense of creepiness on it, right? Yeah. And then for me, all the ads are creepy. This is how human cognition works: we see one accident on the freeway and we think driving is dangerous, and then two weeks later we think, no, driving isn't dangerous. There is something in our minds that is very different from the statistical learning that AI does. And I think engineers have in their minds, oh, it's not relevant. But no, it is relevant, because you only get a few shots, and then people just sign off because they think it's creepy. Well, you're talking about why it's relevant from a sort of PR perspective or a business perspective. I think it's relevant for two reasons, neither of which is about big statistics. One of them is chilling effects, and our sense of individual agency, our ability to be ourselves and have that sense of privacy, right? Which is not the same as security, and not the same as secrecy. Privacy is a real thing. Anybody who lived through the DDR or other surveillance states understands what it feels like to be in a society where you don't have privacy.
And it feels awful, even if nothing is done with the information, or even if the surveilling entity doesn't have any problem with you, doesn't have it in for you. But the other, even more serious problem, besides chilling effects and the psychology of all that and the importance of privacy from a psychological standpoint, is that if any entity has the kinds of records we're talking about (say you have a device in your house that is listening to you all the time and storing records somewhere in the cloud of everything that is said in that house), that is a sort of sword of Damocles hanging over your head. It has very real civil liberties implications. Even if the stewardship of that data is in the hands of a really good steward, it's still Black Mirror territory. And if regimes change, if liberal democracy starts to collapse and those records are there, then you can go to a really dark place societally. So when I push back on things like "the models are not that good," I'm not pushing back on the problems of surveillance. I think they're very real, and they very much animate my work, as I was saying. Yeah. I mean, I look at it from Friedrich Hayek's perspective: you have to be free of coercion. That's the goal, right? Because we know that in that environment we develop best. And this is not an altruistic goal; I am concerned about humanity, but it's not necessarily an empathy thing for me. It is: if you don't achieve this, we will all suffer and we will die, and someone who does it better will take over. That seems to be the lesson of history.
If you have an entity... I saw this for myself, growing up in the DDR, in East Germany. We had the perfect example: you take the same kind of people, you put one group in socialism, which is very restrictive but very utopian, very well intended, and very efficient in the sense of an efficient bureaucracy. And then you have the other side, which also has a bureaucracy, but much less of one, and you let it develop freely. After 15 years, the verdict was in; it was never even close. To the surprise of a lot of people who were so enthusiastic about it, like my own parents, who were very enthusiastic about socialism and communism. It doesn't work, because it's coercion, and these static models just don't work long term. I think we as humans instinctively know that. I think this is why we crave this freedom so much. Yeah, I agree with you. And there are studies, small-n studies, and I would need to go back and look in detail, but even things like rates of organ donation varied quite a bit between East and West Germany. Paradoxically, because you would think that in a communist or socialist environment there would be more willingness to do for others. But in fact, that psychological weight, those chilling effects, seem to have pushed the other way. To be clear, these are my own views, not Google's. I'm actually very much a proponent of universal basic income and other kinds of socialism. But I don't think that's very socialist; I love UBI too. What I worry about with the socialist model is the restrictions you have to put on it to keep it alive, right? Your free healthcare and free bread are not a bad thing in themselves. So I think we're in the same spot. I worry about surveillance, I worry about limiting freedom, I worry about chilling effects.
Things like free healthcare, free education, and universal basic income are kind of orthogonal to those points. I want to go into one thing that I have been thinking about. One of the people behind GPT-3, one of the OpenAI developers, was on another podcast, and he basically said there is a really good chance that GPT-5 could look conscious to basically everyone on this planet, like it has a real idea of what's going on. It could be very human-like, and not just in a Turing test, but to everyone who interacts with it in a digital way. Yeah. And one thing I think that's missing from GPT-3: it knows so much. Well, we shouldn't say it "knows," but it gets so many things right that they look like poetry to us, and it's kind of random. So people say, well, this is just a random outcome of statistics, and if you shoot enough darts, some of them will look like poetry; that's kind of the answer. But what he said is that what's missing is the user-correction loop, like the click stream for the Google search engine. A lot of people say, and you might correct me, that it's not just the AI; everybody could now come up with the same AI. The benefit Google has is the click stream, and with the click stream, even if the AI is only 90% or 80% correct, it gets better with every iteration, because it takes the click stream into account. So humans basically become error-clicking machines, so to speak, for the real AI. And that's what he said about GPT-5: it will take into account all this user feedback, which they don't have yet. And they're very cautious about releasing it; I feel that's a mistake, but that's obviously their call. Once they have enough user feedback and they get to 99.999% correct decisions, he said, nobody on this planet might be able to figure out if this is an AI or not.
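The mechanism described here, user clicks acting as implicit error-correction labels that sharpen a model over successive iterations, can be sketched abstractly. Everything below is a hypothetical toy illustration, not Google's or OpenAI's actual pipeline: a linear ranker whose weights are nudged by simulated click feedback, with all function names and features invented for the example.

```python
def rank(weights, candidates):
    """Score candidates with a linear model and return them best-first."""
    return sorted(
        candidates,
        key=lambda c: sum(w * f for w, f in zip(weights, c["features"])),
        reverse=True,
    )

def update_from_clicks(weights, shown, clicked, lr=0.1):
    """Treat each click as an implicit label: nudge weights toward the features
    of clicked items and away from shown-but-skipped items."""
    new = list(weights)
    for item in shown:
        sign = 1.0 if item in clicked else -1.0
        for i, f in enumerate(item["features"]):
            new[i] += lr * sign * f
    return new

# Toy demo: the user secretly prefers feature 0; clicks teach the model that.
items = [{"features": [1.0, 0.0]}, {"features": [0.0, 1.0]}]
weights = [0.0, 0.0]
for _ in range(10):
    shown = rank(weights, items)
    clicked = [i for i in shown if i["features"][0] > 0.5]  # simulated user
    weights = update_from_clicks(weights, shown, clicked)
print(weights)  # weight on feature 0 grows, weight on feature 1 shrinks
```

The point of the sketch is the feedback loop itself: the model never needs ground truth, only the stream of clicks, which is why the speaker calls humans "error-clicking machines" for the system.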
When you say "correct decisions," you mean human-like decisions, or better than human, let's put it this way. There were never any "correct" decisions, so to speak. I think it's a puzzling framing, and I think I disagree with it. I don't disagree, by the way, that GPT-5, or if not GPT-5 then GPT-10 or whatever, will absolutely be able to pass the Turing test. I think that's highly likely; I don't see any reason why not. The progress in language models has been astonishing. I think it's a very interesting question to ask: if you can't tell that there isn't anybody home, does that mean that there is somebody home? That's a profound question. It's similar to the question of whether there is such a thing as a philosophical zombie, and we could certainly spend some time on that one. There's also Turing's own relationship with that question. I think the Turing test is sometimes a little bit misunderstood. It really is basically saying something like: faking it is making it. The parallel has sometimes been drawn with his sexuality as well. What does it mean to pass, to pass as straight, or what have you? Is there a difference on the inside or not? He was saying, perhaps tongue in cheek, perhaps not, that nobody else can know what is inside you. So if you can behave in way X, Y, or Z, then who's to say that there's anything other than that? That what is empirical equals reality.
So that's a really interesting question. But the idea that this is going to get, quote unquote, solved by having a metric or a number that goes up and up and up, and that the way to get the number right is to interact with billions of people billions of times, strikes me as a bit of a mixed metaphor: taking an approach that worked in one context and applying it in a place where we have no reason to believe it will work. That's not how humans work. That's not why we are what we are. We're not the aggregation of trillions of interactions with lots and lots of other humans that tell us when something is human-like and not human-like. That's not how it works. I don't know about that. Because when I think of children, what they do is download culture. As you said earlier, we are cultural beings. We are not a computing engine; we outsourced computing a long time ago to the crowd, to other people. So what we do is download all this knowledge, which is kind of the model, right? That's the standard model. And then we go out in the world and refine the model; we generate our own layer of better models on top of that. But still, most of the knowledge generation... I think this is really popular lately, where people, especially economists, say: you can't be better than what's already out there. Say you come up with a new theory, or you say something political, and people say, no, how would you know? It's impossible, because all the information is already in the market. There's no way you can advance on anything, because it's already in an equilibrium; it's already out there. The market is full of information. So people say: well, I direct my own decision making to whatever the mainstream consensus is.
Some say that's more female-like than male-like, but I think this attitude is very popular now, where people say: I don't want my own opinion, I just look at whatever news feed is most authoritative. I look at Hacker News for certain things, so to speak. I find the source of authoritative news, and this is what I adopt unquestioningly, because by definition I can't do better. And those sources are actually other humans, right? So we outsource knowledge generation to other humans, and we have this tiny sliver where maybe we add some actual knowledge, but for most people this is more theoretical than practical. They don't really participate in this knowledge economy as an input; they just consume it. And I think this is kind of the same thing I see coming with AI very soon: they don't really learn from their own experience; they learn 99.99% from other machines. And I think this resembles our human, Homo sapiens approach perfectly. Well, I guess the first question I would ask... no, I mean, your theory is interesting. It's one I've heard things sort of like articulated before. It is different from the way I think about it. First of all, we're not trying to optimize something. I think this is a common misconception. When we build ML systems, we generally do have a specific loss function, a specific thing we're trying to optimize, although with unsupervised learning, which is getting a huge amount of traction now, it's not always so clear what that is. And it's also not so clear with GANs. Well, the loss function for humans is survival, right? And we can do the same thing for machines eventually, right? There is a propagation of knowledge to the next generation. I disagree.
I don't think that survival is nature's loss function. How can I put this? I actually talked a bit about this in my NeurIPS keynote from December 2019. I think it's actually fairly easy to show, mathematically, that life lacks a loss function. And the way to see it is as follows. You aren't just an agent in a static environment. What we really have are societies, groups of agents interacting with each other. Even your own brain is a group of agents, if you like: neurons interacting with each other. They have their own lives; every cell in your body has its own life. It's societies all the way up and down, if you want to think about it that way, from single cells to what we think of as organisms to what we think of as societies. And so now you already have to ask the question: survival of what, exactly? What is the thing that is being optimized? For instance, the cells in your body: what are they optimizing for? They obviously have to work together in order to keep you alive. And the fact that you are alive, an organism nourishing them, means that they can relax, lower their guard in certain regards. They don't have the same hard life that an amoeba has, where it has to go and do everything on its own. But the idea that every cell in your body is trying to optimize for its survival is certainly wrong.
Some of your neurons live for your entire lifespan; the cells in your cheek or in your gut turn over very, very rapidly. They're not trying to live as long as they can. In fact, when cells flip over to the dark side and try to live as long as they can, we call that cancer. And in fact, any time you have entities interacting with each other, even if they each have their own loss functions, what you actually get is a dynamical system with a kind of predator-prey dynamics, if you like, or pursuit dynamics. And those dynamics have what you would call, in math, vorticity, meaning that the trajectories, in whatever kind of phase space you choose to look at them in, curl around each other; they curve, they're chaotic. And the thing is that anything that has curl, or that has chaos of that sort, does not look like gradient descent. Any gradient descent process has zero curl and is all divergence. Yeah. So when something is curled, when something is chaotic, that means you can't actually talk about a potential, about something that is being optimized, at that level. There's no pattern. Yeah. Well, there is a pattern, but you can't say such-and-such is being optimized for. And by the way, this relates to the Nobel-winning economist Kenneth Arrow: what he won his Nobel for was a series of impossibility theorems about voting. And this is from way back, I think in the 1950s.
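The mathematical claim here, that gradient descent is curl-free while interacting agents produce rotational dynamics, can be checked numerically. The sketch below is my own illustration, not from the conversation: the potential V and the textbook Lotka-Volterra predator-prey system are assumed examples. A gradient flow has a symmetric Jacobian, so its 2D scalar curl vanishes; the predator-prey field does not.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Estimate the Jacobian of a 2D vector field f at point x by central differences."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

def curl_2d(f, x):
    """Scalar curl in 2D: dF2/dx1 - dF1/dx2, the antisymmetric part of the Jacobian."""
    J = jacobian(f, np.asarray(x, dtype=float))
    return J[1, 0] - J[0, 1]

# Gradient descent flow on the potential V(x, y) = x^2 + 3y^2:
# the field is -grad V, whose Jacobian is symmetric, so its curl is zero.
grad_flow = lambda p: -np.array([2 * p[0], 6 * p[1]])

# Lotka-Volterra predator-prey dynamics: no potential exists,
# and trajectories curl around the equilibrium.
lotka_volterra = lambda p: np.array([p[0] * (1.0 - p[1]),   # prey
                                     p[1] * (p[0] - 1.0)])  # predator

print(curl_2d(grad_flow, [1.0, 2.0]))       # ~0: curl-free, a potential exists
print(curl_2d(lotka_volterra, [1.0, 2.0]))  # nonzero: rotational dynamics
```

The nonzero curl is what the speaker means by vorticity: no scalar quantity is being descended, so there is nothing the system can be said to optimize.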
And it was exactly the same observation: you can't have a perfect voting system, because whenever you have a bunch of people developing consensus through voting, you can no longer say that the entire vote is fair, or optimal in any sense, no matter what the voting system is. I wanted to get at something along those lines. And I'm not sure if the cell level is the best, because we're looking at the individual level, right? That's where the loss function, in my mind, should come into play. But what I wanted to get at is that there's a lot of fear. Say we have these superintelligent machines in five years; not realistic, right? But in 50 years, maybe; in 5,000 years, 100%. And there's a lot of fear about this. I had David Orban on, and he says, you don't have to fear it, you just become a hybrid, right? You become transhuman. And that has a really bad ring to it, because we know that humans have survived so well through all these challenges, and now we create something that could squash us like we are ants to them, right? That is Sam Harris's talk. Yeah, Sam Harris has argued this, and of course Nick Bostrom has argued this as well, with superintelligence. Yeah, but here's my answer to this fear: machines have the same problems we have, right? They won't just go out and optimize for functions very different from ours. And you say it's not, but to an extent our function is that we want to survive and populate the universe, kind of, right? We want to create something more productive tomorrow than we have today. Whatever productivity means: maybe it's machines, it's technology, it's better knowledge, better philosophy. Whether we really optimize for it is a good question. And I think machines will have the same problem.
So morality, software upgrades like religion, hardware upgrades like better organs. Can I pause you for a moment and ask a seemingly off-topic question? I feel like there are some hidden assumptions in what you're saying that I question. Okay. What do you think is the number of children... do you have children of your own? I do. How many? I have twins. Two. You realize that's only replacement level; that's not growth. Yes, so far, yes. Do you intend to have more? Yes. And do you think that, as a whole, people in the developed world, which I think you would argue represents progress, some kind of arrow, are a growing population? That is to say, are developed countries out-competing, in numerical terms, the less developed, quote-unquote "less advanced" countries? Currently, no. No. And why is that? That is a good question, because we have more resources. We have more resources. So if it's all about survival and growth, then why would we not be having twice as many children? We could, right? We have more money to feed them. Maybe the level to look at is not developed countries; it's communities within them that follow a certain belief system. Because countries are a relatively random assignment, especially outside of Europe, but even in Europe it's pretty random. And when people think about borders: there were no borders just a hundred or two hundred years ago, and the idea of a nation state was not really formulated. That's true.
But if we look at all of the countries on earth by GDP, and you make that one axis and make the other axis fertility, you will see an extremely clear relationship, whereby at high GDP fertility plummets relative to low GDP. So why? To be honest, I would love to know the answer. We see lower childbirth rates; people want fewer children, so you do a lot of work to only have one or two children. Yes, yes. And there's also an enormous amount of infertility, which may be slightly age-related, but it's huge compared to 100 years ago, especially in men. And it doesn't seem to exist in the developing world, which has far more pollution; these are really broad-stroke observations, not down to the individual. So there seems to be some dynamic at play. I don't know if it's nature, if it's humans, or maybe some superentity that controls us: once we reach a certain level, fertility drops off, voluntarily or involuntarily; it just goes almost to zero. Yes, the data point is correct. Our World in Data has very good charts and graphs about all of this. And do you know the answer? I do, or at least I think I know the answer. Largely, it's about choice. In much less developed countries, women generally have fewer rights; birth control and other kinds of fertility controls are scarcer or harder to get; and the age of marriage and the age of first childbirth are much lower as well. Basically, less agency is being exercised, especially by women, over how many children they have. And in the absence of agency, you basically get maximum fertility: all sex is unprotected, and you have babies at the maximum possible rate.
And then you end up with a dozen or more per couple. In developed countries, the reason, to first order, that we don't have all of those babies is that people choose not to. And I raise this because it absolutely flies in the face of the sort of growth-oriented or Darwinian thesis you're propounding, that the point of life is to maximize numbers, maximize volume. The moment we are able to choose, that is not in fact what we choose. We choose something else. Yeah, I don't know, because I don't know if we can really choose this; maybe it's something given to us. Look, we doubled the population from, say, four billion to nine billion. We can say, oh, that wasn't voluntary, right? But then, who wins these struggles between groups of humans? Typically the one that's more productive. And who's more productive? The one that's more innovative; you cannot have long-term productivity growth without innovation. So that goes back to this Darwinian argument: there is something in us such that if we don't optimize for survival, we don't survive. You could say, well, maybe that's okay, then you just die, but we've managed to stick around for so long. Well, why is it that in advanced countries we make accommodations for people with disabilities and so on? The argument that you're making seems rather close to eugenic kinds of arguments as well. And I'm wondering why, on that view, we would bother to keep alive people who are no longer of breeding age, or who have genetic problems, which many of us do, or who are disabled.
Breeding, what's the point? Breeding isn't the only thing that helps you in a Darwinian sense, right? That's obviously part of it on a biological level. But there are superpowers in people's brains. We know this from lots of physicists especially, a group that seems to include more people with disabilities than the general population, people who have extreme gifts to give humanity, and you enable them by making it possible for them to contribute to society. I don't think that most disabled people are Stephen Hawking. No, probably not. But you never know. And anything we call a disability might be a great ability in another context that we don't see, right? So I feel like you're trying to look at everything through a lens of utility, and that really is what I'm pushing back against. I think this framework of utility is very limiting, and when we start to really pursue it, it causes us to contort ourselves in all sorts of rather odd ways. Whether we are choosing this individually or choosing it as a society, those are interesting questions, and I don't think there's a simple answer to where agency lies in any of this. But what does seem pretty clear to me from looking at nature as a whole, not just humans' role in it, but all kinds of animals and plants and so on, is that this idea that everything is just in competition with everything else, that everything is trying to survive at all costs, and that if everything else dies, that's favorable to it because it creates more space in the ecosystem or whatever: I just think that's not how it works.
I feel like there are economic arguments against this, ecological arguments, mathematical arguments, and empirical arguments. Even Darwin realized it. I think this highly utilitarian, optimization-based approach came more from Spencer than it did from Darwin. And in fact, some of the Russian thinkers, I'm thinking of Peter Kropotkin, wrote about a more cooperation-minded view of evolution. So what does Darwin look like without Malthus, without Spencer? The answer is not that there is no such thing as competition. Of course there is competition, but I think what we fail to notice is that competition and cooperation are very, very close, and in some sense almost indistinguishable, when you look at them from a mathematical point of view. And the emergence of complexity comes from that dance. I think we absolutely agree on this. I had a similar debate with Simon Anholt that I really, really enjoyed. We talked about the nation state, and that's his level of expertise, where he really goes deep into the data. And he says at some point these two things look the same. And I agree with that. Yeah. Well, the reason this is relevant to the AI question is that I think Nick Bostrom's or Sam Harris's point of view, that we're making these AIs, they're the next thing, and they're going to come and kill us, is basically applying a sort of hierarchical chimpanzee thinking to a situation that has long, long departed that train station. We've been hybridized with machines for a long time.
I mean, the fact that you and I have less fur on our bodies than the other great apes has a lot to do with our clothes, which in turn has a lot to do with our machines. Fire, of course, has reshaped the insides of our bodies profoundly and shortened our gut. All of those technologies, the technologies of language and of culture, we're already so imbricated in all of that. Your ability to survive as an individual out there in the jungle is much weaker than that of any other great ape. What makes it all work is your enmeshment with technology and the way we all work together. And I think we fully agree on this; this is the Homo sapiens against Neanderthals debate on a different level, at least as we think about it. But what I wanted to get at is this fear that people have, which is real, right? Irrespective of whether you or I share it, there is an instinctive fear of something else that's more intelligent than us, where we don't know what's happening. You can say it's aliens; it could be machines. Yeah. And we have an earlier episode with a colleague of yours at Google who talks about the singularity from a very different point of view, the singularity as something that even he cannot predict. And he's probably one of the best futurists we have around; he has made very good predictions in the past. He basically says that in 2038 and beyond there is something that could give us this kind of machine, and it makes people scared. People have always been afraid of what's going to happen. Well, what do I think is going to happen? The singularity, something so intelligent that we might look like ants next to it. I think that in some sense we have already crossed that threshold.
And what I mean by that is this: if you ask what is running the economy, for instance, I would argue that it's something emergent and intelligent that is larger than any one of us. Take the idea that a handful of people are running things. This is where conspiracy theories come from, right? Antisemites believe that a handful of Jews are running things, or that Soros or whoever is in control. I think those paranoid fantasies are similar to the AI paranoid fantasies in the sense that they imagine there is a boogeyman, when in fact what there already is is a kind of distributed intelligence so much greater than any one of us can understand. We are ants. So we don't have a boogeyman we can look up at and say, this is a danger to us. We feel like we are a danger to ourselves, but that's it; we seem to have that image. Imagine one of your cells imagining that a supercell is going to come along that is so much more powerful than it. It would be missing the point entirely, right? The intelligence of your body, and of your brain, is much greater than the cell could possibly conceive. And I think that's the world we're already in. Companies are intelligences, I believe. Nation states are intelligences. Even identity groups, and this is one of the reasons I think the politics of identity has become such a big deal, are collective intelligences. Those already exist, in many ways are much bigger than can be understood by a single person, and they have will, right? When we talk about one nation waging war against another, or a company acting in a certain way, those are real statements, right?
These are actors in a kind of actor-network-theory sense. And of course they're much more intelligent than individual people, although sometimes they also seem to behave very stupidly, right? The two things can be true simultaneously. They're alien, they're bigger, but they're also enmeshed with us; we're part of them and they're part of us; it's not really separable. When you think of that difficult question, when we have these intelligent actors, and you say they're already here, and this is just another level, another intelligent actor, does that mean we shouldn't worry because we'll find some way to work with them? I'm not saying we shouldn't worry, and neither am I saying we have to find some way to work with them. I'm saying that when things develop at a reasonable rate, a lot of stuff happens. And when I say a lot of stuff happens, I don't mean everything goes well, right? Historical development has always, well, we talked about inequality and injustice a little bit earlier. I didn't want to misquote you. What I wanted to get at is this illusion of free will. Is that the other side of that coin? We have this idea that we have free will. But if I understood you right, and correct me again if that's wrong, the idea of free will is already an illusion; it always has been. So machines just add another layer to this illusion. Yes, I think that's correct. Okay. And I would also say that the idea that machines are an alien coming to do battle with us just doesn't make any sense. That's not really how it looks on the ground. It's very different.
I sometimes think, and I haven't fully made up my mind, I'm not smart enough for this, but I feel like this free will of ours is an illusion that helped us survive. Again, that's my argument; you might not join me in it. It's a Darwinian survival argument. I feel we adopted this idea of free will because it gives us the illusion we need to get out of bed in the morning. It gives us the illusion we need to fight against odds that are terrible. From a rational perspective, the best thing would have been to stay in the cave, never leave it, and die quickly, a less painful death, instead of battling with the animals and trying to survive. It's this emotionally induced, nonsensical view of good odds. We call it free will, but the odds are terrible against us. And here's the weird part: we made it happen, right? It's a self-fulfilling prophecy, at least up to our generation; that might not last forever. It worked. I have a different view of free will. It's based partly on the theories of a psychologist at Princeton University whose name I'm actually blanking on at the moment. But I can find it quickly. Hang on, I'll find it in a moment. Yeah, no problem. Let's see. Michael Graziano. So yeah, I'm a fan of Michael Graziano's views on this. His perspective is that consciousness is really about having a model, and originally it's a social model. I touched earlier on this question of what it means to make up a story about why you behave in a certain way. We have social models of others for very important reasons. If you're a caveman with an injured knee, and there's another caveman coming at you and looking at your injured knee, you really want to understand what's going on in his head; that might really matter for your survival, right?
You're developing a theory of mind about him. Now, theory of mind, by the way, is not only important in a predator-prey relationship, from the point of view of both the predator and the prey; it's also important for social hierarchies, for mating, and for every other kind of interaction. Theory of mind is always like a superpower. But the thing about theory of mind is that your model is never complete. You're never able to simulate in your head what every neuron in my head is doing. So it's always going to be a simplification, and that means there will be a gap. I'm sorry, I have to move to plug in. There will always be a gap between the simplification and the reality. And you have a lot vested in making that gap count, because the moment you can be predicted perfectly, you're in trouble. If you can be predicted perfectly by your lover, you're no longer appealing. If you can be predicted perfectly by your predator, you will be caught. If you can be predicted perfectly by your prey, you won't eat, right? So we have a lot vested in keeping a kind of inflated area, keeping ourselves on a bit of a random cusp, having that liberty, right? That's where our freedom comes from. So if it started socially, then it's kind of obvious that you will apply it to yourself as well. You'll develop a theory of your own mind too. Torsten will have ideas about what Torsten will do in the future, how he would react to this situation or that situation. So I think, as does Michael Graziano, for instance, that self-consciousness is actually a side effect of social consciousness.
And that's a bit of randomness, like a mutation, right? Just an abstract randomness that we put into our personal forecast. That's right. And we can do it through a combination of techniques: through keeping ourselves close to the cusps, the sort of saddle points of dynamical systems where you could go either way, not committing too early. We can do it through actual random variables. We can also do it through the use of memory. This is why, when you write, and anybody who has ever written any kind of extended thing knows this, you always avoid using the same word twice too close together, right? You want to avoid predictability in every interaction. So I think that's what free will is. And I think it applies in that sense across species and across everything. Yeah, I love that; I think it's really well thought out. One thing that plays into this, and I've been thinking about it: if abstract thinking is part of free will, call it that, it's also a kind of simulating of our actions. Instead of actually killing the animal, we can just think about killing the animal and play it through, so we don't die in nine out of ten instances; we only act once we've found an action plan that works, and we can stretch and compress the time horizon. Absolutely, yes, simulation. I agree with you; abstract thinking and simulation are both really important. Will machines, that's the question, will they help us simulate better? I think the answer is 99% yes. But will they also run their own simulations? Well, that depends on whether they're in social interaction with us.
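The claim that unpredictability pays off, that "the moment you can be predicted perfectly, you're in trouble," can be sketched with a toy matching-pennies simulation (an editorial illustration, not anything from the conversation): a player with a fixed pattern is quickly exploited by a simple pattern predictor, while a player who draws on an actual random variable holds the predictor to chance.

```python
import random

def exploit_rate(strategy, rounds=2000, seed=0):
    """Play matching pennies against a first-order pattern predictor.

    The predictor tracks which move tends to follow the player's previous
    move and guesses accordingly; it scores when its guess matches the
    player's actual move. A score near 0.5 means the player stayed
    unpredictable; a score near 1.0 means the player was fully exploited.
    """
    rng = random.Random(seed)
    # follow[prev][move] = how often `move` has followed `prev` so far
    follow = {0: {0: 0, 1: 0}, 1: {0: 0, 1: 0}}
    prev = 0
    predictor_wins = 0
    for t in range(rounds):
        table = follow[prev]
        guess = 0 if table[0] >= table[1] else 1
        move = strategy(t, rng)
        if guess == move:
            predictor_wins += 1
        follow[prev][move] += 1
        prev = move
    return predictor_wins / rounds

predictable = lambda t, rng: t % 2             # alternates 0, 1, 0, 1, ...
randomized = lambda t, rng: rng.randint(0, 1)  # keeps a random gap

print(exploit_rate(predictable))  # close to 1.0: the pattern is fully exploited
print(exploit_rate(randomized))   # close to 0.5: the predictor never gains an edge
```

The "memory" trick mentioned above (never repeating a word too soon) fits the same frame: it is a deterministic way of flattening the statistics an observer could otherwise learn.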
So what makes these language-model AIs so interesting is that we're starting to train AIs that are explicitly designed to be in social interaction with us. And I think that's why we start to have these questions about whether they have any will, whether they have any consciousness, why we begin to empathize with them. Because in a way, this whole question of whether there's anybody home is really just a question of empathy: empathy with others, empathy with ourselves. So can a machine be trained to have empathy? Yes, I think so. And I think that in some sense the way you get there is by starting to develop machines that are expressly designed to interact with us socially, as opposed to just performing asocial tasks. That's interesting. So you're saying empathy is a precursor to, or a core building block of, consciousness? Yes, I think so, in the sense that I think consciousness is effectively empathy with yourself, or modeling yourself. I've got to think about this; I've never heard that. Empathy is, sorry, you said what? Well, I think the boundary between empathy and social modeling or consciousness is a little bit fuzzy. By empathy we generally mean not just that you have a theory of mind, that you can model an other, but also that you feel their pain, that you have a vested interest in avoiding pain for them and in things being good for them. That's a social instinct that often goes along with having social interactions, but of course not always. In a predator-prey relationship, you have a theory of mind about your prey, but you're not going to feel its pain, right? You dissociate those two things. So it's really about what works for the system as a whole. Yeah. That's really deep. I've got to think about that; I don't know how to respond.
Let's go to some quick questions and change the topic a little bit. Maybe we can return to it once I have some ideas on that. What do you think of Apple Maps? Well, I haven't done any real comparison of the remaining map platforms in a few years, so my opinions are going to be way out of date. I use Google Maps; I still find it better, at least the last time I tried. But I know Apple has made some pretty big investments in it, and I know they're not nearly as bad as they were when they shipped. So I think they've had an impressive go at it; I don't know where they are at this point. That's very nice of you to say. What was the best part of being on a kibbutz, according to your parents? Well, I don't know. We haven't talked very much about the kibbutz where they met or what that was like. I think they were both young and idealistic at the time. My sense is that they liked the lifestyle for that time in their lives and for that period. Of course, a lot of the more hardcore beliefs of the kibbutzim, the collective child rearing and so on; we talked earlier about the failures of communism, and I think some of those failures also apply to the more radical social theories of the kibbutz, which just have not withstood the test of time. I have two quotes for you. You probably know who it is, maybe already from the first one, but maybe we need both. One is, "Our inventions mirror our secret wishes." The second one is, "Travel can be one of the most rewarding forms of introspection." I like them both a lot. Who is the author? No, who is it? Maybe it becomes clearer when we talk about your favorite Greek island. Oh, what is my favorite Greek island? Is it Corfu? I would say Corfu, yes. Is that correct? Yeah, you see. Well, I really have kind of a fantasy of Corfu. The reality is that the Greek island I've spent real time on is Crete, which I love.
No, I have a fantasy of spending extended time on one of the smaller Greek islands and spending some time just writing and thinking there. But to be honest, that's not an opportunity I've really had yet in life. Yeah, in one of your prior appearances you mentioned Lawrence Durrell as one of your inspirations, so those are both Durrell quotes, I take it. Yeah, they are. What is your favorite espresso origin region? Oh, my favorite is probably Café Vivace's Vita blend, which actually comes from a number of different origins, so I don't usually drink single-origin espressos, I guess, is the answer. I like coffee shops that are serious about their sourcing and that blend the right things together. I like a fair amount of robusta in it, because with a tiny bit of milk, to make a macchiato or a cortado, the robusta is important to the way it blends. Are you traveling? I know you're into coffee; that's what I assumed. Have you been to some of the origin regions, say Panama or Ethiopia or East Africa? No, sadly I haven't. That's another thing to do in the future. I'm into coffee the way any Seattleite is, not in a serious way where I've actually done the estate tours and things like that. What's your favorite piece of art that was drawn by a machine, produced by a machine? Oh, that's a hard one. Well, I really liked some of Memo Akten's work from a few years ago. I was also very fond of an artist who did very early work in this vein; let's see, what was his name? Harold Cohen. Harold Cohen's computer art was very beautiful, and he was a really early pioneer; he died in 2016.
So he was an early proponent of this kind of work and did some beautiful pieces, some of them a little reminiscent of Hockney's kind of iPad art. Hockney didn't use algorithms; he did it with his fingers, but he really mastered iPad finger painting as a medium and did some very beautiful work quite recently, I think around 2013 or 2014. And I see Harold Cohen as being a little bit the AI version of that, though it's very old school. So was it generated by an AI that he trained? How did he interact with it? Well, it was very old-school AI. This was before models were trained; it was blackboard systems and more classical AI techniques. Yeah. If we broaden the question a little bit, my favorite AI art, I suppose, is musical: the music of David Cope. Let's see, let me try to find my favorite. Yes, it's by Emily Howell, "From Darkness, Light." This is David Cope, who is getting on in years now, but was, I think, the most brilliant computer-assisted composer. He used natural-language-type techniques, and he started doing this kind of work in the 1970s. Emily Howell is the name of his AI partner, and "From Darkness, Light" is a piece for two pianos that I think is absolutely stunning. It's quite recent as well, just a few years old; it looks like 2010, actually, now that I'm looking it up. Absolutely stunning.
From the outside, spending just a little more time on it, it seems like art is absolutely ripe for AI disruption. There seems to be an amazing number of samples, a lot of data. It's maybe hard to validate things, but we can use the data, we can train models, and we don't necessarily care what really goes on in an artist's mind; we care about the end result. So it looks like AI is going to produce 99% of art in the next ten years. Would you agree? No. I would come back once again to the question of what good art is and what the function of art is. If we could measure the quality of a piece of art as a scalar, supply a bunch of examples, and then just have a model produce more, then I would say the answer to your question is yes. But David Cope's work is, I believe, a wonderful illustration of why that's not really the case. Cope used traditional techniques, so he didn't rely on massive training data sets; it was more old-school NLP. But he began producing works in the style of Bach, for example, or other classical composers, that were really gorgeous and completely compelling, and he ran into a lot of resistance from the composer community. Many of them thought he was bullshitting, that he was actually composing these things himself and it wasn't really done by a computer. So he kept upping the ante, and at some point he put a zip file on his website with 5,000 cantatas in the style of Bach, just to prove that he couldn't have composed that many himself in a whole lifetime. I heard about that. Yeah.
And they're largely pretty decent. I think that with a little more work we could easily make an AI, with today's technology, no problem, that could generate thousands of cantatas which, by any objective measure, judged by somebody who does not know all of the Bach cantatas, would pass the Turing test, or even be more beautiful than the ones Bach made. But I don't think anybody would care. And that seems to be the case with a lot of Cope's work as well. I've actually bought all of his CDs. You can buy them on Amazon; they're kind of print-on-demand. And I have a feeling I'm the only person who has bought some of those CDs. I have a whole shelf of them. So what's wrong with art, then? Perhaps, I mean, it's an interesting question. Unsupervised learning should just go crazy with this stuff, because we have this huge data set of art already produced, and then you look at Spotify, where there's a perfect list of what's popular, say the first 2,000 or whatever ranking you want to use, and you just make more of it. That seems so ripe for machine learning, but it hasn't happened. So seemingly either it doesn't work, as you say, because there's more to it that we just don't know consciously, or it's too hard to pull off, which I don't think. Right, I don't think it's too hard to pull off. I think art is just not that reducible to something that is good or bad within its own bounds. What you say about it being just an output, that it doesn't matter what was in the artist's head, I'm just not sure I buy that. I think we do care.
And I think it's about social function; it's not just about beauty. Beauty has in some sense already been democratized by the amazing cameras we now have in all of our iPhones and such. Why would we still care about the photography of the great photographers of the past, or even about great photographers, period, when everybody can make professional-looking photos with their iPhone? But I'll tell you, I look at art, I go to a museum, and maybe I can infer what the artist thought, but most of the time I have no clue, especially with more modern art, especially in the 20th century; I have no idea. And maybe that's the idea: I infer and make my own model of it. There's a little description next to the work that nobody ever reads. So 99% of the people who interact with art don't know. Maybe they care, but they don't know. Yeah, look, I don't know, Torsten. I think this is a little bit of a mystery. But we have to ask ourselves what the purpose of art is, what its social function is. I think the answer is complex, and I'm not sure anybody has a really great answer. But the idea that it's just something that has value in some kind of capitalist sense, that can be measured objectively, so that we can just make an AI that will crank it out in infinite amounts; we've already seen the proof that there's something else going on. If you put it that way, it sounds terrible. Yeah, exactly. I'm very glad that the fact that we now have AI that can totally do that kind of stuff has not actually broken what art is.
How impatient are you with technology? You mentioned earlier that you were sitting out there waiting for AI to take off quite some time ago. When you scale your own emotional involvement, how impatient are you? And how do you think that will change? Will there be quite a speed-up? So I am not impatient with AI at the moment. I think it's making extraordinary progress, at a clip that I almost would not want increased beyond what it is now. It's already coming at a rate that is, frankly, straining our ability to think through all of the important questions as it comes. I do get impatient with some of the projects I'm really excited about on my own team; of course, we all want that stuff to go as fast as possible. But the rate of progress in AI generally, I don't think, is too slow; if anything, the opposite right now. One place where I do get really impatient is space exploration. I strongly disagree with the ecological arguments that we should not be interested in space, because they rest on a false choice, that we have to be focused on Earth and Earth's problems instead. I absolutely think we have to be focused on Earth and Earth's problems. But I also think that if we had gotten further with space exploration, if we had continued, we would actually be in better shape with respect to a bunch of Earth's problems. It seems sad to me that the decline and fall of the Soviet Union was in many ways the reason the space programs of the two great superpowers stopped, and sad that it took a cold war to make them happen in the first place.
And it's even sadder that we have not kept our foot on the gas there. So that's my big disappointment with technology. If we were to go to Mars, how would you feel about terraforming the planet, basically making big changes to the atmosphere to make it more livable for us? Isn't that climate change, and isn't that a bad idea? Well, this is something that, as you probably know, Kim Stanley Robinson wrote about in detail in the Mars trilogy. He talks about that sort of debate between the Reds and the Greens, and there might have been some other color involved as well, but that political debate, essentially. I think that we are life. I don't see us humans as being separate from other kinds of life, which has always been invasive in some sense. I don't know what the status is of microorganisms on Mars; that would probably matter to my judgment. If Mars is in fact sterile, then I have trouble imagining strong ethical arguments against the terraformation of Mars, and I think it would be a wonderful enterprise, actually, in a lot of ways. It would teach us a lot about Earth and about how to do better stewardship here. Yeah. If there is a local ecology, even a very simple one, then I think it's a much more fraught question. Maybe that will be a real question on Europa, if not on Mars; it seems entirely possible that there's some non-trivial ecology under the ice there. That will be a real test for us in some way. Would you want to be a test pilot for the first flights to Mars or Europa? I mean, it's always been a big fantasy of mine, of course. Yeah. Okay, I didn't expect that. Interesting.
So is it the flying experience, or being the pioneer out there in space, all by yourself? What draws you to it? I've just always been excited about the idea of expanding the human frontier in some way. That's what motivates all of my work. And I'm very mindful that it's not an ego trip, and neither is it just about more, about growth. I think about it in terms of what the human project is and how I can do something meaningful in the context of the human project. I do see space as fundamental to that, sooner or later. Again, I love Kim Stanley Robinson's books, because I think he makes that picture of what space means in the context of the human project very clear in the Mars trilogy, as well as in 2312, which is one of my favorite books of all time. I see, so you don't mind the risks? That would be what draws most people away from it. It would be pretty risky, and you might have no way to return. One thing is being a scientist; the other is being a little crazy. Yeah. I'm probably more risk-averse than I was when I was younger, and that would be hard, of course. There are real costs; there are people I love, there are things, well, what about my espresso, and so on? Exactly. So I'm certainly not saying it would be an easy decision, but it's one I would be very, very tempted by if, magically, I were to have it. I think one would have to be a very different person from the one I am not to be strongly tempted by such an offer. Going back to AI, this is probably my last question: what can AI learn from religion?
Is there a religious model that we can build into an AI? Is that something we can even model? And on both sides, what good could it learn from religion, and what harm? Well, I have views on religion that are probably fairly unpopular, so I have to admit you're making me a little nervous about airing my opinions on this. I will once again reiterate that these are entirely my own opinions. Well, let's look at both sides. On the one hand, religion has produced extraordinary artworks; all of the religious traditions have produced extraordinary cultural artifacts of all kinds. We were talking about Bach cantatas earlier, but we could equally well look at Tibetan monasteries or myriad other examples. And if we look at the attempts to eradicate religion, Soviet-style, or in the Cultural Revolution, or what have you, those also were evil and autocratic approaches. So I would be very strongly opposed to any attempt to discriminate against or eradicate religion, or anything of that sort. However, I also think that religion is almost a virus: it's a power structure that is self-propagating, a hitchhiker on the mechanisms that give us cultural evolution. Our ability to learn from and have faith in the teachings of our ancestors, to carry them forward and evolve them, is what makes us human; it's very, very fundamental. But I think those same impulses are the ones that allow the power structures of religion to propagate.
Again, it has probably served positive functions beyond just cultural artifacts. In the Iron Age, there were certain moral revolutions that religion empowered. But at the same time, it's a power structure, and a rigid one; it has also resulted in enormous evils and abuses of power. So, as you might imagine from what I just said, I'm not religious. Yes. And I am not thrilled about the idea of AI, quote unquote, learning from religion, in the sense that I think what that really looks like is an exercise in how to do manipulation, or how to propagate power structures — which strikes me as a weird game to play with a technical system. An analog to that might be the Scientologists, who played with integrating technical systems with religion in order to reinforce it, to, I think, very problematic effect. Yeah. I know where you're coming from, and in my mind this is a valid critique, mostly of religious institutions — you probably take it further. Take the religion that was lived and built over 2,000 years in the Catholic Church: there's a lot in it that has little to do with the original texts — it doesn't even reference the Old Testament much, it's New Testament — and that is more about power than thinking, in my mind. But obviously this is a very fine line to walk. And I am a great admirer of the new Pope. I think he has used the power of the institution in ways that advance things in very useful directions. I really liked his encyclical on the environment.
I thought that was fantastic, and also a very good use of the power that he's been invested with. So I certainly wouldn't want to be cartooned as saying "religion is evil," or anything so simple as that. Yeah. No, it's a complicated topic. What I'm trying to get at — and I think this is a very similar topic, which is why I wanted to raise it — is an assumption. You might completely argue against it, but let's just run with it for a second. Say that religion codifies behavior that is good for the individual but also good for society, kind of like Adam Smith, except that it needs some more abstract rules. The society feels it needs to error-correct certain things, and that's what religion is for: it error-corrects at the level of a civilization or a society, for things where individual behavior — properly incentivized by our emotions or by social norms — isn't good enough. So it gives you this big-picture error correction. Maybe. I mean, you're using a teleological argument: because it exists, it must be solving for something. And that may be true. But — let's just assume it. I probably won't convince you that it's my opinion, but let's just assume it for a second. Okay. So if we run with this, what happens if we say: the problem is that these error corrections are really difficult — very difficult to break down, and when to apply them and when not is a complicated system. And think about 3,000 years ago: people didn't know how to read and write; there was only very basic education.
So basically you could only learn from your immediate neighbors — there was no abstract knowledge to download from anywhere, because there was literally nowhere to download it from. So it had to be codified into more abstract rules, into narratives: religion came up with something people would find interesting enough to transmit orally, because otherwise it would have died out — there was no way to write it down. So let's assume for a moment that's what it does. Nobody knows how this actually works, or why these rules are what they are. The creators of these rules might be so far in the past — 5,000 years ago, say — that you can't error-correct; you can't talk to them. There's no way to ask, "Why did you institute this rule?" And this resembles AI to me quite a bit. We talked about explainability earlier: it comes up with these rules, but we can't just error-check them and ask, "Why did you come up with this rule? Why did you say this is a good rule?" But religion kind of solved this, right? It went with the insight — especially the New Testament insight — of "Just believe in me, and you'll be fine." Just try it out, and if it doesn't work for you, drop it. Maybe AI has the exact same problem, and maybe the explainability doesn't need to be there — as long as it works, to go back to the utility function. What you're saying reminds me a little of a theory that I really like. I'm trying to remember where I read this — I feel like it was probably in Joseph Henrich's book, The Secret of Our Success, which is about… I love how you remember the sources. I often don't remember the sources. I try, but — No, you're great.
But what Henrich claimed — and I do need to go and look up the primary source for this — was about divination, which is a feature in many hunter-gatherer religions. Whether the divination comes from entrails, or from the flights of flocks of birds, or from other phenomena like that, it's something that has arisen independently in many, many indigenous societies, and it's used in some very typical ways: for directing where you go to hunt, or where and when you plant. The thesis is that what these really are is random number generators — the entrails, the flock of birds, whatever. The places where they're adaptive, where they're useful, are the places where picking a random number is better than using your rational thinking. For example, if you're hunting, then absent divination you will go back to the same place where you shot the deer last time; you'll try to reinforce, to build on prior success. But the reality is that if you go back to the same spot, your odds will be worse than random, because the animals will be avoiding where you were. Basically, anywhere that the optimal — or a more optimal — algorithm is random rather than reasoned or principled, divination will win. The funny thing, though, is that divination only works if you believe in it. If you learn from your grandfather that you have to cut the entrails a certain way, and if they fall this way, you go that way — ignore your rational brain, do what the entrails say — then it's only the ones who believe it who will outcompete the ones who are too clever for their own good. I think that's, in a way, a simple version of what you're talking about.
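Henrich's divination argument can be sketched as a toy simulation. Everything here — the patch count, the 20% avoidance drop, the 0.1-per-round recovery — is an invented assumption purely for illustration; the point is only that a random choice of hunting ground can beat the "return to last success" strategy once prey avoid recently hunted ground:

```python
import random

# Toy foraging model (all numbers are invented assumptions): each patch
# has a prey density; hunting a patch yields its current density, after
# which prey avoid it (density drops to 20%) and then slowly drift back
# (+0.1 per round, capped at 1.0).

def simulate(strategy, n_patches=5, rounds=200, seed=0):
    rng = random.Random(seed)
    density = [1.0] * n_patches
    last_success = 0
    total = 0.0
    for _ in range(rounds):
        if strategy == "greedy":
            patch = last_success               # return to the last good spot
        else:
            patch = rng.randrange(n_patches)   # "divination": pick at random
        total += density[patch]
        last_success = patch
        density[patch] *= 0.2                  # prey avoid the hunted patch
        density = [min(1.0, d + 0.1) for d in density]  # gradual recovery
    return total

print(simulate("greedy"), simulate("random"))
```

Under these assumptions the greedy hunter quickly exhausts one patch (its density settles near a low fixed point), while the random hunter keeps landing on recovered patches — the "believer" in the entrails out-harvests the rational reinforcer.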
I feel like if we're playing games with AIs in which they try to come up with stories that work on our credulousness — that create myths because the myths serve some kind of utility like that — that's a dangerous place to go. I would rather we actually understand when random algorithms are the right thing, and then know how to optimize for that. Or maybe there's an even-better-than-random algorithm that would do the trick, but you can only get to it when you actually understand the principles underneath. Religion tends to be about replacing systematic thinking with some article of faith, and that can be better than not having the article of faith — but better still, I think, is being able to see all the way through it, if that makes sense. Yeah, that's a great way to look at it. I like the idea of the random number generator — but the better way to look at it, would that be math or physics? Yeah, or modeling. This is where we get to the explainability problem. If you have a neural net that can model a function really, really well, for some definition of good modeling, then it may not be reducible much further. In some sense it may fail the explainability test, but you still have a model: you know what its inputs and outputs are, and you know in what sense it's optimal, in what sense it's a good model. I think in many cases that will have to be enough. Take the cancer example. If you don't understand anything about why certain cancer diagnoses are happening, that may be a problem, because the model may be doing something bogus — like reading the text on the edge of the x-ray and making some inference based on your neighborhood, or doing something else that you don't get.
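Blaise's point — that understanding the principle lets you beat the random heuristic — can be illustrated with the same kind of toy model (again, every number is an invented assumption): if you know that prey avoid a hunted patch and drift back over a few rounds, then deliberately rotating to the least recently hunted patch beats picking one at random, because random choice sometimes revisits too soon and sometimes lets a fully recovered patch sit idle:

```python
import random

def total_yield(strategy, n_patches=5, rounds=500, seed=1):
    """Toy model (invented assumptions): a patch's prey density collapses
    to 20% when hunted, then recovers by 0.25 per round, capped at 1.0."""
    rng = random.Random(seed)
    density = [1.0] * n_patches
    last_hunted = [-1] * n_patches
    total = 0.0
    for t in range(rounds):
        if strategy == "random":
            patch = rng.randrange(n_patches)   # the "divination" baseline
        else:
            # "principled": exploit the known avoidance/recovery dynamics
            # by rotating to the least recently hunted patch
            patch = min(range(n_patches), key=lambda i: last_hunted[i])
        total += density[patch]
        last_hunted[patch] = t
        density[patch] *= 0.2
        density = [min(1.0, d + 0.25) for d in density]
    return total

print(total_yield("principled"), total_yield("random"))
```

The rotation schedule is exactly the "see all the way through it" move: once the avoidance dynamics are understood, the random ritual is no longer the best available algorithm.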
But if, at some level, you're sure that it really is just looking at, say, the structure of some cells or tissues and making a judgment that is kind of ineffable — the sort of judgment a skilled expert can make, except the machine makes it even better — then I don't know what it means to explain it any more than that. It's just: okay, we've got a really good judgment. We understand well enough what goes in, what comes out, and approximately why, but further explanation is not necessarily in the offing. We're kind of at the limits of language. Yeah. That's going to be an interesting future, because we'll have to deal with a lot of things that look like magic to us. Let's see how we handle it — that's going to be a big social experiment. And we are already in it: when I look at some AIs these days, they look like magic. I can read about them and they become less like magic, but it still feels that way. We are totally in that social experiment now. We talked earlier about superintelligences already being here, Torsten. The stock market is an example of one. You said things about economists thinking all the information is out there, and so on. Maybe — although there are also all kinds of high-frequency trading bots manipulating the markets in ways we don't understand. How can you really explain why the stock price of a company rises or drops on a given day? It is beyond explanation already. We're already in the kind of universe we're talking about — and in a way that's consequential, right? Money is consequential. I feel like we're already there. Well, on that magic note, thanks for taking the time. That was fantastic — I learned so much, and you definitely broadened my mind. Thank you, Torsten, that was incredible. That's really kind; it was a great conversation. You broadened mine as well. Thank you for the opportunity. Absolutely.
I hope we get to do this again. Likewise. Thanks again, Blaise. Cheers.
