Michael Graziano (Can we decode consciousness soon?)

  • 00:01:38 What's the current state of consciousness research? Why is there a surprising amount of progress?
  • 00:09:10 Is the rise of Homo sapiens connected to changes in consciousness?
  • 00:11:43 How does the 'attention schema' that Michael puts forward work?
  • 00:15:22 How will machine intelligence adopt consciousness?
  • 00:21:28 What will it take to give self-driving cars consciousness?
  • 00:27:37 Are consciousness and free will an illusion? Does it matter for Prof. Graziano's research?
  • 00:40:41 Where do the models for our brain come from? How can we be born with them despite the rather small data size of DNA?
  • 00:45:20 When will brain uploading be possible? Will it be possible at all?
  • 00:53:03 What would happen to our consciousness when we multiply it?
  • 01:03:03 Is the perceived speed of time related to our consciousness? What is the neuroscience of time? How can we travel to deep space?
  • 01:08:58 Why are the structures of DNA and computer code so strikingly similar?
  • 01:11:11 Does the structure of our consciousness indicate that we are 'living in a simulation'? Is there a real reality?

You may watch this episode on YouTube - #66 Michael Graziano (Can we decode consciousness soon?).

Michael Graziano is a scientist and novelist, and is currently Professor of Psychology and Neuroscience at Princeton University. Michael is the author of several books including Rethinking Consciousness: A Scientific Theory of Subjective Experience and Consciousness and the Social Brain.


Transcript

Welcome to the Judgment Call Podcast, a podcast where I bring together some of the most curious minds on the planet. Risk takers, adventurers, travelers, investors, entrepreneurs and simply mindbogglers. To find all episodes of this show, simply go to Spotify, iTunes or YouTube or go to our website judgmentcallpodcast.com. If you like this show, please consider leaving a review on iTunes or subscribe to us on YouTube. This episode of the Judgment Call Podcast is sponsored by Mighty Travels Premium. Full disclosure, this is my business. What we do at Mighty Travels Premium is find the airfare deals that you really want. Thousands of subscribers have saved up to 95% on airfare. Those include $150 round trip tickets to Hawaii from many cities in the US, or $600 lie-flat tickets in business class from the US to Asia, or $100 business class lie-flat tickets from Africa round trip all the way to Asia. In case you didn't know, about half the world is open for business again and accepts travelers. Most of those countries are in South America, Africa and Eastern Europe. To try out Mighty Travels Premium, go to mightytravels.com slash MTP or, if that's too many levels for you, simply go to MTP, the number four and the letter U dot com to sign up for your 30 day free trial. So Michael, you are a professor of neuroscience at the Princeton Neuroscience Institute, which must be really exciting, and you are really putting your finger into the wound of one of those topics a lot of people think about a lot, but where I think there isn't as much progress: how we understand consciousness, how we define consciousness. It's something that's been vexing philosophers, and neuroscientists as a newer discipline, for a long, long time. How do you define consciousness, and what is your new approach to giving us a better idea of what it is? Sure. The first thing I would say is, I think there is a view, a mistaken view, that there is not much progress, and actually there is.
There's a lot of progress. I think the problem with the consciousness world, with studying consciousness, is that a lot of people think about it in a fundamentally magical way, a non scientific magical way, and they want to know what produces the magic essence. So for many people, the definition of consciousness is that non physical essence of experience. Why does anything feel like something? So I can think thoughts, a computer can construct propositions, but I have a feeling, a sensation, the computer presumably doesn't. I can see the world around me, a computer hooked to a camera can register and compute and process visual images, but I have an experience, a subjective experience. So whatever it is going on in here, whether it's simple sensory information or memory or deep thought or decision making, all of those things we know how to build artificially, but people have what many people think of as a magical essence hanging around it, the experience part. Where does the magic come from? And when you frame the question as what produces a non physical magic, there is no answer to it. And that's the mystery that has no solution. So what's changed is people are beginning to realize, wait a minute, we're really complicated machines, whether we like it or not. I mean, we're way more complicated than any modern computer, but we're still computing devices in a sense; our brains process information. And so we are machines that have self descriptions and self models. And as part of that, we think we have magic essences inside of us. Now that's something we can explain. That's something entirely scientifically approachable. Why does this big complicated biological machine think it has a physically incoherent magical essence inside of it? And there's a lot of progress on that question. So it's a matter of a shift of perspective. The word consciousness, I think, alludes to why we have meaning in this world, right?
So there's just lots of different things bundled into consciousness. Why do we do things that we feel are important? Why do we have free will? Why do we assume someone else has consciousness, right? So we can't really see the consciousness in someone else. But when we are together with other people, we can kind of feel they have consciousness as well. We can't really see it in lots of animals and plants, right? So from a religious perspective, we describe it as a soul. So there's many different things that come into this word consciousness. And I think what you guys do, and I think this is really awesome, is you've taken it down to a level where it becomes a measurable word. It's something that we can reproduce. As you say, we are not that far away from reproducing it inside a machine that will maybe one day think it's conscious. But we might have trouble to see that in the first place, right? So you've touched on something really important that is often overlooked in consciousness research. And that is, in humans at least, consciousness is not just about your own private self. It's how you understand other people. And that's one of the most fundamental ways that we use this whole concept. I see consciousness in you and in the people around me. And that's how I can have any kind of empathy or any kind of social connection to other people. So consciousness is not just my experience. It's my ability to use that construct and attach it to all kinds of other things around me. So consciousness, at the root of social interaction, social cognition, that's a crucial part of the puzzle. But again, it comes down to: I'm this really complicated machine. How do I process you? Well, I can't generate a really accurate detailed simulation of your neurons and your brain and everything and compute everything about you. What I do instead is construct a simplified cartoonish model of you.
And in that model, you have this kind of magic essence inside of you, the consciousness essence. And that's my way to track you and understand you and make sense of you without having to do too much neural computing to understand you. So consciousness is basically a simplification. It's the simplified way of thinking about what another brain controlled agent is. And we use it on each other and we use it on ourselves. It's essentially a construct that helps us keep track of ourselves and other people. But you're absolutely right. The social component of it is really crucial. It's a crucial part of this larger puzzle. Yeah, Blaise mentioned that in the prior podcast episode. And he said, you know, without empathy, there is literally no consciousness. And he attributed this to you. And we didn't go into that at the time. But when I look back, we have this distinction between Homo sapiens, some ancestors of ours who suddenly got really social. And we don't really know why. We don't really know why it changed the world. But we have the Neanderthals, who apparently were more intelligent, who were probably better adapted to the way the world looked at the time. But they weren't as social. Do you think these things are connected? So we have consciousness that necessarily is kind of a predictive model of how other people are around us. And that allows us to specialize, because we know what other people are good at, so to speak. And we can live in bigger tribes and we can build civilization around us, because some precursors of Homo sapiens were around forever. But civilization seemed to just suddenly come up 15,000 years ago, 20,000 years ago, which is quite stunning. A lot of anthropologists are still surprised why this actually happened at that point. Yeah, first of all, I've spent a lot of time speculating about the evolution of this. And it is speculation. And nobody really knows. But for what it's worth, my speculations go like this.
These are much more gradual changes than people realize. And the ability to look at another creature and realize that it has a mind, to essentially attribute consciousness to it, like it is aware of the food that I also want. Or it is a predator and it is aware of me. And so I better watch out. I think those date way back. I think the precursors of this are present way back. You're looking at the origin of mammals. Mammals and birds probably share some of these features. And so the complexity grows enormously. And you get enormously complex social structures and social interactions in all kinds of animals, in elephants and primates, of course, and whales. It is true that humans went through some phase of massive social expansion. That's obviously true. And our social ability is bizarrely well developed. And how this came about, of course, is debated through evolution. But yes, we went through that phase. I think our consciousness model, or consciousness idea, that we attach onto other people and onto ourselves, that must have gone through some major upgrades relatively recently in our evolutionary history. But I would be darn surprised if Neanderthals and Denisovans and Homo erectus and so on and so on didn't have a pretty large dose of what we think of as consciousness. Yeah. In another way, you describe consciousness as an attention model, as a kind of a bubble around us that we constantly interact with on an unconscious level. And we know there's a lot of unconscious input that our whole body screens and reacts to all the time. It's the dinner party example, right? You suddenly hear your name and you realize, whoa, did I really listen to all these 100 conversations at the same time, massively parallel? But when your name is called out, you immediately look that way and you're like, well, how did that actually happen?
So you describe this consciousness as a bit of a dinner party system, where we attribute attention, and in that sense resources, in the most pressing or most intelligent way at that point in time. Is that still correct? Yeah, that's right. So there's lots of different ways one can get to this perspective. And one of them is, again, through social interaction, social thinking. So I look at you. And of course, right now, you're just pixels on a screen. But I look at you and I attribute to those pixels a mind and a consciousness. And what does that mean? I think you're conscious of me right now. I think you're probably not conscious of the wall behind you, except that I just said that and drew your attention to it. And so now you are kind of in your mind conscious of that wall behind you, but a moment ago, you weren't. And so when I attribute to you consciousness of something, what I'm really doing is modeling, building a simplified way for me to understand: you're paying attention to the something. And so this is consciousness as a way to model or understand attention. Attention, as you started to describe it, is just a data handling method that the brain uses that focuses more on one thing than another thing. We all need that, we all do that; the brain would be overwhelmed by information if it didn't have a way of focusing on the most pressing information of the moment. So when I look at you and I see you doing that, again, it does me no good to say to myself, oh, his brain's neurons are interacting in such a way that this information has risen up in signal strength and so on. No, I don't do that. What I do instead, with my social machinery, is generate this simpler, more intuitive representation. I look at you and I say, oh, he's conscious of that. He has a mind that's conscious of that. And we're doing the same thing with respect to ourselves as well. We understand ourselves through the same mechanism.
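The dinner-party mechanism described here, many competing input streams, limited processing, and a strong bias toward self-relevant signals, can be sketched in a few lines of Python. The salience scores and the bonus for hearing your own name are invented for illustration; they are not taken from Graziano's actual model.

```python
# Toy sketch of attention as a data-handling method: many streams compete,
# and full processing goes only to the highest-priority one.

def salience(stream: dict, own_name: str) -> float:
    """Score a stream: loudness plus a large bonus for self-relevant content."""
    score = stream["loudness"]
    if own_name in stream["content"]:
        score += 10.0  # hearing your own name wins, as in the dinner-party effect
    return score

def attend(streams: list, own_name: str) -> dict:
    """Select the single stream that receives the brain's limited resources."""
    return max(streams, key=lambda s: salience(s, own_name))

conversations = [
    {"content": "talk about the weather", "loudness": 0.9},
    {"content": "stock tips from a colleague", "loudness": 0.7},
    {"content": "did you hear what Michael did", "loudness": 0.2},
]
# Even the quietest conversation is selected once it mentions your name.
print(attend(conversations, "Michael")["content"])
```

The point of the sketch is only that selection is a cheap scoring function over a fire hose of input, not a full simulation of every stream.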
And so this is, to be conscious of X is a way to think about a brain attending to X. That's really the link between attention and consciousness. So again, the idea of consciousness, consciousness is the magical essence that we think we have, because it's an easy, quick way for us to understand the more complicated, deeper truth of attention. Yeah. A lot of people describe this way of preparing ourselves for the future as: we are just this bootloader for the machine intelligence that's coming. And what is obviously the vexing issue right now in machine learning: we have the ability to deal with a lot of big data and come up with a certain model; that takes a long time. But once we have the model, we can use it really quickly. And it kind of works 99% or 99.9%. There is always a couple of places where it doesn't work so well. And then once we do this, we probably need another layer, how we attribute resources, right? So that's the old issue that David Hume, I think, first came up with. We have all these options, we have these endless amounts of options, but where do we actually want to go? What do we want to do with our life? What are things that are not just short term limbic brain emotions, like hunger, or we are thirsty, or we need a mate? What do we want to do long term, and where do we want to be when we die, so to speak, as the most long term question? Is that something that we can build with this model of consciousness? Can we put it into a machine, so to speak? So this particular theory of consciousness, it's called the attention schema theory: we build a schematic model of attention, and this is what leads us both to be able to control our own attention and understand other people's attention, and attribute consciousness to others and to ourselves. So it's called the attention schema theory, and bit by bit the theory is collecting data and experiments and so on.
So I think you can build it, and indeed we're working on that. I think one can build this kind of mechanism. It is not a theory of how a brain can have very complex thoughts or make good decisions or have memories or feel emotions. That's other stuff that the brain does, that many, many smart people are working on, how to build artificial devices to do that kind of other computation. The attention schema theory addresses the question: why do we think we, why do we say we have a subjective experience of doing those things, and why is it useful? Why is that component useful? And so if we're right, building a machine that has this component, this extra consciousness component, is to build a machine that is way more capable at all those other things. If you can build it to do those other things, then it will do them way better with this mechanism, because this is the root mechanism of attention, of resource allocation. And so this is really a crucial mechanism. What parameters would it follow, right? So do we know, is it something that you found in your research: what parameters, what hierarchy, what priority we follow in our resource allocation? I assume it's been with humans for quite some time, and I heard you say there are signs for this kind of consciousness also in animals. Is that relatively easy to decode? Do we have data on it, and can we just plug it into a model and then a couple of months later we know the outcome? Well, most of the data comes from vision, where there's just a lot known about how the brain processes visual input. And so a lot of the work on consciousness is about visual consciousness. So the brain processes visual information constantly, vast amounts of visual information. Only a teeny tiny part of it we have a subjective experience of. Like you see a little bit of this and you see a little bit of that, and you don't realize that there's 10,000 times more visual information flowing in and being processed.
You're only aware, consciously aware, of a little bit here and a little bit there. The kinds of models we're building right now, and I'm guessing will be what people focus on for the next decade or so, are visual processing models. So we think it's plausible to start building machines that have this consciousness stuff with respect to visual experiences. Like a self driving car that goes through a day of data, so to speak, or the last five minutes, whatever the time frame is, and it would retroactively, so that's the other question, how real time that could be, it would find out, okay, those are the two situations where that could have led to an accident very quickly, for instance. That's how I consider my consciousness, right? I drive the whole day, I don't remember anything. But there's this one or two situations where I felt I was pretty close to an accident, and I remember these, they make it into my consciousness. Is that something that you can teach a car relatively soon? I think that in a sense cars must, self driving cars, I don't, well, I mean, it's proprietary, so you don't really know how they work, but they must have attention. They must have the ability to focus on something, because that camera or set of cameras is taking in such vast amounts of data that it's not computable. It has to have algorithms that say, focus your resources on here right now and on here right now. And when we drive, we do the same thing. That is, we don't take in everything. We see the sign and we see the person in the crosswalk, and so we're focusing our resources on this or on that. If we're right, if this theory is right, then we have a handle on a much better way to do that, to attend, to control attention. And so there are immediate definite potential benefits to things like self driving cars or other technology. One thing that vexes a lot of people in self driving car research is that basically you compress all that massive sensory input into relatively simple labels and instructions.
So stay in the lane, for instance, know where the lane markers are, keep enough distance. So they're all relatively simple, to be honest. But the biggest problem they have, and this is why it doesn't work in cities, is if you have so many close calls, like there's a pedestrian that's in the road, but this pedestrian might just cross the road. Like in San Francisco, this is a normal thing. You don't wait for any signals, you just go. But it might not be a danger, and human drivers notice. But people from outside of the city have trouble with this, and self driving cars can't compute it at all. They immediately hit the brakes and they don't move anymore. So depending on the environment, there's too many things that could be a close call that they pay attention to, and that stops the vehicle from doing anything useful. So you can see this actually in real life, that these self driving vehicles, in their demo version with the driver, they slam the brakes and then they're being released again by the driver, because it's actually dangerous. That's why they can't really deal with all these things that could be potentially very impactful. Like the AI would say, well, on average 99% of the time, this is a dangerous situation. But here in San Francisco, it's not, and then the model fails and they don't know what to do. So they would have to make a model for each block in the city just to get it right. So first thought is: self driving cars must have these, you know, models and attention and so on, otherwise they can't function. They're not as good as people at that, obviously. Here's a layer to the problem that nobody's thinking about yet. The self driving car must interact with other self driving cars. And so self driving car A has to make predictions about the behavior of self driving car B. Now, how does it do that? It has to be able to understand somehow, or build a model of, the little mind or brain of self driving car B.
It must be able to say to itself, in effect, this other car is attending to that pedestrian so much that it might miss me. And therefore I have to adjust my behavior on the assumption that that other car is focused on this thing instead of that thing. And that layer of processing doesn't exist yet in self driving cars. But that's really crucial. And that's where I think it starts getting interesting. Because in a sense, that's the beginning of self driving car A saying, that other car is, quote unquote, conscious. But I want to know what it's conscious of at the moment. So that determines my behavior with respect to it. So it's that second layer where things really get very rich and very, very interesting. And I, you know, yeah. So could we say consciousness is a way to outsource model building and knowledge generation for us as humans, right? So I think a lot of people say this is why humanity has seemingly taken off in technology, because we have learned to outsource knowledge generation, and especially the internet is beautiful for this, like YouTube, and we trust it enough to make it part of our own model. And that was what Mark Bridges told me last time. If you don't trust other people enough, and other solutions they have come up with, you run very quickly into this infinite regress; you can always ask why. And then you end up going back to what the Greeks struggled with, right? They never got any technology built because, seemingly, they never trusted anyone. They understood the model, they understood the philosophy of what consciousness is. But they never trusted anyone enough to build these layers of technology that we are accelerating right now, because we have finally built that trust, that we trust other people, like the other car, that it does something useful and we learn from it. Well, I think we do that with other people. Yeah. And we have to. That's evolution's solution to pro social behavior in people. Right.
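The "second layer" Graziano describes, car A keeping a simplified model of what car B is attending to and adjusting its own behavior accordingly, could be sketched roughly as follows. The class and field names here are hypothetical inventions for illustration; no real autonomous-driving stack is being described.

```python
# Hedged sketch of an attention schema applied to another agent: car A does not
# simulate car B's full perception, only a cartoonish "what is it attending to?"
from dataclasses import dataclass

@dataclass
class AgentModel:
    """Car A's simplified model of another agent's attention state."""
    agent_id: str
    attending_to: str  # the one object this model says the agent is focused on

def predict_notices_me(model: AgentModel, my_id: str) -> bool:
    """If the other car's attention is consumed elsewhere, assume it may miss me."""
    return model.attending_to == my_id

def plan(my_id: str, other: AgentModel) -> str:
    """Choose a behavior based on the model of the other agent's attention."""
    if predict_notices_me(other, my_id):
        return "proceed"  # the other car is attending to me; behave normally
    return "yield"        # its attention is on something else; be defensive

# Car B is modeled as focused on a pedestrian, so car A drives defensively.
car_b = AgentModel(agent_id="car_b", attending_to="pedestrian_17")
print(plan("car_a", car_b))
```

The design point mirrors the transcript: the model is deliberately crude, one attended object per agent, because a cheap caricature of the other mind is enough to change behavior.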
So one of the thought experiments I do is: imagine you walk into a grocery store, and it's filled with other people, and imagine that you have no concept that they actually have minds or consciousness or anything. They're just objects that are moving around and weirdly in your way. What would stop you from just mowing them down and getting to where you want to go? I mean, an agent without the ability to know what consciousness is, attributed to others and to oneself, and see the similarity, such an agent is both socially incompetent and also sociopathic. Right. And so we have a bunch of those still in our society. We do. But many people have pointed out, a lot of people say, right, yeah, well, people have pointed out, there's people who are conscious, and they know what consciousness is, but they're still sociopathic. I mean, my answer is, for goodness sake, 99.99% of the time, that's not what happens. Otherwise every time you go to the grocery store, you'd be in danger of getting killed, because you're an object in the way of some other agent. Right. We as a society, we're incredibly pro social as a species, because we understand this about each other, because we have this capacity. And so to your point, as machines get more human like, as we develop them, it's almost like a puppet, right: you look at a puppet, and you can develop a feeling of a rapport with the puppet, and you, you know, it's just a puppet, but your social machinery starts to generate a sense of a consciousness coming out of the puppet. And that's the moment when you really begin to engage at a social level with the puppet. And, you know, the interaction between people and machines will soon go that direction, if it has not already. Yeah. When we talk about free will and consciousness, and I'm going to just put them together for a moment, there's probably a lot of issues with this.
What I want to just put out there as a hypothesis is: is it just an illusion? Consciousness and free will, do they have to be real? And my assumption is they're not real, but it's helpful that we believe in them. We bought into this illusion, and that made us better at survival, made us better at living in a society that's more productive. Do you think they are an illusion? Well, there is a whole branch of the philosophy of mind, philosophy of consciousness, that says consciousness is an illusion. It's called illusionism. And I have an interesting relationship with the illusionists; I get into a very weird sort of argument with them, because I think I agree with what they're saying. But I can't stand the words that they use, because I don't think illusion is the right word for it. So I'm going to go way back, if that's okay, pedal back a bit, and use an analogy that I often use, which is white light, or white objects. We look at white objects, and our brain builds a model of what's out there. And the model is highly simplified and tells us that this thing is bright, but has zero color. It's like high in brightness and low in color. And that's our natural, evolutionarily built in model of what a white thing is. And our brain builds that model because it's simple, because it's easy to compute. It's a cartoonish model. It's a caricature. But the truth is, as we now know, everyone learns in school, white is not brightness with no color. It's the opposite. It's all possible colors in the visible spectrum. But we don't see it that way, because the visual system builds this cartoonish caricature of what's really out there, right? Now, here's a philosophical question. Is our perception of white an illusion? I am not sure I'm comfortable with that term, because illusion implies there's nothing out there, it's a mistake. I think the right term is caricature. It's not an illusion. It's a caricature.
There is something real out there. But the brain has built a simplified version. And our understanding of it is based on that simplified version. And so now we get to the consciousness question, and the people who say, well, consciousness is an illusion. Instantly, everyone who hears that says, oh, there's no such thing as consciousness. There's nothing out there. It doesn't exist. We just mistakenly think it does. And my point is, no, there is something there. There's a brain, and it's busy computing stuff, and it's focusing on this and on that, and it has attention, and it has all these marvelous properties to it. Consciousness is not an illusion. It's a caricature. It's the brain's super simplified, caricaturized way of understanding what it's doing. And so illusion, I'd say no, it's not an illusion. But what we think we have, the consciousness we think we have, is way simpler and way less intricate in detail, and physically intricate, than the actual truth, the actual thing that we have. So that's my roundabout answer to: is consciousness an illusion? I don't think so. I think there's something real there. We just have a simplified, caricaturish, intuitive understanding of what it is. Yeah, I'm fully with you when we think about consciousness as a pure resource allocation super CPU, so to speak, that kind of has the ability to shut off energy and processing power wherever we go in the brain. But I think a lot of people associate consciousness slightly more with free will. The idea is that we are not just a cog in the machine. We have this ability, obviously limited by lots of things outside our control, physics, tons of things that happen in society, in everyday life. But I'm the master of my fate. I can steer that ship, even if it's only so slightly. I'm in the end the one who makes that final call. And that's true, right? So we allocate the resources. We say, okay, I don't pay attention to the person who's ahead of me in traffic.
I know we will be fine. I'm just going to keep going at 50 miles an hour. I'm in the end the one who makes that call. I feel I could attribute it to, maybe people call it a higher calling, religious people call it maybe faith. Maybe it's a trust in the creator. So there is a little bit of that, because when we think of this final decision to allocate resources, it is the ultimate CEO decision. It's the ultimate president decision. If the president feels, and he's elected, and let's assume we're in a state of emergency, we want to go to war, we all have to go to war. There is nothing else anyone can do. This can change everything. So it is the CEO decision that we have, and we feel that's so intrinsic to us. But is that really true? Because we are so constrained with this decision. And maybe it's just better to believe that we have this free will than to actually use that power. And I think this is kind of what I'm getting at. This is what most people think of it. We just bought that illusion because it feels better, but 99.9% of people will never actually use it. They're basically part of that big herd, the big civilization. It's probably different when we were just a tribe of 100 people. Then maybe there was way more leeway. It could be. So in a sense, to think of ourselves as having free will, it reminds me a little bit of Newtonian physics. When you look at the solar system, Newton said, well, okay, the earth is a point with a mass attached to it. And the sun is a point with a mass. And if we do that, then we can understand the motion of these objects. I feel like free will, the free will in the consciousness, is the point mass inside of us that we use as a simplification for the reality. Reality is more complicated, but it is indeed very useful to think of ourselves in simplified terms as a conscious agent with the ability to make decisions. But I would say this: the link between decision making and consciousness, in my view, is exaggerated.
And here's why: you make tens of thousands of decisions a day. Only a few of them enter into what you think of as consciousness. So we constantly make decisions. In fact, we were talking about self-driving cars as a good example. The self-driving car makes decisions, go this way versus go that way. And the ability to take in lots of information, filter it, and then choose path A over path B, that's what a decision is. This is something that doesn't require consciousness. But sometimes it happens with consciousness. Sometimes we have the subjective experience that we made a certain decision. And so this is kind of my perspective on it. The machine up here in our heads is actually making decisions and choosing this over that. It's not like we have no autonomy. We do. It's not like we can't make decisions. We do. I mean, within the limits of the laws of physics, we have basically free will. But the question is, you know, the link between that and conscious experience. And I don't think the conscious experience causes the decision. I think the conscious experience is one way that we understand some of the decisions that we make. Yeah, so it's like it fills in the gaps retroactively, right? When, say, we touch a hot stove, we pull back our arm, and by the time we realize it, the arm is already retracted. So the neocortex is way behind the nervous system, but it gives us a story retroactively. Is that how you would describe it? That's part of it. But, you know, of course, retracting your hand is a very simple circuit, right? But you can make very complicated, high-level decisions, and only bits and pieces of them reach what we think of as subjective experience. But here's the problem. I often don't understand why I'm making these decisions. I make them consciously. But there's so much information coming to me that I don't know where it came from. Is it some propaganda? I don't even remember where it came from. Is it a book?
Is it from my childhood memories? Is it something I heard on the news? I have no idea. But I think I make a conscious decision. But I'm sure, and this is Sam Harris's point, if you manipulate all these information sources consistently enough, and I used to live in East Germany, I know that I would make a completely different decision. But I would still call it free will. And it seems to not compute. Maybe there's something we're missing. Yeah, I think what's missing is there's a fixed concept, a simplified fixed concept people have: I am subjectively aware, that's my consciousness, and it is the thing making the decisions. And that's not true. There is a thing making decisions, actually lots of things making decisions in your head. And sometimes the subjective experience mechanism gloms onto it, and sometimes it doesn't. It's almost like the subjective experience mechanism is a spray can spraying red paint on some items and not on others. And then you can ask, well, what made that particular item, like the car, what made it drive down the street? Was it the fact that red paint was sprayed on it? No. The red paint is interesting and important and part of the way our brain works. But the car was going anyway. So it is kind of a selective bubbling up out of all this data stream. It's like a fire hose. But only certain elements will be prioritized. And then retroactively we associate this with "we wanted it." But actually, because we are not aware of what is unconsciously in this fire hose, the claim that we made that decision is actually bogus. Because there's so much other information that we were never aware of. But you did make the decision. That's my point. You, being a brain, made that decision. And if you are not aware of it, that doesn't mean you aren't responsible or you didn't do it. So the awareness is just one teeny piece of you. You made those decisions. You took in that information and digested it and made the decisions. It really is you.
It's not like some weird, devious beast hidden under the surface. That's you. But the consciousness thing inside of you is just a small piece of you. And so one just has to accept that. I have to think about that. That's just a very different way to look at this. I'm thinking of it in computer terms, where these huge data streams, like terabytes of data, like log files, are being sent into a central fire hose. But then you can get certain analytics out of it. And that's kind of the consciousness, right? There are very few analytics you extract out of it. But you might miss most of the picture because you're not looking for it. That's what comes to mind immediately as an analogy. One thing I've been thinking about, which goes in a similar direction: we have all these models we are born with, right? So when we think in AI terms, we build these models, it takes a long time, and then we use them, it takes a couple of milliseconds. It's basically free. And our brain, when we are born, knows a lot of things unconsciously. We can breathe, we have basic survival instincts, they are all encoded. So we have these models. Is neuroscience any closer to figuring out how these models are actually being encoded? Because they must be in this piece of DNA at some point, right? Which isn't that much data. It's only about a gigabyte of data. And there are some really complex models that must be in that relatively small amount of data. Is that even possible? I think that the models are not literally in the DNA data stream. The DNA doesn't tell you all the pieces. It's not a blueprint. It's an instruction manual. For example, say you're a caveman and you're making your stone tool. There's a difference between a minute description of the tool and all its edges and sharp points and so on, versus a description of "take the stone and whack it with another one here."
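As an aside, the "about a gigabyte" figure mentioned above can be sanity-checked with back-of-the-envelope arithmetic. This is an editor's sketch assuming the standard estimates of roughly 3.1 billion base pairs in the haploid human genome and 2 bits per base (four possible bases):

```python
BASE_PAIRS = 3.1e9   # approximate haploid human genome size
BITS_PER_BASE = 2    # four bases (A, C, G, T) -> 2 bits each

total_bits = BASE_PAIRS * BITS_PER_BASE
total_bytes = total_bits / 8
gigabytes = total_bytes / 1e9

print(f"{gigabytes:.2f} GB")  # prints 0.78 GB, i.e. "about a gigabyte"
```

So the raw information content is under one gigabyte, which is why the question of how rich innate models could fit in it is a fair one.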
An instruction manual is simpler than a blueprint. I think what's happening is the instruction manual basically tells the brain, tells the person: put yourself in this situation, and that context will tune you up. Now put yourself in that situation, and that context will tune you up in this other way. So I think these models are learned, they're learned through experience, but the experience is prescribed. Genetically prescribed experiential steps, starting from the very beginning of conception, from the point when the nervous system actually starts developing. So I think there's this really complete intermeshing of what people call nurture and nature. I think that you will not find all those rich models explicitly in the DNA code. Yeah, because we have models now, and obviously we're just discovering this, so we are far away from the final solution. But we have a model that I can download from GitHub, right, that recognizes cats in pictures with 99% success. But it could be spiders. Let's just assume it's spiders. And when we show pictures of spiders to even really young children, six months old, they will recognize to some extent that it's a spider, it's a danger. They don't know what it is. They can't say it, but they would recognize it, and they would move away if they're six months old, nine months old maybe. So it must be a model they're born with, right? Well, sort of, but sort of not. So one of the areas that's most heavily studied is face recognition. Babies are born with very rudimentary face recognition, and then they tune that up and it gets better. But why is that? What allows the brain to have face recognition ability? I mean, that's pretty remarkable. We have special modules in there that seem dedicated to it. If you raise a baby with no eyeballs, with no optic nerve and no neural signals coming in and tickling the visual system, I don't think that would develop.
I don't think the circuitry would develop. It's not that the DNA said, build the neuron this way and hook it to that other neuron. The DNA gives a really rough sketch. And then the experience of random neural discharge in the womb starts working on it and massaging it. And then visual input from the outside world starts hitting it and massaging it further, and it kind of gets to where it needs to go. So it takes advantage of the expected environment as part of the system, part of the way of tuning up the system. So it's not all in the DNA. The spider thing is another example. I don't think we have a spider code in the DNA. I think the spider thing is tuned up early, probably in utero. Even if kids have never seen a spider, they have the same behavior. But it's still, I mean, I agree with you. So it's basically the instruction for how to build a model, and a certain encoding of what the outcomes should be. And then the data set comes in, I guess; that's usually the biggest amount of data. So it's basically an ordered building plan for all these models over time. That's how we could describe it, maybe, from a computer science perspective. A lot of people are really curious, and I know you've been commenting on this in the past. Ray Kurzweil is obviously the biggest proponent of that vision. When can we finally upload our brain, and maybe our bodily experience, to a computer? Is that something we can do within the next 20 years? The singularity is supposedly happening in 2038. How much longer do we have to wait? I don't know. Obviously, I don't know. I doubt that it's 20 years. It's probably decades, but I'm absolutely certain that this will happen. If any of this science of consciousness and the mind and the brain is right, then it's all about data processing, and that's what the brain does.
And the mind is a trillion-stranded sculpture made out of information, and that's something that can be put in artificial devices, ultimately. Here's what I think we have right now toward that technology. Well, first, we have a lot of people working on it with great dedication, which is one of the reasons why I think it's absolutely inevitable. How do you imitate a brain? How do you take your brain, take the information from it, and create a version of you that's artificial, that lives inside of computer hardware? Number one, you need artificial neurons. We have really good artificial neurons. Number two, you need them to be hooked together in massive nets; there are 86 billion neurons in the human brain. We're not quite there yet, but getting there real fast. Being able to put together 86 billion artificial neurons is something that I'm absolutely convinced will happen relatively quickly. The third piece is we need a way of scanning your brain in minute enough detail to see how everything's connected, what kind of connections there are between this and that neuron, and what the dynamics are, so that that can be duplicated. That's the part that's missing right now. That's the part that depends on technology that has not been invented yet. Do I think that technology will be invented? Yes. To me, that's inevitable. That's just a no-brainer, so to speak. Obviously, we will get there, but that technology doesn't exist right now. And it's far enough in the future that I don't quite see when something like that will be invented. Well, the brain works kind of on a holographic level, from what we understand, right? So it's not so easily pinpointed to each particular neuron. Isn't it like saying we need to know what every single air atom and molecule does, and only then can we predict the weather? No, we don't need that, right? We need the right layer in that complex system.
We haven't fully figured it out yet, but say with the 10-day forecast, we have figured it out quite well, although we have no clue what the molecules are up to. And maybe the same is true for the brain, right? If we find a way, and your very clever way of describing consciousness is maybe a great help towards that project, isn't it much easier to mimic the outcome? Literally make a person, say one that is maybe a five-year-old right now, but we make it from a machine, and we know which things are missing. And then we just put on these layers until we have a 20-year-old, and then basically we have a grown-up and we are done. We never need to know what each individual neuron does, just as we don't with the weather, right? Well, that's right. I think you're right. So to build a human-like artificial intelligence is, I think, not that far in the future. I don't know exactly when, but not that far in the future. And you don't have to understand how all the parts work and why they work. You just have to build the thing, and then it grows and so on. I mean, that I believe in. The thing I think is farther in the future is to take your particular mind that exists right now in your biological hardware and transfer it, so to speak. Copy it in enough detail that it's still you, and upload it, as they say, onto computer hardware. That's the part that I think is tricky. I think it'll happen, but I don't think you have to understand what each neuron is doing. You just have to copy it. So what if we train it, kind of like what we do with children right now? So we have children, and through nurture, through this very long period of growing up, we also transmit much more complex ideas, and they hopefully become the next generation that accepts those ideas and carries them forward. What if we have machines that we train? There was, I think, a Black Mirror episode about that: we have a machine that's basically like a newborn.
It's empty, and we train it over the years, say over a 10-year time frame, and then it basically becomes us. But is it really us? That's the problem. Yeah, maybe. I am skeptical of that. So one of the things that neural network technology has taught us, so-called deep neural network learning algorithms, which basically run the world at this point. One of the things it's taught us is that a network of neurons can be very smart. It can compute extraordinary things. It can take in inputs and make decisions and give you incredibly smart outputs. You don't know why it's doing it. You can't pinpoint which neuron is doing what. The secret is in the subtlety of how they connect to each other in a really rich way, and the dynamics of how it works are really complex and rich. And so as far as I can tell, the only way to truly imitate you, so that it really is another you, is to duplicate neuron by neuron every neuron in your brain, or at least your cerebral cortex, without knowing why, without knowing what they're doing. Just wholesale copy them. And now you have another you. If you have a machine shadow you for 10 years, I don't think the machine is going to be you. It might be a good imitator of your voice and your behavior, your body language. I don't see that. It would be maybe me, but not to myself, right? That's always the problem when we see in Star Trek that people beam somewhere else. I'm always very worried that if they recreate another being, they would have to kill the old being. Otherwise, you're two people that appear at the same time, and they both claim ownership of that consciousness. Well, there's so much to say about that. The Star Trek beaming device is basically more like what I'm talking about, because it copies slavishly every detail without knowing how it functions. Just copy the darn thing, and that's good enough.
And yes, one of the interesting philosophical questions that always comes up with mind uploading: let's say I can go pop myself in some futuristic scanner, and it scans all my connectome, as it's called, all the connections of my neurons, and it duplicates it artificially. And now there's a second me. Is it really me? The biological me dies. Did I die? Did I live on? Which is the real me? And I think that the reason we have those philosophical concerns is only because of our limited imagination, because we've never experienced a mind duplication before. And so we can't wrap our brains around it right now. But it's a bit like you have a computer and you have a file on it. That's an important file. And you know your computer is getting kind of old. And so you take a new computer, and you copy the file onto the new computer and then erase it from the old one. It's the same file, because what defines it is the information and the relationships between the pieces of information. It's not like you killed the old file and now there's a totally new file. I mean, this is what mind is. This is something people haven't quite grasped at an intuitive level. But mind is information and algorithms, and that stuff is duplicatable in principle and transferable from device to device. That doesn't mean you killed off one and then created a new one. It's the same one, or you duplicate it into multiple ones. We could have 5,000 versions of ourselves. We split up at one point, but then we all live a slightly different life. I think there was a Black Mirror episode where they basically take that idea. They have this small amount of what is the compressed consciousness, everything that can be said about your brain, and I think the rest of the world also experienced that. And what they did in one episode, and I thought that's really cool: they basically take your consciousness, all your mind but not your body, and they run it through a simulation in that episode.
Every potential life partner, everyone you could ever meet, every potential mate. And then it goes out 100 years, and you live through your whole life, and then there's only one person that's really your perfect match. And that's the match you're being introduced to in the real world, because out of millions of scenarios that have already been tested with your consciousness, that's the one that's actually going to work for you. That sounds like a great idea. It sounds really interesting, except that the simulations are all having exactly the same experience as the biological one, because if they're good enough simulations, then you basically have created 100 versions, 100 people, and had them all live 100 lives. And they don't want to die, right? None of them want to die. They're all equivalent. So it's not like there's a real you and then a simulated you. They're all you. They're all equally valid in the mental sense. Yeah, I don't know how to think about that. I feel there's a huge amount of compassion when someone who's close to us dies. What happens if, say, 99 out of 100 die, but one is still living? Do we still have that problem? Or do we say, I don't know, you're still around? It's very strange. I think this, whenever it happens, completely restructures our concept of life and death and identity. What strikes me about mind uploading in science fiction, until relatively recently, is that because people can't grasp it, for narrative simplicity, writers always make it such that there's only one of you at a time. So Tron is the really classic, ancient version of it, where you disappear and then you appear in the computer, so to speak, and then that one has to disappear for you to reappear outside. And what strikes me is how much work writers have to do to simplify it and take away all the really bizarre, rich complexity so that the audience can grasp it at an intuitive level. That's not how it will be.
It will be something mind-bendingly strange, with multiple copies of us slowly diverging from each other, each with its own sense that it's entitled to a life and entitled to some sort of rights, each with its own emotional integrity and intensity. And there will be arguments that the biological one is the least important of them all, because that's the one that's doomed to die on a short time scale. Well, Aubrey de Grey told me that's not going to happen anymore. He was very confident about that. So I stopped questioning, I just went with it, and David Sinclair says the same thing. And they are all very confident that aging is going to be solved in the next 15 years. A very short time frame. I'm absolutely sure that's not correct. Why is that? Well, first, from a biological point of view, this makes no sense at all. Parts wear out. That's what there is. Even the brain wears out. If you could extend life, well, there's this wonderful example actually from Gulliver's Travels, where there's the island of the people who live hundreds of years. And Gulliver's like, wow, it's amazing, they must be so wise. And then he meets them, and they're vegetables, because their brains have degraded in the meantime. This is, A, not going to work, and B, if you look at lifespan, and this is a totally tangential subject, but if you look at lifespan, the common misconception is that our lifespan has increased over the years as medical science has improved. And that's not true. The average has increased, but the upper limit has stayed very similar. You can go all the way back to ancient Greek times and look at their records. And the oldest people were living around 90 to 100 years. And that's about how long the oldest people are living today. So nowadays fewer of us die young, but I don't think that's going to happen. Sorry. Yes, I'm with you. And I think it's still a relatively new thought.
What longevity research claims it's able to do relatively soon is reverse aging. So up to a certain limit, say up to 65 or 70, it can basically reprogram you and your cells, not your chronological but your biological age, and anything in your body, to any time frame it wants. So it can make you an 18-year-old or a 28-year-old biologically. Except your brain. That's the problem. The brain included. I asked Aubrey and he was very confident about that. He's the expert. I'm just the messenger. Yeah, but if you alter your brain, then you're not you. I mean, if you rejuvenate your brain, aren't you just cycling over into different cells? So we don't have the same cells. I don't know what the time frame is, about a year, and all the cells have been replaced in our body, but we still are ourselves. No, that's an interesting point. In the brain, there is some cell turnover, which was a surprise that people only learned in the past few decades. But most of the cells in the brain are the same cells. They don't die. They don't turn over. I mean, the stuff of which they're made may turn over, some of it. But the cells themselves endure, because their connections have to endure, because that's what makes us who we are. So you can't just kill off the cells and then grow new ones. If you did that continuously, every 10 years you'd be a totally different person. So that's not how the brain works, unlike other parts of the body. So we need the uploader. Yes, we need the uploader, and the downloader again. The uploader is the only feasible way for true longevity. And that's what will ultimately happen: people will live essentially indefinitely, or the mind, the essential part of who a person is, will live essentially indefinitely, once that technology emerges. David Orban told me something, and I thought it was so fascinating.
He said, well, once we do that upload and we become many, many different personalities at the same time, we just basically go in random directions in the universe and spread out the message, at close to light speed. It's going to take a long, long time, but we are just this tiny little nano probe that has a full consciousness, and we just stay in some state of suspended animation. We just wait until we hit a star or a planet, a couple of million years, millions of years later. That's exactly right. One of the most interesting points of speculation is basically space colonization. The way I think of it is, all the science fiction shows have it wrong. They're all based on the concept of human beings, bodies, in a spaceship environment going out into the universe. And it's really hard to build a spaceship environment that's safe for biological beings, and people don't quite grasp how incredibly toxic cosmic rays are once you get outside the magnetic field of the earth. This is not going to happen, and we don't live long enough for any reasonable exploration. Space is way bigger than most people quite grasp. The only way we become a truly spacefaring civilization is not by building a spaceship environment to house the human body, but by building a platform to house the human mind. And when you do that, time is not important anymore. You don't have to be in suspended animation. You can be a mind living, so to speak, on this artificial platform, and you could take a hundred thousand years to cross the galaxy. It won't make any difference. The artificial device can be rebuilding itself or repairing itself. There's no particular limit once you grasp the principle of mind as a thing that can be transferred from platform to platform. And that's the only way we become a truly spacefaring species or civilization, not really a species anymore at that point. But that's our deep future. That's our deep future. This is so fascinating.
Do you think that time is related to consciousness? So our experience of time is relatively stable, but it might be learned in early childhood. I'm not sure where this actually comes from, but most of us have a certain experience of time. It goes slower when we are younger, then it seems to speed up as we get older, but it stays within relatively narrow bands. Once we change our consciousness, maybe before the upload even, can we slow down the way we experience time? So, say, a thousand years feels like one year? Nobody really understands the neuroscience of time or time perception. There are a lot of people who study it, and there's a lot of interesting work, but nobody really understands how time is represented in the brain. And some of what you're talking about is retrospective time. So looking back on some period of years and asking yourself, in retrospect, does it seem like it went fast or slow? And then there's time in the moment. Does it feel right now like things are dragging, or what? So I presume memory is intimately involved in retrospective time, but the neuroscience isn't that well understood. So I don't know the answer to the question. I doubt time perception will change super radically in our biological selves. But the mind uploading thing is really interesting, because now you have a device where, first of all, you can set its time increments. And so there's no problem with having it think that one hour passed when actually 10,000 years passed, or vice versa. You could give it a thousand years' worth of experienced time packed into one year of actual external time. And time doesn't matter that much anymore. I mean, these are bundles of information, minds, bundles of information that live indefinitely. So what does it matter if they take 100 years to reach the nearest habitable or the nearest interesting star system? They spend 100 years hanging out with each other and chatting each other up. I don't know.
So when we get to this plane of pure information, whatever the delivery package is, but we are just pure information, how much more are we? Is there any relationship to the life that we know right now? Because you just said time will stop existing, at least in the way that we know it. And also distance, obviously, because it's spacetime that will stop existing. So as long as we have enough energy, we become almost like stardust. We just go randomly, or maybe less than randomly, through the universe. But is there any bridge back to the world that we know right now? That's what I'm wondering. Because maybe that's already happened. Maybe there's some other intelligence that already does that. Think of the way we look for radio waves, right? So the SETI project, I never understood this, because radio waves were, in the first place, a relatively small frequency band, and the time of radio is basically gone already. We were just looking for this particular signal. But what if an intelligence is already doing what we just described for our future? We wouldn't notice it. Yeah, it could be. I don't know. You're right about SETI. It's very weird. We had this really quirky, weird technology of our own. And then we went and looked through the rest of the universe to see who else had that weird, quirky technology. And, you know, so far nobody. But yeah, a link back to the biological world. I imagine this thing goes in phases. And initially, the biological world is the important one. And the uploaded world is kind of like a nice place, I don't know, a digital afterlife, so to speak. Or a way of preserving very smart people so that they can continue to contribute to the world in the long term. At some point, that's got to tilt over. The power differential has to tilt over. Because in digital form, you could still hold the same jobs you have in the real world.
And you can just interact. You could be the president. Why not? You don't need a biological body to be a president or a CEO. And power and wealth accumulate as you get older. And that kind of thing will tilt more and more toward this digital world. The biological world, I imagine, becomes more and more almost like a larval stage, training up brains in interesting ways so that they can then be uploaded and have a more extended, interesting, complex life in this other world. So that may be one way that the biology interacts with this digital upload world. I don't know. This is not a very scientific question, but I'm going to ask it now. Hasn't it struck you as kind of coincidental that we have this very digital-looking code in our DNA, which became the basis of every living being on this planet? But it looks so strangely like computer code, or like a basic formulation of the bits and bytes that we use now. Isn't it a very strange coincidence? And that's been, what, four billion years ago? Yeah, isn't it a coincidence, though? That's an interesting question. I mean, information has to be coded somehow. And there are certain efficient ways to code it. And DNA is one. And we as humans invented a way of data encoding that's efficient. Before we knew the DNA, though, right? So we kind of invented that in a similar time frame. But it was a little earlier, so the whole bits-and-bytes thing was a little earlier, say 50 years. It wasn't a hugely different time span. Nobody thought of the DNA, nobody had any idea for 50 years that the DNA would turn out to be coded the same way. And then we discovered it, and I felt like, whoa, this is too much of a coincidence. If anyone would plan it in a digital way, and there were this intelligence out there, they would come up with something that looks like that and would design it in a similar way, because of this kind of heritage of ours. Maybe, but it's efficient, right?
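The point that DNA and binary are both efficient information codes can be made concrete: DNA's four bases form a base-4 alphabet, two bits per base, so arbitrary binary data round-trips through a DNA-style string. The sketch below is an illustrative toy by the editor, not how any real sequencing or DNA storage tool works:

```python
# Map each 2-bit value to one of the four bases, and back.
TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
FROM_BASE = {v: k for k, v in TO_BASE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases (2 bits per base, most significant first)."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(dna: str) -> bytes:
    """Decode groups of four bases back into bytes."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | FROM_BASE[base]
        out.append(byte)
    return bytes(out)

message = b"hi"
encoded = bytes_to_dna(message)
print(encoded)  # prints CGGACGGC
assert dna_to_bytes(encoded) == message
```

The encoding is maximally dense in the sense that every base carries exactly two bits, which is the "efficient information code" property being discussed.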
It's sort of like the conspiracy theorists say, oh my God, look, pyramids all over the world. There must be some intelligence that coordinated pyramids in South America and in Egypt. But there's another argument, which is that a pyramid is just a very efficient, simple solution to building a tall structure. And that's what I'm thinking about DNA: pre-life evolution, so to speak, had to find an efficient information code, otherwise it wouldn't work. And when we eventually developed our computer age, we had to find an efficient information code, otherwise it wouldn't work. And an efficient information code just looks like that. That's the efficient way to code information. That's a really good argument. When you look into these arguments people make, and this is very, very similar: if we live in a simulation, do you think our consciousness maybe resembles the simulation, because we can all kind of build on top of that? So we have an animal that doesn't have abstract thought. So it kind of cannot simulate as much; it runs into a lot of dangerous situations, has to react in the moment, and fails a lot. And then we have the consciousness, or we have a neocortex, that allows us to run simulations and then choose, in our mind, the best outcome. And so simulation seems to be a superpower of any higher being in the universe, any sentient being. So we will keep on doing simulations. So someone else must have done it too. So if you keep going with this, we must be in a simulation of someone else, because simulation is a superpower and will always help you, because you make better decisions if you have that ability. It could be. I mean, as far as I know, there's no data that can resolve whether we're in a simulation or not, because all the data, if we are in one, comes from inside of it. So I don't know the answer to that, but I would say this. But you could say the same about consciousness, right?
So a lot of people said, oh, we can't decode the brain because we're looking at it with the brain, so it can never happen. The earliest 19th-century brain researchers had exactly that argument, which doesn't really go anywhere. You're right. And it could be that somebody discovers something that sort of proves we're in a simulation. I don't know, but that hasn't happened yet. So I remain skeptical of the utility of the simulation idea, while I acknowledge that it is technically possible. But there is something interesting about simulation. There are kind of two kinds of simulation. One is: take your pet dog looking at the world. Its brain is creating a simulation of its world, but it's a real-time, in-the-moment simulation. So what its brain sees, the visual world and the world of smells and the tactile world, doesn't really exist in that form outside the dog. The real world is really weird and complicated, a funky quantum mist. And the dog's brain generates this essentially false-colorized, simplified world of surfaces and hard objects and other features that are not wholesale invented; they're just super-simplified simulations of what's really out there. And we have that too. So the world we think we live in is a simulation. The brain is a simulator. It creates our simulated world. There's a second level of simulation where we say, okay, now let me imagine future situations and simulate those. And of course, this is, as you put it, a superpower: a higher intelligence can create simulations of future scenarios and then use those to inform current behavior. But it's an interesting point about simulation that it's all simulation. The world we live in, the world of our immediate right-now perception, the world we think is around us, is a simulation. The brain has simulated it. It's all information being handled inside the brain. Yeah, the question then is: what is reality, right?
So if everything is a simulation, what is actually real? Is that something that's open to our discovery? It gets really weird with quantum mechanics, but there is a certain amount of observation we can do, at least about where certain molecules and atoms are and where the sun is. We can discover that, although the colors might change and the perception of temperature might change, obviously, if you have a different view on reality. But is there a real reality? Yeah, well, that's a wonderful question, a deep philosophical question. My take is, yeah, there is. There's a real reality. And our brains have provided us with a simplified caricature of that real reality. And that's very fundamental to how brains work. They don't give us the world; they give us a caricature of the world. And they don't understand themselves; they understand a caricature of themselves. And this is really fundamental. But there is the real world out there, and the physicists are busily figuring out what that is. And it's a really weird world. It's not the one that we intuitively grow up understanding. Does quantum mechanics bother you? Because we have these two elements that are in very different parts of the universe, and they are connected. I don't know what the scientific term is, you probably know. They have this superposition, I think that's what it is. And then once you change one, the other one changes too, in real time on the other end of the universe. And then we might have tons of multiverses. Nobody really knows how this quantum mechanics stuff works. Doesn't that bother you? Because it's so different from the world that we experience. Yeah, it does. I mean, I have the benefit of having been a physics student back in the day. I was a physics major in college, and I studied all this stuff. And the basics of quantum mechanics have not changed that much.
And it's weird. Richard Feynman famously said that if you think you understand quantum mechanics, you don't, and he had various versions of that, or: if you think you understand quantum mechanics, you need your head examined. I think it's very weird stuff. And you're referring to this nonlocality: you change the spin state of one thing over here, and instantaneously the universe changes over there. So, yeah, it's a very strange universe we live in. And on average, when all the little weird microscopic things converge and average out, you produce a macroscopic world that's much simpler, that statistically works in a much simpler way that fits our intuitions, because we evolved our brains to deal with that macroscopic world. But the microscopic quantum-mechanical world is bizarre. And the many-worlds interpretation, if that's true, that's pretty strange. And nonlocality, if that's true, well, it is true. Nonlocality has been experimentally confirmed, and that's pretty strange. So reality is weird. I feel like this whole directionality, that we make the world a better place, or that we just create machine intelligence, whatever our direction actually is in this whole game of evolution, when you take quantum mechanics seriously, it kind of goes out the window, right? So there is no direction. It doesn't matter what you do. It doesn't matter what any of us do, because there are all these other universes, and it already happened in all of them. It's soul-crushing, I feel. Well, at least the multiverse thing is hard to test. Some of the aspects of quantum mechanics are testable. Many people still don't buy into the multiverse thing because it's hard to come up with data that can test it. But however you cut it, multiverse or not, there's still a statistical quality to it. Let's say there's a universe in which I die horribly in some freak accident in the next 10 minutes.
If there are multiverses, sure, but it's a small proportion of those universes. There's still a much larger proportion that's going a different direction. So maybe we replace our concept of absoluteness with probability, but it's still important. So there are ways to wrap your mind around it. Do you believe, and is this something you found in your research, that there is this directionality of human evolution, or evolution in general? And we know it's not just getting more complex; it seems to be going somewhere. All of our ancestors, us included, we seem to be on that train, but the train is going somewhere. We are not really aware of this, maybe because it moves relatively slowly. Is that something you ponder sometimes? Yeah, I mean, I don't see it as a linear progression, right? And that's a very old concept of evolution, as many people don't realize; the idea of linear evolution, some linear progression or ladder, predates Darwin. And Darwin came along and said, well, no, it's not linear. It's this weird mess of stuff spreading out all over the place and branching out. Right, but there is a lower layer that seems to have a direction. I mean, we start with single-cell organisms, and then we have multicellular life, and now we have us, with consciousness. We don't see it in any other animals, which is really weird to me. I mean, there is some consciousness, depending on the definition, but there's no language, at least none we can understand. There's no civilization of apes or of dolphins that have built big cities. It seems to be headed somewhere. Sort of. There is an explanation for that, which is really intriguing. I guess one could call it statistical spreading. And it works like this. If you're in a room with a flat floor and you take a big jar of ants and shake them out in one corner, they'll spread across the room.
And you could say there's a progression, because the vanguard of ants is moving across the room, and eventually one ant will get to the other side, and then you could say, ah, progress. But what's really happening is a totally random statistical dispersion with a wall that prevents it from going more this way than that way. And so, for example, size works the same way. There's a lower limit on how small you can get and still be a viable living thing, but there's no upper limit. And so over evolutionary time, the largest creatures around keep getting larger. Whales are the largest known animals that have ever lived. And as evolutionary time continues, it seems like there's a progression toward larger and larger, but there isn't, because all the little ones are still there as well. It's a random spread, but there's a limit at one end, and so it spreads more in the other direction. Intelligence works the same way. All these creatures are still out there with limited brain capacity, and some with no brains. And most of the life on Earth is still single cells. But the smartest animal alive at any one time is usually smarter than the smartest animal at a previous time. That's just statistical spreading, and it has a spooky property that it seems like a progression. I've never heard about that. That sounds really interesting. And that was the other question I just wanted to ask you: if it isn't headed somewhere, is it completely random that we go through this period where we now experience something of a civilization? Let's use that as a good marker of advancement. It might not be, because maybe we just had room on this planet, but let's assume it is a good thing. And we immediately have that feeling when comparing different time periods. Why is it that this developed in one relatively small species, but other animals since then never had that?
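An aside for the technically inclined: the "statistical spreading" picture Graziano describes can be sketched as a toy simulation. This is purely my illustration, not anything from the episode; the function name, parameters, and the choice of an unbiased random walk with a reflecting floor are all assumptions standing in for "random variation with a minimum viable limit at one end":

```python
import random

def statistical_spreading(n_lineages=500, steps=2000, floor=1.0, seed=42):
    """Toy model: each lineage's 'complexity' takes an unbiased random
    walk (+1 or -1 per step) but can never fall below the floor, the
    wall at one end (a minimum viable organism). No step favors 'up'.
    """
    rng = random.Random(seed)
    values = [floor] * n_lineages
    for _ in range(steps):
        for i in range(n_lineages):
            values[i] = max(floor, values[i] + rng.choice([-1, 1]))
    return sorted(values)

values = statistical_spreading()
median, frontier = values[len(values) // 2], values[-1]
# The frontier (max over all lineages) runs far ahead of the typical
# lineage (median), so the population looks like it is "progressing"
# even though every individual step is direction-neutral.
print("median:", median, "frontier:", frontier)
```

The point of the sketch is that the apparent direction comes entirely from the asymmetric boundary: the frontier keeps creeping outward while nothing pushes any individual lineage upward, matching the ants-in-the-corner and whale-size examples above.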
They never seem to have discovered the utility of consciousness. Maybe they did, but it didn't develop for some reason. This higher consciousness, if we call it that, is like a language, a relatively complicated language. Is it just pure randomness that we experience this now, and maybe in 50 million years it will be other species, it will be the birds, who have that? Yeah. Well, first I would say they do have consciousness. What they lack is the technical ingenuity, right? I mean, they're conscious of their world. They're conscious of the things they do. The less intellectually complicated stuff that they do, they're conscious of it, just like a baby is conscious of its world, or someone who's not very technically capable living in our world is conscious of what's around them. So I think they do have consciousness. It's really interesting that an animal evolved that developed this amazing technical capability. I mean, if you believe the basic concepts of biological evolution, it's natural selection acting on random chance. So it's not purely chance. It was chance that something blipped this way instead of that way. But the natural world then selects that and says, oh look, that's a survival advantage, and then runs with it. And so we're this species that in our past developed stone-tool technology. And our brains evolved to be really good at thinking technologically, in terms of chipping this and the geometry of that. And that's the foundation for this. But it's extraordinary when you think about it, in terms of the millions upon millions of species that have evolved and died out on Earth, that there's this one species that builds computers and spaceships and potentially does mind uploading someday. It's remarkable. Yeah, I find it striking that nature didn't copy this. Usually when we find a solution, it gets copied across species, right? But it's so low probability, right?
I think that's what's going on. It's hard to copy something that's so low probability; it's just a bizarre accident that we hit on the things we hit on. But the nervous system is still the same as in lobsters. That's a famous example. But the consciousness hasn't spread anywhere. Maybe it will eventually. This kind of consciousness, higher technological ability. Higher technological ability, yeah. I mean, what it required was not just mental capacity. People have argued that other animals have enormous brains; elephants have twice as many neurons in their brains as humans, which is really weird. They also have bigger brains. They have bigger brains, but often animals with bigger brains just have bigger cells, because they have bigger bodies. But elephants actually have, I forget, something like 200 billion neurons to our 86 billion. Something absurd like that. But we have hands and fingers, and we evolved brain structures specifically to deal with stone tools, i.e. the geometry of construction and technology. So really weird, specific, accidental things happened that prepared us for this. And could that happen with other species? Yeah, I think eventually, or on other planets somewhere in the universe. I don't think it's necessarily unique to us. It's just so low probability. I always compare it, and this is obviously science fiction, to the way we write computer code, right? A good part of our computer code right now is relatively distinct and deterministic. We know what we want. We try out different things, but it takes a high intelligence that knows where we want to get with this computer code. Now, I know AI works slightly differently, and there is a part of AI that's random and statistical, but a big part is still that we need to design something. We actually know what is good and what is bad. So designing the scenario has now become designing the solution. You know, asking the right question is the problem.
But I always feel like, without a higher intelligence, making this jump to something that's so much more complex is quite a stretch. We have never employed it. Well, that's wrong, we have employed it, but it's a relatively minor part of how we build things. Most of it is pretty much solid science, so to speak. There's some randomness involved in that too. It's so hard to wrap your mind around this pure randomness, which it might just be, right? You want there to be a creator. Every one of us wants there to be a creator. It doesn't have to be the God of the Old Testament; it could be some alien intelligence. We all want there to be someone we can look up to, so that we're in good hands, so to speak. Yeah. Well, I think once you realize the time scales involved, that helps to some extent. Because there's enough time for really weird random events to happen. It's almost like if you take a giant wall and you throw little ball bearings at it, and there's a teeny little thimble stuck to the wall, and one of the ball bearings goes right into that thimble and lands in it, and everyone looks and says, wow, that's an amazing, intentional throw. And then you step back a little bit and realize, okay, four billion ball bearings have been thrown at that wall, and only one of them went into that thimble. Suddenly you realize, okay, chance actually could have a pretty large role to play. Yeah, absolutely. I mean, the time frame is so much more unimaginable than that. Michael, I think that's all I had. Okay. Thanks for taking this journey with me. We went to the stars and back. We did. We really appreciate you taking the time. It was awesome. Yeah, sure. It was fun. I hope we get to do it again. Yeah, all right. Maybe we bring Up, Break, Rain on the podcast as well so you guys can lock horns a little bit. We can argue. Yeah. Michael, thanks for doing this. All right, good. Yeah. Bye now.
