Scaling Selfhood: Collective Intelligence from Cells to Economies

Michael Levin
Guest Introduction.

How does collective intelligence emerge? How do parts get integrated into larger wholes? How can we increase the intelligence and agency of collective systems? Are cities, economies, or even societies intelligent systems of which humans are unwitting parts?

On this episode, I'm joined by Michael Levin to discuss how his research in the collective intelligence of biological systems might help us think through larger collective systems, like the economy.

Michael is a professor of biology at Tufts University, director of both the Tufts Center for Regenerative and Developmental Biology and the Allen Discovery Center, and an editor of three academic journals, among other roles. His pioneering research has direct applications in regenerative medicine, cancer research, and artificial general intelligence.

I wanted to speak with him for two reasons. First, for all the theory and philosophy we've covered about 'selfhood' on this show, Michael's work brings a refreshingly concrete perspective, offering a 'biology of the self'. He provides a story of how selfhood emerges via evolution, which is really a story of how collective intelligences emerge.

Second, Michael thinks about collective intelligence in a way that is 'substrate-independent'. That is, his research on collective intelligence should apply to any intelligent system, whether it's made of flesh, metal, or anything else. This allows us to take the principles he's researched that scale up agency in biological systems and apply them to larger systems, like an economy. If we understand the economy as a system of collective intelligence, can we apply the same principles that worked for evolution in biological systems to increase the intelligence and agency of the economic system?

A few more themes we explore:

  • Why are "goals" the fundamental ingredient that identifies a system as intelligent?
  • How do little selves (like cells) combine into larger selves (like...me)?
  • Are humans parts of larger systems of collective intelligence, and could we even know if we were?
  • Should we have concerns about what happens to us as we become more deeply embedded in increasingly vast, planetary systems of collective intelligence?

The first hour of the conversation explores his research within biological systems. The final 45 minutes uses that as a foundation to explore systems that are larger than humans. Even if you find the first hour rather technical, I highly recommend at least checking out the final 45 minutes.

Enjoy!

Time map.

19:00 - Why are “goals” the fundamental ingredient that can be used to make sense of any intelligent system?

27:14 - Cognitive horizons - a formal-ish framework for mapping & assessing the goals of any intelligent system.

30:40 - What is “intelligence”?

36:40 - How does selfhood scale up? How do little agents, like cells, combine to form larger systems, like organs, or even whole organisms, that have a coherent, integrated, and single sense of selfhood?

52:20 - How scaling selves and ‘mind-melding’ changes the calculus of game theory by changing the number of agents in a system.

58:20 - What is modularity, and how has evolution exploited it as a strategy to scale up intelligence and agency in biological systems?

1:04:32 - How far up does ‘mind-melding’, as in the gap junction mechanism, work to integrate parts into larger wholes? How do whole humans become parts in larger systems?

1:06:02 - Breaking the binary between first-person and third-person studies of consciousness; what happens if you fuse human brains together?

1:24:32 - Can we apply principles that have worked to raise the collective intelligence and agency of biological systems to designing social systems?

1:31:02 - What kinds of problems are better approached top-down vs bottom-up?

1:43:22 - Should we have concerns about becoming parts of larger planetary systems? Will complexity drain leave us unconsciously enslaved to systems beyond our scope of comprehension?

Support the podcast!
You can support the podcast by sharing it on social media, sharing it with a friend, or leaving a rating & review on Apple Podcasts!

Receive new episodes & related musings by joining the newsletter community. If you’d like to get in touch with me, you can reach me on Twitter, reach out to join the Discord, or contact me directly through this site.

If you’re really interested in helping the podcast exist, consider becoming a Patreon supporter with a small monthly donation of even $1! Your support means the world, and goes directly towards improving the podcast’s audio quality, equipment, research, and overall experience.

Thank you!
Transcript

Oshan Jarow (00:02:43):

Hello, I'm Oshan Jarow, and welcome to the Musing Mind podcast, where I get to speak with folks at the crossroads of consciousness studies and economics, exploring how each can inform the other. If you'd like to skip the intro and get right to the conversation, you can skip to about the eight-minute mark. Otherwise, bear with me. Lately I've been reading a lot of the late sociologist Erik Olin Wright. So instead of just saying that this podcast explores economics, I'll lean on his work a little to be more precise: my actual interest is what he called emancipatory social science, which is about generating scientific knowledge for the collective project of transforming conditions of human oppression into conditions for human flourishing. And the main idea there is that there is a moral, not only economic or material, purpose to the production of this knowledge.


Oshan Jarow (00:03:43):


And while Wright says that purpose is flourishing, I think that's pretty vague, or at least too abstract, and it can be broken down. And when you do, I think the piece that is most consequentially left out of the equation of how social science can create the conditions for human flourishing is a focus on consciousness, on phenomenology, on how we can leverage the fact that we design our own socioeconomic structures to create environments that are maximally likely to produce richer and richer kinds of consciousness. So my hope is that by bringing consciousness into conversation with emancipatory social science, we can build those bridges with scientific rigor. And oh boy, do I have a rigorous, phenomenally interesting guest today. On this episode, I am joined by Dr. Michael Levin. Michael is a professor of biology at Tufts University. He is director of both the Tufts Center for Regenerative and Developmental Biology and the Allen Discovery Center.


Oshan Jarow (00:04:53):


And so on; a full academic pedigree. His work is shocking, it is rigorous, and it's pioneering in so many respects. It doesn't fit neatly into any discipline, certainly not biology alone. But the reason I wanted to talk to him was twofold. First, we've covered a lot of theory and philosophy about the self and consciousness on this show. Levin brings a biology of the self to the table, providing a really concrete story of how selfhood emerged via evolution, which is really a story of how collective intelligences emerge: how smaller intelligences like cells get bound up into larger and larger systems like a human body, or even, as we'll explore, an economy. So when I've spoken with folks like Barnaby Raine about capitalism and the self, or with Chris Letheby about psychedelics and the self, we haven't had a biological basis for understanding what the self is.


Oshan Jarow (00:05:54):


And with Levin's work, we can do that. The second dimension, which I think is even more interesting, is that Levin studies intelligent systems broadly. He looks at the strategies and principles that evolution has used to scale up both the intelligence and agency of biological systems, and crucially, Levin's interest is in developing an understanding of these strategies that can apply to any system, whether it's biological, technological, or something in between, so long as it's intelligent (and we'll explore what he takes that to mean). But for us, if his work is really substrate independent, then we should be able to take the principles he's studying, by which evolution has increased the collective intelligence and agency of biological systems, and apply them at different scales to something like the economy, so as to similarly scale up and foster the collective intelligence and agency of that system.


Oshan Jarow (00:06:52):


So there are roughly two halves to this conversation. The first hour explores Michael's work within biological systems, and the second half, starting at about the one hour and four minute mark, gets into questions about systems that are larger than human beings, and things start to get really wild there. Another thing about this episode: while the vast majority of it is free and available to all, I have reserved a few extra sections for Patreon supporters only. These are more in-depth examples he gives about things we talk about, including his research creating Picasso tadpoles and regenerating planarian flatworms, among other things. So if you want to hear more from Michael about his research, you can become a Patreon supporter for as little as $1 a month and gain access to the goods. You can also just become a Patreon supporter if you'd like to help support the show. You can find links on the show page at the Musing Mind website, or just go to patreon.com/oshanjarow. All right, that's enough from me. Here is Michael Levin.


Oshan Jarow (00:08:04):


All right. So Michael Levin, welcome to the Musing Mind podcast. Thanks for taking the time.


Michael Levin (00:08:08):


Thanks so much. I'm delighted to be here.


Oshan Jarow (00:08:11):


So I've never had a biologist on the podcast before, which makes this a little unusual, but I also think that you're an unusual biologist. You've co-authored papers with Buddhist scholars, you've written with philosophers of consciousness like Daniel Dennett, and your entire approach to biology is defined by taking the goals and teleology of intelligent systems seriously, which stands in pretty direct contrast to recent trends in biology, which have shied away from asking questions about the ultimate purpose or goals of things. So I wanted to start by asking you both what drew you into biology as a realm to explore the questions that animate your thinking, and also about this pushing outwards, rubbing up against those boundaries. I'm curious how you experience the fit between your interests and the traditionally demarcated boundaries of biology.


Michael Levin (00:09:07):


Yeah, well, you know, when people ask me how to refer to me, and what it is that I do, I almost never say biologist. In fact, I don't really know what to say. And part of it is that, as I'm sure will come out when we talk about other things, I don't really believe in a lot of sharp categories anyway. I think these distinctions that we put around disciplines are largely a holdover from times of lesser knowledge, when we didn't really understand how tied together everything is, and I think they're going to disappear in the future. As for my own background, I was interested from a very early time.


Michael Levin (00:09:49):


When I was a kid, I was interested in two things. One was technology. Not that there was a lot around me; I was born in the Soviet Union, and there wasn't much to look at. But the television set, the insides of this ancient set with vacuum tubes and all that, was absolutely fascinating to me, because it was obvious that somebody knew how to put this stuff together in the right configuration to make interesting things happen. So that was clearly, you know, kind of magic, but then you find out that you can actually go learn it reliably. So that's amazing. And then there was hanging outside with a friend of mine who was really into insects and bugs and caterpillars and things.


Michael Levin (00:10:29):


Hmm. And just thinking about that, and what made those two things different, if they were in fact different: why is it that these other creatures seem to have preferences and complex behaviors? It was very clear that they liked some states of affairs and they didn't like other states of affairs, whereas the TV seemed pretty neutral to whatever came through it. So this question grabbed me from very early on: how is it that you can have minds made up of parts? Because it was clear that both types of things were made of parts, and we are made of parts. How can you, in this physical world, have things that have goals and preferences and certain kinds of minds? That kind of interest, I think, fuzzes over any boundary you might put around a discipline, because it asks questions that have roots in every type of activity: physics, biology, computer science, math, philosophy. It transcends any one discipline.


Oshan Jarow (00:11:34):


It's so interesting that preferences were kind of the differentiator, right? Having active preferences was what divided systems like the television that you took apart, with the vacuum tubes, from systems like the insects that you saw outside. But okay, zooming out, what I'd like to do is provide a rough outline for the conversation today. We're going to cover a lot of ground: a lot of technical stuff at the level of cells, and a lot of higher-level stuff at the scale of society on the whole. So I want to create a little frame for that. One of the fundamental problems that I read your work as engaging with is this coming Cambrian explosion of types of beings that will exist in the world in the next few decades. Here's one way you've described this. You've written: novel technologies are exploiting the plasticity and interoperability of life to create novel living beings, such as hybrids, chimeras, cyborgs, brain-computer interfaces.


Oshan Jarow (00:12:36):


Our future will involve a highly diverse space of novel beings in every possible combination of evolved cellular material, designed engineered components, and software. Current distinctions that rely on the origin (evolved or designed) or composition (biological versus technological) of agents will not survive the next couple of decades. So today we have these common-sense understandings, and formal and informal frameworks, for thinking about intelligence and agency, even morality, that rely on pretty clear distinctions between a biological system and a technological system. I don't feel morally dutiful to my computer, but I do to another human. And you're arguing that these convenient distinctions won't last. So in response to this, one thing you've done is focus on this principle of substrate neutrality. For example, you have theories of intelligence and agency that can be used to assess systems no matter what they're made of.


Oshan Jarow (00:13:44):


Flesh, metal, or some combination in between. And where I get really interested here, apart from the implications of your work for consciousness, which I hope we get into later, is that I think this feature of substrate neutrality affords us not only the capacity to apply these theories to a diverse set of kinds of systems, both flesh and not, but also to different scales of systems. And that scale is really large: we can take it from cells to organs, but we can also go beyond them and look at things like a society, or an economy, or a global media platform, and apply the same principles throughout to understand the ways we can optimize for collective intelligence and agency within these systems. So one of the higher-hanging fruits that I really want to take a few swings at today is getting a better understanding of the principles you've studied in biological systems that have enabled them, in concert with evolution, to scale up intelligence and agency.


Oshan Jarow (00:14:44):


And to see to what degree we can begin applying these to thinking about not only robust and thriving biological systems, but also robust, thriving social systems too. How transferable are the strategies between these two substrates? In order to do this, I was thinking our conversation today can run from small to large. We'll start by setting the context with a quick overview of goals and intelligence; these are central to everything. Then we'll dive down and start at the smallest scales: we'll talk about cells and gap junctions and how selfhood scales up. We can move up to multicellular systems like neural circuits and talk about modularity and symbolic representations of the world and so on. And then we can go up to full humans and even more-than-human systems. At each step of the way, we can fill out a bit more of the picture of strategies for nurturing and fostering intelligence and agency. So let me pause there. Does anything come up for you so far? Anything you want to add or correct before we dive in?


Michael Levin (00:15:54):


Well, there was a lot of good stuff there, so I'd just like to say a couple of quick things. One is that, you know, this business of substrate independence, and the project of leveraging your morality or your ethical obligations on the construction of things: when I argue against that, I want to be clear, I'm definitely not the first person to come up with this. Science fiction has been dealing with this for probably a couple of hundred years at this point: this idea that we have a very parochial view of what systems that have some kind of moral worth and responsibility should look like and what they should be made of, from basically an n-of-1 example here on earth.


Michael Levin (00:16:38):


That is the phylogenetic tree here on earth. And there's really absolutely no reason to think that evolution is the only way you get morally worthy minds; there are probably many ways to get there, and evolution is probably only one of them. There's no reason to think they have to look like us, or have to have brains that look like our brains, or have to be made remotely similar to the way animals are made here on earth. And, you know, the field of science fiction has been dealing for a really long time with what happens when a spaceship lands on your front lawn and something sort of comes out and hands you some poetry that it's written on the way over, because it's so excited to meet you.


Michael Levin (00:17:18):


The one way to decide what you're going to do with it is not to scrape a sample off of its surface, decide that it's aluminum, and therefore conclude that this is just like your toaster and you can do whatever you want. I think we all understand that's not how you handle those things. So I think we should be very clear on that as context: this isn't some crazy new idea, and in fact it has been dealt with many times in science fiction. You can just do the experiment in your mind, in terms of that dichotomy you mentioned earlier, that you don't have obligations to your household appliances, but you do towards humans.


Michael Levin (00:17:55):


So you can just do that experiment. Let's say you've got a spouse, and they go off and come back and say, you know what, I had an accident; I had a couple of metallic toes grafted on. Okay. That's not going to change your relationship much, right? And you can see where I'm going with this: how many things need to change before you're going to end up saying, okay, that's it, from now on you're like the toaster? That's a really difficult thing to answer, and it's actually pretty instructive to think it through that way, and to ask: if you wanted to have some sort of human-level relationship with an alien or a novel creature or something else, what are your criteria? Because how they got here and what they're made of are not going to cut it.


Oshan Jarow (00:18:42):


Right. This kind of moving from a binary to a continuum is always tricky business. And the way you've thought through this is by taking goals as that invariant element in any intelligent system that we can use to make sense of it and our relation to it. But using goals as that reference frame really brings up a tension, because, as you've written, the field of biology has been very conscious to avoid talking about the goals of the system. You've written about this as teleophobia: the fear of telos, of purpose. So let's open this up a bit and start with goals. Why do you take goals to be such an important part of understanding intelligent systems, and why is that such an unorthodox approach? Why have goals been shunned?


Michael Levin (00:19:30):


Yeah, first I'd like to talk about some ground rules that I like to follow in these kinds of discussions, which is that we all have some criteria upon which we judge various worldviews. Some people have specific a priori commitments. For example, some people like explaining everything in terms of chemistry, and oftentimes they'll say that's because they would like to be good reductionists. And that, of course, isn't quite true, because then they'd be talking about quantum foam, which they really don't want to do; they want to talk about chemistry. What that means, as Denis Noble has pointed out, is that they've picked a privileged level of causation, and they feel that the best explanations are at that level.


Michael Levin (00:20:16):


There are other types of commitments. Morgan's Canon, for example, urges us to come up with the explanation for things that has the lowest possible agency involved in it. These might be the kinds of commitments somebody might have. I don't share any of those. I think those are very derived, and I don't know why you would start with them. My commitment is very simple: I like things that help research move forward. I think these should not be philosophical questions; I think these should be empirical questions. Any worldview that helps me do new experiments, get to new capabilities, and do research that I otherwise would never have thought of doing, that's when I think we're onto something. So I just wanted to say that that's how I judge things, because I'm going to say a number of things that people probably fundamentally disagree with.


Michael Levin (00:21:04):


And my point is simply: let's drag our assumptions and our preferences out into the light, and let's just see what axioms we're working with. Mine is very simple: I just like things that help research move forward. That's it. Mm-hmm. And so, having said that, the task looks something like this: what do all agents have in common? What is the variable, this control knob (and we can talk about why I think it's a control knob and not a binary switch), that you're twisting when you go from something with extremely low or even zero agency, if there is such a thing, up through things that have progressively more cognitive capacity, and eventually human, and eventually beyond that?


Michael Levin (00:21:53):


So what is it that all these things have in common? Because what I don't think you can say they have in common is any principle of what they're made out of, or how they got here. I don't think evolution is unique in its ability to produce cognitive structures. So when I think about the answer to that question, the only thing I can really think of that these things fundamentally have in common is some level of competency in reaching particular goals. I think formulating things in these cybernetic terms is right. Cybernetics is really an ideal example of a science for all this, because it's focused on steersmanship; it's focused on control, not on the details of implementation, which I think is ideal, it's what we want. And the idea is simply that we are going to look at how much competency something has in reaching particular goals. In fact, I've trotted out this model that's focused on the size, the spatiotemporal size, of the kinds of goals that a system can possibly work towards. That's a nice way to directly compare all sorts of highly diverse types of creatures.


Oshan Jarow (00:23:09):


Right. So you take goals as that invariant element present in any living or intelligent system, one we can latch onto no matter its composition or origin, to orient ourselves. That makes a lot of sense. But backing up, regarding teleophobia in biology, you've written that generation after generation of biologists have been trained to avoid questions about the ultimate purpose of things. Biologists are told to focus on the how, not the why, or risk falling prey to teleology; students must reduce events to their simplest components and causes and study these mechanisms in piecemeal fashion. And this is also the current trend in consciousness research: to look at neural correlates and study everything in pieces. But in terms of biology, I can imagine explaining this idea to someone (I even had this reaction) and their immediate response being: what do you mean there's no talk of the goals of biological systems? Living systems want to survive. It's the classic Darwinian story, and their behavior is fundamentally driven by the imperative to reproduce our genes. So is the Darwinian story an insufficient account of the goals of intelligent systems, in your view? Do you mean a different kind of goal? How does that familiar Darwinian story fit in with the teleophobia you're talking about?


Michael Levin (00:24:30):


No, there's nothing wrong with that Darwinian story. In fact, people will try hard to disavow it; they will say that story is not really a story about goals at all. And it's funny, because if you're consistent about it, you would have to say that about yourself, and I'm not sure how you do that, but people do it. People will often say: look, goals and trying to do things are for advanced metacognitive creatures like humans, and everywhere else it just looks like trying; it isn't really, nothing's really trying to do anything. They reserve this striving for goals for systems that, I suppose, know that they're trying; it's a kind of second-order thing, where you know that you're trying to reach a particular goal.


Michael Levin (00:25:24):


The reason that people are really still kind of teleophobic is that back in the day, when biology was getting started, there was really only one way to understand goals, and that was as the kind of full-blown magic, as it was seen at the time, that humans seemed to have. So you had two choices: you could try to accept that there was some sort of magic that you just do not understand in your science, or you could say, there is no such thing; let's pretend there's no such thing and see how far we get, and let's just make sure we do this without any sort of magic or religious overtones or any of that stuff.


Michael Levin (00:26:07):


Now, the good news is that since the 1940s, we've had a science of machines with goals, that being cybernetics and control theory and engineering, and it is no longer a problem. When you are talking about goals, you are no longer talking about magic, because we have a perfectly good science that helps us make thermostats and self-guided rockets and all kinds of robotics and who knows what else, a science which understands that you don't need to be magical and you don't need to be biological. There are ways to have goals, and it's okay. It's not scary.


Oshan Jarow (00:26:44):


Yeah, you've already touched on this a little, but goals and intelligence are not the same, though they are of course very tangled together. You've already mentioned competencies towards achieving goals, but before getting into intelligence, I want to have you expand a little on something you just mentioned: that framework you've trotted out for mapping the goals of any intelligent system, which you've called a system's cognitive horizon, or its light cone. You wrote about this in your essay with Dan Dennett: cognitive horizons provide a kind of formal-ish way to measure the goals of any intelligent system. And this idea is one I think we will find helpful at a number of scales. So explain this a little: what is a cognitive horizon?


Michael Levin (00:27:27):


Yeah. Imagine, again, what the point of this is: the point is to be able to compare directly different types of agency that might be extremely diverse in origin, composition, and so on. When you want to think about what they all have in common, and you want to compare their level of agency, you might ask: what are the most grandiose goals that the system is possibly capable of following? So suppose you tell me the only thing a system can really care about (and when I say care, I mean in terms of practical import: it's going to spend energy to achieve one state of affairs versus another) is local sucrose concentration.


Michael Levin (00:28:14):


Then I'm going to say, well, you're probably dealing with a bacterium or something like that, which is able to measure local values of things and adjust its swimming, and that's about it. If you tell me that this is a system that really cares about the state of the financial markets over the next hundred years, and it's worried that the sun is going to burn out at some point, then I'm going to say you're dealing with at least a human-level intelligence, or beyond: an intelligence that's able to comprehend a gigantic type of state that it can treat as a goal, right? Some sort of justice for all, or happiness of beings across the planet, or whatever; that's enormous. In between, there are all sorts of different, diverse intelligences.


Michael Levin (00:28:58):


They care about different sizes and types of goals. And when I say size, I mean in terms of space and time. So for example, let's say you've got a dog. It's got some ability to reach backwards into the past and some ability to plan into the future, certainly more than a bacterium, but you are never going to get your dog to actively care about what's going to happen three months from now, two towns away. It's just never going to happen. That animal's cognitive system is not able to use those kinds of large states of affairs as the target of its activity. If, on the other hand, you tell me that you have an intelligence that is able to build, and then rebuild when it's damaged, a particular structure that happens to have five fingers and be of a certain length, I'm going to say you're probably talking about a tissue-level collective intelligence of cells.


Michael Levin (00:29:55):


It has as a goal a particular target morphology that it can defend and try to remake, and it will spend lots of energy trying to get there. And again, there are all kinds of things that will be out of limits for that intelligence, but you can very easily nail down the kinds of things it cares about and the kinds of things it's able to work towards.
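
A minimal sketch of how the cognitive-horizon idea could be made concrete: score each agent by the spatial and temporal extent of the largest goal it can pursue. The agents, numbers, and the "goal volume" metric below are illustrative assumptions, not Levin's formalism:

```python
from dataclasses import dataclass

@dataclass
class CognitiveHorizon:
    """The largest goal a system can pursue, sized in space and time."""
    name: str
    spatial_extent_m: float   # how far away its goals can reach (meters)
    past_horizon_s: float     # how far back its goals draw on (seconds)
    future_horizon_s: float   # how far ahead it can plan (seconds)

    def goal_volume(self) -> float:
        # One crude way to compare agents: the "size" of the light cone.
        return self.spatial_extent_m * (self.past_horizon_s + self.future_horizon_s)

# Illustrative values only: a bacterium senses local sucrose over seconds,
# a dog plans over meters and hours, a human can care about century-scale,
# planet-scale states of affairs.
agents = [
    CognitiveHorizon("bacterium", 1e-5, 1.0, 1.0),
    CognitiveHorizon("dog", 1e3, 3.6e3, 3.6e3),
    CognitiveHorizon("human", 1e7, 3e9, 3e9),
]
for a in sorted(agents, key=CognitiveHorizon.goal_volume):
    print(f"{a.name:10s} goal volume ~ {a.goal_volume():.2e}")
```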


Oshan Jarow (00:30:14):


Okay. So we have goals covered as that invariant feature of any intelligent system, one that can be mapped spatiotemporally: vertically, based on how far back into the past it remembers and how far into the future it plans, and horizontally, based on how far away from the system its goals are concerned with, whether a centimeter or a mile. And these two dimensions make a literal cone, which you've mapped out elsewhere. So the last thing before embarking on our spectrum from small to large is defining intelligence. At a high level, how do you understand intelligence?


Michael Levin (00:30:49):


Yeah, what I would say about intelligence is that when you're talking about intelligence, you are talking about a degree of competency in reaching goals in some arbitrary space. And what does that mean? It might mean that you're just randomly stumbling around. It might mean that you're doing something like run-and-tumble, which is what bacteria do to navigate spaces. You might be actually pretty good at avoiding local minima, and if you see something in your way, you're able to walk around it. You might have a considerable degree of patience, or delayed gratification, where you can see sort of a beeline to your goal, but you know that if you wait a little bit, or go a slightly different path, you can actually reach a better goal.
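
A minimal sketch of the run-and-tumble strategy mentioned above, on a toy one-dimensional nutrient gradient; the step size and gradient are invented for illustration:

```python
import random

def nutrient(x: float) -> float:
    # Toy gradient: more nutrient as x grows.
    return x

def run_and_tumble(steps: int = 1000) -> float:
    """Climb a gradient with no map: keep going while things improve,
    tumble (pick a random direction) when they get worse."""
    x, direction = 0.0, 1.0
    last = nutrient(x)
    for _ in range(steps):
        x += 0.1 * direction                        # "run"
        now = nutrient(x)
        if now < last:                              # worse than before:
            direction = random.choice((-1.0, 1.0))  # "tumble"
        last = now
    return x

print(f"final position after run-and-tumble: {run_and_tumble():.1f}")
```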


Michael Levin (00:31:31):


So you might have all kinds of different degrees of competency. And the one thing I really want to say about this that's important is that, much like most other things I'm talking about, intelligence is very much observer-dependent, which means that when you make a claim about the intelligence of some system, you are equally making a claim about your own intelligence. What you're saying is: here is how competent I've been in figuring out what space I think this system is working in, and how clever I think it is. And much like trying to give an IQ test to somebody who's smarter than you, that's a very dangerous proposition, because it's entirely possible that you're looking at something and saying, well, to me this looks like a pretty good paperweight, and it might be a human brain. And you might be right that in some space, pretty much all it does is optimize a gravitational pull and hold your papers down.


Michael Levin (00:32:28):


But in a different space, it would be this incredibly intelligent thing that you're not even recognizing. I think people who work in animal behavior know this very well: you have to be really humble about the fact that you're bringing your own perspective, and that your models of the intelligence of something are only as good as your own ability to recognize these things. And by the way, just think about where that comes from. For humans, all of our sense organs are pointed outwards, so we're very good at detecting intelligence and agency in medium-sized, medium-speed things wandering around in three-dimensional space. We're really not very good at recognizing other types of intelligence. Imagine if, from the time you were born, you had an inner, sort of biofeedback kind of sense, where you could tell what your blood chemistry was like and what your pancreas and your liver were doing at any given moment.


Michael Levin (00:33:20):


I think in that scenario, you would have no problem recognizing these things as really cool intelligences that operate in physiological space and can solve all kinds of problems and get around obstacles or whatever. But we don't see physiological space; most of the time we don't think about it that way. So we have to realize that whatever we're saying about the intelligence of anything else, these are just models. These are models we make to help us relate to systems, and these models are up for grabs; they're up for empirical comparison with each other, who's got the better model, but they don't reflect the true reality. And by the way, each system is free to make its own model of itself, which may or may not agree with what other observers think about it.


Oshan Jarow (00:34:06):


Yeah, it's so interesting to think about this in terms of spatiotemporal mismatches. It's one of the reasons that if I watch a video of a forest at a hundred times speed, the thing looks alive, whereas if I look at it from my default state, I don't see that. That's such an interesting variable to


Michael Levin (00:34:22):


Keep in mind. That's exactly right. And that crops up everywhere. For example, in basic developmental biology: if you were shrunk down to the size of a single cell, and you were placed in the middle of a frog embryo or a mouse embryo, or any other kind of embryo, and you looked around, you'd see all the noise and all the chaos, all the cells running around, some of them dying, and it's never the same way twice, really, and they're missing their targets and all this stuff is happening. If you didn't already know what embryogenesis was, and the fact that it creates a correct embryo every single time, I don't think you would be able to guess in a million years what was going to happen.


Michael Levin (00:34:57):


Right. Yeah, I don't think you ever would. You would just see chaos. I don't think you would ever be able to say, oh yeah, this thing's going to make the same thing every time, and by the way, I think that thing will be a fish. You would never be able to say that <laugh>. Perspective is everything. And I think this is what leads us to be grossly misled about the agency of all kinds of things around us, including, as you said earlier, massive things like the evolutionary process itself, or large-scale social structures…


Oshan Jarow (00:35:23):


So with this perspective in mind, let's go down to the bottom of the continuum and look at cells, and this idea in particular of scaling selfhood. It's one of these ideas of yours that I think is so fascinating and has all kinds of implications: the idea of how, cellularly speaking, selfhood scales up. You've written that the self is a computational boundary, that cells are essentially little selves, and that evolution has found a way to combine them into bigger and bigger selves. And in that essay with Dennett, you tried, I think, to convey the gravity of this process. You two wrote: the key dynamic that evolution discovered is a special kind of communication allowing privileged access of agents to the same information pool, which in turn made it possible to scale selves; this kickstarted the continuum of increasing agency.


Oshan Jarow (00:36:12):


So I want to open up this view of what the key dynamic is: how selves scale. And as I know you know, in the philosophy of mind, specifically within panpsychism today, there's this big problem known as the combination problem. For listeners: loosely, panpsychism is the position that rejects Cartesian dualism, the idea that there are different substances for matter and mind. Instead, mind is fundamental to the natural world, and everything has some capacity or degree of consciousness. And the formal problem is the question of how all these different instances of consciousness in the world combine into larger and larger instances. Your work with cells here, I think, has a lot to say about this. So can you walk us through the idea of how little agents like cells combine to become larger agents like organs? Yeah.


Michael Levin (00:36:55):


Well, I'll tell you a story, but I want to preface it by saying two things. The story I'm going to tell is quite specific in some details, and I'm in no way claiming that this is the only way it happens. I'm saying this is one way that evolution has found to do this here on earth. There are probably many other ways for it to happen, and we don't know those ways, but I think this particular scaling mechanism is very instructive, so I think we can learn a lot there, and maybe that'll help us find alternatives elsewhere, either through engineering or exobiology or something like that. Hmm. I'll also say that I will start with cells because that's the story I know how to tell, and that's the story we've been following up experimentally. But I actually think that cells are in no way the basement of this whole thing. In fact, we and others have looked at memory and learning in things like chemical networks.

Michael Levin (00:37:50):


Simple gene regulatory networks of, let's say, half a dozen to a dozen elements already have six different kinds of learning that they can do. And if you've looked at any of the work on active matter, like some of the work on chemical droplets that do mazes and things like this, I think it really goes all the way down. People who are much more sophisticated about the physics of it, like Chris Fields and Karl Friston and so on, can tell stories about basic pieces of physics that are nowhere near being cells. And I think from that perspective, intelligence and a kind of nano goal-directedness start long before you get cells.


Michael Levin (00:38:32):


I think what biology is great at is scaling it up, so that you combine tiny pieces of physics, which basically only know how to do least-action kinds of principles and active inference, and that's probably the basement of cognition. What biology is great at is scaling it up so that these things now care about bigger and bigger goals, as opposed to, say, bigger and bigger rocks, whose goals don't really scale; a big rock is kind of the same as its pieces. So, having said all that, my favorite story starts with cells. And the story I can tell is simply this: by the time you have a functional cell, the one thing you know it has to have is some kind of very simple homeostatic cycle.


Michael Levin (00:39:22):


Otherwise you wouldn't have a living cell. So it has this loop with three main components. Let's think about something very simple, like metabolism, or holding pH constant. First, the cell has the ability to measure something it cares about, so it has some sort of sensory apparatus; that might be a receptor or some other thing like that. Second, it has some sort of internal representation of what the correct measurement might be, or a range of measurements. And third, it's able to take those measurements, and when they are out of the appropriate range, the cell can take action. This is a basic homeostatic loop. It's the same thing your thermostat does in keeping your house at a certain temperature.
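
A minimal sketch of the three-part homeostatic loop just described (sense, compare to a stored set point, act); the names and numbers are illustrative:

```python
def homeostat(measure, act, set_point: float, tolerance: float, steps: int):
    """Basic homeostatic loop: sense a value, compare it to a stored
    set point, and expend effort only when it drifts out of range."""
    for _ in range(steps):
        value = measure()             # 1. sensory apparatus
        error = value - set_point     # 2. internal set point
        if abs(error) > tolerance:    # 3. corrective action
            act(-error)

# Toy usage: pull a "pH" reading back toward 7.0.
state = {"ph": 6.2}
homeostat(
    measure=lambda: state["ph"],
    act=lambda push: state.__setitem__("ph", state["ph"] + 0.5 * push),
    set_point=7.0,
    tolerance=0.1,
    steps=50,
)
print(f"pH settles near {state['ph']:.2f}")
```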


Michael Levin (00:40:05):


And that is what a tiny little goal looks like. But the reality, I think, is that once you have these very basic homeostatic loops that give you the ability to expend energy towards specific outcomes, then other very interesting things happen. I'll tell the story of the scaling of the goal first, and then we can talk about the scaling of the mind that goes with it. One thing you might have in these cells is something like this: if you're driven to do this homeostatic cycle, one thing you might want to do is try to predict what happens next.


Michael Levin (00:40:51):


So instead of being purely reactive, you might have a little bit of a mechanism for anticipation or learning. In other words, from a pattern of what happens (say, circadian rhythms in nature: the sun goes up and down, the temperature changes, various things happen), you might want to start taking action before you actually need to, to be really optimized. And as soon as you do that, you have an interesting problem: you want to predict your environment, but your environment is really complex; all kinds of things happen. What's the least surprising thing in your environment? Well, that would be a copy of yourself. So one thing you might do, as a cell that wants to make good predictive models of its environment, is surround yourself with copies of yourself, because of what that does for the cells in the middle. This is something Chris Fields and I wrote about; we called it the imperial model of multicellularity.


Michael Levin (00:41:48):


The cells on the outside are still sort of frontline infantry; they're still facing this unpredictable world. But the cells on the inside have a much nicer, more predictable physiological milieu, because their self-models match the behavior of the thing they're next to, which is basically a copy of themselves. So you can start a kind of multicellularity that way. And then something else interesting happens; this is how these goals scale. So now we know why you're sitting next to other cells. If you're sitting next to another cell and you simply send a signal to it, let's say some diffusible molecule off your surface or something, that other cell can sense it with a receptor. Then it knows that the signal came from outside.


Michael Levin (00:42:31):


It can take action based on it, it can ignore it, it might be lying, who knows; the cell can decide how it's going to act with respect to that signal. And that helps it keep its individuality: when you have this sharp division between sender and receiver of information, it helps each one keep their individuality. Imagine now that instead of that, what you invent, or discover through evolution, is something called a gap junction. A gap junction is a little protein hatch that sits on the surface of cells; it's like a little submarine docking hatch that directly connects to another gap junction on the surface of another cell. The magic there is that it allows signals to go directly from the inside of one cell to the inside of another cell.


Michael Levin (00:43:17):


So when that happens, let's say something happens to cell A: there's, say, a calcium flux that serves as the memory engram of that event, and it triggers all kinds of internal processes. If you've got a gap junction open to cell B, that calcium signal is going to propagate into cell B, and cell B now receives this memory of something happening. Calcium doesn't have a metadata label on it that says where it came from; it's just calcium. So as long as you're both interpreting information the same way, that second cell has no idea whether this is real information about something that happened to it, or whether it came from outside. Mm-hmm. So you lose the ability to tell, and what that does is erase the boundary between you and me. If we are sharing information, if your memories are my memories and vice versa, and we can't tell who had what memories, then that calcium flux, or whatever that memory signal is, is a false memory.


Michael Levin (00:44:09):


As far as cell B is concerned, that event never happened to cell B. However, it's a true, veridical memory for the new sort of hyper-agent that consists of both A and B. And so that ability to share information helps erase individuality; it's kind of like a mind meld. Of course, there are a lot of nuances I'm glossing over, because not everything propagates through these gap junctions and so on, but it becomes harder and harder to tell who is who. So that's the first thing that happens: that erasure of ownership information binds cells into larger units. The other thing that does that is stress. If I'm under stress, I have some stress molecules to indicate that things are out of the correct homeostatic range, and that I need to be going through this kind of response loop to try to make things better.


Michael Levin (00:45:07):


One thing I might do is propagate those stress molecules to my neighbors. Now, why would I want to spread systemic stress? Because then my problems become your problems. Stress is a kind of cognitive glue. It means that you're going to be motivated to help me, not because you're altruistic and care what happens to me; you just want me to stop stressing you out. And so, by propagating stress, look at what it means: if multiple cells are connected in this way, by informational propagation and by stress, it means that when they take measurements, they take measurements of much larger things. A tissue takes measurements of spatially very large things.
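
A minimal toy sketch of the two binding mechanisms just described, gap-junction memory sharing without provenance and stress propagation; the class and numbers are invented for illustration:

```python
class Cell:
    def __init__(self, name: str):
        self.name = name
        self.memories: list[str] = []   # no "who did this happen to?" tag
        self.stress = 0.0
        self.neighbors: list["Cell"] = []

    def experience(self, event: str, stress: float) -> None:
        self.remember(event)
        self.stress += stress
        # Stress propagates: my problems become my neighbors' problems,
        # so they are motivated to help regardless of altruism.
        for n in self.neighbors:
            n.stress += stress / 2

    def remember(self, event: str) -> None:
        if event in self.memories:
            return
        self.memories.append(event)
        # Gap junction: the signal crosses into neighbors with no metadata,
        # so for a neighbor it is indistinguishable from its own memory.
        for n in self.neighbors:
            n.remember(event)

a, b = Cell("A"), Cell("B")
a.neighbors, b.neighbors = [b], [a]
a.experience("calcium flux: injury at the edge", stress=1.0)
print(b.memories)   # B "remembers" an event that never happened to B
print(b.stress)     # and shares A's stress, gluing the pair into one agent
```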


Michael Levin (00:45:48):


Also, it takes time for information to cross the tissue, so that means you are now smeared out in time as well as space. You're not just bigger spatially, you're bigger temporally; your now moment is kind of thicker. It also means your memory capacity is bigger: whereas before you could only store very simple things as your set points, now you're a big network, and networks can store all kinds of information, so your computational capacity is bigger. So remember the three parts of our homeostatic loop: your measurement, your memory of your set point, and your action. Your measurements are bigger, your memory is bigger, and the kinds of actions you can take are bigger. Instead of just doing little single-cell-level things, you're a whole tissue.


Michael Levin (00:46:29):


Now you might bend, you might migrate in a particular way, you might undergo some sort of deformation. So your actions are bigger; everything is getting scaled. And because of the scaling of the stress and the scaling of this goal-directedness, the kinds of things you care about now become much, much larger. You are now capable, as a single individual, of caring about larger states. For example: the curvature of my tissue now has to be in a certain range. Single cells don't know about the curvature of your tissues; they don't know how many fingers you have. But a large collection of cells does. So you can see what's happening here. This is the scaling up, specifically, of goal-


Michael Levin (00:47:13):


Directedness, it's the ability to care and act toward progressively larger and larger things. And that's the process. I mean, look, we're all collective intelligences, right? You know, people kind of act as if, well, an Anhill is a collective intelligence, but me I'm, I'm a centralized, you know, being no we're all bags of cells, right? We're all bags of neurons and other kinds of cells. And what evolution does is it is it apparently uses the same kinds of dynamics and pivots them. Whereas before you might have been taking care of metabolic problems and transcriptional problems, and eventually as a multicellular creature, you might have been taking care of anatomical morphous space problems, meaning navigating in the space of anatomical configurations to put your body in, in whatever structure is, is optimal and eventually brains. And and muscles came along and now you can do the same trick in three dimensional space.


Michael Levin (00:47:59):


And now you can have goals about moving around, goals about being a rat in a maze or a human with long-term goals. So that's kind of a whirlwind story about the scaling of all this. And then there's another piece to this, which is: if you are metabolically limited, meaning energy is not unlimited (mostly our AIs now have as much energy as they want; they're plugged into an electrical grid and get as much energy as they need), but real cells aren't like that. Real cells have to worry about where their energy is going to come from and about their metabolism, and they have a budget, an energy budget. So let's say you're a cell, or a collection of cells, and you have various factors.


Michael Levin (00:48:39):


There are things you can do; you can think of it as the output layer of a neural network or something like that. There are actions you can take. Because your metabolic budget is limited, what you want to do is very rapidly figure out which are the most causally effective actions. You don't want to be sending messages to things that don't matter. You don't want to be trying to twiddle knobs in the environment that don't do anything. You want to identify: where are the pressure points in my environment? Where are the things that, when they change, lots of other stuff changes? What are the triggers of important change? Because that's where you want to put all your energy. You don't have the energy to just signal in every possible way to everything that's out there.


Michael Levin (00:49:24):


So that means you have to have some mechanism for gauging the causal effectiveness of the things around you; and by the way, we don't know yet how this works, this is under heavy investigation right now. And if you're good at that, if you're good at figuring out where the causal agents are in your environment, you will eventually turn that same lens onto yourself. And I think this is how we all end up with stories of ourselves as the central protagonist. You know, Dan Dennett calls this the center of narrative gravity. We all end up with these self-models as the center of causal influence, of free will, of all these kinds of things, because it fundamentally comes from metabolically challenged and limited cells trying to ascertain which things in their environment are actually worth spending energy to try to manipulate. And so all of this stuff that reaches a high level in humans and other animals, recognizing agency around you and then telling stories about yourself, confabulation about your own reasons for why you did things and how causally powerful you are: I think all of it can be traced back to this scaling story of single cells trying to figure out how to get the most bang for their buck, given a limited metabolic expenditure.
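
To make the energy-budget idea concrete, here is a minimal toy sketch (not from the conversation, and not Levin's model): an agent with a fixed energy budget spends a little of it probing which of its actions actually move the environment, then concentrates the rest on the most causally effective "knob". All names and numbers are invented for illustration.

```python
import random

# Hypothetical illustration: an agent with a limited energy budget
# learns which of its actions are causally effective, then spends
# its remaining energy only on the high-leverage "pressure points".

# Unknown to the agent: how strongly each action actually changes
# the environment (0 means the knob is disconnected).
TRUE_EFFECT = {"knob_a": 0.0, "knob_b": 0.1, "knob_c": 2.0}

def try_action(name):
    """One probe: returns the observed change in the environment."""
    noise = random.gauss(0.0, 0.05)
    return TRUE_EFFECT[name] + noise

budget = 30                      # total energy units available
estimates = {a: 0.0 for a in TRUE_EFFECT}

# Phase 1: spend a little energy probing every knob a few times.
for action in estimates:
    samples = [try_action(action) for _ in range(3)]
    estimates[action] = sum(samples) / len(samples)
    budget -= 3                  # each probe costs one unit

# Phase 2: spend everything that's left on the most effective knob.
best = max(estimates, key=lambda a: abs(estimates[a]))
print(f"estimated effects: {estimates}")
print(f"spending remaining {budget} units on '{best}'")
```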


Oshan Jarow (00:50:46):


Mm, wow. That's fascinating. So, one thing I want to point out within all that, which I think is so interesting and which you've discussed elsewhere, is that when the cells are joining together to form these larger systems and work towards these larger goals, it's not that the cells are becoming more altruistic or deciding to become team players. They all remain perfectly self-interested. It's just that since whatever happens to one cell gets transferred to the others, and they can't distinguish from whom it came or to whom it happened, it becomes in every cell's self-interest to act in a way that aligns with the self-interest of all the other cells in the network. I think you and Dennett even called this a kind of karma, if I remember correctly. Is that right? It kind of aligns the self-interests.


Michael Levin (00:51:24):


Yeah. My analogy to karma was simply this: if we are connected by gap junctions, there's no possible way for me to so-called defect, in game theory terms, and do something nasty to you, because it's going to come right back to me immediately. If I poison you, guess what's going to happen: minutes from now, I'm getting poisoned. This is the most efficient karma possible; this is karma that works great. Once we're under those kinds of constraints, there's absolutely no way to defect from each other. And I think something very interesting happens to the calculus of game theory and selfishness and so on when you don't keep the number of agents fixed.


Michael Levin (00:52:10):


Right. To my knowledge, all of the existing work keeps it fixed; and I'm actually teaming up with some people who work on this in economics and so on, because I think this can be developed rigorously, as rigorous math. But think about this. When we make, for example, prisoner's dilemma types of models, as far as who's going to cooperate with whom, who's going to defect, and so on, what's always fixed is the number of players. There might be two players; there might be more in a specialized prisoner's dilemma; there might be lots of players trying to do selfish, rational economics to get the upper hand. Something very interesting happens when the number of players isn't fixed. We're now doing simulations on this, but we actually did a home project last year, my son and I.


Michael Levin (00:52:58):


And the outcome of it is online as a preprint. There's this slime mold called Physarum, and Physarum is interesting for many reasons. It's a single cell; it's got many nuclei, but it's a single-cell kind of thing. And it spreads out; it almost looks like lightning, this fractal pattern. It spreads out and it knows how to run around its environment, look for food, and make certain decisions. So here's the experiment that we did. Imagine you've got a Physarum of a certain size, a kind of large colony, and you put a piece of oat next to it; oats are what it likes to eat. Part of the Physarum starts moving towards the oat, and then you take a razor blade and you basically cut off the part that's moving towards it.


Michael Levin (00:53:40):


You cut it off from the rest of the body. Now let's do the calculations of where the selfish actions lie. From the perspective of the little piece, what should it do? Well, what it should do is go eat the oat and not have to dilute the nutrients across this gigantic body that's sitting there; not share. That's the obvious thing to do: get all the nutrients instead of having to share them with this big body. But if in fact it first merges back, then this whole question completely goes away. You don't have the ability to even think about it. It doesn't make any sense; it's not defined to try to take this action of keeping the nutrients for yourself and not leaving any for the rest of the body, because, in an important sense, you don't exist anymore.


Michael Levin (00:54:28):


So the number of agents is either two or one, and the calculus of what you ought to be doing depends very heavily on how many agents there are. And changing the number of agents, meaning splitting, merging, that kind of stuff, is an important action you can take, as well as actions in the regular environment. As it happens, the fragment tends to rejoin first, which is kind of interesting, and who knows what the evolutionary pressures on that were. But the idea is that in order to even be able to have selfish 'thoughts', and I use 'thoughts' in quotes, because none of this, of course, is a second-order, self-aware process the way it is in humans, in order to be able to consider defection, you need to have a self that is differentiated enough from the rest that this kind of calculus even makes sense. And I think that's what happens in cases of cancer: the cells are no more selfish than any other cell; it's just that the self has gotten really small, and they're back to kind of an amoeba lifestyle. So I think that's the kind of process that really ratchets up cooperation and intelligence.
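
The point about the number of agents not being fixed can be illustrated with a toy payoff calculation (my sketch, not the actual Physarum preprint): with two separate players, defection dominates, but once the players merge there is one agent, "defect" is no longer a defined move, and only the joint payoff remains.

```python
# Toy illustration: a standard two-player prisoner's dilemma, plus a
# "merge" move that changes the number of agents from two to one.

# Classic payoffs: (my_score, your_score) for (my_move, your_move).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """With two agents, defection dominates regardless of the other."""
    return max(["cooperate", "defect"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

print(best_response("cooperate"))   # -> defect
print(best_response("defect"))      # -> defect

def merged_payoff(joint_move):
    """After merging there is one agent; 'defecting against yourself'
    is undefined, and only the total payoff of a joint move remains."""
    mine, yours = PAYOFFS[(joint_move, joint_move)]
    return mine + yours

print(merged_payoff("cooperate"))   # -> 6: the only calculus left
```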


Oshan Jarow (00:55:35):


Yeah, that's so interesting. And there's related work on this too, that is complementary, parallel to the direction you just described. I'm thinking of the Qualia Research Institute. They've been trying to formalize this idea of open individualism, the idea being that the boundaries of identity or selfhood really do change the calculus of game theory. You're describing the way that works at the cellular level, but another direction this research has taken is giving people MDMA or certain kinds of psychedelics, which again act on the felt boundaries of selfhood, and then running them through these prisoner's-dilemma-type situations. And they found that people who are given a dose of MDMA actually act in ways that are consistent with a higher, more collective sense of selfhood relative to the individual one. You get more cooperation, such that open individualism, when you are less confined in your selfhood, actually lends itself to more cooperation; it changes the dynamics and the calculus. It's super interesting.


Michael Levin (00:56:30):


It's very interesting. And, you know, I'm not shocked by it, because these kinds of neurotransmitter dynamics are way older than neurons and brains. This is the kind of stuff that cells would have been using very early on to, in fact, figure out where their boundaries are. One of the things that is not obvious, even though we sometimes treat it as obvious, is where the boundaries of any particular agent are. Cells have to figure it out. Even inside the body, for any given cell, the neighboring cell is its external environment. The cell doesn't know where to put that boundary. To us, staring at each other, we can say: okay, at the border of the skin, there it is. But to the biological system itself, the question of where you end and where the environment begins is completely not obvious. It has to be figured out on the fly. In order to be able to predict the environment, to have good homeostasis, cells have to make models of themselves, of the environment, and of the boundary between them, and there's lots of room for involvement of neurotransmitters from the very first stages of that. So, yeah, I'm not shocked that that kind of thing has carried over into human psychology.


Oshan Jarow (00:57:38):


Yeah. Okay. So by connecting the internal milieu of cells, they join into these larger cooperative systems with aligned self-interests; this loop cascades over and over again, and we get a scaling up of goals, competency, intelligence. You've also pointed out how this driving upwards is not something that evolution is micromanaging and controlling. Instead, you've written that evolution, it seems, doesn't come up with answers so much as generate flexible problem-solving agents that can rise to new challenges and figure things out on their own. And so, going from single cells to multicellular networks, like neural networks and organs, I think this is a good scale at which to tease out how evolution is doing this and the strategies it's using. You've written about at least two of them: modularity and pattern completion. I wanted to touch on each of these in turn. So let's start with modularity: what is modularity, and how is it helping enable this kind of upscaling of intelligence and goals?


Michael Levin (00:58:35):


Yeah, so let's think about it. I call this the multi-scale competency architecture, and we all know that swarms are made of animals, which are made of organs, which are made of tissues, which are made of cells, and so on. That's the structural modularity. What's more important than that, I think, for our purposes here, is teleological modularity, meaning that you're not just a collection of modules: each of those modules is a local goal-directed agent which is trying to solve problems and optimize its own experience in some particular space. And I'll give you lots of examples. By the way, a minute ago you said that this whole thing leads to perfect cooperation. I want to nuance that by saying it's cooperation, but not perfect cooperation.


Michael Levin (00:59:26):


So, for example, one of the things we've studied is the competition of organs inside the body. We wrote a review on this; there's lots of data showing that cells and tissues inside the body are actually competing for informational and metabolic resources, even though they have identical genetics. It's very interesting. And so I think, in the end, the intelligent, robust kinds of things that we see, and their failure modes, things like dissociative identity disorder and all kinds of other stuff, the failure modes as well as the successes, are due to this dynamic of cooperation and competition among your components, within and across levels in the body. But here's the important thing: it isn't just that evolution uses this as a nice feature to be more powerful.


Michael Levin (01:00:15):


There's no avoiding it, because unlike engineers of old, you're not working with passive materials. If you're working with copper or wood or something like that, you're working with a passive material, and you have to take care of everything it's ever going to do. Basically, it has some structural properties, and that's really all you can count on. You can do a little better with active materials or computational materials, and you can exploit some of that. But when you're in biology, you are building with agential materials; you're building with cells. And this is really important: people try to make a huge distinction between designed and evolved. I think the more interesting thing is what's the same, what's the symmetry here, between a bioengineer and evolution itself.


Michael Levin (01:01:05):


It doesn't matter. In both cases, you are dealing with an agential material, a material with agendas. The image I have in my mind is trying to build a tower out of bricks versus out of cats. You need completely different strategies, because the bricks will sit where you put them, whereas when you're building with cells or tissues, you don't search the space of all possible micro-configurations, because those aren't up to you; the cells are already doing stuff. They have their own agendas; they will do all kinds of active behaviors. What you should be searching, and what evolution searches, and what future bioengineers will search, is the space of signals, inducements, different kinds of rewards and punishments, that will cause the material to do what it needs to do.
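
One way to see the difference between micromanaging a passive material and steering an agential one is a toy sketch (entirely invented dynamics, not a model from Levin's lab): the cells below already run their own local rule, and the "engineer" only searches a tiny space of inducement signals rather than the huge space of cell-level configurations.

```python
# Illustrative sketch with invented dynamics: cells already follow
# their own local rule; the engineer searches only the signal space.

def cells_step(state, signal):
    """Each cell moves toward its neighbors' average (its own agenda),
    plus a global bias supplied by the engineer's signal."""
    avg = sum(state) / len(state)
    return [x + 0.5 * (avg - x) + signal for x in state]

def run(signal, steps=20):
    state = [0.0, 1.0, 4.0, 9.0]        # arbitrary starting pattern
    for _ in range(steps):
        state = cells_step(state, signal)
    return state

target_mean = 10.0
# Search the small space of signals, not the space of configurations;
# the material's own behavior does the rest of the work.
best_signal = min([s / 10 for s in range(0, 11)],
                  key=lambda s: abs(sum(run(s)) / 4 - target_mean))
print(f"best inducement signal: {best_signal}")
```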


Michael Levin (01:01:58):


So if you can rely on the fact that your eye is perfectly capable of getting where it needs to go from incorrect positions, you don't have to worry about micromanaging that whole process. What you might do instead is tweak the thing that tells the eye where to go: the mechanism that encodes the target morphology. It's a completely different way to engineer. And by the way, it changes not only how you engineer, because you are now not micromanaging everything and solving it all yourself; you're a collaborator with your material. This, by the way, is going to be an interesting development in the future for IP, for patenting and intellectual property.


Michael Levin (01:02:44):


Because in the past, as an inventor, you had a recipe where everything was up to you: I did this and this and this, and that is why my thing worked, and there's my patent. In the future, it's going to be: well, I rewarded the cells for this and this, and then they did this amazing thing. And that's not the same thing as having a recipe for passive materials. You're literally collaborating with your material. I can tell you some interesting examples of that with respect to Xenobots and other things we've done. But that's what happens in evolution: that modularity is the ability of the parts to get their job done even when circumstances are changing.


Oshan Jarow (01:03:26):


Yeah, man, that's so interesting, and I can't help but think, and this might be a stretch, but I'm curious about it: this whole time I was thinking of entrepreneurship and innovation. You've spoken about modularity in terms of the stakes for testing out mutations being kept reasonably low. And so I think about something like a safety net. One way you can speak about the role of a safety net in society is that it keeps the stakes low for trying out entrepreneurial experimentation, innovation, whatever you want to call it. And you can talk about unemployment insurance, guaranteed income, public goods, whatever it is, but it's this idea of reducing the stakes for experimentation and allowing mutations to be felt out and explored, applied at an economic scale. Do you think there's a way to draw a parallel there?


Michael Levin (01:04:12):


There may be, and once we get to that part of the talk, we can discuss which parts of this could carry over.


Oshan Jarow (01:04:27):


Yeah. All right. Let's move up a level of scale again. We've been talking about circuits and systems within a single human organism; let's move to systems where the components are not cells but humans: larger-than-single-human systems, maybe an economy, a society, a community, a media swarm, and so on. And the first question I have here is how far up we can take your work. Because, for example, a theory of gap junctions as the kind of primary mechanism applies, I assume, with the human organism as the upper limit. But I don't want to assume, because it's not clear to me that there are gap junctions that can connect humans and their internal milieus in the same way that they connect cells. It's kind of a notorious problem that we're black boxes to each other: I can't perfectly share my inside with you. This is actually a source of a lot of anxiety. But maybe there are parallel mechanisms; I don't know. So how far up do you see this kind of mind-melding being the mechanism that unites parts into wholes?


Michael Levin (01:05:16):


Yeah, really interesting questions. And again, I'll just preface this all by saying that these are just kind of my thoughts on this; it's not like I have any actual research on it. But I will say, first of all, that the 'black boxes to each other' point is a very interesting problem, for a number of reasons. First of all, you could, in fact, gap-junction us together, and people have done that before. You can do parabiosis, and in particular you can do brain fusions at the level of tissue. And so think about this; this is a thought experiment I did, because I'm really not into binary categories.


Michael Levin (01:06:06):


And I thought, well, surely the binary category of first person versus third person, that sounds like a really binary category. And no, you can make a continuum there too. And it goes like this. I'm sitting here studying your brain, and the way I do it is I've got some electrodes sticking in your brain, and I'm looking at the data; the data's processed by a computer; I'm looking at the screen. This is science done in third person. Whatever you're experiencing, I don't know what you're actually experiencing. What I do know is that there's activation in particular areas of the brain, and I can say some things about that, but I don't actually know what it's like to have that experience. So I'm looking at this and thinking: there's a lot of electronics here that may not need to be here.


Michael Levin (01:06:47):


Why don't I take some of these electrodes that are coming out of your head, and instead of plugging them into my computer and then having to look at a monitor, I'll just stick them into my own brain? Right? I mean, why not? It's biofeedback. People have learned to have magnetic senses, and you can instrumentalize almost anything. You can learn to handle things like the electric lollipop for the blind, which is a device that takes images from a camera and turns them into little electric zaps on your tongue, and people learn to see that way and can walk around. It's a device for vision loss.


Michael Levin (01:07:24):


So yeah, instead of staring at the screen, I don't need to pass all this through a computer and my retina; I'm just going to patch your brain directly into my brain. And then I think: well, this wire interface is probably losing a lot, so we're just going to directly fuse our brains together. And you could say that mammalian brains don't fuse together, but that's just a detail; that's just because right now they're not that regenerative. People have done it with animals that do this quite nicely, for example axolotl salamanders. You can fuse brains; you can fuse almost anything you want. So at that point, you're progressively going from a purely third-person perspective.


Michael Levin (01:08:08):


By the time we've fused our brains, two interesting things happen. First, this is no longer a third-person perspective. To whatever extent you feel what's going on in your brain, I can now feel what's going on in your brain. And I'm not saying we have any idea what that is, but whatever it is that allows you to have conscious states with respect to the states of your brain, I now share those. So sharing mind states is possible. However, there's a price to pay, which is that neither you nor I exist anymore. What exists now is a new creature that used to be part you, part I, but that's gone now. There's a new being, because, for the same reason as in the gap junction story, we no longer have the individuality that allows us to say there's me and there's you.


Michael Levin (01:08:55):


And there are human examples of every part of this story: conjoined twins. There are conjoined twins that share different parts of the body, different parts of the brain; there are twins that share brain regions. All of this is completely biological reality. So you can, in fact, do this. And it leads to an interesting kind of phenomenon, which is that if you do third-person science, you can learn a lot about neuroscience, about cognition, about behavior, all those kinds of things. But if you want to learn about actual consciousness, and I know there are a lot of people who supposedly work on consciousness, I think the only way to really work on consciousness is to become part of the system.


Michael Levin (01:09:43):


And this is something that, oddly enough, the old alchemists talked about hundreds of years ago. The difference between alchemy and chemistry was supposed to be that chemistry you can do from a third-person perspective: you don't change when you're doing chemistry; the material changes, but you're still you, and you take measurements, and it's public and objective and observable. The alternative is that you're studying something that is only really understandable from the first-person perspective, and the only way you're going to do that is to change. You're going to become part of something; you're going to open up boundaries, or fragment, or do something else, and then you will find out what it's like. That's the answer to your question about what it feels like to be X, Y, Z. But you're no longer the same person; you can't be.


Michael Levin (01:10:25):


Right. So you cannot remain the same. And I think that's particularly interesting, and it also links to another issue. I don't talk much about consciousness at all; I don't have that many interesting things to say about it. But one thing I do think is interesting, which I don't hear people talk about much, is this: let's say we reached a point where somebody had a correct theory of consciousness. This goes back to your question of whether you can really feel what another being is feeling. If we had a correct theory of consciousness, well, what do correct theories do? They make predictions about specific cases.


Michael Levin (01:11:09):


So now I ask you: okay, I've made this brain in my laboratory. It's got three hemispheres, and it's got some cells from different kinds of animals; who knows what it is. I've made this thing. What is it like to be that creature? Never mind what the answer is: what is the format in which your theory makes predictions? We know the format of every other theory in science: it's numbers. It gives you numbers for various things. In what format does a good theory of consciousness give you answers? To me, numbers aren't going to do it; it can't be numbers. Well, what can it be? I suppose it could be poetry, but I think in the end, all that poetry is in that context is a device to mimic what you really want to do, which is fuse your brain to it.


Michael Levin (01:12:01):


Right. If you really want to know what it's like to be something, the closest you're going to get is to become one with it: basically, to fuse your brain to it in some way. And then the new collective will have some mental states that are sort of similar to what you were looking for. But that's the problem: third-person theories give objective answers in a format that just isn't appropriate for what you're trying to do.


Oshan Jarow (01:12:28):


Yeah, that's interesting. I was just queuing up a paper, I haven't read it yet, but it's basically trying to set up a computational basis for phenomenology, and I was really curious as to whether that's possible or not. Because, you know, rendering phenomenology computationally: what is the medium you can do it in? Do you lose too much in the process? Man, that's a rich area. But all right. So one thing we haven't really focused on in your work is bioelectric networks as kind of the software of intelligent systems, in part because you've explained this elsewhere at length; I'll have links to other episodes you've done in the show notes if anyone wants to listen. But I was wondering whether bioelectric networks provide a similar mechanism to gap-junction unification, but at larger scales.


Oshan Jarow (01:13:13):


And the first question I have is basic; I just don't know this: can cells be part of a bioelectric network without having gap junctions between them, and is that sufficient to achieve that kind of system-wide integration? Is bioelectricity sufficient for scaling cells, or do we need those bridges? Because what I'm really getting at is: if we don't need the hardware connection of gap junctions, or literally fusing our brains as you just described, and bioelectricity is sufficient, would bioelectricity be a more useful metaphor for thinking through these intelligence-and-agency questions at higher scales, when we're talking about humans as the component parts?


Michael Levin (01:13:51):


Yeah, let's see. Gap junctions are a really critical part of the functioning of electrical networks. I'm not going to say it's impossible to have electrical networks without gap junctions, but I think much of the computational power of electrical networks arises because of these gap-junctional connections. It's really critical very early on in cells, but also in brains. One of the things that general anesthetics usually do is decouple gap junctions. So it's very interesting: you walk into surgery, and there you are as a centralized intelligence, and you say to the doctor, boy, I really hope this goes well, I've got some plans for afterwards. And then here comes the halothane, and the gap junctions decouple, and then you're gone.


Michael Levin (01:14:43):


The cells are still there; all the cells are still there, but they're not coupled, and you, as that centralized nexus, are gone for a couple of hours while they do whatever they have to do. The most amazing part of this, actually, is that it ever comes back to normal: that when the gas is gone and the gap junctions get to re-form, you ever come back to the same state. That's extremely interesting, and it doesn't happen right away. If you watch some of these YouTube videos of people coming out of general anesthesia, there's a good chunk of time when people say they're pirates and gangsters and all kinds of crazy stuff, as the brain is trying to settle back into the correct attractors.


Michael Levin (01:15:22):


But I think these gap junctions are really important for proper bioelectrical signaling. Electricity, again, is not the only thing that can do this; I'm sure somewhere on some other planet there's some other way of doing this that has nothing to do with gap junctions and ion channels. But here, from the time of bacterial biofilms, evolution basically noticed that electrical dynamics are a great way to do computation. It's not an accident that brains use it, that our computer technology uses it; evolution figured it out a long time ago. Electricity is just a phenomenal way of integrating information across distance, and of having feedback loops that give you historicity and memory, all that kind of stuff.


Oshan Jarow (01:16:05):


Okay. So to think through this more-than-human space of systems, I want to bring in that paper you recently co-authored integrating Buddhism, biology, and AI, because I think it does that: it provides a way to think about how we might continue that evolutionary story of advancing intelligence across a variety of systems at multiple scales. Here's a little abridged snippet from the abstract: 'We show that Buddhist concepts offer a unique perspective and facilitate a consilience of biology, cognitive science, and computer science towards understanding intelligence in truly diverse embodiments. We propose that the scope of our potential relationship with, and so our moral duty towards, any being can be considered in light of care: a robust, practical, and dynamic linchpin that formalizes the concepts of goal-directedness, stress, and the scaling of intelligence.' So there's this relationship that you elaborate between goal-directed activity, intelligence, stress, and care. We've covered intelligence and goal-directedness; I want to quickly touch on care and stress, and we should probably start with stress, because there's a really nice way to think about it. So, in this context, what is stress?


Michael Levin (01:17:08):


Yeah, so this goes back to the idea from earlier in the discussion of stress as part of the cognitive glue that binds pieces together. You can start with any homeostatic agent. You can start with the idea that stress, for example cellular stress, is a reflection of the current error: it's the delta between where you want to be and where you are now. So it's that measurement error. But stress has two interesting components beyond that. One is that stress scales. If you tell me what it is that you're stressed about, I have a pretty good idea of your intellectual sophistication.


Michael Levin (01:17:53):


Like we were saying before: if you tell me that you're stressed about the local sugar level, well, you might be an E. coli. But if you tell me you're stressed about the fate of humanity given the rising earth temperature or whatever, I'm going to say you're probably at least a human. And if you tell me you're only able to be stressed about the welfare of the three people directly in your household, that's one kind of human. And if you tell me you can viscerally be stressed about all of the living beings in the universe, then you're some sort of Buddha-like creature that may or may not exist yet.


Michael Levin (01:18:33):


And if you're somewhere in between, you might say: look, I'm stressed about everybody, but when I hear about something terrible happening to a hundred people, and then the same thing happening to a hundred thousand people, I'm not a thousand times more stressed. I can't be; I don't have a linear response there; I've sort of flattened out. Then I'd say you're probably a current, modern human. So the question is: what is it that you're stressed about, and what are the size and scope of the things you're stressed about? That leads to one of the ways to think about it. One place where that paper started was in thinking about diminished capacity; we all know what diminished capacity is, right?
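
A minimal numerical sketch of the two ideas here (my illustration, not the paper's formalism): stress as the delta between a setpoint and the current state, and the flattened, sublinear response a modern human has to ever-larger harms.

```python
import math

def stress(setpoint, current):
    """Stress as homeostatic error: the delta between where the
    system wants to be and where it is now."""
    return abs(setpoint - current)

print(stress(setpoint=7.0, current=4.5))   # -> 2.5

def felt_concern(people_affected):
    """Illustrative saturating response: hearing that 1000x more
    people were harmed does not make us 1000x more stressed."""
    return math.log10(1 + people_affected)

print(felt_concern(100))        # ~2.0
print(felt_concern(100_000))    # ~5.0, nowhere near 1000x larger
```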


Michael Levin (01:19:18):


You go to court, somebody did something, and then somebody argues: well, this person does not have the capacity of a standard human to care about things; therefore they're not really responsible. That's diminished capacity. Well, there should also, of course, be examples of enhanced capacity, because current modern humans are arguably in no way the top of this food chain. So what does a greater being look like? It's important because things are changing: people are going to have implants of various kinds in their brains, they'll be connected to various devices on the internet, they'll have had organs swapped out and senses swapped out and all this stuff. At some point, we need to have a kind of generic scale of the cognitive capacities of all these different kinds of creatures, and of the capacity to care.


Michael Levin (01:20:12):


What is it that you care about? How big is your horizon of concern? Somebody actually asked me this in another interview. They said: look, if you're willing to swap out body parts, genetics, body materials, what is a human? What do you think a human really is? Because I don't think it's the genome, and I don't think it's your anatomy. What is it, really? And it was a great question. As best as I can tell, what 'human' really refers to is a level of moral capacity. I don't care what you're made of. I don't care how you got here. I don't care if you were made in a factory, or if you've been shipped to some other planet where you can swim in their methane oceans.


Michael Levin (01:20:53):


None of that matters. What really matters to being a human is a certain bandwidth, a certain width of capacity to care about others, about other types of beings and what happens to them. And I'm sure it's possible to create creatures that have way bigger capacity than humans. Maybe it's possible for somebody to actually care, in the linear range, about every being on the planet, for example; and maybe it's possible for today's humans to become that. And I should say I'm no Buddhist scholar; all the specifics were written by my coauthors as far as the Buddhism is concerned. But my understanding is that it's focused on exactly this: it says that the way to progress, to increase your own moral capacity, is to turn that outwards and start actively caring about the experience of other beings. It's the concern with everybody, with the suffering and the status of other sentient beings.


Oshan Jarow (01:21:59):


Wow. Yeah. And it's so interesting; it was such a clever setup. The way that you define intelligent systems is by that common ingredient of goals. The very existence of goals implies that there's a gap between the system's present and desired states. That gap is the stress. And so intelligence is almost in necessary coexistence with stress: you can't have one without the other; no stress, no gap, and so on. And with an eye towards Buddhism, we can say that to exist as an intelligent system is to experience stress; all intelligence is stressed. And of course, this is why the paper goes on to say that, in these terms, stress is actually a good translation of dukkha, of what's otherwise begrudgingly translated as suffering. You even wrote in the paper that in this world of stress, existence equals dissatisfaction.


Oshan Jarow (01:22:46):


And so dukkha is a continuous state that compels beings to act. I thought that was such a wonderful framing. And this idea, too, of the relationship between the size of the goals that are represented and the complexity of the stress those larger goals are dealing with. There's a lot there, but in the interest of getting to the questions I really want to ask you, I'm going to zoom by them. So we've defined stress; and care would be, literally, care for that gap, right? Stress is the gap; care is to actually feel a moral duty towards it. Right?


Michael Levin (01:23:17):


Yeah. And care does a lot of work here. For example, and I don't know about you, let's try this on and see what you think; maybe other people will disagree. Let's say that your spouse goes off and has a bunch of stuff replaced and comes back as a different kind of being. What's really the question you ask to figure out if you can remain married? What do I really want to know? Do I really want to know what kind of liver you have? No, I really don't care. What do I really need to know? What I really need to know is what you still care about.


Michael Levin (01:23:56):


Because if your spouse goes off to some procedure and comes back and says, 'My IQ has just been raised to such a level that all of your worldly concerns are of absolutely no relevance to me; I'm now only thinking about the shape of the universe at the parsec scale,' I don't think we can be married anymore; that's not going to work. And conversely, if you meet somebody whose level of care is such that they're pretty much only interested in what they're going to eat for breakfast, and it doesn't really go further than that, that also isn't going to work. Care is pretty much, at least as far as I can feel out, the acid test you're going to use for whether we're still compatible in that particular relationship: what is it that you care about? Everything else is kind of up for grabs.


Oshan Jarow (01:24:46):


Yeah, that makes sense. So we have these three ideas: stress as a gap between present and desired states; care as concern for that gap; and intelligence as the capacity, the competencies, to close that gap. Now, one of the questions for me is: what can this framework do? If we accept these definitions, what now? And the paper gets into this really fascinating work of creating mathematical representations of cognitive horizons and relating them to the Bodhisattva vow. But there was one bit I wanted to focus on in particular, and it was this quick little paragraph about mind control. In line with the Buddhist idea of no-self, that there's no singular, enduring control agent, it gives a nice little proof as to why control over one's mind in the short term, as any meditator will quickly learn, is impossible.


Oshan Jarow (01:25:28):


Your thoughts arise from the deep; I can't control them. But the paper suggests that mind control in the long term actually becomes increasingly feasible, and is actually bound up with our ethical duties: it's almost a moral imperative to design environments, uphold conditions, and undertake strategies that can change the statistical distribution of the kinds of thoughts we may have in the future. And I would expand this to the kinds of minds that tend to emerge. The passage I got this from says that mind control is logically impossible on the short-term time scale, but may be coherent on a very long time scale, as in: I've undertaken practices to eventually change the statistical distribution of the kinds of thoughts I will have in the future. This in turn underscores the importance of long-term strategies, such as a vow to expand cognition.


Oshan Jarow (01:26:13):


So this strikes me as a strategy that applies to the economy just as well as to anything else: we can undertake strategies that eventually change the statistical distribution of the kinds of thoughts, the patterns of phenomenology, the kinds of consciousness that arise as a product of developing within a given socioeconomic environment, that environment being one arm among many. To make it more concrete: I see the generalization of precarity over the past 50 years in the US, to take an example, as something that has altered the statistical distribution of the kinds of phenomenology people develop: towards anxiety, towards insecurity, towards a restricted self-interest, and ultimately towards cognitive rigidity. There's a lot of really interesting research on how experiencing economic scarcity actually makes minds less flexible. And I think a commitment to expanding cognition, which, as you've shown, is achieved by making it more flexible, exploratory, and adaptive, has this economic dimension, whether we're talking about unconditional provision of resources or allowing people to have less of their lives dominated by the imperative to earn a paycheck.


Oshan Jarow (01:27:14):


And this might be guaranteed income, universal healthcare, a citizen's dividend; we might make land, broadband, whatever, into public resources. But to bring all this to a question: in the paper, you explicitly state that much work remains to identify policies for informational coupling of subunits that optimize the potentiation of collective intelligence and care, and that these policies will be as relevant to establishing thriving social structures as to the design of novel general intelligences. So, given the caveat that you're not an economist and these are speculations, I wonder if you have any speculative thoughts on how we might adapt the conditions you've pointed out, and that we've spoken about for the past hour and a half in biological systems, to thinking about social structures.


Michael Levin (01:27:59):


Yeah. I'm not going to name a specific social structure that I prefer, that I think we ought to be going towards, although I can name some that I definitely don't think we ought to be fooling around with anymore. But to me, this is very much an open research problem that we ought to be investigating. It goes right back to the thing you said at the beginning of the talk: the biggest, most interesting thing in all of this is the scaling problem. It's the question of how you predict what a collective is going to want to do once a collective intelligence has formed. It's very difficult.


Michael Levin (01:28:39):


We don't have a good science of it yet. When we get cells with a particular genome, we cannot yet predict, from that genome, what kind of body they're going to make. We have no idea. In fact, we learned recently that cells with a perfectly good frog genome can make Xenobots instead of frogs and tadpoles, and people have known this in other contexts for a long time. We do not have a science of predicting or managing the goals of collective intelligences. We do not know where these goals come from. Selection is only part of the answer; it's not really even the biggest part, I don't think. And as for what you just laid out: long before we can specify social policies based on this stuff, I think we have to understand emergent collective intelligence in model systems, things like robotic swarms, simulated artificial life, Xenobots, all these kinds of things.


Michael Levin (01:29:36):


We have to develop a science of the emergence of collective minds, given that we are all collective minds. That's the research program. And we are nowhere near being able to specify policies. We can't do it for cells; we can't do it for robotic swarms; we can't do it for anything reliably, really. We barely understand what ants are doing and how they scale up. This is a task for the future of humanity. It is just beginning, and it's an existentially important task, because I think if we don't figure this out, we're going to be in massive trouble for many reasons. And I think that's the question that has to be cracked: how do you predict and control the emergent mind that is going to arise in various ways?


Michael Levin (01:30:28):


And hopefully what my lab and other people have done is to contribute to the first part of that, which is learning to even recognize it in unconventional guises. You're never going to get to the point of being able to predict and control it if you don't even believe it might be there in the first place, if you don't know how to recognize these things. So what I'm trying to do is dissolve these false binary categories and twist all the knobs on all these concepts to show the deep symmetries, such that we can start to understand that what we're dealing with is a vast collection of interpenetrating minds of various degrees of competency. And then we can get to the task of being able to predict and control towards some kind of life-positive outcome.


Oshan Jarow (01:31:14):


One point I think your work makes in the realm of biology that has a lot to teach us in other realms concerns the relationship between top-down and bottom-up strategies. For example, in regenerative medicine: let's say a human lost an arm; it was cut off in a horrible slamming-of-a-door accident. You've said that in the next few decades, even if we get really good at gene editing and so on, we're not going to decode the step-by-step process of how to rebuild a human arm; that's not impossibly complex, but it's very complex. So what we might be able to do, as you've done with smaller organisms, is learn how to trigger the relevant bioelectric pattern, which communicates and instructs the goal: grow an arm. And once that goal's been planted into the armless person's bioelectric network, the body will then get to work doing it, because bodies know how to do this even if we don't.


Oshan Jarow (01:32:01):


And the point is that if we think about bioelectric triggering as a kind of top-down strategy, programming a process by triggering a goal rather than programming a goal by triggering a process, it affords us a way of generating particular, intentional changes in a system that bypasses a lot of the complexity that makes it otherwise insurmountable from a bottom-up approach. And I think this logic is really interesting as a way to think about, for example, economic policy: policy being a sort of bioelectric programming of the software of society, which communicates and instructs particular patterns and outcomes. To look at an example, I always gravitate towards poverty: the question of how to reduce and eventually eliminate poverty, both domestically and abroad, being this centuries-old chestnut in economics. And it looks to me like we've largely taken a bottom-up approach so far, and it hasn't worked, right?


Oshan Jarow (01:32:54):


The caveat being that global poverty is certainly on the decline. And yet there persists this relationship between progress and poverty; even as far back as the 1800s, Henry George pointed this out, and domestically today we're certainly still seeing it. But we've approached trying to end poverty in the same way we might approach trying to instruct a body to regrow an arm or an eye from the bottom up: by giving it instructions for every step of the process. And we don't know every step of the process of how to regrow a limb, and evidently we don't know every step of the process of how to eliminate poverty. We've focused on strategies like education, subsidizing wages, innovation, skills training: all things that we then hope will spiral up and lead to these long-run outcomes of reducing poverty.


Oshan Jarow (01:33:44):


Now, there's another approach, which I would call top-down in the same sense: we can't bottom-up instruct a tadpole on every step of regrowing a limb, but we can go in and trigger the relevant bioelectric pattern, which instructs the goal 'grow a limb here,' after which the system gets to work achieving that goal in ways that escape us. For poverty, we could program the end of poverty as a goal into the system with something like a guaranteed income pegged to the poverty line. Certainly there are other ways that goal could be encoded, but the point is that, similar to the bioelectric approach with limbs, we don't know the precise formula that will trigger a step-by-step achievement of the end of poverty; we've shown that we don't know how to do that. So instead we could top-down program that goal into the system, and it will take up that directive, and the rest of the system will adapt. So I wonder how you think about that extrapolation. More specifically: are there any guideposts from your work in biology, or wherever, that can help us think about which kinds of problems are best approached top-down, bioelectrically or otherwise, and which are best approached bottom-up?
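
To make the 'program the goal, not the process' framing concrete, here is a deliberately simple sketch (my illustration; the transfer rule and numbers are assumptions, not a worked policy): the goal 'no income below the line' is encoded as a setpoint, and a feedback rule closes the gap without specifying how anyone's underlying income is produced.

```python
# Illustrative sketch of encoding a goal as a setpoint: the rule only
# states the target (no income below the line); it says nothing about
# the step-by-step process by which incomes arise.

POVERTY_LINE = 15_000  # hypothetical setpoint, in dollars per year

def transfer(income):
    """Feedback rule: pay out exactly the gap below the setpoint."""
    return max(0, POVERTY_LINE - income)

incomes = [9_000, 14_500, 22_000, 50_000]
adjusted = [i + transfer(i) for i in incomes]
print(adjusted)   # -> [15000, 15000, 22000, 50000]
# Every unit now satisfies the encoded goal; how the rest of the
# system adapts around the setpoint is left to its own dynamics.
```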


Michael Levin (01:34:58):


Yeah, very deep questions. I don't believe that anything we've done, or anything anybody else has done in biology, is directly translatable into some kind of correct social strategy. I mean, people have certainly tried; you've seen what people have done with the idea of evolution in social policy.


Oshan Jarow (01:35:17):


That hasn't always gone great, right?


Michael Levin (01:35:19):


That goes very poorly. And by the way, I was born in a country where they tried the other option, which is basically, at least in theory, to tie everybody together and say everybody's going to be forcibly nailed down to the same kind of level. So: we're going to gap-junction everybody to the same level. That goes even worse than the other version. So I don't like that at all.


Oshan Jarow (01:35:43):


You grew up in the USSR, is that correct?


Michael Levin (01:35:44):

Correct, yeah. So that goes even worse. But I'll tell you this: I think there's a reason that this top-down control works, and let me give a quick example of what I mean by top-down control that will make sense to people. Imagine you have a rat and you want the rat to do a circus trick; let's say it's going to put a little ball in a little hoop or something like that. You've got two basic options. You can do the bottom-up approach, which means you're going to identify all the neurons that run its muscles, and you're going to play it like a piano, literally like a puppet.


Michael Levin (01:36:20):


You're going to fire off all the right neurons to make the thing walk and put the ball in the hoop. I'm not saying that's impossible; at least in theory, it should be possible. I don't know how long it would take us to do that, probably a really long time. Or you can do what people have been doing for thousands of years, before they knew what brains were or what was in our heads: you train the rat. It's very easy, right? And the reason that works is that it's much less effort, it's much simpler. You don't have to know much about your system other than that it's a goal-directed agent; you have to know a little bit about what it likes and doesn't like, so, rewards and punishments. So what's freeing and important about this...


Oshan Jarow (01:36:56):


By training, do you mean just giving rewards for the behaviors you like?


Michael Levin (01:36:58):


Yeah, behavior shaping; the way you get circus rats to do fun things. So, what's freeing and useful about this concept, which I call the spectrum of persuadability, is this. You have very simple machines on one end, clockwork, and the only way to control those is to rewire the physical hardware. Then you have some slightly more complex machines, things like thermostats, which actually have goals, and you can change the goal by changing the set point. You don't need to rewire it; you don't need to really know how the rest of the system works; you just have to be able to read and write the goals. Then you have more complex systems that respond to rewards and punishments.


Michael Levin (01:37:37):


You don't even need to know how they store their goals; you just need to provide feedback, right? And then you have even more complex systems like humans, where you don't even need to provide rewards and punishments. You might be able to give them rational reasons for doing things and then walk away, having spent very little energy, and they'll go off and do massive things, right? So as you move rightward along this continuum, what's cool is that if you correctly identify what kind of system you're dealing with, you can optimize how much energy you put in for the output that you get. If you make a massive mistake, then you end up arguing with clocks, or treating humans like simple machines. And that, of course, is problematic.
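[Editor's note: as one way to picture the spectrum Michael describes, here is a minimal Python sketch encoding the levels and the cheapest intervention at each one. The numeric ranks, field names, and wording are our own illustrative paraphrase, not a formalism from Levin's work.]

```python
# A minimal sketch of the "spectrum of persuadability" as a lookup table.
# The levels and interventions paraphrase the conversation; the numeric
# scale and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PersuadabilityLevel:
    rank: int                # position on the spectrum (leftmost = 0)
    example: str             # the kind of system named in the conversation
    intervention: str        # the cheapest control strategy that works
    knowledge_needed: str    # what you must understand about the system

SPECTRUM = [
    PersuadabilityLevel(0, "clockwork mechanism",
                        "rewire the physical hardware",
                        "full mechanistic wiring diagram"),
    PersuadabilityLevel(1, "thermostat (has a set point)",
                        "read and rewrite the goal / set point",
                        "where the goal is stored, nothing else"),
    PersuadabilityLevel(2, "trainable animal (e.g., a rat)",
                        "rewards and punishments (behavior shaping)",
                        "what it likes and dislikes"),
    PersuadabilityLevel(3, "reasoning agent (e.g., a human)",
                        "rational argument, then walk away",
                        "what counts as a persuasive reason"),
]

def cheapest_intervention(rank: int) -> str:
    """Matching the intervention to the system's actual level minimizes
    the controller's energy cost; a mismatch means arguing with clocks
    or micromanaging humans."""
    return SPECTRUM[rank].intervention
```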


Michael Levin (01:38:17):


The idea is to find the right level for whatever system you're dealing with. But the reason any of that works is that if you've identified the right level, you have basically characterized what that system is already capable of. In other words, when we tell cells in the gut of a tadpole to make an eye and they do it, and we do it just by misexpressing a particular ion channel, the reason that works is because we've already identified that the system already knows how to make an eye; there's a particular subroutine in there. So you have to understand the causal structure of your system. And I have absolutely no idea, and I don't know if anybody knows, what the causal structure of an economic system <laugh>


Michael Levin (01:38:58):


really is. What does it already know how to do? I have no clue. And I've said this before: one of the interesting things about taking my perspective seriously is that instead of philosophy, where people can have feelings about what should and shouldn't be, you do actual experiments. So for example, if I said "gene regulatory network" to somebody, they'd say, oh, that's a dumb mechanism, right? That's a simple machine; all these genes do is turn each other on and off. Well, guess what: it turns out it can do five or six different kinds of learning. You wouldn't know that if you just made an assumption because you kind of feel like that's what chemical networks should be. You have to actually do the experiment.


Michael Levin (01:39:36):


And I've wanted to do this for a long time: could we take models of things like economic systems, of weather patterns, of energy flow through a city, all these kinds of weird, complex systems, and try training them the way that we did with gene regulatory networks, and ask what they're actually capable of? Is there habituation? Sensitization? Associative learning? Instrumental conditioning? Maybe symbolic representation, maybe counterfactuals; who knows what they're capable of? The point is, if you assume they're not capable of any of it, you are never going to find out. This is not philosophy; this is an experimental program, and somebody needs to do the work.
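[Editor's note: a toy sketch of the kind of probe Michael describes, testing a dynamical-systems model for habituation, a declining response to a repeated stimulus. The three-gene network, its parameters, and the pulse protocol are invented for illustration; they are not from any published model.]

```python
# Treat a small dynamical-systems model as a black box and probe it
# for habituation. Gene 0 is stimulated directly, gene 2 is the
# "response" we read out, and gene 1 slowly accumulates and represses
# the response pathway, so identical pulses yield shrinking peaks.

import numpy as np

def step(state, stimulus, dt=0.01):
    """One Euler step of an arbitrary three-gene regulatory network."""
    g0, g1, g2 = state
    dg0 = stimulus - 1.0 * g0                  # input node, fast decay
    dg1 = 0.2 * g0 - 0.05 * g1                 # slow integrator of input history
    dg2 = g0 / (1.0 + 5.0 * g1) - 1.0 * g2     # response, repressed by g1
    return state + dt * np.array([dg0, dg1, dg2])

def habituation_probe(n_pulses=5, pulse_len=100, gap=400):
    """Deliver identical stimulus pulses and record the peak response to
    each. Monotonically shrinking peaks count as habituation-like
    behavior in this toy model."""
    state = np.zeros(3)
    peaks = []
    for _ in range(n_pulses):
        peak = 0.0
        for t in range(pulse_len + gap):
            stim = 1.0 if t < pulse_len else 0.0
            state = step(state, stim)
            peak = max(peak, state[2])
        peaks.append(peak)
    return peaks

print(habituation_probe())  # peaks decline as g1 accumulates
```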


Michael Levin (01:40:24):


Now, maybe someday we will get to the point where somebody can look at a system and go, oh, I know what this is; this is roughly between a two and a three on the spectrum of persuadability, and they'll be able to guess. Right now we're nowhere near being able to guess that stuff. We have all kinds of ridiculous preconceptions about these things that are not based on anything. So when we get better at this, it may be possible to say, here's what economies know how to do: they know how to do this and this. Just like in bioelectric networks, we found out that if you have an oncogene in one part of your body, it may or may not make a tumor (this was in tadpoles, not humans, of course), and whether or not it makes a tumor is in large part a function of voltage patterns


Michael Levin (01:41:10):


on the other side of the body, far away. You would never think that if you hadn't done the work; you'd have no idea that impacting the voltage on one side of the body will impact whether the tumor comes out on the other side. And that kind of non-local intervention may also be true in all kinds of systems, where you find out that if we just made a rule over here about this, it actually ends up percolating into this other domain. And of course, everybody knows the other version of this, which is unintended consequences: you do something and say, oh my God, I never knew that it was gonna lead to this. That's everywhere. And the only way to deal with that is to really understand what the causal structure of these systems is.


Michael Levin (01:41:51):


And in general, I am very leery of making any changes in these systems, of doing more regulation in these systems, before we really understand how this stuff works, because I fundamentally think that some of the worst behavior patterns and instincts in humans come out when they're given power over other humans, right? That's just a recipe for real problems. And so for all the kinds of economic things that you talked about as an outcome, it all sounds great. In terms of universal income and all that stuff, the outcome sounds wonderful. But when you actually think about what the implementation steps are, most of these things boil down to: well, we're gonna take this thing that people are doing, and we're gonna force them to do something else.


Michael Levin (01:42:54):


Whether that be, you know, we want you to farm more because everybody needs more food and food prices are too high; somebody at some point is gonna be telling you to get up early and farm more. That's what almost all of these things tend to boil down to, something like that. Now, in the end, I hope this is all gonna get resolved with robotics and free energy and things like that; all of this will go away. Whether we will survive to that point, I don't know. But I'm just very leery of applying any of these things as economic or social policies before we know what we're doing, because it's much easier to make mistakes, and we're very prone to doing it wrong.


Oshan Jarow (01:43:32):


Yeah. And I think your work gives us a basis to start understanding these systems better. Final question, and then I'll let you go. We've been talking about how individual parts get united into higher-level systems. And while there might not be literal gap junctions that meld individuals together, or maybe there are, there are a number of arenas in which it looks like we're being swept up as parts of higher-level systems. We can talk about social media, or a city; I recently moved to New York City, and the prospect that it has its own consciousness feels more correct every day. Your work leaves me feeling relatively optimistic about that, since scaling up systems is how we increase collective intelligence and agency. But I think there's also a kind of interesting pessimistic take on this, right?


Oshan Jarow (01:44:15):


I was just reading Erik Hoel's newsletter (a colleague of yours at Tufts, and a past guest on the show), and I thought he put the perspective really well. He was writing in the context of Teilhard de Chardin's idea of the noosphere and the omega point, how small consciousnesses scale up all the way to a singularity, but I think it applies more generally to any part that gets integrated into a higher-level system. This is the way he put it. He wrote: "We should take away that if you're a subset of a larger system that is conscious, you might end up enslaved to it in an unconscious manner. There's even some evidence that the evolution of multicellularity led to a complexity drain on individual cells, where they outsourced much of their internal biochemical intelligence to the whole." And this idea of complexity drain I found really interesting, and also kind of discomforting, right?


Oshan Jarow (01:45:02):


The rationale from the research he cited was that higher-level systems render parts and capabilities of the lower levels redundant, and so those lower-level parts get rid of those capacities in favor of efficiency for the system as a whole. But if we think of those low-level parts as us, as individuals, as you and me, then what we're talking about are individuals enmeshed in planetary systems, with those systems rendering our capabilities redundant and incentivizing us to strip them away. And I don't know if the trade-off of what we gain by streamlining a collective consciousness is worth what we lose in the complexity drain. So I wonder how this strikes you, if you see any concern in this idea of complexity drain.


Michael Levin (01:45:40):


There definitely is a dark side to any kind of collectivism. I mean, it's not entirely about the drain, because the other way it might go is that oftentimes these complex systems exploit, as I said before, the competency of the individual subparts, and actually learn to rely on them. So evolution doesn't try to get the eye to be less competent at what it's doing; it's actually cranking up the capacity of the parts to do more. In my view, the ratchet actually goes the other direction. But either way, that doesn't matter. I think he's completely right, for the following reason.


Michael Levin (01:46:24):


We do not have any clue what the goals of a novel collective intelligence are going to be. We don't know where goals come from or how they're set in systems that don't have an evolutionary path, where you can sort of cheat and just say, well, of course it's this way; it's because everybody else died off. We don't know where these goals come from. So that means two things. One: if you are part of a larger system, the goal of that larger system may operate in a space in which your goals are not only too small to matter, they don't even make any sense. For example, you as a human think nothing of going rock climbing and rubbing a whole bunch of cells off of your palms, because you have goals: you're going to be fit, you're going to impress somebody, you're going to have this hobby.


Michael Levin (01:47:14):


The cells in your skin are just going to get rubbed off and die, and you have zero concern over it. You didn't spend any time worrying about it; you just don't care. You're operating in a completely different space. And so I think it is just as likely that we could be part of a large system that doesn't care at all about the kinds of things we think are important. Now, whether that's worth it is, again, an interesting question: from what observer's perspective? From the perspective of a good amoeba or a Lacrymaria or something, your body cells are really kind of dumbed down; they're like white lab rats who are just pale versions of wild rats, right?


Michael Levin (01:48:00):


They're just not really competent at all; they wouldn't survive a minute out there on their own. But do you spend any time losing sleep over this? No. From your perspective it's: forget what the cells can do, look what I can do; I'm this much bigger thing. So yeah, it's real easy to become part of a bigger system that has none of your interests really at heart. That, I think, is a major risk. And the other thing is, and I've spent a lot of time thinking about this and I don't know what the answer is: is it possible to know whether or not you are, in fact, part of a system with bigger goals?


Michael Levin (01:48:36):


Like, how do you detect that? Realizing that you are, in fact, just a skin cell in this much larger structure is a kind of horror that I don't know if anybody has really written about: the moment when you realize not just that there are gigantic systems that dwarf you, but that you're actually part of one, right? That's a particularly acute kind of existential dread. And is it possible to recognize that, and is it possible to know what they want?


Oshan Jarow (01:49:13):


Hello, this is Oshan from the future of this conversation. I'm chiming in because I can't resist adding something I wish I had said on the spot, but didn't, and this is my podcast, so I have the power to add it in now. Michael's talking about the dread and horror of discovering that we are not the end of the line of systems with goals, that we are embedded within larger systems, in which case we're just being trained by them to be good parts, laboring towards the goals of those larger systems, which are invisible to us. And I think that's partially right; there is a dread and horror in that. But conversely, there can also be a sense of salvation in it. I actually don't think it's too far of a stretch to say that religion is reaching for the teleology of the systems that we are embedded in, systems that extend beyond both our comprehension and our perception. And for a lot of folks, I think there is a deep existential relief in believing, in having faith, that we aren't the end of the line of agents, that we're literally part of something larger than us, and that that something has and exerts a form of intelligence that gives meaning and purpose to all that we're doing. Now, there are caveats on both of these perspectives, of course, but I just wanted to add it in. Okay, back to Michael.


Michael Levin (01:50:45):


I've actually spent a bunch of time the last few weeks thinking about what it's like to be a node, or a collection of nodes, inside of a neural network. What do you see if you're a node inside an artificial neural network that's being trained, right? When you look around, assuming you yourself were complex enough to have this thought, you don't see a dumb, uncaring universe in which you can learn and do whatever you want. You see a place with mind everywhere. You are being trained; you're not learning, you're being trained. The universe rewards you and punishes you for various things. As a node inside that network, you are the target of this giant mind.


Michael Levin (01:51:28):


You're absolutely part of it. You would be wrong if you took this kind of minimalist approach and said, ah, it's a cold, unfeeling universe, nobody cares what I do. No, you'd be factually incorrect. Now, from that perspective, you wouldn't be able to fathom what the goal actually was. Let's say it's a neural network that's trained to recognize cat faces or something. All you would know is that for some reason the universe loves it when you pick out eyes; that's what it wants you to do. Now, what that's for, what a cat is? You have no clue. This is what people mean when they say the universe moves in mysterious ways.
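[Editor's note: a minimal sketch of this node's-eye view: a tiny network trained by backpropagation on an arbitrary task, where any single hidden unit only ever "experiences" the local nudges on its own weights, never the global goal. The task, architecture, and numbers are illustrative assumptions, not anything from Levin's work.]

```python
# A single hidden unit in a trained network never sees the task
# definition (here, an arbitrary XOR labeling), only the local
# pressure on its own incoming weights.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4)) * 0.5   # input -> hidden
W2 = rng.normal(size=(4, 1)) * 0.5   # hidden -> output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # the "inscrutable" global goal

for epoch in range(2000):
    h = np.tanh(X @ W1)                   # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2)))     # sigmoid output
    # Backprop: from any one unit's perspective this is just "the
    # universe" rewarding some of its responses and punishing others.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h
    if epoch % 500 == 0:
        # All hidden unit 0 ever "knows": the nudge applied to its weights.
        print(f"epoch {epoch}: unit-0 weight nudge = {(X.T @ d_h)[:, 0]}")
```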


Michael Levin (01:52:07):


It sure does. It's inscrutable. And I bet, I mean, I can't prove this, but I bet there's some sort of Gödel theorem about not being able, as the subsystem, to understand what the actual goals of the system in which you are embedded are. But maybe there are some mathematical tools to at least tell you that you are part of such a thing. I don't know. And yes, I think simply scaling up is definitely not the answer. Gap-junctioning ourselves all together is a fantasy, not a road to optimal social structures. We need a lot more work to understand what that might be, if there even is such a policy.


Oshan Jarow (01:52:48):


Yeah. <laugh> I've been thinking a lot about this; I was reading about the idea that valence is the intrinsic utility function of the universe. And in terms of what you were just talking about, I'm instead thinking valence is the pellets that some larger system feeds us in order to train us in a particular direction. You can never know.


Michael Levin (01:53:03):


It may well be, right. And we all know people who will tell you: I don't know why or how, but in my life I've noticed these large patterns; these things happen, and I believe stuff happens for a reason, there's a cycle, and all this kind of stuff. Who knows? It's very hard to know, but I think it would be foolish to rule that out. We do not know.


Oshan Jarow (01:53:25):


Yeah, I'll leave it there. Hopefully that'll motivate listeners to get into these questions, 'cause they're so rich and important. But Michael, thank you so much for coming on the podcast.


Michael Levin (01:53:35):


It was a blast. Thank you so much, and thank you for the conversation.


Oshan Jarow (01:53:52):

All right. If you made it this far, thanks for listening. If you wanna learn more about Michael's work, you can head to the show notes page at musingmind.org/podcast, click on Michael's episode, and you'll find links to his writing, other interviews he's done, his website, and so on. If you want to hear more from Michael, as I mentioned, there are a few extra audio snippets available for Patreon supporters. I'll also be releasing an episode reflection in a week or two. So if you'd like access to any of those while also supporting the show, you can go to patreon.com/oshanjarow. And in parting, I wanted to touch on something that Michael said: we may well all be nothing but the equivalent of skin cells to some larger collective intelligence that is as massive compared to us as we are to our own skin cells.


Oshan Jarow (01:54:49):


And it may be dragging us along in pursuit of its own goals. But even if this life we experience is the equivalent of being dragged across a jagged face of rocks and left to die while the larger system enjoys its rock climbing, we nevertheless have the capacity to make this experience worthwhile. A good Buddhist would say that it already is, and we just can't see it. I'm sympathetic to that, at least in part. But in the spirit of emancipatory social science, the economy provides us a way of altering the environments that produce us, and altering the statistical distribution of the likely kinds of minds that emerge from our society, society itself being understood as an evolutionary environment that produces minds. And if we take that to heart over the next few decades, then I can't help but feel excited to be alive and participating in that process. And I think that'll do it for today. I hope everyone is well, and I'll talk to you next time.