How algorithms undermine consciousness

Eran Fisher
Guest Introduction.

As algorithms rise to play larger roles in how we interact with the world, how are they recursively acting upon us to play larger roles in how we experience ourselves? What, in short, does an algorithmic society do to consciousness?

Eran Fisher is a professor of sociology at the Open University of Israel, and has a recent book out titled Algorithms and Subjectivity: The Subversion of Critical Knowledge. In it, he digs beneath the more obvious conversation around how algorithms are changing our worlds, to ask how they're changing our-selves.

In the conversation, we discuss:

  • How do algorithms change the promise of freedom society offers?
  • What does it mean for algorithms to "undermine" subjectivity?
  • How do algorithms pose different threats to freedom than mass media of the 20th century?
  • How much of the threat of algorithms derives from their for-profit deployment in a world with insufficient mechanisms for democratic data governance?

Plus tangents into psychedelics, the politics of subjectivity, and all that sort of good stuff.

Enjoy!

Time map.
  • 12:45 - How do algorithms promise a different kind of freedom from that traditionally sought by Enlightenment values?
  • 21:20 - What does it mean to think of “subjectivity” as a political project, or a dynamic horizon?
  • 30:00 - What is “critical knowledge,” and how do algorithms undermine it?
  • 50:00 - Already in the 20th century, critics of mass media were concerned over how it was producing ‘one-dimensional’ humans and mass-producing the mind. How do algorithms present a different kind of threat?
  • 55:50 - Drawing a connection between algorithms, predictive processing, and psychedelics; what is the role of surprise/free energy/uncertainty?
  • 1:04:20 - Is the problem with algorithms a problem with capitalism, or is it intrinsic to the technology itself? What is the role of more democratic regimes of data governance, publicly owned AI services, etc.?
  • 1:11:10 - What difference could giving users more transparent control over the design of their algorithms make? Can we design our own algorithms that help make us more of who we want to become?
  • 1:16:40 - On the relationship between economic environments and human development.
Links from the conversation.

Eran's academic site, and book: Algorithms and Subjectivity: The Subversion of Critical Knowledge.

Salomé Viljoen's excellent paper, "A Relational Theory of Data Governance", plus a non-academic write-up.

Andy Clark's paper on predictive processing, cultural evolution, and the adaptive value of surprise.

Hartmut Rosa's book, Resonance: A Sociology of our Relationship to the World.

We didn't discuss this, but an absolutely spot-on paper about predictive algorithms as "extraction of discretion."

My conversation with Christian Arnsperger on economics & emancipation

And my conversation with Emma Stamm about digital capitalism and acid communism.

Support the podcast!
You can support the podcast by sharing on social media, with a friend, or leaving a rating & review on Apple Podcasts!

Receive new episodes & related musings by joining the newsletter community. If you’d like to get in touch with me, you can reach me on Twitter, reach out to join the Discord, or contact me directly through this site.

If you’re really interested in helping the podcast exist, consider becoming a Patreon supporter with a small monthly donation of even $1! Your support means the world, and goes directly towards improving the podcast’s audio quality, equipment, research, and overall experience.

Thank you!
Transcript

Oshan Jarow:

Eran Fisher, welcome to the podcast. Thanks so much for being here. 

Eran Fisher:

Thanks for having me.

Oshan Jarow: 

So if I were to put my Charles Darwin hat on, I could say that if you fundamentally transform an environment, then given the evolutionary entanglement between organisms and their environments, evolutionary logic tells us that all the living systems bound up in that changed ecosystem will also undergo a process of transformation as they continue to live and adapt through the changes to their environment. And you've written a book, well, a couple of books actually, about how the introduction and profusion of algorithms in our environments is leading to transformations in human beings. And in particular, your latest book looks at how living in increasingly algorithmic environments transforms and even undermines subjectivity or consciousness. So for anyone who's listened to the podcast before, you'll recognize these questions as ones that I'm really deeply interested in. But I want to pause for a moment before diving in there just to humanize this conversation and these questions a little bit. I'm curious what it is that brought you to be interested in these kinds of questions and this larger constellation of topics that you focus on.

Eran Fisher:

First of all, thanks for putting me in one row with Darwin. That's a first and probably a last. So you mean, what was my interest specifically now in algorithms, or more generally in the link between humans and technology?

Oshan Jarow 0:07:43

Right. Maybe let's start with the broader view with humans and technology, then we can narrow into algorithms. 

Eran Fisher 0:07:49

So actually, my interest came about by accident, I would say. I was actually interested in globalization. I was starting my Ph.D. at the beginning of the millennium in New York, having moved from Israel. And one of the big topics was globalization. And so I was really interested in kind of political culture. And I started, I actually wrote this very preliminary paper, which I called Technology as Ideology. So thinking about how this technological device doesn't just do something in terms of, I don't know, transferring files, but also allows us to imagine new political constellations. And then one of my mentors told me, you know, someone already wrote this paper. It's Jürgen Habermas. He has this piece called Technology and Science as Ideology. And so that was sort of my entry point into this idea of how technology works as an ideological factor, meaning how technology is not just a thing that does something, but it really partakes in what it's constructing, in a way. So how it allows us, as I said, to imagine new forms of life, how it legitimizes new forms of life, so that, for example, new modes of employment could be legitimized by saying, hey, you know, this is how the Internet works. So that was, I think, my entry point into this subject.

Oshan Jarow 0:09:46

Yeah. And so what was the transition from that kind of broad constellation that brought you to look at algorithms specifically?

Eran Fisher 0:09:54

I started out as a sociologist, looking at, let's say, the social context of media, of digital media. And then I started shifting my gaze a little bit and looking at different facets, more internally. I moved from looking at, let's say, the big questions of technology as ideology to looking into the mechanics, let's say, of media. And this was, I think, also my entry point to algorithms, because I was really curious about this new thing called big data and what it can do in the world. And my initial question was pretty simple. I mean, there was, again, this idea that if you have a lot of data about a phenomenon, then you can know it, you can understand it. Then I thought, well, what is the nature of this knowledge? And especially, I was interested in what it means to know. What does it mean for a platform, for example Netflix, or any digital platform, to know its users? Because this was the big thing, personalization. So, again, there's a very simple answer, right? They just have a lot of data, so they know who you are. But how do you translate data into knowledge? It's really not that straightforward. And so that was, again, my entry point into this. The last few years of my work have been to really try to find out, or to grapple with, this new kind of knowledge.

Oshan Jarow 0:11:53

Yeah, I really like the way that you frame this kind of overall question. The way I see it, for the past few centuries at least, there has been a relatively consistent theme of society aiding individuals, and vice versa, in expanding the scope of our freedoms. And as you argue, a critical ingredient in that has been a self that is involved in the production and the pursuit of the various kinds of reflection and activity and self-knowledge that go into that. And algorithms, and the era of capitalism that they usher in, you suggest, really shake up this whole project, this age-old maxim of know thyself, which was always taken as the North Star towards wisdom and freedom.

But now algorithms seem to promise a different kind of freedom. They even uphold a different justification for capitalism, a different promise of society to individuals, as you've written about. So there's a lot of defining of terms that I want to get into. But before we do that, maybe just at a high level: how do you think about this different kind of freedom that algorithms and this kind of network society promise?

Eran Fisher 0:13:01

I think it's really hard to nail it down, but I think there is something about, let's say, freedom from without, freedom from the outside, rather than freedom from within. So that's a different conception of what it means to be free, which, we know, to be free doesn't mean to be happy or to be satisfied. It's really something different. The type of freedom that we are promised with algorithms and these machines that know us is not the one we sought, let's say, during modernity, during the Enlightenment. I'm not sure that's the same kind of freedom, which is about being able to transform ourselves or to better the human condition, not just in terms of having better material conditions, but to rid ourselves of the things that hinder us, the things that sort of... Yeah, I mean, I have a very, very traditional Kantian, modernist perception of freedom. And I think the freedom we're offered with digital media, with algorithms, with those machines, is actually quite different.

I mean, I think one of the things that they promise us is, let's say, freedom in the sense of ridding ourselves of the toil of choosing, and not just choosing a film, which film to watch, or what kind of, I don't know, what diet to pursue during your daily life, but the burden of having to choose between, let's say, right and wrong. So I would put it this way: eventually, I think the real toil of modernity is that we had no external answers to the question, what should we do to be decent human beings? It's a question that you had to put to yourself every day, all the time, and answer it yourself. And I think that puts modern human beings at a very deep unease. So freedom in that sense is not about leisure, and it's not about being content; it's actually a form of anxiety, really, to make a choice: what should you do? And at the end of the day, there's always a political question behind it. How should we act, not just as human beings, as individuals, but as a community, as a society, as a country, et cetera?

Oshan Jarow 0:16:15

Yeah, I really like that phrase and this kind of shift. Think about the promise of, let's say, 20th century politics. Even on this podcast, I've looked at the US labor movement throughout the 20th century and the transformation of the demands it made. And it's really interesting that you can see during that time, the promise was to rid us of the toil of labor itself: shorter working weeks, more technology that would automate production, until the point that we all live these wonderful lives of leisure. And that vision declined, and that's a fascinating story in and of itself. But I like this framing, that the switch was from ridding ourselves of the toil of labor to ridding ourselves of the toil of choice. That makes a lot of sense to me.

Eran Fisher 0:17:01

Beautiful. 

Oshan Jarow 0:17:02

And of course, it's all bound up in the transformations in our information environments, and you can call it the move from scarcity to abundance or whatever you want. And even thinking back, there's a great talk, I think it's very short, on this idea of the paralysis of choice: we have this notion that the more choices we have, the better, because that means we have more optionality. But it is interesting, because at a certain point, once that crosses a threshold, it does get difficult. I do think it manifests as a sort of labor, even in a very concrete form: I get home from work, I sit on my couch, and I could learn about 8 million different topics. I could watch 8 million different shows. There's so much you can do.

And it is, I think, a form of labor to then have to act in that moment. That's a really interesting switch. I like how you framed that. So OK, with that, I want to move into your specific terms a little bit, because I think they're really helpful for making our way through this. I think we should define three terms to kind of set the table for the conversation, and these are algorithms, critical knowledge, and subjectivity. So let's start with the easier one. Let's start with algorithms. We've touched on this a bit, but this is a broad category. A lot of things fall into it. So when you talk about algorithms, are you focusing on recommendation algorithms in particular, or what do you mean?

Eran Fisher 0:18:22

Yeah, so I mean, I'm using this very broad term, and I could have used different terms, such as big data, or just platforms, digital platforms. But the reason I used algorithms was really to focus on this switch from, let's say, one type of knowledge to another. I mean, we have different names for it. One is called data, which is very, very, let's say, low-intensity information, and it gets translated into knowledge, which is, let's say, high-intensity information, very, very thick. So algorithms are just, let's say, the mathematical tools that are making this switch. So that's why I was really fond of this term. I'm using the term interface algorithms in the book because, yeah, because big data is now producing knowledge, and algorithms are producing knowledge of all sorts, in medicine and the criminal justice system, etc., etc. But what I focused on was what's happening on media platforms, where the user, and I'm doing air quotes here, is the king, right?

Yeah. That's what it's all focused on, giving the user what she wants. So, yeah, that was my focus. My focus was on those machines that not only are making knowledge about us, but in some way reflect this knowledge back to us, so that when you're seeing the Netflix recommendation or the YouTube recommendation, or the kind of posts that you get on your favorite social networking sites, right, Twitter or Facebook, etc., it also reflects to you how this platform sees you. So it's a kind of a mirror, I would say, where you don't know exactly how they got this information, or how they got this idea about you. And I'm using Amazon a lot, not so much to buy books, but mostly to actually discover new books. So I'm using those machines too, right? I have this idea that I can look at what they're suggesting and get a sense of how they think about me.
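As a concrete aside on what these interface algorithms do mechanically, here is a minimal sketch of user-based collaborative filtering in Python. The ratings matrix and every number in it are invented for illustration, and real recommenders are far more elaborate, but the basic shape is the same: the platform's "knowledge" of you is a vector of behavior, and the recommendation is that vector reflected back at you.

```python
import numpy as np

# Invented toy ratings matrix: rows are users, columns are items (films).
# 1 = watched/liked, 0 = no signal.
ratings = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 0, 0, 1, 1],   # user 1
    [0, 1, 1, 0, 0],   # user 2
])

def recommend(user, k=2):
    """Score this user's unseen items by how much similar users liked them."""
    # Cosine similarity between this user and every user.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user])
    sims = ratings @ ratings[user] / np.where(norms == 0, 1, norms)
    sims[user] = 0.0                       # ignore self-similarity
    scores = sims @ ratings                # weighted vote of similar users
    scores[ratings[user] > 0] = -np.inf    # don't re-recommend seen items
    return np.argsort(scores)[::-1][:k]    # indices of the top-k unseen items

# The "mirror" Fisher describes: the system's whole picture of user 0 is one
# row of numbers, and the recommendation reflects that picture back.
print(recommend(user=0))
```

Even in this toy, the user is never told why one item was ranked above another; in a production system with millions of rows and learned embeddings, that "how they think about me" becomes effectively unrecoverable from the outside, which is the opacity the conversation turns to later.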

Oshan Jarow 0:21:24

Yeah. Let's go on to subjectivity next, because I think that the way that you write about it is really interesting and I think very particular as something reaching back to the Enlightenment. So how do you think about subjectivity? 

Eran Fisher 0:21:38

So subjectivity for me is a self that's, let's say, very aware of itself. How would I say that in English? Of its self-ness. Does that make sense? No. Of its ontology; a self that's aware of being part of the world, as an object in the world. In a way, the way to subjectivity goes through being an object, noticing that you, what you feel, are also an object in the world. But let me actually go through a different route to explain that. So I'm thinking about subjectivity more or less as the self, as how we see ourselves. But I mean, you could say that human beings always had a sense of themselves, whereas subjectivity is more of a historical project that we have never reached. So subjectivity too, I would say, is a promise, or a kind of horizon, that was set up during modernity, where, let's say, the way to reach freedom was not through salvation, some transcendental force, but actually through human beings, through thinking. Yeah, you know, sometimes it's just really hard to get into those. It's really like when your daughter asks you, what is love? And you're like, what? I know what it is, but do you really want me to explain that now?

Oshan Jarow 0:23:23

Well, maybe something helpful here, something that I was thinking about in particular, that I found really interesting in how you defined it. You looked at subjectivity not only as a basic descriptive fact about our minds, but, as you mentioned, as a project, as a basis for political agency, as this kind of historically and socially constructed structure of feeling, as Raymond Williams would have it, this kind of site for the dialectic between freedom and domination. And I found this so interesting because it's so different from how I reflexively think about it, but it also fits really nicely. I tend to use the word subjectivity just as a synonym for consciousness, not as a historically specific project that you can trace to the Enlightenment, but just as a descriptive feature of the phenomenology of being a living system.

Subjectivity is what it feels like to be me. It's the texture of my mind in any given moment. But on the other hand, when I was reading your chapter on this, I was thinking of the book by the German sociologist Hartmut Rosa called Resonance. In that book, he talks about seeing subjectivity as a kind of seismograph, or even a thermometer. Each of us, with our own subjectivity, is like one of these thermometers, plugged in not just to the earth, but to the social and political and economic worlds that we inhabit. You can see the qualities of consciousness and subjectivity as almost readings of how all those spheres are coalescing into a relationship to the world. So I'm really curious how you see this relationship between subjectivity as a political, historical, socially constructed project and, as you mentioned, as a kind of biological, just basic fact of being a mind in the world.

Eran Fisher 0:25:11

I think the difference for me between, as you say, an objective thing, the phenomenology of self-knowledge, and the way I use the term subjectivity, is the dynamics of it. Because, in contrast to other objects, when you look at a stone, your gaze doesn't actually change anything. Well, physics would have it otherwise, and that's actually where this way of thinking comes from, I think. But you look at a stone and nothing happens. You look at yourself, and the very fact of looking at yourself changes you. So one thing I would say is subjectivity is a dynamic process. That's why I think about it as a project and a horizon. You never reach it, but it opens up a terrain that you're supposed to walk towards. And actually, we have a history of, if you're thinking about technology, which is what I'm interested in, we have a history of technologies that were supposed to help us.

It was sometimes intentional, sometimes unintentional, but all kinds of tools actually helped us enlarge our subjectivity, so that we become more aware of ourselves and, through that, change ourselves: the diary, letter writing. All these technologies where you were supposed to reveal yourself to yourself. And the idea was not just to discover who you really are, but at that moment also to change who you are. So with subjectivity, what I try to capture is this kind of dynamic of being always on the move. And that's, I think, part of the argument about algorithms: that they actually hinder or undermine our ability to look at ourselves. When you outsource it, when you externalize the gaze to these machines that look at us, you get a little bit more of a still picture, something which is a little bit less dynamic, and more just telling you: hey, this is who you are. You really like to watch cat videos on YouTube. That's who you are. And it could be that I would also tell that to myself: hey, you know, I've been watching too many cats lately.

How about doing something different for myself, something a little more fruitful? So yes, subjectivity for me was kind of a human in the making. Always looking at yourself, not only to discover who you already are, but to try to move towards who you could be, which is basically a more emancipated human being. So I mean, just think about something like a therapy session, or psychotherapy, not just the actual therapy, but as a movement, as a cultural and social movement. The idea is you're not going there just to find out, as some people might mock it, that your parents did this to you. No, you're going there not just to know who you are, but in order to change who you are. So this form of, yes, subjectivity, it's almost like a procedure, or how would I call it, like a workshop for the self.

Oshan Jarow:

I really like this framing of subjectivity as a dynamic horizon, right? If you think about the self-help bookshelves in a Barnes & Noble, there's this idea that you have a single true authentic self that's waiting to be discovered. And I find that misleading because there is no static substantive thing that is oneself, right? Instead, as I think you're saying, what we experience as ourselves is an ongoing dynamic process that's always under construction. So there's no self to be found. There's a self to be created, to be designed, to be nudged towards whatever horizons align with our values. And that ongoing process of construction is one that is always environmentally woven. So the idea that you can change yourself, on one hand, I think has merit. It's important. We do have agency. On the other hand, it's kind of silly if you restrict your view of yourself to the boundaries of your skin.

All right, so we've done algorithms and subjectivity. Let's get to critical knowledge, right? This is one of those elements you write about where algorithms are directly effecting change. I mean, the subtitle of your book is the subversion of critical knowledge. So what do you mean by that? I think knowledge really ties up both subjectivity and algorithms, right? Algorithms, as you said, are about creating knowledge, or a new type of knowledge, from data. And then subjectivity requires what you would call critical knowledge. So let's start to kind of deconstruct: what do you mean by critical knowledge?

Eran Fisher 0:31:10

Wow, that's another tough one, right? 

Oshan Jarow:

Maybe as something to jump off of: something that stuck out to me was really whether the self is participating in the process or not. Is that one of the key elements?

Eran Fisher 0:31:26

Yeah, I think that's really one of the keys, the idea that knowledge itself changes reality, because we are subjects. I mean, for a stone to know its conditions, or its phenomenology in the world, would really not change anything about its phenomenology. Obviously, I'm kind of piggybacking on Hegel and Marx and the Frankfurt School. So maybe go back to Marx, because I think he put it very, very neatly when he was talking about class and about historical materialism. He made this distinction between class in itself and class for itself, right? So class for him is an objective reality, right? If you look at the history of human beings, you can step back and, as a researcher, as a scientist, you can say: I can show you that the best way to explain this history is through understanding it as two classes, blah, blah, blah. Okay. But then he said, in order for that to change, which is what he was interested in, people themselves, the subjects themselves, the working class, they have to acknowledge this reality. They have to have this knowledge themselves, right? So if they have a concept, the idea that their reality is structured by them being part of a working class, that might actually change the reality, that might change the course of history. So I think this is where it's actually coming from. One of the examples that the Frankfurt School gives of critical knowledge, of critical theory, is Marxism; the other one being psychoanalysis, which I also mentioned.

And so I think it's the application. I think critical knowledge is really applying tools of knowledge to knowledge construction itself, to how knowledge comes about. And yeah, I think one of the key issues is for the subject, for the human being, to be part of that process of formation. So I mean, if we go back to this metaphor of psychotherapy, of a session, or not even a session, just being by yourself and applying these ideas of psychoanalysis or psychology: what happens in a session is not the therapist telling you something about yourself. They might already know something, but really the idea is for you to reach a point where you yourself can express this kind of knowledge. Because, how would I put it, just forming this knowledge is really part of changing the reality. Hope that gives at least a preliminary answer.

Oshan Jarow 0:34:56

Yeah, no, I think it does. And there's a related idea here I wanted to touch on, one you also trace back to Jürgen Habermas: the idea of humans having what he called an emancipatory interest, which, as you write, is what gives rise to critical knowledge. You described it as an interest in overcoming externally imposed dogmatism, internally induced compulsion, and interpersonal and social domination. So is the claim that all humans carry this emancipatory interest baked within us, that it's a kind of innate thing?

Eran Fisher 0:35:36

Wow, innate. I don't know if I want to call it innate. Going back to the beginning of this conversation, you mentioned Darwin, which I loved, because I never thought in those terms when thinking about technology. My assumption is that we don't need, well, I don't want to argue whether there's, let's say, a human nature or not, but I think we don't really need a conclusive answer. My proposition is that humans are really so flexible, and we're such a historical entity, that from my perspective, emancipation is not innate. And I think that's what gets me scared, actually, or worried. Before trying to emancipate ourselves as a species, we sort of had to have this belief that there is something like being an emancipated human being. And maybe there isn't, but this belief, this kind of theology of freedom and emancipation, gave us so much of what we live with now, like democracy and human rights. I mean, this idea that we're not just an imprint of nature. We're not just an imprint of our parents, not just an imprint of our nation and of our culture. We have some kernel of, I don't want to call it individuality, but some kernel of freedom within us.

So first we had to believe in that. And I think that's in and of itself a modern belief, right? When people lived in pre-modern societies, it was really more about belonging to the clan, belonging to the community, and being one with the community. I think my fear is not just that these machines undermine our freedom, but that in a way we're not going to miss it in 50 years' time. Yeah, sorry, it sounds a little bit like The Matrix, right? So I have to be wary not to go in these directions, but we do have to think about where it's going, right? I think freedom, in the sense that I described earlier, is something that you have to believe in and fight for, knowing that you're never going to fully achieve it. It's not like algorithms are the first enemy of freedom, right? I always give the example of, let's say, advertising and marketing: when my daughter tells me that she really has to get the new Nike shoes, I mean, I know that she thinks that she needs them and she wants them, but let's say as a sociologist, objectively, I know that this want, this very personal want, was implanted into her, and I know how it's done, right? We know a little bit about the mechanics of that. So yeah, I don't know if we're going to miss it if we don't have that. I don't know if it's that innate, in a way.

Oshan Jarow:

Yeah, I think this is a really interesting framing and it does resonate with me that an emancipatory interest or maybe even, you know, this interest and desire for the kind of freedom you're talking about is not innate, right? So for example, in the last episode of the podcast, I was speaking with the economist Christian Arnsperger exactly about, you know, what economics would look like if it were built around this idea of fostering people's capability to act on an emancipatory interest. And he defined emancipation in a way that I think is relevant here, right? He defined it as a process of revealing that what often comes to appear to us as fixed and objective, you know, facts about how the world is and how our lives thus have to be are not objective at all, but are very often social constructions. They are things that have been molded through historical, political, social, cultural processes that are always ongoing.

And I think you're right that the capacity to see things that way is not in line with the mind's kind of natural slope, right? The cognitive tendency is to see things as things, right? As concrete, discrete, objective structures, nouns, you know, rather than processes or verbs that are always in flux. The mind is like a knife, and it carves up the world into these concepts so we can think with them. And I think that this emancipatory interest is actually one that has to labor against that cognitive tendency, right? Against the tendency to reify things. And I think that this connects really nicely to another theme of the podcast, which is what's going on right now in the world of psychedelics, right? One of the leading ideas in the kind of cognitive philosophy of psychedelics is that what they do is relax our concretized beliefs about the world, right? They loosen the rigidity with which we've carved experience up into concepts that we can build these predictive models with.

And even the very experience of being on psychedelics, you know, I'll speak from personal experience here. When I look at organic matter, like a tree or grass, or even a stone, the visuals aren't hallucinations, right? That's a really misleading word, I think, especially for mushrooms. What I see is something I often describe as a thing that is ongoingly becoming itself.

The matter that constitutes the tree appears visually as being literally in motion, almost like a volcano that's erupting, but the lava that's being erupted is then looping back into the bottom of the volcano just to cycle through that process again at very kind of fine-grained levels. And everything that I've kind of previously perceived as a discrete and unmoving object, like a stone, comes to appear in my visual perception as a process of always constructing and reconstructing itself in real time. And so I think that this idea of emancipation is actually very similar, that the concepts we have about the world and ourselves and the ways that we've come to believe the world simply works and things that are entertained as possible and susceptible to change or not, these can all be upended and revealed to be in flux. And that, to me, kind of opens up this space of freedom to participate in nudging and intervening in that process. 

Eran Fisher:

Yeah, wonderful. Just to add, I think it goes back to something that we talked about earlier, what I would call, after Freud, emancipation and its discontents. The troubling thing about critical theory, or critical knowledge, is that at the end of the day, it undermines its own basis, right? Because if knowledge is constructed, then the knowledge about this construction is also constructed. So there is something very scary about it. And I think you're right about the tendency; we see it throughout modernity, this idea of objective knowledge: that you can know not just the world, but you can know human beings objectively, with some machines or with some tools, like surveys. You can just know what the public wants.

You know, you have intelligence scores that can tell you who is more intelligent than another person, or which group is more intelligent, et cetera. So there's supposedly a way to really know reality. And I think critical knowledge really tells you that this knowing is in and of itself social, in and of itself constructed. And this fantasy, you know, this fantasy that if a Martian came down to earth, we could just ask them something about beauty, or about justice, or whatever: no, there's no Martian. It's all upon us to actually make a decision. And that's where, and I think Habermas deals with it beautifully, that's where politics and knowledge, politics and science, touch each other a little bit, right? Where it gets a little bit risky, because we do want to keep them apart. But if we as political entities, as political subjects, also form our knowledge, there's always something a little bit risky there. I don't know. I kind of went overboard, but let's hope it was a little bit understood.

Oshan Jarow 0:45:50

No, that's great. It's great. So it's interesting. I think we've touched the three terms pretty well, and I wanted to try to pull them together. So I might just piece together what I've taken as an answer and see what you think of it. As I see it, the basic claim of your book is that algorithmic environments usher in a new kind of knowledge that undermines this Enlightenment project of critical knowledge, and in so doing undermines subjectivity. When I think about what that means in practice, I actually think your example of psychoanalysis is really interesting in terms of pointing out that which is being subverted. So tell me if I have this right.

But, you know, you described how, in the process of speaking with a therapist, the therapist can't just tell you what they have learned, you know, what your neurosis is. That doesn't quite work. What the therapist tries to do is help you come to that conclusion on your own, right? You are involved in the process of that realization, the production of that knowledge. And what algorithms are doing, and I think we have to acknowledge it's in large part because of the abundance of information, which we can no longer sort through manually on our own, but what algorithms are doing is that process for us. They are the therapist telling us what we need, rather than the one bringing us to reach our own conclusion. Is that somewhat reflective of the argument?

Eran Fisher 0:47:12

Yeah, absolutely. You know, the book was translated into Hebrew, and the publishing house offered a really good title, which is Thinking for You, right? So it kind of neatly captures this thing. I think what is unique in our situation is that we cannot interfere with the process of thinking of the algorithms, of these machines. So, OK, let me put it this way. I'm not against having all these aids, all these tools that help us know ourselves. Right. So, for example, the therapist.

That's another human being, external to you, who is actually helping you to understand yourself. And diary writing was externalizing yourself and then reading it and coming to new insights about your experiences, and who you are, and, as I said, what hinders you. OK, but what is happening with this machine, and what is really new about it, is that it's not communicative. It's just giving you the results. Actually, the machine itself, or the people who are handling it, they don't know exactly how they know you. They just know they should get as much data as possible. A lot of those algorithms are already written by another algorithm, so the way to validate this knowledge they have about you is by practice: whether it works or it doesn't, whether the recommendation actually is helpful for you. So we know that, for example, 80 percent of the movies watched on Netflix are a result of their recommendation engine.

So we know that people are reacting to that, but we don't know how they're doing it. And probably they don't know either. And you, definitely as an individual, have no idea how they reached this conclusion. So you cannot deconstruct it. You cannot read through the text and say something. You cannot participate. As you said, we cannot participate in the production of this knowledge. And so we cannot critique it. We cannot defend ourselves, because it doesn't happen in natural language. It happens in digital language, basically.

Oshan Jarow 0:50:00

Yeah. And one of the questions I was carrying with me, and you touch on this frequently throughout your book, was really thinking about how different this algorithmic threat to subjectivity is relative to just plain old mass media in the 20th century. Right. I mean, the Frankfurt School had a whole thing on homogenizing humans and producing one-dimensional man through the culture industry and this and that.

And it seems to me that the crux of the difference is that with the mass media, the entirety of the country sat down at 7 p.m. and watched the same news program. That media was not tailored to the individual; it was not giving you knowledge about yourself. It was a single, one-dimensional, one-to-many kind of broadcast. Whereas it seems to me that what differentiates algorithms is that these are tailored to the individual. And I think that ushers in what you described before as this kind of reflective relationship, where we now engage in thinking about how we define and see ourselves in relationship to this media. How do you think about that difference between the mass media model and the algorithmic model?

Eran Fisher 0:51:10

Yeah. So I want to make actually another distinction within mass media history, which I think is really interesting and really shows you very well the difference between how algorithms see the audience, or users, and how the mass media looked at them. Because the mass media, when it began, let's say at the beginning of the 20th century in the West, you're right, it really looked at the audience as one homogeneous bulk, mostly thinking about the audience in terms of its producers. So, you know, white, male, kind of mid-upper-class people, blah, blah, blah. Okay. But then the mass media actually went through a process of thinking: who are those people watching us? And it started a process of what we might call segmentation, right? Segmenting the audience and saying, hey, 50% of our viewers are women. They might have different interests.

So magazines, or even TV networks, started to produce different content for different audience segments. But look at what happens when you have these women's magazines. Then, let's say as a woman in the fifties or sixties in America, you would look at that. And because all of that is happening, again, in natural language, you would see what Althusser called interpellation, right? You look at the magazine that is saying to you: hey, woman, look at me. You're a woman, so you're supposed to be really interested in pampering yourself, and cosmetics, and taking care of your husband and kids. And you could critique it, and you could resist it, right? Because it approaches you as a woman, telling you who you are, and then feminism kicks in and you resist it. So there's kind of a healthy conversation. This is a political conversation. So the mass media treats you one way, tells you what it thinks about you, it thinks that you're a woman who's supposed to do ABC, and you might resist it. What happens with contemporary digital media is, first of all, that segmentation is no longer in bulk. It's not about political categories like race, income, class, education, et cetera; it's actually much more fragmented. So you and I might find ourselves in more or less the same rubric on Amazon, because we're kind of interested in the same topics, the same types of books. Now, first of all, I don't know how the media see me. I mean, they don't interpellate me as a white person, or a Jewish person, or a man, right? I'm interpellated as, I don't know, N3764, which is one of thousands of rubrics by which they catalog different people. In a way, even with the mass media, even if it was racist, you could resist it. So it was a political conversation, and people did resist it.

That's what the Frankfurt School was all about. That's what cultural studies in the UK was all about. But now it's kind of depoliticized. It's completely personalized. You don't know exactly how they see you. And in a way, the algorithm doesn't really see you in those categories, categories which I would say are in natural language, how we see society as comprised of these social demographics. I think one way to describe it, and these are not my terms, is as a kind of post-demographic society, right? Post-demographic knowledge.

Oshan Jarow 0:55:49

Wow. Man, there's a lot there. I think that's right, though, focusing on that shift. That clarifies it in a lot of ways. The post-demographic idea, I hadn't heard that one, though. That's interesting. I'm going to have to look into that. You know, it makes me think, and we touched on this a little bit, there's a kind of increasingly common critique of algorithms which focuses on how repeated interfacing with them actively makes us more predictable: it trains them to better understand our behavior, but also trains us to trust their recommendations more, and this spirals into ever more algorithmically predictable humans.

And I find that interesting, and I find it compelling, but I also think it gets even more compelling when you zoom out from individuals and look at culture. Algorithms, you were just talking about this. I always find it interesting when, for example, my father will say something like, you know, this algorithm, it wants to know this about me, or whatnot. Algorithms don't care about individuals. They care about demographics, right? That's where they make their money: being able to predict the demographic. It doesn't care about Oshan Jarow in particular. It only does insofar as I feed into its demographic predictability. I think that this dovetails really nicely with an idea from the predictive processing realm of cognitive science. I'll say that it's a bit of a tangent, but I think it might be worth it.

The basic idea, if anyone isn't familiar with the Bayesian brain, predictive processing, or free energy realm, is that the brain is ultimately trying to minimize surprise, or uncertainty. You know, subjectivity is this internally generated model, and the brain updates its predictions based on sensory data, fine-tunes that model. And it does this in order to render its environment more predictable, and ultimately to carry on existing. Now, somewhere in the literature on that, someone asked: well, what if the brain got so good at internally modeling the outside world that it just never got surprised? What if it succeeded in keeping uncertainty, or free energy, at an absolute zeroed-out minimum? And it turns out that there are a number of really undesirable effects you can draw out from that. It would actually be a pretty bad picture. And so Andy Clark, who's this philosopher of mind and a big player in the predictive processing world, wrote a great paper on this, I thought. And his point was that that won't ever happen, because of cultural evolution. He framed the function of cultural evolution as shaking up our environments just enough so that our predictive minds never get too good, too perfect, in the success rate of their predictions.

So there's something very adaptive and fundamental about thwarting that prediction perfection. And on one hand, I think you could argue that cultural evolution will do the same thing for algorithms as we're talking about. It might thwart efforts to render humans perfectly predictable. But on the other hand, algorithms are starting to shape not only humans but culture itself, and they sit upstream of that process of cultural evolution, which I think is a very simple case to make, right? They control the movies we watch, the music we listen to, the information, even the people we date now. Couples are more and more forming through algorithmically matched platforms. So that kind of paints a bit of a troubling picture. I'm curious how you see both the process and the prospects for how algorithms affect not only individual human development, but that larger kind of cultural evolution.
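To make the predictive-processing picture above a bit more concrete, here is a toy sketch in Python. The learning rate, noise level, and drift are all invented for illustration, and this is a cartoon of the general idea rather than anything drawn from Clark's paper: an agent that corrects its prediction by a fraction of each prediction error can drive surprise toward a floor in a static world, while a drifting world, a stand-in for cultural change, keeps surprise from ever bottoming out.

```python
import random

def average_surprise(drift=0.0, steps=5000, lr=0.1, noise=0.5):
    """Toy predictive agent tracking a scalar 'world state'."""
    world, prediction, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        world += drift + random.gauss(0, noise)  # the environment evolves
        error = world - prediction               # prediction error ("surprise")
        total += abs(error)
        prediction += lr * error                 # update the model to cut future surprise
    return total / steps

random.seed(0)
# Static world: the model converges and surprise hovers near the noise floor.
print("static world:", round(average_surprise(drift=0.0), 2))
# Drifting world: the agent perpetually lags the moving target, so surprise
# never zeroes out, which is the role Clark assigns to cultural evolution.
print("drifting world:", round(average_surprise(drift=0.3), 2))
```

With drift, the agent settles into a steady lag of roughly the drift divided by the learning rate, so its average surprise stays strictly above the static case no matter how long it runs.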

Eran Fisher 0:59:27

First of all, that was really fascinating. I love these thoughts, and the connection between the mind and those machines that really try to mimic the mind and, well, I would say suppress it and supersede it. Yeah, it's interesting. I don't think I've given a lot of thought to what it's doing to our culture, in a way. So I'm kind of really fascinated by the prospects that you were painting, but I'm not sure I have something more intelligent to say about that.

 

Oshan Jarow 1:00:14

That's totally fine. I forget where, but you touched on this lightly somewhere in your book. On one hand, it's very clear the ways in which algorithms seem well poised to be determinants of culture. On the other hand, and I think this is actually a good doorway into the next question I wanted to ask, I don't think the picture is as dark or as fixed as it might seem. I think the question it raises is: in what ways can we participate with algorithms, such that culture is not something that is thrust upon us, but something more like a democratic deliberation process?

And so this leads into maybe one of the biggest questions I took away from your work, or the question that lingered with me as I was reading it: whether algorithms inherently, by definition, supplant and undermine subjectivity and exclude the self from the process of knowledge production, or whether it is possible for algorithms, even recommendation algorithms like the ones we've been talking about, to augment and enhance and enrich that process, perhaps under different structural conditions, or more specifically, different regimes of data governance. And I have a few questions on this theme. So let's start with this: what, in your mind, determines whether a form of media enhances or subverts subjectivity? Because you do acknowledge in your book, and you have a couple of really interesting historical examples, you already mentioned the personal diary, that there are ways in which forms of media can be complementary along this road of self-production. So is there an example you'd like to point to of how media can support us in that process?

Eran Fisher 1:02:02

Yeah, you know, I'm sort of already off to my new subject, my new interest, which is really the materiality of media and how it changed our way of thinking, and actually our way of thinking of ourselves. So I'm doing a lot of reading now about oral culture and the move from an oral culture to a written culture. And this is a really good example: when people started writing, this was really a new machine for thinking. Suddenly, words were not so ephemeral anymore. You could just put them onto a piece of paper, inscribe them. So it totally changed how we remember things and how we can recall things later.

And it also had a very, very important effect on objectivity, because suddenly words were not tied simply to the person who pronounces them; they had their own existence in the world. So my short answer to your question of how we know is, unfortunately, that we wait and see. I mean, there's, let's say, a kind of distant, cool way of looking at the world as a scientist, and I have that as well. So I look at it really fascinated, really trying to figure out what's going on, knowing that I don't have any tools for knowing exactly how it will play out. But then I'm also a political being, and I don't want to wait for that to happen. I want to interject, right? So I'm also trying to figure out where it could go, to sort of outline the possible futures. I think that's what you've been asking me: is there a future for algorithms that I can see which is actually liberating, or emancipatory?

And it's really so hard to answer that, a little bit because you already set the stage, right? You said, maybe if there's a different data governance, if platforms were public rather than owned by private companies, if they were... I mean, there are so many ways. So it's almost like when someone asks you, what if you were an elephant? And you think, wait, but if I'm an elephant, do I even know that I'm me? So many things would have to change. And yeah, I don't know how to answer that, to be honest, really. Because that's one of the key critiques that I get: let's say, Marxists would tell me, yeah, you know, it's not the technology. It's the data governance. It's that all this is being structured by those corporations. And yeah, they have a point. I agree with that, but I think there is something more there.

You know, I kind of made a shift in my interest, after studying the internet, or internet culture, however we call it, for, I don't know, 20 years. For many years, I was really interested in Marxian theory, and in using it to understand how the internet works. But in this last work, I don't think I even cite Marx. So I was really shifting away, not to say that suddenly I forgot about capitalism, but I was really interested to see: can I just focus on the machine itself? And of course, you leave some blind spots that way, but I was really interested in figuring out something about epistemology, something about how we create knowledge, you know? So it was a shift for me, not to say that I forgot about the other axes of analysis that you have to implement, but I think there is something about how this technology works which goes beyond, let's say, ownership of data. But I mean, yeah, I could imagine something like that. I could imagine, for example, well, I think one of the problems that we have is that, to really use a Marxist term, the means of production are totally in the hands of, let's say, five companies.

They're the only ones. Even if they let you have the data, your data, it would be very hard to actually process it in an intelligent way. You really need those huge machines that are able to process so much data and engineer new algorithms that make sense of this data. So it's really like a huge factory. It's really, really hard to think about, let's say, community data centers or something, if that's what you're maybe hinting at.

Oshan Jarow 1:07:51

Yeah, it's interesting. It was really clarifying to hear you talk about that, because when I got towards the end of the book, the question on my mind was data governance. And I've been very influenced by a woman named Salomé Viljoen, who I think just wrote a phenomenal paper on this. She had a really interesting frame where she said, look, there have been two critiques of data and algorithms in this whole world so far. One she called the propertarian critique, which effectively, as you mentioned, pointed at ownership. That critique kind of lent itself towards the idea that we should own our own data, right? Apply private property to the data we produce, so on and so forth. Maybe pay out some data dividends, give people like 20 bucks a year. Okay. The other camp, which we've already talked about too, is the Shoshana Zuboff, Douglas Rushkoff world; I think she calls it the dignitarian critique. You know, this idea that algorithms can only imperfectly see humans, and all that is left out is very crucial: it's that which defines our humanity.

And so it kind of lends itself towards people's right to not be datafied: privacy rights, the right to opt out, all these kinds of things. These are both important, but she feels that they're both insufficient to the task at hand. Her perspective is to see data in terms of the social relations it modulates, and she looks to regimes of democratic data governance as a way of trying to bypass the shortcomings of those two other critiques. And I found that really interesting because, when I got to the end of your book, what was on my mind was, and maybe I'm an optimist, or maybe I'm a Marxist according to how you put it, the idea that, yeah, I don't think it's inherent or intrinsic to algorithms that they will exploit and subvert subjectivity. I would like to think that it has to do with the environments in which they're deployed, which is, for one, an environment where they're privately owned and operated for profit.

They're also deployed in an environment with just woefully inadequate data governance regimes, no privacy, the whole thing. And I do feel that I can imagine a world where, on one hand, we have democratic data governance. On the other hand, there's transparency, knowing the construction of the algorithm. But I think the thing I'm most interested in is user control over the contours of the algorithm. And this is something you touched on with a few examples too. If we can play a part in fine-tuning the kinds of nudges that the algorithms give us, I can imagine that they could maybe function as tools. And the example I want to ground this in, you know, you touched on this, is Netflix.

There are folks doing a kind of DIY act of rebellion on Netflix where, you know, they'll click on a movie not to watch it. They click and click off, click and click off, a whole bunch, because they know that in so doing, it'll nudge the algorithm in a particular direction. So they want to play a part. They want to exert agency over the kind of person the algorithms see them to be, and also the kind of person the algorithms nudge them to become. So I think it's interesting. I understand what you were doing now, trying to step back and just see the technology in and of itself. But I guess to my mind, the question is: can we formalize the logic of that micro Netflix rebellion at scale? And maybe that's too optimistic. Maybe corporations or public entities won't hand over that kind of control. But how do you see user control? Do you see that as a site of optimism, or do you think it's insufficient to the task?

Eran Fisher 1:11:31

Yeah, well, I would now try to sabotage your optimism. I don't want to end on an optimistic note. No, I'm joking. But I want to say something quite speculative. What you just described, let's call it sabotaging, or trying to sabotage, the Netflix system by lying to it, is actually a good instance of a very non-communicative form of engagement with another entity. I'm not saying it's not going to work, but it also really changes the terms of what, for me, would be a good society, and also a good, or let's say better, well-being.

So let me try to push something forward. You mentioned Viljoen seeing data as a form of social relations, and I see your point, but I think what's happening with big data and algorithms as a machine, regardless of the governance, is that we're getting further and further away from, let's call it, natural language. Data is a form of language, but it's not one that humans can interact with, and I think we need to hold that thought for a second. You talked about the difference between, let's say, technology and tools, which is kind of a Heideggerian perspective: a tool is something where you are the agent using it, while a technology is more like something using you, or working more independently.

So when data and algorithms are used to understand ourselves as human beings, or, as you said, to create culture at large, you're starting to engage with entities that you have no way of really speaking with. You might find a way to sabotage here and there. In one of my chapters I talk about navigational apps, and a lot of people tell me about their tricks. I just had a radio interview where the host told me: I have this thing where I want to resist it, so I don't listen to it. It tells me to go right, I go left, and boom, it actually gives me two minutes less on my ETA. And, doing my job, trying to sabotage his optimism, I told him: well, that's part of the plan, really.

The way these navigational systems work, they don't have eyes. Their only way of knowing what the situation is is by using you as a control group. I actually found a technical paper, I think from either Waze's or Google's engineers, talking about this. They're always doing A/B testing, so they will always move some of the traffic through a route they know in advance takes two minutes longer, because at some point things will change; traffic is always very dynamic. So what I'm saying is that these forms of resistance are maybe just a way for these people to get better recommendations from Netflix. But there you have it: now they get better recommendations, so they're really being part of how it's supposed to work.
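A hedged sketch of the exploration logic being described here, not Waze's or Google's real code, just its rough shape: a router that knowingly sends a small share of drivers down the route it currently believes is slower, so its travel-time estimates stay fresh as traffic changes. All names and numbers are invented for illustration.

```python
import random

class ToyRouter:
    """Hypothetical router illustrating A/B-style route exploration."""

    def __init__(self, explore_share=0.05):
        self.eta = {"A": 10.0, "B": 12.0}  # current ETA estimates, minutes
        self.explore_share = explore_share

    def assign(self):
        best = min(self.eta, key=self.eta.get)
        worst = max(self.eta, key=self.eta.get)
        # A few drivers are knowingly routed the slower way:
        # they are the control group that keeps the data honest.
        return worst if random.random() < self.explore_share else best

    def report(self, route, observed_minutes):
        # Exponential moving average lets estimates track dynamic traffic.
        self.eta[route] = 0.9 * self.eta[route] + 0.1 * observed_minutes

router = ToyRouter()
for _ in range(1000):
    r = router.assign()
    # Suppose route B has quietly become faster; only the explorers
    # routed onto it can reveal that to the system.
    router.report(r, observed_minutes=8.0 if r == "B" else 10.0)
print(router.eta)  # B's estimate drifts toward 8, thanks to the control group
```

The point of the sketch is the same as Fisher's: the driver who defies the app and arrives two minutes early may simply be playing the role the experiment assigned.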

And Netflix is just going to say: hey, thank you for letting us know that you actually want to watch serious dramas rather than silly comedies. So I'm not saying there's nothing to do, but if we're going to do something, it should really be on a more political level, policy, et cetera. But I don't know, these are interesting times; I don't know where it's going, really. It's funny that you found some optimism there. I'm happy for you. I finished writing that book feeling so pessimistic that I said: okay, for my next project, I don't want to deal with this anymore, I'm going to do something completely different.

Oshan Jarow 1:16:54

That's fine, it's a good balance. But here's my other shot at optimism, the other thing I took away from your book, maybe at a higher level; I'm curious what you think. One of the things this conversation around data governance and algorithmic environments really drives home for me is the reciprocal relationship between our environments and subjectivity, or maybe more broadly, human development. In designing our economic environments, we're designing for the kinds of selves we want to become, and that we want our fellow humans to become. And this is something the classical economists, Karl Marx, Adam Smith too, were deeply attuned to and concerned with. You have Marx on alienation and what alienated labor does to the mind of the worker.

You have Adam Smith on the division of labor gone too far; he has really harsh words about the torpor of mind it produces in the worker. For me personally, this also goes for anti-poverty policy and the forgotten effort to reduce the working week, but I think it's even more explicit in these conversations around algorithms. As we design, decide, and regulate, or fail to regulate, the incentives that guide our media environments, we're designing the informational ecologies we're going to exist both in and through. I wonder if you see this realm where media theory meets economics as bringing back that old political-economy focus on the relationship between our environments and human development, because I would love to see that come back to the fore. Or am I being a little optimistic, and will you deflate that too?

Eran Fisher 1:18:40

Say something, say more about that, because I'm still trying to grapple with it. You put those two things together, and I want to get a better sense of what you're thinking of.

Oshan Jarow 1:18:52

Yeah, I guess what I'm thinking of is this: if we go back to the time of Karl Marx and Adam Smith, Smith, of course, was a moral philosopher, deeply concerned with these kinds of questions, and in their writing you find very explicit care given to the kinds of humans we become by virtue of existing in our economic environments. Then in the period after the classical economists, through the 20th century, excluding folks like the Frankfurt School, who were on the fringe and were still making this case in even more forceful terms, mainstream economics became a field where the question of designing an economy was separated from the question of shaping human beings. We came up with all these categories, efficiency and growth, and we used them as proxies for everything we still felt we held dear.

But I think we got so enchanted by the proxies that the underlying connection, how those environments were having deeper impacts on humans themselves, was pushed into the back seat, relegated to places like the Frankfurt School. So what I would like to think is that what the media environment does, what algorithms do, what engaging with Netflix every day and then with dating algorithms does, is thrust this relationship back in our face: if we do not take action in designing these environments in ways we feel are conducive to the kinds of humans we want to become, they will do that for us, or it will happen behind our backs, so to speak, and that's probably not for the best. So I'd like to think it brings that connection back to the fore.

Eran Fisher 1:20:41

Well, I'd like to join your optimism. I think it's so interesting, because the economics of the 20th century really had to imagine quite a stable human being, right, one that wants to maximize profit and so on, very stable, so that the theories would work out. And I find the same in those media or informational environments. You talked about prediction: they really want to get to that point. I have this quote from HBO's CEO saying there's going to come a time when you just sit in front of the screen and we'll know exactly what you want to watch, just from the data we get from your day, what you've done, what day of the week it is, how many people are sitting there, et cetera. This kind of total control. And as you put it very beautifully when you talked about culture, culture is always on the move and always unexpected. So I think the type of human being they would want to create is really not a subject but an object, someone like a stone, totally predictable: if you have the data, you know the result for sure.

You know exactly what he's going to do. So yes, I'm really not dealing with policy, but if there's any political contribution in my book, it's to raise awareness of that, and to be part of the movement you wish would emerge, one that would say: hey, if we don't engage now, they may actually succeed in creating this human being, one alienated from their own data, their own information, however we want to put it.

Oshan Jarow 1:23:02

Yeah, I think you hit the nail on the head with the first thing you said there. When we think about the image of the human that arose in the post-classical period, in the 20th century, I think it's exactly right that economists adopted a static model, both of the economy and of the human. You can trace this back very literally: Léon Walras and the marginalist school of economics went into the physics textbooks and took the equations for a system at equilibrium, and those equilibrium equations became the basis for all of neoclassical economics.
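For readers who want the literal equation, here is a standard textbook rendering of that Walrasian idea; the notation is an editorial gloss, not something from the conversation. Equilibrium prices p* are whatever makes excess demand vanish in every market at once:

```latex
% Walrasian market clearing: for every good i, total demand equals
% total endowment. x_i^h(p^*) is household h's utility-maximizing
% demand for good i at prices p^*, and \omega_i^h is its endowment.
z_i(p^*) \;=\; \sum_h x_i^h(p^*) \;-\; \sum_h \omega_i^h \;=\; 0
\qquad \text{for all goods } i .
```

The system is solved once and for all: preferences and endowments are fixed inputs, so nothing in the model moves, learns, or develops.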

Of course, equilibrium is the opposite of a dynamic system. And this is changing now; we have complexity scientists bringing non-equilibrium systems into how we model economies. But you had the economic portion, where we adopted a static model of the economy, and the human portion, where we adopted a static model of the human, a purely theoretical construct that listeners will know well: the utility-maximizing rational human being whose preferences are totally separate from, and unshaped by, the structure of the economy. So I think you're right to point to the static nature of the image of the human they adopted. Wow, we've covered a lot of ground. I feel like this has come to a pretty good place to rest, but I do wonder: is anything still lingering on your mind that you wanted to throw into the mix?

Eran Fisher 1:24:33

No, I think we really covered so much, and actually you surprised me, so I had to think on my feet. It was wonderful, actually, really good, really interesting.

Oshan Jarow 1:24:46

Thank you so much, I appreciate that. That's good, that's one way to measure the success of a podcast, I think: can you surprise your guests?

Eran Fisher 1:24:54

Exactly. You did. 

Oshan Jarow 1:24:56

Wonderful. Well, Eran, thank you. This was such a pleasure, both getting to go through your work and to speak with you here. And are you working on another book? You mentioned it a little bit. What's your next focus?

Eran Fisher 1:25:09

Well, my next project is about calendars, actually about daily planners, which really picked up only from, let's say, the mid-19th century, when people started to predict and plan the future and write it down. Of course we're still using calendars today, Google Calendar and all the digital ones, but I'm really interested in the beginnings of that and how it changed the perception of time. So that's what I'm working on right now.

Oshan Jarow 1:25:52

Oh, that's fascinating. I look forward to reading it. Great. Thank you so much. It's been a pleasure.