
Dialogues on Human Enhancement: An Interview with Nicholas Agar*

* We thank Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ - JCNE: E-26/201.377/2021) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq): APQ/PRÓ-HUMANIDADES (421523/2022-0); Bolsa de Produtividade em Pesquisa/PQ (315804/2023-8); and Chamada Universal (421419/2023-7) for their support for Murilo Vilaça.


Abstract

In this interview, philosopher Nicholas Agar answers questions about his most recent book, Dialogues on Human Enhancement. Agar comments on the challenge of writing a book in dialogue form, what the process of involving his students was like, and the relevance of reviving an ancient method of doing philosophy. In addition to genetic technologies, Agar discusses digital technologies and brain-computer interface technologies (BCIs). He also reflects on what it means to be human in today’s technological society.

Keywords:
Human enhancement; Digital technologies; BCIs; Humanity


Introduction

In this sixth interview in the series conducted by the Transhumanism and Human Bioenhancement Research Group (GIFT-H+/CNPq), we are pleased to interview Nicholas Agar. He is a Professor of Ethics at the University of Waikato, New Zealand. As one of the leading researchers in practical ethics, Agar is known for developing sharp and up-to-date reflections on technologies, their applications, limits, and implications.

Agar is credited with introducing the concept of liberal eugenics into the debate, a term used to title an article published in 1998. In 2004, he published the book Liberal Eugenics: In Defense of Human Enhancement, causing significant impact and controversy. The book quickly became a reference in the debate, for better or worse (given eugenics' past, it became a target for critics of Human Enhancement). The fact is that Nicholas Agar, from this point on, became a central author in the debate on Human Enhancement.

Throughout his career, Agar has published numerous articles and books. Of the books, in addition to the one already mentioned, we highlight Humanity’s End: Why We Should Reject Radical Enhancement (2010); Truly Human Enhancement: A Philosophical Defense of Limits (2013); The Sceptical Optimist: Why Technology Isn’t the Answer to Everything (2015); and How to Be Human in the Digital Economy (2019). These books show part of the breadth of Agar’s contribution to the contemporary debate on technology, and that he continues to review his positions critically. Additionally, Agar has contributed to the public debate by publishing media pieces.

Recently, he took a surprisingly innovative approach to the Human Enhancement theme. In his most recent book, Dialogues on Human Enhancement (2024), he adopts a dialogic form. By constructing characters representing all positions in the debate, Agar continues to offer innovative contributions to those interested in the topic.

In this interview, carried out by Murilo Vilaça (Fiocruz), Murilo Karasinski (PUCPR), and Léo Peruzzo Júnior (PUCPR), the focus is this new publication. However, other points from Nicholas Agar’s essential publications are also addressed. With this, we hope to offer readers a rich opportunity to learn about the author’s thought.


Interview

Murilo Vilaça: First, I’d like to say what an honor it is for us that you’re giving us this interview. Thank you very much for accepting our invitation so kindly.

My first question focuses on the method that was used to approach the subject of human enhancement. We’re talking about a text that was created as a dialogue, and we can also speak of a maieutic method; both were ways of doing philosophy when Western philosophy came about, let’s say. But these forms were quickly abandoned. Over time, doing philosophy has become an almost solitary reflective activity. Thinkers interact, but the interaction is, to use a term that became common during the pandemic, asynchronous communication. We talk to thinkers, but not directly with them. So, with fictional characters representing real lines of thought, your most recent book recovers this dialogical way of thinking and doing philosophy. And then I ask you: how did the idea or invitation to write the book Dialogues on Human Enhancement come about? I imagine this was a new experience for you. What was it like to tackle such a familiar topic, one you’ve written so much about, in a way so unusual these days? What potential and limits did you identify in this way of approaching the subject?

Nicholas Agar: Thank you! I mean, I love what you said, by the way, about what I tried to do. Your commentary on it is very interesting. Yes, it does seem like a back to the future or forward to the past, and it partly is. I’ll tell you what, I had a conversation, and I probably shouldn’t say this, but the Routledge philosophy editor confessed something to me. You know, Routledge is part of Taylor & Francis. It’s a business. And they are having difficulty selling traditional philosophy introductions. So, the idea of a dialogue was something that they tried kind of as an act of desperation. Because they discovered two things: one thing, which is bad for them because they are a business, is that students are no longer buying their books. And the second thing, which is kind of bad too, is that they’re not even stealing them. You have these good philosophy books being written and students are acting like they are not there, because they don’t feel they need to know them. So, this was sort of, as Americans would say, “a Hail Mary pass”. Let’s try something. And I said, “I’ve never written a dialogue and I’m not a playwright”, but once I started doing it, I kind of fell in love with it. It’s exciting! I mean, it felt a bit like madness because the characters in the dialogue are all based on different versions of me. So, it felt like I was writing a dialogue in which I was getting to call an earlier version of me an idiot. It sounded insane, but it was fun! And then there was the first group of students I taught… None of them read the articles that we assign them, so telling them to read five academic articles… give up! But I said “here’s a dialogue. Your first task: invent a character. Who are you?” So, I sort of introduced them to my characters, and I said “what’s the role?” Well, you can get angry. I mean, you can’t just abuse my characters, but you can certainly disagree with them and call them completely wrong. Is the character you invent you?
Well, maybe, or maybe you hate the character you invent. I mean, is your character based on Elon Musk? Who knows? I mean, what would he [Elon] say? And when you give students that, they really love it! And they invented some crazy characters, and then they wrote dialogue. I sort of said “well, ok, right now you don’t know much about the debate, but to write the dialogue accurately, you may have to read some articles, you may have to do some research”. But the research they did was very much in the service of them telling their own story. They did it kind of willingly. Because I wasn’t saying “read ten articles, and then you can think”. It was “think first”. And then, yes, so I loved writing it. By the way, I went to give a talk in China, and I wondered what characters Chinese students would invent. What characters would Brazilian students invent? I bet they would invent very different characters from the characters my students in New Zealand invented. So yes, I loved writing it. And I hope that comes through.

Murilo Vilaça: Perhaps the most prominent character in Brazil is a bioconservative.

Nicholas Agar: There are Brazilian students who come to New Zealand. There was an excellent student. She wasn’t studying in my program, but she was from Brazil, and she spoke about Brazil, and she expressed a sort of optimism about the future. Maybe, sort of a “yeah, the past not so good, but…”. Well, that’s different! She wasn’t in my class, but I don’t know what character she would invent.

Murilo Vilaça: It’s very difficult to speak for Brazil, given the country’s multiplicity and size, but a lot has been published on this by people who aren’t exactly specialized in the subject and who defend perspectives that we usually classify as bioconservative, especially drawing on an author that many people know, and that you obviously know, which is Hans Jonas. None of the three of us here classify ourselves as bioconservatives, but there is a significant number of published texts endorsing perspectives that I find very fragile, which I’ll mention briefly in the next question. So, the bioconservative is perhaps an important character in the Brazilian scene.

Nicholas Agar: Yes. Well, I think we all have elements of that, and in sort of like a dialogue, I mean, you can’t just say: here is the truth. If there are other characters there, well, what will they say? It’s almost like the bit of the debate that occurs in your head. I mean, if you give me the power to make the world, I’ll just make it according to my preferences. But if you say “look, you can argue for a bioconservative position”, but people won’t just say “well, you’ve got a gun, therefore you’re correct”. They will respond. And often I think in philosophy, just imagining how people will question you… So that’s why, in my dialogue, I wanted to make my characters as rude as possible to each other.

Murilo Vilaça: That’s right. I’ll move on to the second question then, and it begins by highlighting your extreme courage in publishing the book Liberal Eugenics: In Defense of Human Enhancement in 2004. At the time, defending a type of eugenics was perhaps unthinkable and even foolhardy. You mention this at the beginning of the book, e.g., that your friends and colleagues were even a little surprised and afraid of people’s reactions to a book that defended something that has been labeled over time in a very negative way, this idea of state eugenics or Nazi eugenics. Since the publication of that book in 2004, the debate has obviously advanced. Books have been published whose claims are not, in my opinion, the most consistent, such as the books by Habermas and Sandel. On the other hand, an author like Julian Savulescu defended, for example, the Principle of Procreative Beneficence in favor of liberal eugenics and was the target of a lot of criticism. Unfortunately, in my view, the GATTACA dystopia continues to be present in the debate, although there are good arguments to challenge the usefulness of science fiction works for an academic argument. Hitler remains a central character in some of these arguments, and so does the Reductio ad Hitlerum fallacy. Important technological innovations have also taken place. I think CRISPR-Cas9 deserves to be highlighted in this field of technological development for gene editing. So, in summary, the debate has maintained certain characteristics, but it has also become more complex and more mature, if we can put it that way. One movement that seems interesting to me is what I’m going to call, and I don’t know if you agree, the progressive depolarization of the debate, with these transhumanist and bioconservative types as characters on opposite sides. So, it seems that a more moderate position has emerged.
In this sense, I would like to talk to Eugenie [a character in the book] who, from my reading, is a representative of moderate eugenics, or moderate human enhancement, and in this fictitious conversation with Eugenie, I’d like to ask her three questions. The first is whether she agrees with the proposal of a very important philosopher called Nicholas Agar, which he developed in 2004 (AGAR, N. Liberal Eugenics: In Defense of Human Enhancement. Malden, MA: Blackwell Publishing, 2004), so whether Eugenie would agree with the 2004 Agar. The second question comes from a statement that Agar himself made in that 2004 book, which is “some distinctions are clearer in principle than in practice”. So, I would ask Eugenie if there is now more clarity in practice between these types of eugenics - the eugenics that you proposed back in 2004, the liberal eugenics, and the eugenics that most people want to avoid, which is state eugenics. Finally (third question), Eugenie argues that there will only be public genetic selection facilities, a kind of state monopoly on the tools of genetic manipulation. Therefore, parents will have to ask the state’s permission to use a genetic enhancement technique. My question to Eugenie would be: how will the liberal state intervene in free parental choices? How will the liberal state regulate the use of these techniques? Based on which criteria will the agents of the liberal state evaluate and respond positively to a request from, say, two parents, but deny another? How will these criteria for authorizing or denying a request be established? Eugenie also talks about the creation of a genetic enhancement panel, and I would like to know which people could aspire to be part of this panel and how they will be selected. Will they be elected? Are they going to be appointed? Or will it be a mixture of the two? The question was a bit long, I know, so if something wasn't clear, I can repeat, okay?

Nicholas Agar: Yes. I mean, it’s a great question, and I think it’s a great question because it’s even better left as a question. So, if I was to tell you I know the answer to all of those things... I mean, I don’t, but I know that there must be some role to basically regulate the choices of individuals. So, if I decide I want to carry a gun around, I hope the state stops me, even if I say I don’t feel safe without a gun. And maybe there’s got to be an analogous way of deciding how the state… because, well, I want to apply whatever genetic technology to my future child… I want my future child to be colored purple. There has to be somebody that steps in and says “well, that’s what you may want, and I know there are lots of debates about how to understand the harm, but it’s not good for a child to be produced under those circumstances”. Now, how would that be done? I think that’s the big question. Here’s another point. We will have to do it. I mean, this stuff is coming. So much for the idea that we can just say “oh, that’s too difficult”. Well, these technologies are coming, and we can either give up on them and just say “well, whatever”, or the liberal state can say “well look, we are sticking up”. We want individuals to be free to make choices, but we also want individuals to conform to certain requirements. Now, how would that happen? If that was happening in Brazil, would it be different from the way it happens in New Zealand or Australia? I don’t know. But I think that it’s a problem that all societies, I think, will need to solve. You know, the way bioconservatives tend to talk, they tend to say “well, we’ve decided this technology is wrong”, and that’s enough. Well, that’s not very useful if you’re going to end up living in a world with that technology. So, I might say guns are wrong. And you say, well, you live in a world in which people have guns. And if all I’ve said is “guns are wrong”, I haven’t been very useful.
So, in a way we’re moving towards a world in which people will be making these choices, and I would like for society to have some role in how I might choose to genetically modify my children or what cybernetic technology to apply to them. It can’t just be my free choice. Somehow you must have some input into what I do because I might decide to modify my children in ways that make your children miserable. Did that come through?

Murilo Vilaça: Yes, that was perfect. While you were talking, I was thinking about the regulations and authorities (groups of experts and decision makers) that already exist in some countries to decide whether parents can use embryo selection technologies for so-called therapeutic purposes, medical or preventive purposes, if we want. So this is a question whose answer is perhaps contextual, because it also depends on the institutional framework of each place.

Nicholas Agar: Well, perhaps I could just say one more thing about that, but here’s a suggestion about the future. So, some of the old distinctions that we rely on, I don’t think will work in the future. The distinction between therapy and enhancement is an example. We’ll need something better, whatever it is. We’re entering an age of enhancement. We’re going to be living with these technologies. And to have someone say “well, that use of gene editing may have enhanced, may have added IQ points, therefore it’s bad”. I don’t think that will work. We need a new distinction.

Léo Peruzzo Júnior: In the book How to be Human in a Digital Economy, published in 2019, you state in the introduction that the book illustrates the long-term perspective of the digital revolution, a perspective that allows us to ask ourselves where digital technologies are taking us. According to what you said, and here I quote from your book, you said “the details of today’s digital technologies, which are much discussed, do not interest us so much. From a long-term perspective, the Mac, the social networking platform Twitter and the virtual device Oculus Rift will only be indistinguishable from the confusing sense of imminent progress in digital technologies”. And here’s my question: in this sense, how do you understand the incorporation of technologies into the human sphere? What role do these technologies play the moment we accept that they are somehow incorporated? When you talk about thinking of a “long-term perspective of the digital revolution”, what do you really have in mind with that expression? What kind of epistemic conception do you use to evaluate the digital revolution?

Nicholas Agar: That’s a great question. It’s something I’m writing about right now. In machine learning, there’s talk about overfitting. So, I’m training a machine to recognize cats, and I give it a lot of cat pictures. It identifies those 100,000 cat pictures very well, but there’s a danger of overfitting. So here’s a cat with three legs. All the pictures I showed the machine learner had four legs, and it doesn’t recognize this as a cat. Of course, we would look and say yes, that’s a cat with three legs. And I think there’s an analogous problem when we look at the specifics, because the specifics of these digital technologies are very exciting, but they do tend to pass. So, think about the digital technology that’s coming next year, in five years’ time. I mean, that’s a good mental exercise, and that involves, I guess, not thinking that ChatGPT will be the end of it. I mean, not sort of saying “well, here’s something that works”. We don’t want to overfit to ChatGPT. We want to have some sense of the way humans… and I’ve only just begun to ask this question: how do humans relate to these technologies? So, in a way, that’s almost like, you know, Twitter will probably be gone soon because of what Elon Musk has done to it. And if you say “well, I’ve got something that works perfectly for Twitter”, well, well done! Maybe Twitter is going broke soon. But you need something that works for digital technologies like that, that are coming next year or in five years’ time, so you need, in a sense, not to philosophically overfit. We are philosophers. You know, philosophers sometimes pretend we’re computer scientists, and we’re not.

Léo Peruzzo Júnior: In this same sense, thinking about how digital technologies have come to dominate part of the discussion in general, and drawing a parallel with the way digital technologies dominate part of the dialogues in your most recent book (Dialogues on Human Enhancement): at the end of Night 4, you say that Eugenie seems annoyed by the overdose of digital technology to which the transhumanist Olen subjected his interlocutors. Is there a realistic and prudent (not sensationalist) way of dealing with the digital revolution, especially in view of this technological hype?

Nicholas Agar: Well, I think that’s a great question. We are subject to digital hype. It’s like, you know, in a way, the question that we’re asking, interestingly, about OpenAI: is that going to be the world’s first trillion-dollar company? I mean, we know we can see the immense value in this. It’s almost like if I tell you in the future AI will cure cancer. You will probably believe me. And maybe it’s true. But I think there’s a problem when we believe too much in that. We get caught up in the hype cycle. And when philosophers end up writing sort of like marketing… You know, sometimes people selling products come to philosophers, and they say, “I want an assessment of that”. But the fact is that what they’re asking is for you to help them sell that. They don’t want criticism. They just want marketing material. Could you write an opinion piece saying that Twitter will save the world? Well, you can write that, and people will pay you, but that’s not necessarily what we need. Well, wouldn’t it be good if these digital technologies were normalized? Yes, they are here; you may find them unpleasant, but they’re going to be here for a long time! That’s the world we live in.

Murilo Karasinski: Professor Nick, thank you. I feel very honored to literally talk to a person who is ahead of our time, so thank you for this dialogue and discussion. I have two blocks of questions, then I’ll let Leo finish with his last question. My first question has to do with part 4 of the book Dialogues on Human Enhancement, in which the character Olen is in favor of radical human enhancement. Throughout your texts, I believe that you would disagree with Olen's thinking to some extent, defending a moderate approach to enhancement and maintaining that a lot of criticism of enhancement should be interpreted as concerns about the degree of enhancement, rather than questions about the enhancement itself. This way, moderate human enhancement would effectively meet the need to promote truly human enhancement. On the other hand, by drastically boosting human cognitive capacities, we would run the risk of inadvertently creating entities (known as “post-persons”) with a higher moral status than people. If we were to generate beings with more comprehensive rights and protections than those afforded to non-enhanced people, this would naturally represent a disadvantage for the latter group of people. So moderate human enhancement, in your view, would emerge as a more attractive prospect for the future and for our interaction with technology, if I understood your texts correctly. Here comes the question: looking at the present and the future, I would like to ask if you think that there is still the possibility of moderate human enhancement, or if (economic, political, ideological, or other) forces have moved in the direction of actually allowing enhancement in attributes and abilities that exceed the capabilities of current humans? In this regard, how do you see the role and potential of artificial intelligence? Could artificial intelligence be a mechanism that, at the limit, would allow radical human enhancement?

Nicholas Agar: Yes, great question, and you’ve certainly understood my argument very well, so thank you. And I think… I mean, AI, when you add it to anything, just seems to be a multiplier. So, when you’re looking at sort of the Neuralink technology, in a way, if you’re looking at anything like that, and you think what happens when I apply AI to that, well, you’re going to get something radical. So that in a way is interesting, isn’t it? I mean, that these technologies don’t incline to moderation. Because you’ve got more computing power, you know, the chips shrink and the power increases. I mean, if you look at regulating gene editing, you can easily imagine someone saying, “we’ll permit this modification, that modification”, and by the very nature of that, if you’re just putting in, splicing in some extra DNA, well, there’s kind of a limit. You can put 2 copies, 3 copies, but you can’t put a billion copies in. But it’s in the nature of these digital technologies… I mean, you can’t just say immediately “I’m going to make my iPhone a billion times more powerful”, but it’s in them that they become much more powerful, and so I think that’s one of the big questions that we need to ask. I don’t know how you would, but that’s the world in which we live. So, when people look at large language model AIs, and they think “what can they do?”, I think people in the human enhancement debate should ask that in terms of human enhancement. I mean, so what happens when whatever or a version of AI that’s more powerful than large language models, what happens when that gets put into our heads? So, thank you, Elon Musk! And I don’t think the answers are simple. But I think it’s so exciting to ask them! Because we’ve had so many surprises! And it’s much better to go into the future recognizing that we’ll have amazing things, and we’re also going to have problems, but to go in with our eyes open. 
So that “oh yes, there’s problem” because, I mean, if you don’t recognize a problem, you can’t mitigate it. You’ve got to say “oh, there’s a problem here. We need to address that”. If you’re just blind to the problem, then you can’t do anything.

Murilo Karasinski: Excellent, and I think this is related to my next question, because in “Night 7 - How do we decide which aspects of human nature to preserve?” of your book Dialogues on Human Enhancement again, the character Wilson worries that surveillance capitalism, at least from Shoshana Zuboff’s perspective, could be replaced by an even more disturbing enhanced capitalism. I mention this because in an opinion piece published in 2023, “Will the decline of Surveillance Capitalism herald a new era of Human Enhancement Capitalism?”, you argue that there is no logical contradiction in a future scenario in which Neuralink fails to ensure the inviolability of our brains. This would stem from the enormous profits that could be derived from merging the innovative methods of surveillance capitalists with Neuralink technology. You even exemplify the possibility that commands for the instant purchase of the latest Tesla model bypass the controls of the orbitofrontal cortex and can directly reach the emotional centers of the limbic system. There could also be a political advertisement transmitted by a specific neural connection that has access to the amygdala, therefore activating the brain region responsible for fear. And so we have a series of questions such as “if you discovered such a hack, what would its market value be?” What if a hacker gained direct access to our hypothalamus, the brain area that governs sexual desire? Considering all this, how do you see the next few years of brain-machine interaction? Is regulation of neurotechnologies possible? Ultimately, do you believe that the merger between surveillance capitalism and human enhancement capitalism is inevitable, and if so, how should society prepare for the ethical and social challenges that may arise?

Nicholas Agar: Well, the main thing I can say is that’s a great question. And when I asked that very question that you asked me, [when] I asked my students in New Zealand, they were initially a bit shocked, but then they had suggestions. I mean, is it alarmist to say that maybe, once, I don’t know, some company [manages to put] digital technology inside someone’s head, and it can communicate with the rest of our brain tissue, then it might occur to someone: “how can I make money by activating certain parts of the brain?” How can I ensure something by accessing the fear centers of the brain? I mean, in a way that sounds alarmist, and I’ve certainly never seen any technology [like this]. No one has said they want to do this. But when I asked the students, they said “I can see that might happen”. And no one is saying they’re going to do it, but in a way even starting this conversation is important, because if no one bothers to do it, then we’ve just had a good conversation. But if someone does, at least we’ll have sort of thought it through. AI and large language models have struck the world. We’re in shock. And being in shock at a novelty is not the best state in which to make the best decisions about it. So we can actually do some advance thinking. I’ve asked my students “well, what would you do?” And they had some suggestions. I mean, they didn’t know. They had some suggestions. Elon Musk is not very popular in New Zealand. So the questions they had were mainly about stopping Elon Musk, and I said “ok, but if it’s not Elon Musk, it will probably be someone else”. But that’s a suggestion.

Léo Peruzzo Júnior: Well, it’s very interesting to listen to you, and also to the questions that come up, not just in fiction. If we don’t think today about a prognosis for tomorrow, perhaps that tomorrow will ultimately depend much more on random factors beyond our control, which is why I think that thinking today about the implications of all this is extremely important, and your thinking has really led this attempt to anticipate the future and the way technologies act on it. I’d like to ask one last question, perhaps a slightly more philosophical provocation, which goes a little beyond the last book, but which is also somehow prompted by it. In Dialogues on Human Enhancement, you seem to take up, particularly in chapter 7, one of the old questions of anthropology; at least in one of the sections you ask: after all, “how do we decide which aspects of human nature to preserve” if, fundamentally, we don’t know what we really are, let alone how our relationship with evolutionary forces occurs? For example, a quick bioconservative reading could indicate - actually, one of many could indicate - that in the face of an unlikely scenario, recklessness would be the best of the alternatives. In the Brazilian philosophy scene, this type of argument is very strong in philosophical discussion. On the other hand, there are those who advocate that there would be a natural incorporation of biotechnologies into the human body (see, for example, extended cognition authors, such as Andy Clark and David Chalmers, who have taken on this perspective). Hence my somewhat philosophical question, perhaps philosophical in the most traditional sense of the term: after all, do you assume some anthropological conception a priori in order to then think about human enhancement or, on the contrary, is it enhancement itself that shapes your conception of what it means to be a human being?

Nicholas Agar: Oh, but, see, that’s a great question in itself, and in a way when you say what it is to be human, I mean, if you would have to characterize, I don’t know, the humanities, which in so many parts… in my part of the world are under threat, there are all these people trying to work out what it really means to be human, what human things are, and it would be weird for me to say “I’ll tell you what it means to be human”. “Here’s my definition of what it means to be human”. But in the spirit of the dialogue, now that I’ve got you brilliant gentlemen here, I can easily start a conversation. I mean, I have a sense of what it means. The things I value about being the father of human children, and I know something about that relationship, and if you tell me “Well, you’ll be better off if you turn your kids into cyborgs”, I’d probably say “oh no, I don’t really want that”. So, I have a sense there’s something that can’t be trespassed on. Now I don’t know how to characterize it, and I think that’s been the whole project of the Humanities, to work it out. And if you give me “I’ll tell you what it means to be human: it’s to be a member of the biological species Homo sapiens”, I’d say “no, that’s not it, because people have been talking about being human well before the biological species concept came along”. So yes, when I invited my students in New Zealand to join this debate, I mean, it’s great to begin a debate, and they did come away with more questions. But I think I wouldn’t have gotten those questions if I had said: “here are five books on human nature”. “These are the leading scientific texts”. Because in a way, demanding that is a way of ensuring no conversation ever happens, because you just say to the person who’s got questions: “go away and read these gigantic books”. And the academy is producing more and more and more books at an exponential pace, so you’ll never get to say anything.

Léo Peruzzo Júnior: Thank you. Thank you so much for your answers.

Murilo Karasinski: Thank you for your kindness. It’s wonderful for us to be able to hear from people who think about this really in a very honest and transparent manner, because sometimes we look at these problems and we think “I can’t find solutions for them”.

Murilo Vilaça: Many thanks, Nick.

Nicholas Agar: Thank you. Well, from this conversation, I have a sense of philosophical progress. So, I always hear there are huge problems, but when you have a conversation like this, it fills me with optimism… I think the problems are huge, but the sense of progress is real. Thank you.

References

  • AGAR, N. Dialogues on Human Enhancement. New York: Routledge, 2024.
  • AGAR, N. How to Be Human in the Digital Economy. Cambridge, MA: MIT Press, 2019.
  • AGAR, N. Truly Human Enhancement: A Philosophical Defense of Limits. Cambridge, MA: MIT Press, 2013.
  • AGAR, N. Liberal Eugenics. Public Affairs Quarterly, v. 12, n. 2, p. 137-155, 1998.
  • AGAR, N. Liberal Eugenics: In Defense of Human Enhancement. Malden, MA: Blackwell Publishing, 2004.
  • AGAR, N. Will the decline of Surveillance Capitalism herald a new era of Human Enhancement Capitalism? Available at: https://www.abc.net.au/religion/after-surveillance-capitalism-comes-human-enhancement-capitalism/102082876. Accessed on: March 27, 2024.

Publication Dates

  • Publication in this collection
    09 Aug 2024
  • Date of issue
    2024

History

  • Received
    02 Apr 2024
  • Accepted
    02 Apr 2024
Pontifícia Universidade Católica do Paraná, Editora PUCPRESS - Programa de Pós-Graduação em Filosofia. Rua Imaculada Conceição, nº 1155, Bairro Prado Velho, CEP: 80215-901, Tel: +55 (41) 3271-1701 - Curitiba - PR - Brazil
E-mail: revistas.pucpress@pucpr.br