the quiet living room - by quietsocialclub

AI, Wellbeing & the Future of Humanity with Marianna Ganapini

August 17, 2023 Quiet Social Club

We are back from the summer break with a special extended episode on AI with AI ethicist and researcher Marianna Ganapini. Since the launch of ChatGPT earlier this year, we have all had some questions, big and small: 

- How will the future look for AI and humanity? 
- How will AI impact our daily lives? 
- Can we ignore the elephant in the room? 
- How can we make sure the future is bright? 

Marianna Ganapini is an educator, philosopher, and ethicist, Co-Founder of the instructional design consultancy Logica Now, and Faculty Director at the Montreal AI Ethics Institute. She has contributed extensively to research on responsible AI, and just recently published a course with the Alan Turing Institute on Fairness and Responsibility in Human-AI Interaction in Medical Settings. Together we talk about fairness, why doing the right thing is a good thing, and how we can make sure that together we create a better future. 

Transcript


Iliana 00:09
 My guest today is Marianna Ganapini. She's an educator, philosopher, and ethicist, co-founder of the instructional design consultancy Logica Now, and faculty director at the Montreal AI Ethics Institute. 

Iliana 00:21
 She's contributed extensively to research on responsible AI, and recently published a course with the Alan Turing Institute on Fairness and Responsibility in Human-AI Interaction in Medical Settings. 

Iliana 00:33
 So I'm excited to have her with me today, to talk about the future of humanity and technology, and how to promote well-being in an increasingly digital world. Marianna, thanks so much for being here. 

Marianna
 Thank you for having me. 

Iliana 00:48
 Tell us a little bit more about your journey. How did you get into the field of AI? How did you become a researcher, educator, and philosopher? 

Marianna
 My background is in philosophy. I did a PhD in philosophy at Johns Hopkins, on the nature of the mind and human rationality and irrationality. 

Marianna 01:14
 So my background is, as I said, mostly philosophical. I've been an academic pretty much all my life. At this point, I focus in particular on questions about what is the mind? Are humans rational? Can we trust others? 

Marianna 01:31
 What are the ways in which we reason? Why do we reason? These are the questions that I focused on in my graduate work. And after that, I was able to secure a tenure-track position at Union College, a liberal arts college in upstate New York. 

Marianna 01:49
 And at one point my journey started focusing more and more on AI, because I was fascinated by the rapid growth of artificial intelligence. I was fascinated by the 

Marianna 02:07
 challenges and the questions that AI poses to humans and to what it means to be human. I was struck by how AI was posing this new frontier for human thinking, and in particular for philosophy. 

Marianna 02:30
 And then I thought, naturally, philosophers should start thinking very deeply about AI. And when you start thinking about AI from a philosophical perspective, you can really ask a number of different questions. 

Marianna 02:43
 So you can ask questions about whether AI thinks. Does AI understand what we say? Does AI produce any meaning? These are questions related more to the philosophy of mind. You can ask questions about whether AI is trustworthy or not, or whether it makes sense to trust AI at all. 

Marianna 03:07
 There are questions concerning more the ethics of AI. Is AI really producing well-being for us, or not? Is AI harmful? Is it a force for good? These are the areas that I started being interested in. 

Marianna 03:24
 And so my philosophical journey brought me closer to this field. And then, instead of just going all academic about it, I tried to take a different approach: I started looking outside, at the real world. 

Marianna 03:41
 I started collaborating with nonprofit organizations, with companies, and with other institutions. And I tried to spread my wings as much as possible to really understand what was going on, and what the challenges and needs of this space were, not just from a philosophical standpoint but even from a technical standpoint, 

Marianna 04:06
 a sociological standpoint, a psychological standpoint. So really looking at society at large. And this is where I then started zooming in on specific topics that I really care about in this space. 

Marianna 04:18
 But I really tried a different tack. Instead of going just through the academic world, I tried to approach things from the outside. 

Iliana 04:29
 It's really interesting that your path started within philosophy, and then you moved very much to applying it to the field of AI. 

Iliana 04:44
 What would you say "ethical" means, or what does it mean to you, in the context of AI? What do you think are the considerations or the questions that are increasingly being asked now, as AI becomes more prominent in our daily lives? 

Marianna 05:04
 So ethics, as I see it, is really two things. On the one hand, you have ethics in the more theoretical sense, which is the study of the principles and norms that should guide our practice. 

Marianna 05:19
 And this has to do with the kind of work that philosophers have done for centuries now, trying to think about these larger principles that you can then apply. On the other hand, you have the more applied ethics, which is really trying to figure out how these principles, these ideas, these concepts, like fairness, justice, rights, well-being, value, apply to the AI space, to the actual things that we're trying to do. 

Marianna 05:51
 And there are two normative spheres that I think are most interesting at this time. One is the well-being issue. The question is: is AI really producing well-being for us and for the environment, or is it producing harm, and what kind of harm? 

Marianna 06:12
 So this is one area, and the other area has to do with rights. What kind of rights can AI preserve, and what kind of rights does AI violate? There you have specific worries about privacy, and specific worries about autonomy, for instance, which I think is something people don't focus on as much, but which is very important. 

Marianna 06:34
 There you also have the issue of fairness: is AI a tool that discriminates? So I think you can take these two different, somewhat parallel ways of looking at it, one in terms of well-being, and the other in terms of rights in particular. 

Iliana 06:55
 Is there a specific question? I mean, I'm always wondering: for AI to find its place in our society, we have to answer certain questions. Do you feel that some questions are more easily answered than others? 

Iliana 07:10
 I remember one of the first things that everyone interested in this field heard about was the trolley example, and how you had to answer certain questions in order to actually say, okay, the algorithm has to decide this or that. 

Iliana 07:24
 So how do you feel about the questions that we have to answer? 

Marianna 07:30
 Yeah, so that's a good point. I don't think there are very easy questions. Maybe there are questions about things like cybersecurity that are going to be more straightforwardly answered, by using the right technology to avoid data leaks. 

Marianna 07:51
 But then even privacy is a very thorny issue, because every time you talk about things like privacy, you might think, well, if we preserve privacy, then maybe there are going to be other values that we don't embrace or protect. 

Marianna 08:06
 So there always seems to be a trade-off. And technology, in particular AI, is pushing us to make these trolley-case decisions in which both options seem bad, right? On the one hand, you want to preserve privacy, but on the other hand, maybe you want to make sure there is enough data for medical research. Or you want to preserve privacy, 

Marianna 08:33
 and so you use differential privacy as a tool, but then there's always a trade-off with accuracy. So there are a lot of these instances in which you have competing values, and it's extremely difficult to make decisions. 
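The privacy-accuracy trade-off Marianna mentions can be made concrete with a small sketch (illustrative only, not from the episode): the standard Laplace mechanism in differential privacy adds noise scaled by 1/ε to a query result, so a stronger privacy guarantee (smaller ε) directly costs accuracy.

```python
import math
import random

def private_count(true_count, epsilon):
    """Laplace mechanism for a counting query (sensitivity 1):
    add Laplace noise with scale 1/epsilon. Smaller epsilon means
    stronger privacy and therefore more noise."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sample from Laplace(0, 1/epsilon)
    noise = (1.0 / epsilon) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_count + noise

def avg_error(epsilon, trials=10_000):
    """Average absolute error of the noisy count over many trials."""
    return sum(abs(private_count(100, epsilon) - 100) for _ in range(trials)) / trials

random.seed(0)
# Expected error is 1/epsilon: the stronger guarantee (epsilon = 0.1)
# costs roughly ten times the accuracy of the weaker one (epsilon = 1.0).
assert avg_error(0.1) > avg_error(1.0)
```

Tuning ε is exactly the kind of competing-values decision described here: the parameter makes the trade-off explicit, but it doesn't decide it for you.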

Marianna 08:49
 Now, that being said, there are things that are fairly straightforward, things we shouldn't do. One thing that we should avoid, I think, is using technology to spread misinformation and disinformation. 

Marianna 09:07
 So that's one thing that seems pretty straightforward. Now, of course, there are questions about freedom of expression, but I don't think that the two, freedom of expression and curbing misinformation, are necessarily always in conflict. 

Marianna 09:23
 So I think we have a pretty good reason to avoid the spread of misinformation. Another example is when protected categories are involved; I think there are instances in which the decision to make is pretty straightforward. 

Marianna 09:40
 When there are children involved, I think we have a duty to protect them, and other considerations might be less central. So: manipulating children, using AI to push children to make decisions that might be harmful for them. 

Marianna 09:59
 These are things that need to be addressed as soon as possible, and for which I think the answer is easier than in other cases. So I think it's complicated; there are a lot of trade-offs to be made. 

Marianna 10:13
 But on the other hand, there are issues that I think are pressing and need to be addressed, and I don't think they've been addressed as quickly as they should be. 

Iliana 10:33
 I think that's an important question. How is it that we are even at this point now? Because I think a lot of people know about the social media example, or, for example, how Amazon gives you suggested products based on an algorithm. 

Iliana 10:50
 How did we come to this situation, where we have to discuss the ethics of AI in such a serious way? What went wrong, or what was overlooked along the way? 

Marianna 11:08
 Yeah, I think a lot of stuff has been overlooked. It's a combination of two things. The technology moves really fast; improvements in the capabilities of these technologies are increasing at a very rapid pace on the one hand, and we are being very slow to catch up on the other. 

Marianna 11:27
 Now, it's kind of an obvious thing to say, but from a legislative standpoint we've been very slow to catch up. The European Union has tried to do as much as possible, but we see that they're still massively behind the curve. 

Marianna 11:43
 I guess the situation in the US is even worse. So there is that legislative aspect that has been very slow. But it's not just that. I think that as human beings, we find it hard to really get a sense of how this technology works and the kinds of things it can do. 

Marianna 12:07
 On the one hand, we get this hype, and we think, oh, this technology is going to do fantastic things and solve all the problems. I was talking to a radiologist a couple of days ago, and they were like, yeah, I'm kind of disappointed, because they told us that AI would solve all our problems and help in our work, but as a matter of fact it has been quite disappointing so far. 

Marianna 12:28
 So there's this promise that people now see in AI, and sometimes their expectations have been let down. On the other hand, though, as human beings, I think we have a hard time grasping the complexity of the tech. 

Marianna 12:47
 And that brings us to always go back to the categories that we know, right? And that has not helped us, has not served us very well. The categories that we know are built on the kinds of reasoning patterns that we have inherited from our ancestors and a long evolutionary history. 

Marianna 13:08
 So we are slow to catch up anyway. But now that the technology is actually growing so rapidly, we are in an even worse situation. What I mean in particular is the fact that we don't understand that a company is able to track your data and make inferences from the data that you yourself 

Marianna 13:38
 volunteer, maybe online, and make those inferences in such a way that they can actually make a lot of predictions about you. Even this basic notion, I think, is very hard for a human being to grasp, because we don't do that. 

Marianna 13:51
 I mean, we see things, but we don't have all that data, we don't have that kind of computing power as humans. So we don't expect that there is going to be something able to make those kinds of predictions. 

Marianna 14:04
 And so we are kind of naive in our approach to AI, and we go in with the categories of human interaction, but we shouldn't. So I think we should be more and more aware of what's going on. And the thing that really worries me is that this naivety may both bring harm and slow some progress, for instance in the healthcare sector. 

Marianna 14:31
 I mean, I'm really worried that doctors, nurses, and medical personnel are dealing with AI without knowing how it works, without knowing all the possible biases and ethical challenges of AI. They're just going to adopt these technologies as they would adopt any other technology, and that's going to be potentially very harmful, and also slow the kind of progress that we want in healthcare. 

Marianna 15:00
 On the other hand, I worry about children, teens, and parents dealing with these technologies with this naive approach, and not being able to protect their data, their safety, and their security. 

Marianna 15:15
 And so I think that is really where education should step in and say: you need to know these things, because it's coming, or it's already here. You're dealing with it even if you don't know about it. 

Marianna 15:29
 Please be careful. This is part of the work that I do, and you do too, right? We are in the business of trying to spread awareness, because that's one of the ways to try to, in a sense, slow AI. 

Marianna 15:46
 Not in the sense that we want to ban AI or impose a six-month pause or stop. What we want is for humans to produce constant friction to AI development, in the sense that they're going to start pushing back a little bit, making choices that are intentional with respect to the tools they are using and how they're using them. 

Marianna 16:13
 So intentional use is very important to kind of slow things down, in a sense. 

Iliana 16:19
 Do you think that we'll have to make a trade-off, maybe in the short-term future? I'm always so fascinated when I see the two camps on, 

Iliana 16:31
 for example, my LinkedIn feed, where some people might say this is not moving fast enough. Especially in Germany, you might say there are too many regulations on this, we're moving too slowly, we are not going to catch up with, I don't know, other countries like China or the US or whatever. 

Iliana 16:48
 On the other hand, you have people hypothesizing that it is already the end of the world. Do you think we need to make a trade-off between responsible tech and leveraging all the benefits that AI can offer us, maybe in the short term? How would this look? 

Marianna 17:10
 Yeah, that's the key question, right? How do we deal with this? I would say extreme positions tend to be wrong, in the sense that I think those who are saying that we are facing the end of the world in a few years are being too pessimistic. 

Marianna 17:33
 But also for those people who just think that it's about regulating, regulating, regulating, slowing things down: that won't work either, for various reasons. Regulations can be extremely valuable. 

Marianna 17:49
 So I'm all for regulations, but it is sometimes very difficult to conform to these regulations while also promoting progress. I mean, this is a fact. We can't say, oh, ethics is necessarily going to promote the kind of advancement that we want. 

Marianna 18:13
 No, ethics is also a matter of slowing things down, imposing some constraints. Not because we are at the end of the world, not because we are facing a terrible future if we don't do it, but because of the society we want to live in. 

Marianna 18:32
 I mean, if we accept that the use of certain technology will promote things like discrimination, for instance, what does it say about us as a society? What does it say about us as humans? It's part of our ethos to go against those things. And now we want this wonderful world in which technology delivers these wonderful things, except that it's going to be discriminating, and so reinforcing systemic injustice? 

Marianna 19:02
 No, I don't think we want that. So I'm going to offer a maybe not very satisfactory, but ecumenical view, and say: let's find a middle ground, let's find different ways to adopt these constraints. 

Marianna 19:17
 Legislation and standards are important, auditing is important, having internal ethics committees in those companies is important, outside pressure from consumers is important. So I think there is no one way to solve this. 

Marianna 19:37
 It's going to be a multiplicity of factors. I mean, I do hope that at one point these companies are going to realize that doing the right thing, because it's the right thing, may at the end of the day be better for their bottom line. 

Marianna 19:50
 They don't seem to be there yet, unfortunately. But that's the hope. Continuing to lose trust will at one point backfire; that's my prediction. Trust happens, however, only if there is an infrastructure for it. 

Marianna 20:06
 I mean, the person who is on Facebook, or on any other social media, they don't know, right? They don't know the ins and outs; they don't know the risks. They need to trust the system to work. And so for the infrastructure of trust, as I said, regulations, maybe stronger, maybe laws, that's also important. 

Marianna 20:34
 But a system of auditing, internal and external, that's important. And then social pressure from the bottom up, I think that's also key. And if there is this infrastructure of trust, I think it's going to be better in the long run, better also for the bottom line of those companies. 

Iliana 20:54
 Let's talk about the pressure from the bottom up a little bit more. How do you think that has evolved over the last couple of years? 

Marianna 21:04
 I think that awareness is growing, thanks also to a lot of nonprofit organizations. 

Marianna 21:13
 You mentioned the Montreal AI Ethics Institute; they've been doing a lot of work to grow awareness. Another nonprofit is ForHumanity, which I also collaborate with. That's a different thing, but it's trying to 

Marianna 21:33
 build awareness for creating, as I mentioned, an infrastructure of trust based on auditing. So there's a lot of work being done to grow awareness. And now, of course, every day there is a newspaper article on AI, ChatGPT, this and that. 

Marianna 21:56
 So people are freaking out. I mean, there are still people who don't know what ChatGPT is, but a lot of people now tend to be aware that there is this thing coming. And then, of course, you have very prominent figures in the AI space who are going all out, saying, oh, it's the end of the world, we have to stop this. 

Marianna 22:22
 So there is a lot out there. And so I think that people now are finally waking up to the idea that AI is here, it's here to stay, and it's going to have an impact. That being said, I don't think there's enough focus on the ethical risks. 

Marianna 22:45
 I think there is more this kind of sense of panic and excitement. Companies that were skeptical about using AI two years ago, a year ago, now want to jump on it right away. They don't really know what they're doing. 

Marianna 23:02
 And so there's this sense of fear of missing out; they don't want to miss out on this thing. The public is scared, but also excited. But I think what is getting lost in the situation is the fact that there are ethical risks, not about the end of the world, but about very common, actual, practical stuff: how you get 

Marianna 23:29
 your loan, whether you get a job, which jobs you see when you search for one, whether you can buy a house. All these things are very practical. And I think there is still the sense that people don't understand that they are being impacted now, and the decisions are being made now, and they are still living in 

Marianna 23:50
 a little bit of confusion about whether AI is going to take over or not. I don't think that's the issue. I think the issue is: are you going to get your loan? If not, why? Because an AI decided that, and you need to be aware of that. 

Marianna 24:07
 And so we need more work on spreading awareness of the ethical risks, not just about AI in general. 

Iliana 24:34
 I think it might be quite easy to feel a little bit helpless, when you phrase it like that, as an individual, because it really seems that at the end of the day there are decisions that are made for you and you have no power over the outcome. And suddenly you're kind of almost like a victim of the AI. 

Iliana 24:38
 So how can a person act on this? Because we talked about very real life, and how AI impacts a person's life. How do you think we can move ethics, and perhaps even some sort of actionable ethics, into our everyday lives? 

Iliana 24:58
 What are some of the starting points, or what can a person do to make sure that the impact on their lives is not negative, but rather neutral, or perhaps even positive? 

Marianna 25:12
 Sure. I mean, I don't want to give the impression that I don't think AI can have a positive effect. 

Marianna 25:17
 I think AI can have a tremendously positive effect. As an individual, you have a certain discretion concerning the applications and the websites that you choose to engage with. 

Marianna 25:43
 We are, in a sense, powerless right now, because there is not this infrastructure of trust, and we don't yet have the tools to defend ourselves. But they're coming. In the EU in particular, but also in America, things are finally starting to move. There are various things that you can do, but the one thing you can do is make intentional decisions about the applications you're going to use. 

Marianna 26:15
 Okay, that might be not easy sometimes, but maybe the easier example is with social media. I mean, if there is a social media company that infringes on your trust, infringes on your rights, violates those rights, I think there is an argument to be made for the idea that you can stop using it or you can reduce the use. 

Marianna 26:41
 I'm fully aware that sometimes it's extremely convenient, especially on some of the platforms, to have suggestions and recommendations, and I use them myself, especially if I want to know about events in New York City. 

Marianna 26:57
 I use these platforms so they can give me pointers to concerts. That's all great, but I should be aware of the fact that as I'm doing that, I am feeding the system; I'm allowing the system to decide for me. 

Marianna 27:14
 It can be helpful, but it has long-term implications for the ways in which we trust a certain type of technology to decide for us. It has to be a decision that you make. 

Marianna 27:35
 So you're not forced to use all social media; you're not forced to accept all recommendations. And then, when it comes to children, if you have children, if you're dealing with children, 

Marianna 27:52
 you should really get to know this technology a little better. Know that there are things you shouldn't do, or shouldn't allow your kids to do, in terms of privacy concerns. So there are things that, for instance, parents can do to protect their kids. 

Marianna 28:11
 Inform them, tell them about the risks. Tell them not to use their real names when they log in, even very basic stuff: avoid putting in your real address or your real name, and so on and so forth. 

Marianna 28:28
 So there are small but very impactful things that we can do, just to get that kind of attitude of distrust, or at least vigilant trust, with respect to this technology. And later, when the regulations come in and there is more of an infrastructure of trust, then I think we should embrace that. 

Marianna 28:54
 Embrace that even as citizens; embrace that idea. And yeah, maybe even vote for those who actually understand the risks of this technology. So there are things we can do. 

Iliana
 It sounds like there are quite a few people and stakeholders who have to come together to make sure that this takes a positive future direction, and everyone has to work together. 

Iliana 29:19
 It's really, I guess, a challenge we face as a global society. To turn this around a little bit to the positive side: do you maybe want to share an example, or I guess a case study, where you would say, this was a really successful application of AI? 

Iliana 29:39
 You mentioned, in the medical example, that if there's no awareness from the users of the technology that there are biases, it's not a miracle solution that will give you everything. So there are obviously a lot of things that have to come together for AI to be applied successfully. 

Iliana 29:59
 Do you have an example that you can share? 

Marianna 30:03
 Yeah, I mean, I have a couple of examples in mind that can perhaps illuminate and explain how AI can really be a force for good. As I mentioned, and as you also hinted at, the medical sector is maybe the obvious one, right? 

Marianna 30:23
 Object recognition is a technology that is now very well developed, and so recognizing things like cancer can be one of those areas in which AI can really make an impact. So radiology, as I mentioned, or having to make fast decisions in ER triage, I know, is one of the things AI can do really well. 

Marianna 30:49
 So really, especially in the medical sector, I think this can have a tremendous impact, also bringing down the costs, bringing down the human costs, without, however, cutting the radiologists and medical personnel out so that they cannot have any supervisory role over the technology. 

Marianna 31:12
 They've got to be in the loop; they've got to be there. But there are a lot of ways to cut costs, a lot of ways to improve performance in healthcare, a lot of ways to speed things up and save lives. 

Marianna 31:24
 Healthcare is one place where we absolutely need to make this work, and work fast. Even education, that's another sector where I think it could be helpful to have AI-personalized education. 

Marianna 31:39
 There are students with special needs; there are students in areas of the world where it's very hard to have a well-functioning educational system. Maybe if you can bring AI there, of course that's a big if, but that can be a really helpful tool. 

Marianna 31:58
 In particular, in relation to education, AI and the gaming industry, I think, are coming together to find solutions to make learning fun and interesting. So gamification is a way to teach that is usually helpful and very easy to absorb, because it doesn't require you to spend a lot of time reading books. 

Marianna 32:29
 So gamification could be a fun, interesting way to deliver educational content. And as these two worlds come together more and more, it could be a mix of gamification and personalization of education, which, I mean, I'm an educator, so I'm all for it. 

Marianna 32:49
 Again, I don't think that we should eliminate the role of human educators, though. But that being said, I think AI can definitely help. Take even the very fraught issue of manipulation, which has a very bad reputation: AI can personalize content so that it is really targeted at you as an individual, with your needs. 

Marianna 33:15
 And so people tend to be very skeptical about the idea that manipulation can bring any good. We just don't call it manipulation; we call it nudging. And nudging can also be for good. I mean, we can nudge people to be better. 

Marianna 33:29
 We can nudge people to be less biased. We can nudge people to make better choices for themselves, for their community, and for the environment. And these techniques are already in place without AI; AI would be scaling this up. 

Marianna 33:44
 And if it's done well, it can be a tremendous element of improvement in a variety of sectors. Education is one of them, but it doesn't have to be only that; in healthcare, you can push people to eat better or to exercise more, to make better choices. 

Marianna 34:05
 So I think there are tremendous, multiple ways in which AI can have a great impact in a good way, but we just need to put the guardrails in place, so we don't get too excited and then end up making some big mistake. 

Iliana 34:31
 You mentioned nudging, and also, in the context of the medical industry, that faster decision making would be a benefit AI could offer us. I'm quite curious: are there specific contexts where you would be totally happy for an AI to make decisions for you, and areas where you're like, oh, no? This could be on a personal or on a societal level, based on your research. 

Iliana 35:01
 Essentially, where is it handy to have something make a fast decision for us, and where do you not want that? 

Marianna 35:13
 Yes. So we're studying this with a team at IBM led by Francesca Rossi. We're really thinking about the ways in which AI can not just push you to do things, nudge you to do things, but can also maybe even make decisions for you. 

Marianna 35:38
 These kinds of things really have to do with, first, an assessment of the potential risks of the decision; second, how likely those potential risks are to materialize, so probability; and third, performance: how well do I do versus how well does the AI do in this context, for these decisions? 

Marianna 36:09
 These are the three factors that I think are important, the three factors we really care about. So if the AI has proven to be excellent at this particular task, and the risks that this task entails are not life and death, then I think there is an argument to be made that maybe I should use the AI. 

Marianna 36:36
 If the AI is better, faster, more efficient, why not? But on the other hand, the risks can be extremely high. The European Union has this idea of high risk, medium risk, low risk. So if the risk is very high, it can be a life and death thing. 

Marianna 36:58
 It could be a question of someone's life, someone's well-being. Then I think that there is an argument to be made that the human has got to be in the loop no matter what. So the AI can make a suggestion. 

Marianna 37:11
 That's what our approach is like: the AI can make a suggestion, the AI can even nudge the human to be more reflective. But at the end, if it's a high-risk situation, the human needs to be involved. In other situations, where it's maybe very simple, trivial things, then yeah, why not use the AI? 
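[Editor's note: the three-factor rule described above (risk tier, probability of harm, and AI-versus-human performance) can be sketched in code. Every name and threshold here is an illustrative assumption, not anything from the conversation or from the EU AI Act itself.]

```python
def who_decides(risk_tier, harm_probability, ai_accuracy, human_accuracy):
    """Illustrative delegation rule: when may an AI decide on its own?

    risk_tier: "low", "medium", or "high" (loosely inspired by the EU's
        risk-based tiers); harm_probability: estimated chance (0..1) that
    the risk materializes; ai_accuracy / human_accuracy: measured task
    performance (0..1). All thresholds below are made up for the sketch.
    """
    # High-risk (e.g. life-and-death) decisions always keep a human in
    # the loop; the AI may still suggest or nudge.
    if risk_tier == "high":
        return "human decides, AI suggests"
    # For lower-risk decisions, delegate only if the AI demonstrably
    # outperforms the human and residual harm is unlikely.
    if ai_accuracy > human_accuracy and harm_probability < 0.05:
        return "AI decides"
    return "human decides, AI suggests"

# A trivial, low-risk task the AI does better: let it decide.
print(who_decides("low", 0.01, 0.99, 0.90))   # AI decides
# A medical triage call: human stays in the loop regardless.
print(who_decides("high", 0.01, 0.99, 0.90))  # human decides, AI suggests
```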

Marianna 37:35
 I'm not against it necessarily. 

Iliana 37:55
 We spoke about policy and regulation, but I guess all of us, whether we're the CEO of a tech company that uses AI or whether we're just applying it, could exercise some sort of personal ethics, right, in the way we approach it. 

Iliana 38:02
 How much do you think it's like kind of an awareness from the individual versus regulation? Where do you think the biggest impact will be made? 

Iliana 38:12
 I know we spoke a little bit about how things have to come together. But I keep thinking that if an ethical person decides, we're going to use this, but we're going to really try to use it as a force for good, obviously they're going to approach it a little bit differently than someone who's just following a regulation. 

Marianna 38:34
 Very much, yes. One way to answer your question, especially if you're a company and you want to do the right thing, is to say that ethics is not just a matter of compliance. 

Marianna 38:55
 So I did talk about regulations, and I think they are important, and I think compliance is important. So I'm not going to be shy about saying that we need compliance. But ethics is not compliance. It's not about checking boxes. 

Marianna 39:14
 Ethics is about doing the right thing because it's the right thing. So I definitely think that trust, an infrastructure of trust as I mentioned, will allow us to have an attitude of trust with respect to this technology. 

Marianna 39:31
 But if these companies are doing things just because they want to comply, and not because they are really moved by or motivated by an ethical ideal, I think that at the end of the day they won't be able to secure the kind of trust that we want. 

Marianna 39:52
 So maybe a parallel could be the financial sector. In the financial sector there are third-party audits, which kind of push for compliance. But we might be a little skeptical that companies in the financial world are really ethical. 

Marianna 40:13
 I think that companies in the tech world should want to do things a little differently, in the sense that they should try to actually gain people's trust because they're doing the right things, because it's the right thing. 

Marianna 40:31
 And so if they can market themselves as companies who are really trying to promote social good, and not just the bottom line, then I think that will help them and will help the sector more widely. 

Marianna 40:52
 I have one more question but I feel otherwise we would be talking forever. How do you think that organizations or individuals. Can use AI as a force for good for themselves or within the company. What opportunities do you see in the future, in the near future, or far future? 

Marianna 41:11
 So, the most difficult thing, of course, for a company is, on the one hand, as I mentioned, gaining trust. On the other, of course, they have stakeholders and the bottom line to think about, and I'm not going to be naive about that. But I think that as generative AI and other new exciting technologies are developing, instead of stopping these technologies or pausing them for six months, we should try to encourage these companies to think a little bigger. 

Marianna 41:58
 I know it's attractive, but it's not just a matter of coming out first with the newest and shiniest tool, referring to Chat GPT in particular lately. It's a lot of money, and that's all well and good, but there is more than just trying to be the first at producing the new cool gadget or the new cool tool. 

Marianna 42:26
 If I were a big company in tech, I would ask myself: what is my identity, and how do I want to position myself in such a way that I can be recognized not as, once again, the tech company that tries to make money and uses people's data and is reckless? 

Marianna 42:51
 How can I position myself as being the good guy, let's put it like that? In other words, the kind of company that has the bigger picture in mind of how these tools can be brought together to make humanity better, to make people's lives better, to help people in need in various sectors, and also to start shying away from the kind of tools and new technology that is not really helpful. 

Marianna 43:33
 For instance, in medicine there is this idea that I think is very important. You can come up with the newest cool thing, but if it's not helping anybody, it's not worth it. So who are the beneficiaries? 

Marianna 43:44
 We talked a little about stakeholders, but let's introduce the notion of the beneficiary. Who is benefiting from these new tools that I'm putting out in the world? And of course, we also need to ask who is not benefiting. 

Marianna 43:58
 But even just focusing on who is supposed to be happy with the stuff that I'm putting out in the world as a company: it cannot be, oh, people are going to be writing emails faster because now they've got this tool. 

Marianna 44:19
 It cannot be that, right? So who are you trying to help? What kind of gaps in welfare are you trying to fill? And don't just do things because, oh well, that's cool, let's do it, it sounds amazing. 

Marianna 44:41
 Yeah, it sounds amazing, but for whom? Really? And so I think that's what I would do if I had a company that was in that position to really shape this space in a new and really exciting way. 

Iliana 44:58
 So, I think this is a really beautiful summary of how AI really challenges us to think about our human identity and our identity as a society. 

Iliana 45:12
 Are there any parting words that you would like to leave with our listeners today? 

Marianna 45:18
 Yes, I already said a lot concerning the risks of AI, and I also mentioned some potential good things about the new technology. 

Marianna 45:31
 So one thing that I would want to convey is that we shouldn't be afraid of AI; it's not going to kill us. But I think that, as with anything, we want to make sure that the use we make of it and the way we look at it are guided by ethical principles. 

Marianna 45:58
 And so, as we have regulations for drugs and medicines, regulations for cars, regulations for all sorts of things, let's embrace the idea of having regulations for AI as well. And let's start asking questions; that's, I guess, the most important thing that we should be doing as consumers and users of this technology. 

Marianna 46:21
 Let's start asking questions: what is this technology for? How is my data being used? Do I really need it? What are the potential harms? And again, I want to insist that this is particularly important in the healthcare sector, because I think that some hospitals are rushing to adopt this technology without really knowing the potential harms. 

Marianna 46:51
 And so let's put doctors and nurses in a position to make good use of it, and the rest of society as well, which is what you are doing with your work. 

Iliana 47:09
 Marianna, thank you so much for coming on the podcast. 

Iliana 47:12
 It was amazing to have you. 

Marianna 47:13
 Thank you for having me.