Unconscious Bias: Is AI dividing us?

Erin Young (00:02):
We found persistent structural inequality in data science and AI. We found that men's and women's career paths in the field look very different. So women, for example, are more likely than men to occupy jobs associated with less status and pay in AI, such as data preparation and analysis, as opposed to the more prestigious jobs in the field, in engineering and machine learning, which men are more likely to do. And we also found that women working in AI in the tech sector have much higher attrition rates than men. So they leave the industry in much greater numbers.

Aubrey Lovell (00:42):
AI has been on a slow-burning trajectory for most of its existence, but in recent years it has exploded. Since 2015, the AI industry has grown from half a billion dollars to an estimated $137 billion, and is expected to grow by 30-40% per year until hitting $1.5 trillion by 2030. We probably never notice it in our daily lives, but AI is ever-present. It can decide which drivers our ride-share app calls for us and how much they're charging. It'll advise banks on the best credit limit to give us, or whether we get a card at all. When we apply for jobs, AI will likely be screening our application to see whether we're a good fit to work for that company. In short, AI is everywhere. Except the AI experience we get can be very different depending on who it's looking at. After all, AI is trained by humans, and those humans have biases and preconceptions. If we're not careful, those biases can find their way into an AI, and that's having a tangible and sometimes devastating effect.

Michael Bird (01:51):
That is what we're exploring this week, the world of biased AI. We'll be meeting the people fighting to fix the problem through education, legislation, and perhaps most importantly, treating the issue at source by getting more women into STEM fields. I'm your host, Michael Bird. And new for Season 4, I'm delighted to now be joined from the other side of the Atlantic by my new co-host Aubrey Lovell.

Aubrey Lovell (02:14):
Thanks, Michael. So good to be here.

Michael Bird (02:30):
From Hewlett Packard Enterprise, you're listening to Technology Untangled, a show which explores the rapid evolution of technology and unravels the way it's changing our world. Okay, so just how much of an issue is biased AI and how real is the problem? Aubrey?

Aubrey Lovell (02:51):
Well, it's very real, especially when it comes to women. A joint study by UNESCO, the OECD, and the Inter-American Development Bank found that AI-enhanced HR software routinely rejects or downgrades applications from women. Elsewhere, women in the US are, on average, given credit limits 4.5% lower than men's for identical credit scores when AI-based decision making is involved. Now, discriminating in this way is illegal, but if the AI has been trained by a certain type of person or persona that's more common in the tech industry, the AI can make mistakes when it comes to data which sits outside of its expected norms.

Michael Bird (03:31):
So this is our Chapter 1: why is AI coming out a bit biased?

Fidelma Russo (03:37):
I'm Fidelma Russo. I am the Chief Technology Officer at Hewlett Packard Enterprise. We're really beginning to see, I think, the dawn of the era where every company is going to use AI to help in their operations. And so as a tech industry, we have a huge obligation to make sure that that technology is used effectively and wisely. And the difference between AI and other technologies is really that there is this dependency between the humans who develop it and the technology that's developed, in a way that can lead to bias. And it may be the first time in our history that human interaction dictates the outcome of the technology.

Michael Bird (04:26):
Yeah, so Fidelma raises quite an interesting point there. This is a technology that changes its outcomes depending on who's making the inputs, rather than just what those inputs are. So there's, well, a dependency there, which makes sense, I guess. But surely, this has been known about for ages.

Aubrey Lovell (04:48):
Well, yes and no. Dr. Erin Young is a research fellow working in the public policy programme at the Alan Turing Institute in London, which looks at examining and solving big issues in society with data and AI. Erin co-leads a project at the institute called the Women in Data Science and AI Project. I wanted to know how serious the problem of bias in AI was, so I asked.

Erin Young (05:10):
So yes, there is a problem and it's pretty serious, but I think to begin to understand why it's serious, we need to first explore why it happens, how long it's been happening and how long it's been a problem. Gender bias in AI hasn't been known about for very long, mainly because AI systems as we understand them at the moment are relatively new, and recent advances in machine learning and AI have thankfully brought with them some increased discussion around AI ethics. And part of this is a discussion around gender bias in AI. It's a huge and complex issue, but to simplify it, at its core is the underrepresentation of women and marginalized groups working in the data science and AI field. Roughly around 32% of the global AI workforce are women, which, when we're about 50% of the population, is a huge underrepresentation. So there's the underrepresentation of these groups happening in the sector, alongside issues such as gender data gaps. When I say gender data gaps, simply put, that's the failure to collect quality gender-disaggregated data.

(06:40):
So these issues, among many others, can lead to bias being built into machine learning systems. And these create harmful feedback loops that then further discriminate against those not involved in the technology's creation, and the process goes on. So in other words, existing offline inequalities in the world are being built into AI systems. A simple way of thinking about this is bias in, bias out. This is happening because technology isn't neutral or objective. It's not gender neutral, not race neutral, not age, ability or socioeconomic background neutral, and so on. It's shaped by the people who build the technology. And so of course, even unintended, it reflects their history, priorities, values, preferences. So technology isn't intrinsically good or evil, but being built by humans who are inherently biased, AI systems are biased. So we can see how inequalities in the world and the AI workforce, and data inequalities, can result in biases, indeed gender bias, in AI systems.

Michael Bird (07:53):
Yeah see, what I found really interesting with what she said was technology is inherently biased. Technology isn't neutral, which I found quite interesting because I don't know, I don't think my laptop has a bias, but I mean, it makes sense. Bias in equals bias out. It's so determined by the people that are building these systems. Even my phone or my laptop, if you think about it, probably has been designed by a certain group of people that, I guess, there are maybe tiny little minor things they've done which add an inherent bias.

Aubrey Lovell (08:25):
Yeah, I mean, I think it makes sense. You think about products, you think about the technology and the technology is only as good as the maker. And so when you're looking at the inputs and the influence that you're having on technology, obviously that's going to impact the outcome. And obviously there's a lot of really smart people working on this, tons of scientists. Hopefully, they're working to figure that out and try to filter out those preconceptions for those things.

Michael Bird (08:50):
So I guess we're at the intersection here of philosophy and sociology on one side and data on the other side. It's pretty cool, but I think a little bit scary in some ways.

Aubrey Lovell (08:59):
Yes, it's a really interesting thought, and one of the people I spoke to was Anjana Susarla, Professor of Responsible AI at Michigan State University. Here's what she had to say.

Anjana Susarla (09:09):
So I think the bias in AI comes from essentially three different things. One is, what is the data that has been used to train the artificial intelligence software and how has the data been collected or aggregated or put together, annotated? Second issue is how the models themselves have been trained. Is there some bias caused by how we are building these models? The third bias comes from people deciding to use the outputs from AI in some prejudicial fashion. And so why is there gender bias, for example? Well, it can be that historically, the data that has been used to feed into our algorithms did not really consider women's participation in economic activities.

(10:00):
The very famous example is that from Amazon, which used a resume screening tool, and they were just trying to automate the process of screening resumes. On the face of it, it seems wonderful. Artificial intelligence has so many capabilities, predictive power that a human being can easily miss. But what happens is the historical data that was used to train the algorithm was primarily from software engineering, where there are not many women, let's face it. So the algorithm taught itself all the prejudices or stereotypes, in a way: that gendered language is bad, the word woman is downvoted, and so forth. So those are unconscious biases that are being reflected in AI.
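For readers who want to see the mechanism Anjana describes in miniature, here is a hedged toy sketch, not Amazon's actual system: a tiny text classifier trained on made-up, historically skewed hiring data ends up assigning a negative weight to a gendered word.

```python
# Toy sketch only: shows how skewed historical data can teach a resume
# screener to penalize a gendered word. Data and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past resumes and hiring outcomes (1 = hired, 0 = rejected).
# Because past hires skewed male, "women's" co-occurs only with rejections.
resumes = [
    "software engineer, men's chess club captain",
    "machine learning engineer, hackathon winner",
    "software engineer, women's chess club captain",
    "data analyst, women's coding society organiser",
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for "women" comes out negative: the model has
# absorbed the historical skew, not anything about candidate quality.
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

Nothing in that toy model measures ability; the negative weight is purely an artifact of who was hired in the past, which is the bias-in, bias-out loop Erin described earlier.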

Aubrey Lovell (10:51):
What are the real-world effects of "bad AI" when it comes to discrimination, especially through the lens of discrimination against women?

Anjana Susarla (11:00):
It can be systematic biases that will affect women's participation in work. It can be biases that affect women getting credit, for instance. So there's another very famous example that we like to give our students: Apple had this credit card in partnership with Goldman Sachs, and there was this couple who were actually living in California. They'd been married for 20 years. California is a community property state, so it's like, what is mine is yours. So both of them have basically the same amount of wealth.

(11:38):
And in reality, what happened when they applied for the Apple credit card was that the wife was given a credit limit that was one twentieth of her husband's. So the husband got 20 times more credit compared to his wife. That is just crazy. I mean, there should not be any difference. It's not like a few thousand dollars, it's 20 times more. And what's more, her credit card not only had a ridiculously low spending limit, but even if she prepaid it or paid it off in the middle of the month, she couldn't spend with that card until the end of the billing period. So there were so many limits.

Aubrey Lovell (12:20):
That just transcends so many different areas of our lives.

Michael Bird (12:27):
I found her examples quite interesting and a little bit scary.

Aubrey Lovell (12:31):
Yeah, it really is because it's almost like a trickle effect. I mean, you could think of it being so simple as a credit card, but then when you really think about it and what you use a credit card for and spending and financing things for your life, it can definitely have an effect all the way through. So yeah, it is a little bit crazy.

Michael Bird (12:52):
It's a bit under the radar and potentially a bit hidden.

Aubrey Lovell (12:52):
Yeah, and that's just one case, but a 2021 report by Women's World Banking found that women are being denied over $17 billion in credit annually by AI-based decision making. And that's purely based on the decisions the AI has come up with from its input parameters. And you might look at it and go, "Well, why would men and women's credit scores be treated differently? What could be in those parameters?" But there are differences. For example, who is the prime name on joint accounts, who owns the title deeds on a house, spending patterns, et cetera. What's interesting is that the report found that in many cases, the chief data scientists at AI companies sit on the board and can affect how much oversight the AI is given in its decision making. And inevitably, they're going to see their system as effective and fair.

Michael Bird (13:41):
Oversight in decision making is starting to look like a huge part of the issue. AI is marketed as this amazing tool to automate processes and save money, but not necessarily as one which needs human decision makers in the loop. So we've been speaking to Ivana Bartoletti, Global Chief Privacy Officer at Wipro and visiting policy fellow at the Oxford Internet Institute. She was also Woman of the Year at the 2019 Cybersecurity Awards.

Ivana Bartoletti (14:12):
What is happening at the moment is that you can end up in a situation where you've got algorithms and predictive technologies being used to make big decisions. We've seen incredible media reporting, things that have come out into the public domain, around algorithmic discrimination. We've seen cases, for example, with the Dutch government having to deal with a fraud algorithm that was used to determine whether somebody was defrauding the benefit system, which had a criterion inside it that took into consideration a second language, and that would increase the fraud score. And basically, that system, in government, in the public sector, was so discriminatory that it even led to people killing themselves because they were accused of fraud and had to repay. It's not the fault of the algorithm; the problem was who created it, not taking into account where an unfettered use of data, and an uncritical approach to data and to the algorithm, can lead.

(15:14):
Immediately, anyone would understand that it's going to penalize immigrants. And we've had some huge cases happening in the UK, for example, during the pandemic with the algorithm used to grade students at the time of the A-Levels. That algorithm automatically gave better grades to students coming from private education. Roger Taylor, who at the time was the Head of the Centre for Data Ethics in the UK, made a really wonderful comment. He didn't just say we made a mistake with the algorithm; he said the state made a mistake in doing that at all, that it wasn't a case where you should have used an algorithm in the first place, because the progress of a student is something very personal. And governments have been facing challenges, financial challenges, for a long, long time. And so these systems can be very appealing, because it means they can rely less on humans and more on machines and they can save money, but it may not be the right time yet, or it may just not be the right thing to do.

Aubrey Lovell (16:24):
Listening to that, it's pretty incredible. I don't know how you're feeling about it, but obviously there's a give and take with how we use AI and when we use it.

Michael Bird (16:34):
Yeah, I think the point that she was trying to make here was that actually, there are some occasions where AI is amazing and it's a really good tool to use, but there are other times where it's like, is this the right way to do it? And actually, does there need to be maybe humans in the loop to help sanity check some of those decisions? Particularly things like exam results or benefits. They have massive impact on people's lives.

Aubrey Lovell (16:58):
They really do. And I really like the comment around data being a mirror to society and the need to really ensure that there is a balance to the AI controls and the inputs in order to counteract some of that bias that is affecting many people in so many ways.

Michael Bird (17:12):
That is so true. That is so true. And if it is a mirror to society, it's a mirror to society whether society is good or bad. And if you are trying to be fair to people with decisions like exams and benefits, then you need to figure that out in your model and you need to make sure you try and take those biases out, I guess. So how well understood is the problem at the highest levels of AI development and government?

Aubrey Lovell (17:39):
Great question. It's something I asked Erin Young about.

Erin Young (17:42):
So I think the technology industry is aware there's an issue, and in particular, a field of research in academia is documenting how AI systems can exhibit gender and other biases. Biased AI products are increasingly in the media for their discriminatory outcomes. So you might have heard about the marketing algorithms that were disproportionately showing fewer scientific job ads to women, which then encourages even fewer women into the sector. And this was absolutely picked up by the tech company in which it was happening, because this piece of technology was recalled. And then there was a study from MIT, which found that facial recognition software reliably identifies the faces of white men, but not those of dark-skinned women.

(18:37):
And actually, a really classic example is that of Google Translate. So when translating gender-neutral language related to certain fields or activities or hobbies or whatever it might be, Google Translate defaulted to male and female pronouns reflecting the traditional gendered stereotypes relating to those areas. So for example, if you typed in "they wash the dishes" or "they are a politician" in Hungarian or Romanian, then the translation would default to "she washes the dishes" but "he's a politician". And Google are trying to address this issue, which is great, and now I think they're providing both translation options, which is brilliant, but this obviously, ultimately, doesn't solve the problem of the underlying data bias, which is still there. Once these systems are being used in society, they're reinforcing the original problems. It's not only the same feedback loop again and again, it's actually being amplified each time.

Aubrey Lovell (19:51):
That's scary and fascinating. And it's also something which Hewlett Packard Enterprise's Fidelma Russo has been thinking about a lot, both in terms of the problem of biased AI and how to solve it by getting more diversity and responsibility into the teams creating AI.

Fidelma Russo (20:08):
The first responsibility lies with the companies that are generating and are going to profit from building this great technology. In order to do that, they have a responsibility; that is their main responsibility. If this were a project that we were running as an engineering project, I don't think that we would take the approach to it that we're taking today. I think we would call a 911. We would all huddle and say, "The things we're doing aren't working, so what do we need to do differently?" I think there's a lot of fallout after we get people into teams. So we really have to encourage people to think about their bias, and think about, in some ways, leaning heavier into diversity.

Aubrey Lovell (21:01):
When we think about diversity, how difficult is the fight to get more diverse voices in the tech industry as a whole? Do you think it's actually improving?

Fidelma Russo (21:11):
Unfortunately, the numbers are not moving as fast as we would like them to. It's the leaders who have to help move the needle. If we wait for kids to come up through school, kids to come up through college, diverse talent to enter the workforce only to leave after four or five years because they don't feel like they belong, we're never going to make change. And so I believe it's up to everybody who works in tech companies to really question yourself every time you're making a decision on a hire, every time you're making a decision on a promotion. And sometimes, you just have to take a chance on diverse talent. You may not be able to measure them the same way, but they're actually going to knock it out of the park once they get a chance.

(21:57):
The second piece is, of course, that we do have emerging regulations in the EU, and we have regulations here in the US. Regulations can only go so far, and they're usually after the fact, but it is important that they're there, because they make sure that companies are aware that this could potentially be regulated over time, and that they therefore have to put the policies and practices in place to make sure that they build unbiased AI.

Aubrey Lovell (22:33):
Couldn't agree more. And if you think about it, Michael, diversity leads to equality, and equality shapes who influences the technology, which then affects how AI is programmed to see the world. We both work in tech, and having been in the industry for over a decade now, I can definitely see a change in how companies are prioritizing this and how it's strengthening our talent and our innovation.

Michael Bird (22:56):
Yeah, absolutely. And I think it's no longer acceptable to be hiring talent in an overtly biased way. And actually, when you have that talent in your organization, it's no longer okay for there to be any uninclusive way of doing things. I know at HPE, for example, our parental leave policy is the same whoever you are. It doesn't matter if you're a mom or a dad or whatever, it is exactly the same. And so I love seeing this element of equality and trying to think in a diverse way, because I think it means that you are not excluding people; everyone's included. And those people who maybe are on the fringes and think, "Oh, I don't know, I don't know," I think they'll feel more part of it.

Aubrey Lovell (23:40):
Absolutely.

Michael Bird (23:45):
And that was also a really interesting comment on regulation because of course, there are laws around discrimination and just because an AI has made a decision, doesn't mean that the people running it or acting on that decision aren't culpable. And of course there have been cases of companies being taken to court for AI driven discrimination. So is the answer more regulation? Well, here's Ivana Bartoletti.

Ivana Bartoletti (24:09):
There is already legislation. This stuff does not exist in isolation. So privacy laws already apply to AI. You've got to have a fair outcome, and it has to be a fair outcome because fairness is a key principle in privacy legislation all around the world. I have a right to access my data, and that applies in AI as well. If I want my data to be deleted, then my data ought to be deleted from an algorithm as well. I have a right to transparency and information rights; I need to understand how things work. And that is why privacy legislation, data protection legislation, has been leveraged quite dramatically over the last few years in courts all around Europe and other places on this.

(24:52):
But privacy is not the law of AI, obviously, because there's a lot of AI that does not use personal data, but also because AI is much more than personal data. There is legislation around consumer law, there is legislation around non-discrimination, there is legislation around human rights. So AI does not exist in isolation; existing laws do apply. Obviously, there is legislation being discussed around how you come to market with these products. So what is the due diligence that you have to demonstrate?

(25:22):
So for example, in the US, they are now reproposing the Algorithmic Accountability Bill. There are specific elements of AI which probably require regulation around how you market it. What is the due diligence that you need? How do you demonstrate it? What are the safeguards for users and consumers? What are the responsibilities around liability? Because if I purchase a system from a company and then I use it with my users, where does the liability end? If it's a system that keeps learning by itself, who is liable, and all of that. So how do you apply existing discrimination law to this, and how do you find out that you've been discriminated against in the first place? Those are new dimensions, and the world is grappling with this as we speak. I mean, it's mad.

Aubrey Lovell (26:08):
Mad indeed. And it's also something that the Alan Turing Institute has been looking at in depth. After all, they're looking at solving societal problems with tech, and at whether tech is being used to make existing issues worse. Here are Erin Young's thoughts on legislation.

Erin Young (26:24):
We absolutely do need ethical standards, and we're already seeing the harm which can be done, even if totally unintended, by AI systems. And it's one thing to recall biased technology, but it's obviously another thing to ensure that biased technology isn't being developed in the first place. But I also think it's key that this regulation isn't too restrictive, so that it doesn't stifle investment and innovation. And we're seeing a new generation, albeit small at the moment, of ethical standards being developed as the ethical, legal and societal impacts of AI progress.

(27:08):
So for example, the new AI Standards Hub, which is part of the UK's National AI Strategy and in whose development the Alan Turing Institute was heavily involved, is working to advance responsible AI, with a focus on the role that developing national and international standards can play as governance tools, but also as innovation mechanisms. In Europe, for example, the European AI Act, or AIA as it's being called, is aiming to establish the first comprehensive regulatory scheme for AI, and this could become a global standard. And I think these are really promising steps towards responsible regulation which doesn't stifle innovation at the same time.

Michael Bird (28:02):
So yeah, tech companies are obviously treading a fine line here, because on the one hand, they want to be responsible and, frankly, to be able to make and sell good products. But on the other hand, there are obviously questions about how an AI product is used by the consumer and how the data they input into it affects its output, which tech companies, well, they probably want to steer well clear of. After all, headlines about kids being cheated out of their exam results, or women being denied credit which their male counterparts can get, don't really reflect well on anyone. It's a tough question for Fidelma Russo.

Fidelma Russo (28:44):
I think regulations are always after the fact. I think the great thing about the technology industry is that it's incredibly innovative, and you don't want to stamp that out. And so I think regulations will put guardrails around it and force you to think about things. But I think it's really up to the industry and the industry leaders to use this in a fair and equitable way, and for company leaders to make sure that they're watching what's going on and putting their hand on the brake when it needs to go on the brake. So it's a partnership, but I personally think if you're developing the technology, it's your responsibility to make sure that it can be used in a fair and equitable way.

Michael Bird (29:34):
This is it, isn't it? Because we have Erin talking about how more legislation is important. But on the other hand, Fidelma makes a very good point that we don't want to stifle innovation with lots of legislation.

Aubrey Lovell (29:47):
Right. I mean, I think that last line that she said, "If you're developing the technology, it's your responsibility," is really powerful, especially as we're seeing all of these themes and the news generated around AI and its capabilities over the last year, even the last months and weeks. I mean, it's really imperative that you have the right fail-safes in place, essentially, to make sure that there is a balance and that, moving forward, the AI is responsible.

Michael Bird (30:14):
It's a difficult balancing act, and it's not just a moral desire to fix the problem because as Ivana points out, there are serious financial incentives for tech firms and users to be proactive and invest money and resources now in fixing the issues of bias endemic to AI.

Ivana Bartoletti (30:33):
I think it's more expensive not to do this, because bias can lead to massive mistakes. So if you spend your time, and your IP is an algorithm that is trained to spot skin cancer, and you train this algorithm in one part of the world and then you deploy it somewhere else and it doesn't work, because it's a deployment bias, I'm not talking about color of the skin, I'm just talking about an algorithm that is trained in one part of the world and latches onto something which is local. You deploy it somewhere else, you make massive mistakes, and there are massive penalties, reputational and financial damages. Or you use it to decide who to recruit, and you discriminate, and then you have to face lawsuits based on that.

(31:27):
So I'm not just talking about being nice and doing the right thing, I'm also talking about the fact that biases can really lead to bad decision making, which can make organizations lose a lot of money. So I think it's important to invest in AI which goes through due diligence and which also involves different people in its creation, so that if, for example, it's more diverse in its creation, people can say, "Hey, you're doing something here that may lead to bias," in a way that an all-homogenous group might not.

Michael Bird (32:07):
Okay, okay, okay, okay. So I guess the question here is how do we fix the problem?

Aubrey Lovell (32:13):
Michael, I think as we're talking through this and hearing all of our guests, the biggest theme here is balance. And when we talk about regulation, it's definitely one of those things that needs to be a balancing act in this whole formula. Think about finance, think about the crypto industry. We've talked about this on another podcast, where there is that teetering of regulation and you see the outcomes of what happens there. So regulation, in a positive way, might force companies to be more aware and users to pause and take a breath before entrusting societal needs to AI. So how do we take those steps and turn potentially biased AI into responsible AI? Here's Anjana with her thoughts.

Anjana Susarla (32:59):
AI needs to be fair. AI has to be safe and reliable. AI needs to safeguard privacy. AI needs to be inclusive and transparent. And finally, AI has to have some accountability built into it, where as a user we can ask questions: an algorithm made this decision, and can I ask a what-if question? What if I were someone else, or what if the situation were changed? How would this algorithmic decision making look? So that accountability also has to be there. And so I think if you're a young person who's getting into these fields, I would say that it's important to understand fairness, reliability, inclusivity, accountability, all of these. The same things that we demand from non-AI systems. After all, if you are going into a grocery store and you're buying some food, it has been approved by the FDA. There's some check mark. We don't consume stuff without it having gone through a lot of consumer product safety checks. And so can we ask the same questions of artificial intelligence?
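The "what if I were someone else?" question Anjana raises maps onto a simple check you can run against any decision system: flip only the protected attribute and see whether the outcome moves. Here is a minimal sketch; the credit_model function is a deliberately biased, made-up stand-in, not any real lender's system.

```python
# Minimal counterfactual "what-if" check: re-run the same decision with only
# the protected attribute changed. credit_model is an invented, deliberately
# biased stand-in used purely to show the test surfacing a problem.

def credit_model(applicant: dict) -> float:
    """Hypothetical scorer: identical finances should give identical limits."""
    limit = applicant["income"] * 0.3
    if applicant["gender"] == "F":   # the hidden skew we want to surface
        limit *= 0.8
    return limit

def counterfactual_gap(applicant: dict, attr: str, other_value: str) -> float:
    """How much the decision changes when only `attr` is flipped."""
    flipped = {**applicant, attr: other_value}
    return credit_model(applicant) - credit_model(flipped)

applicant = {"income": 90_000, "gender": "F"}
gap = counterfactual_gap(applicant, "gender", "M")
print(f"Limit changes by {gap:.0f} when only gender is flipped")  # nonzero = red flag
```

A real audit would run this over many applicants and attributes, but the idea is the same: if nothing but the protected attribute changed, the decision shouldn't either.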

Aubrey Lovell (34:17):
What do you think is being done, at the highest level, to fix this problem overall?

Anjana Susarla (34:22):
As I said, one is that there's more awareness. The other is the consumer rights advocates, and the positive development is that very senior people in industry have also understood this problem. So credit card companies understand some of these issues, and they have been a bit proactive, I would say, in addressing them. So I won't say the problem is fixed, but at least we know that there is some root cause of this problem and we are working on getting better fixes.

Aubrey Lovell (34:52):
Right, so the consensus is we have that awareness that companies and other people involved in all of these different decision making processes are aware and they're trying to make it better.

Anjana Susarla (35:02):
Yes. And I think at least in some cases, there have been attempts to, if not change the laws, at least put more enforcement on the books. So if a company is using an algorithm to make a decision, will there be some unfair impacts based on, let's say, gender or race or something like that? Can we measure that? And what is the recourse for a consumer? What is the recourse for an employee? What are some mechanisms to mitigate these damages? So I think we are working on some of those issues as a society. If we see companies really investing, that's very positive. If companies really invest in responsible AI initiatives in the next five, 10 years, that would be very positive for consumers.
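On the "can we measure that?" point, one common yardstick auditors reach for is the disparate-impact ratio: the approval rate of one group divided by another's. A minimal sketch with invented decisions and group labels:

```python
# Sketch of measuring unfair impact: the disparate-impact ratio compares
# approval rates across groups. Decisions and group labels are invented.

def disparate_impact(decisions, groups, protected, reference):
    """Approval rate of the protected group divided by the reference group's."""
    def rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

decisions = [1, 0, 0, 1, 1, 1, 0, 1]                  # 1 = approved, 0 = denied
groups    = ["F", "F", "F", "F", "M", "M", "M", "M"]

ratio = disparate_impact(decisions, groups, protected="F", reference="M")
print(f"Disparate impact ratio: {ratio:.2f}")  # below ~0.8 is a common red flag
```

The 0.8 threshold echoes the US "four-fifths rule" used in employment-discrimination guidance; it's a heuristic that prompts investigation, not a verdict on its own.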

Aubrey Lovell (35:53):
Erin Young has her own thoughts on how we can fix AI.

Erin Young (35:57):
Yeah, I think about this quite a lot. As most of us are now interacting with AI multiple times a day in everyday life, often without knowing, you can see how the drive for inclusion in technology is really urgent to begin to mitigate these issues of bias in AI. I think it's really crucial that we think about this data pipeline, or this data infrastructure, whatever you might like to call it, as part of a broader ecosystem. There are definitely things we might be able to do in terms of data collection to mitigate these issues. If you are aware that there could be problems in a potential data set, there are absolutely ways to mitigate for this.

(36:40):
So for example, with census survey data, how we design the survey is really important in order to encourage inclusion and to help people feel comfortable answering questions in a way that reflects them. But then this data pipeline, and the data collection, analysis and interpretation process, is happening in the much broader context of whatever you are collecting the data for, what you might want to use it for, and why. And so I think it's really important that we don't only think about it as an activity by itself, but we think about what's happening more broadly and why we're doing it in the first place.

Michael Bird (37:22):
Yeah, really makes you think. I asked Ivana for her fix as well and she came back with something rather interesting.

Ivana Bartoletti (37:35):
There are many different reasons why bias can come in. So it's really looking at every possible area where bias may creep in and saying, "How am I going to intervene?" That's the first one. The second one, I think, is around rewarding bias spotting. So can we reward bias spotting rather than punishing it? There are tools that can be used to understand where, for example, bias may come from, whether a particular input has played a bigger role, and a lot of companies are proposing to use these. There are so many, and that is good.

(38:10):
However, there is no technological fix for a problem which is not technological, which is more social. So it's understanding the limits. Fairness is never automatic. If a company wants to make a choice that is fair, it has to be a conscious choice, otherwise it will end up maximizing profit, which is reasonable. Of course companies want to maximize profits, but that may not be a fair solution. And what I'm saying is companies need to make the right choice and say, "Where is the trade-off, what is the definition of fairness that we have as an organization, and how are we going to translate that definition of fairness, which is really the way that we view the world, into mathematics?" Because that is the next step.
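To make that last step concrete, here are two standard criteria from the fairness literature that organizations sometimes adopt when translating a chosen notion of fairness into mathematics; they are illustrative examples, not Ivana's or Wipro's own formulation.

```latex
% Illustrative fairness criteria (standard in the literature, not from the guests):
% \hat{Y} = model decision, A = protected attribute, Y = true outcome.
\begin{align*}
  \text{Demographic parity:} \quad
    & P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b)
      && \forall\, a, b \\
  \text{Equalized odds:} \quad
    & P(\hat{Y}=1 \mid A=a, Y=y) = P(\hat{Y}=1 \mid A=b, Y=y)
      && \forall\, a, b,\ y \in \{0, 1\}
\end{align*}
```

Choosing between criteria like these is exactly the organizational value judgement Ivana describes: they can conflict with one another, so picking one is the conscious choice, and only enforcing it is mathematical.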

Aubrey Lovell (38:59):
Ultimately, for all the guests we spoke to this week, it comes down to getting more diversity into the workplace. The best way to make a non-biased AI is to create non-biased development teams.

Michael Bird (39:10):
Absolutely. And HPE's Fidelma Russo shared her thoughts with us on that too.

Fidelma Russo (39:16):
People look at this industry and they need to see people that look like them. And so what would I like to see? I'd like to see more women at the table. I'd like to see even more advertising geared towards women and diverse populations. I'd like to see more open discussion of what are the things that we're going to change given that we've been at this for 20 or 30 years and things really haven't changed all that much. And so it's an industry that's full of really, really smart people. And so I think my question is, how come we haven't solved this? Because if it was a big technology problem, we would've figured this out. I do know that if you keep doing the same thing over and over again and you expect a different outcome, you're not going to get it.

Michael Bird (40:09):
Okie dokie, so we're nearly at the end of the show, and we've now heard the solutions proposed by each guest. So what's next? I mean, do you think all these solutions should be combined, or is the answer a massive shift across the industry? What's your take on it?

Aubrey Lovell (40:26):
I think there's multiple layers to this. I don't think we have the answer today. I think we're working on it for tomorrow. But yeah, it's an interesting conversation and I think having conversations like this brings awareness and that's the key to start moving in the right direction.

Michael Bird (40:41):
Yeah, and so it's a fascinating area for discussion, and hopefully by having these kinds of conversations, we can start to make a real change to the face of the AI industry and to millions of people's lives going forward.

Aubrey Lovell (40:53):
100%. AI is such a powerful tool and one which can affect so many people. It's absolutely vital we get it right.

Michael Bird (41:06):
You've been listening to Technology Untangled and a huge thanks to our guests, Anjana Susarla, Erin Young, Ivana Bartoletti, and Fidelma Russo.

Aubrey Lovell (41:15):
You can find more information on today's episode in the show notes. This is the first episode in the fourth series of Technology Untangled, so please subscribe on your podcast app of choice so you don't miss out, and to check out the last three series.

Michael Bird (41:28):
Technology Untangled is hosted by Aubrey Lovell and me, Michael Bird. And today's episode was written and produced by Sam Data, Aubrey Lovell, and me, Michael Bird. Sound Design and editing was by Alex Bennett with production support from Harry Morton, Alison Paisley, Alicia Kempson, Alex Podmore, and Ed Everston. Technology Untangled is a Lower Street Production for Hewlett Packard Enterprise.
