Podcast transcript: Do we need AI regulation?

This automatically-generated transcript is taken from the IT Pro Podcast episode ‘Do we need AI regulation?'. To listen to the full episode, click here. We apologise for any errors.

Adam Shepherd

Hi, I'm Adam Shepherd.

Sabina Weston

And I'm Sabina Weston.

Adam

And you're listening to the IT Pro Podcast where this week we're examining the area of AI regulation.

Sabina

The AI industry has been going from strength to strength over the past several years, with machine learning technology becoming increasingly widely available to businesses, along with a stream of breakthroughs in research and development. However, this explosion of AI capabilities has also brought its share of problems.

Adam

Questions of model transparency, implicit bias and ethical deployment have frequently been raised about efforts in this space, and numerous campaigners have called for governments to introduce legislation that would place greater controls on the development and implementation of AI systems.

Sabina

Joining us this week to discuss the issue of AI regulation, whether it's necessary and how it might be implemented without stifling innovation is Cindi Howson, chief data strategy officer for analytics software vendor ThoughtSpot. Cindi, great to have you on the show.

Cindi Howson

Great to be here, Sabina, and Adam.

Adam

So Cindi, can you tell us a little bit about what ThoughtSpot does?

Cindi

Sure. So ThoughtSpot is the modern analytics cloud platform. We use search and AI-driven insights to allow people to find meaningful insights in their data. Within the context of AI, we would call this more narrow AI. And yet trust and transparency are still very important. So if we generate an insight using AI, we also give users full transparency over the inputs. So if gender was an input, or postcode, things like this, users can choose to remove those variables from the input to the AI.
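
To make that concrete, here is a minimal sketch of what that kind of input-level transparency can look like. This is a hypothetical illustration in Python with made-up column names, not ThoughtSpot's actual API:

    # Sketch: show a user every candidate model input and let them opt out
    # of sensitive ones. Column names ("gender", "postcode", "spend") are
    # hypothetical.
    import pandas as pd

    def prepare_inputs(df, excluded):
        """List all candidate inputs, then drop the ones the user excluded."""
        print("Candidate model inputs:", list(df.columns))
        return df.drop(columns=[c for c in df.columns if c in excluded])

    data = pd.DataFrame({
        "gender":   ["F", "M"],
        "postcode": ["SW1", "M1"],
        "spend":    [120.0, 80.0],
    })
    # The user chooses to remove gender and postcode before the AI sees them.
    features = prepare_inputs(data, excluded={"gender", "postcode"})
    print(features)  # only "spend" remains as an input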

Adam

So Cindi, what is the current state of AI legislation?

Cindi

Well, I think you'd have to put some boundaries on that, down to the city, country or world region, because it really does vary. I would say there's not strict regulation per country. In the UK, it's more about innovation. There are a lot of ideas and proposals out there, but getting very precise, saying something is banned, that is more the exception.

Adam

So it's kind of quite fragmented at the moment.

Cindi

Yes, fragmented and inconsistent. But in its early days, let's say.

Adam

So what's the impact of that, do you think, from the perspective of building and deploying AI technologies?

Cindi

Right now, in terms of building and deploying, I don't think there is a big impact. If you think about the technology providers working on AI systems, what they want from regulation is a level playing field, but they do not want innovation stifled. What citizens want is all the good that AI can bring, and I don't think we talk about that enough. I think there's still a lot of fear about AI, and that is where we need informed regulation.

Sabina

That's a very interesting subject, because on one hand this regulation is needed. But I remember covering research in December 2021 which found that the majority of UK businesses in that study said data regulation was stifling AI innovation, and around 70% needed more information to help them navigate the very complex legal requirements surrounding data collection and use in artificial intelligence. How do you see that, especially in the market at the moment?

Cindi

Well, let's parse that a little bit. You also talked about data collection, and we need data to build AI. This is actually where the problems start, because we do not have enough of a recognition that all data is biased. Let's take some examples. One of the hot buttons in AI is facial recognition. There are some issues, let's say, also with financial services and discrimination there. So if we think about broad-based AI facial recognition, where we do not want it is: we do not ever want to arrest somebody based on a potential match of a photo scanned from somebody walking down the street. That's invasive, it's a violation of privacy, and the degree of accuracy is not high enough, particularly with minority communities or people with darker skin tones. So we don't want that. But I will also say, when I visit the UK, I love the facial recognition at London Heathrow Airport; the lines there have gone from two hours to about five minutes to get through immigration. I love the way AI facial recognition is being used to rescue children from human trafficking. So this is the good of AI. But we need to separate it from when it can be used to unintentionally harm and create bias at scale.

Adam

So what role do you think legislation can play in helping to, I guess, put in guardrails for that kind of potentially harmful use case that you've touched on?

Cindi

Yeah, so I see legislation in general as just the moral minimum; companies should be better than legislation. Legislation is, in a way, a reaction when companies have not behaved in the best interests of citizens and sometimes their stakeholders. So maybe in the rush to get an algorithm out there, they didn't do all the vetting of what data was used to train the model. And again, we have to go back to: all data is biased, and how do we overcome that? We need it to be explainable, not to the point that you're revealing intellectual property, but more so that the human interpreting the AI model can say, oh, I can see where this would be a problem. And so regulation is the moral minimum; companies have to be more proactive on these other aspects.

Sabina

How do we get companies to be more proactive? Because it would be great to assume that they are better than the rules, but unfortunately we've seen in many cases that they really do take advantage of any potential loophole, especially in relation to data collection, or even full-on scraping from social media for facial recognition usage. We've seen a lot of abuses of data privacy, or data protection, requirements. My question is basically, what can we do to get companies to actually behave themselves without strict regulation?

Cindi

Well, again, companies want regulation only to the extent that it levels the playing field. But what's also not helpful is if a company has invested so much in AI and bringing a product or an algorithm to market, and then found out afterwards: oh, wait a minute, there are some biases in here. Let's look at Apple and its credit card. Why is the woman who earns the higher salary and has the better credit score getting a lower credit limit than her husband? That is what happened in that situation. So once a company has invested in an AI application, it's almost a little too late. So we need education; it has to be a multifaceted approach. It has to be education of the customers, the people that benefit from the AI, and that's us, the citizens. It also has to be the people building the models. And I was very discouraged to read a data point from a McKinsey survey that even among best-in-class companies, the AI leaders as McKinsey dubs them, only 36% proactively look for biases in their data. And that's where a lot of the problems start. So this is really about education of the AI and data science community. And then it really has to be conversations with a broader, more diverse set of stakeholders: the customers, the company providing it, as well as community watchdog groups; in the UK, I follow the work of Big Brother Watch. Academia, I think, has a role to play here too. This point of diversity is a very pernicious problem, because unless we have diverse developers working on these AI models, you get the unintended outcomes and the unintentional bias, the questions that you did not think to ask.
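
For illustration, the kind of proactive check that survey is describing can start very simply, for example by comparing outcome rates across groups in the training data before any model is built. Here's a sketch with made-up data, not McKinsey's methodology:

    # Sketch: flag training data whose outcome rates diverge sharply by group.
    import pandas as pd

    df = pd.DataFrame({
        "gender":   ["F", "F", "F", "M", "M", "M"],
        "approved": [0, 0, 1, 1, 1, 1],
    })
    rates = df.groupby("gender")["approved"].mean()
    print(rates)  # F: 0.33, M: 1.00
    # A large gap is a signal to investigate the data, not proof of bias.
    if rates.max() - rates.min() > 0.2:  # threshold is arbitrary, for illustration
        print("Warning: outcome rates diverge across groups; review before training")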

Adam

It's unknown unknowns, right? Yeah, yeah.

Cindi

I mean, again, financial services. Let's take this: what if you train on data going too far back? I lived in Switzerland for eight years, married to a Brit, and because I was married, I was not allowed to have my own bank account. That was 25 years ago. So if you trained your model on historical data going back 25 years, well, I'm going to look like a bigger credit risk. So this is a problem. Now, a woman knows this rule deeply. If I only have male developers working on this, then I may not even think: oh, wait, if I go too far back, that data is biased.
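
As a concrete sketch of that lookback problem, capping how far back the training data reaches is a one-line guardrail once someone on the team knows to ask for it (hypothetical column names and cutoff date):

    # Sketch: exclude records from an era when different social rules applied.
    import pandas as pd

    history = pd.DataFrame({
        "application_date": pd.to_datetime(["1999-05-01", "2015-03-10", "2021-07-22"]),
        "defaulted":        [0, 1, 0],
    })
    cutoff = pd.Timestamp("2012-01-01")  # arbitrary here; set by domain review
    training_set = history[history["application_date"] >= cutoff]
    print(training_set)  # the 1999 record is excluded from training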

Sabina

I think we especially see it with, like you said earlier, minorities. A lot of algorithms, especially in facial recognition, are trained on white faces, on Caucasian faces. And I think the biggest problem is that a lot of the time, facial recognition software classes Black women as men; it doesn't see gender correctly for Black people a lot of the time. And it just stems from a very big problem of diversity issues in the tech industry. That's what this basically showcases.

Cindi

Yeah, Sabina. It's the diversity of the data, the training data. So this is where I do think synthetic data may help with some of these things. It's still early days in the industry, but there are some companies working on this. But then, you only have so much data available. So recognise that the data is biased, account for that in the model development, and then reveal the limitations. This is also where AI on its own, we're not ready for that; it's human plus AI. So use the facial recognition, or whatever the algorithm is, to inform your research and your decision, but then it has to be AI plus the human, using both.

Adam

So just to bring it back to the question of legislation: we've talked a lot about the knowledge side of things and making sure that you're asking the right questions when you're building an algorithm or an AI system, which a lot of organisations currently aren't doing. Do you think, then, this is an argument for imposing, if you like, due diligence requirements on companies that are looking to build AI systems?

Cindi

Yes, and this is where, with regulation, you don't want it to be so precise that it renders AI useless and people stay out of the market. You want regulation to protect those who might be harmed by AI. So having guidance saying it should never be an AI-only decision, whether it's in criminal sentencing, or, again, there was a mistake about who could get certain retirement benefits in one school system, you want the AI regulation to prohibit that. Now, can we get precise enough to say there has to be a degree of transparency, or explainable AI, to say what the inputs were to a particular model? I think that's good. I think that degree of regulation does not encroach on somebody's intellectual property. But once you start saying you have to share the full code, then you can forget it.
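
One lightweight way to meet that kind of requirement, disclosing a model's inputs without publishing its code, is a model-card-style summary. A hypothetical sketch, with every name invented for illustration:

    # Sketch: publish what went into the model, not how the model works.
    import json

    model_disclosure = {
        "model": "credit_limit_v2",  # hypothetical model name
        "inputs": ["income", "payment_history", "existing_debt"],
        "excluded_inputs": ["gender", "postcode"],  # deliberately withheld
        "training_window": "2012-2021",
        "human_review": "required for all adverse decisions",
    }
    print(json.dumps(model_disclosure, indent=2))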

Adam

But before you even get to that stage, I think there's an argument for saying, in the way that you have things like Know Your Customer regulations: if you want to implement an AI system, if you want to build an AI or machine learning-driven system, these are the questions that you need to be able to demonstrate you have asked internally before you start putting pen to paper with the code. You need to have asked: where are we getting our data from? Is it as unbiased as can reasonably be expected? What is the range of applications that we're looking to put this AI system to? Are there any unintended consequences that this could potentially have? Asking those kinds of questions before you start on a project, we've heard from multiple sources throughout the industry, is how you build robust and beneficial AI models.

Cindi

Yes, and to your point, doing this early in the process should be something that companies are doing regardless of regulation. Now, if you want to tell them they have to do it, fine. But then I would also get more prescriptive and say: who is the contrarian reviewing the unintended consequences? If you look at what happened with Timnit Gebru at Google, she was one of the leading AI researchers, and she spoke out against some potential harm from some of the AI, and, while we still don't know the full details, was fired. So being a contrarian and an employee is difficult. Having a contrarian from a marginalised group who is brainstorming and identifying the unintended consequences, again, is a best practice. Should it be regulated? Possibly.

Sabina

When it comes to self-regulation, what do you think about companies such as Fujitsu establishing their own AI governance offices? We saw that in late January this year, only a few weeks ago really, when Fujitsu established its AI ethics and governance office. What do you think about companies taking steps like that to establish their own self-regulation offices?

Cindi

I think it's a great idea. I think all companies should be doing this, and doing it proactively. But again, I would want to know: what is the composition of that AI ethics review board? How diverse is it, both in terms of the professionals and in terms of who are the groups that may benefit from the AI and who might potentially be harmed by it? You need to have both as part of that. This is where, at one point, I wrote about whether, just as doctors have a Hippocratic Oath, there should really be a kind of oath for AI builders to do no harm.

Adam

At the very least, that kind of code of practice?

Cindi

Yes, yes. And, you know, recently I had an interesting conversation with a chief data officer on The Data Chief podcast, and we were talking about how it's really a failure of the data science education system, where developers are rewarded, and academia rewards them, for how much their model improved, and they are not paying enough attention to what the training data set was. Given that I come from a data background, to me that is the root of so many problems. And so we have a disconnect between the AI developers and data science professionals, and the data professionals whose work really feeds the AI models.

Adam

What it kind of comes back to a lot for me is that there really isn't any oversight of AI specifically, in the way that there is for other elements of technology. In the UK, for example, we have the Information Commissioner's Office, which looks into things like data breaches and data protection and all of that kind of stuff, which overlaps with AI in a lot of cases. But I think there is a strong argument for having an organisation like that which is specifically focused on ensuring, enforcing and advising on AI best practice, because AI is not going away, and it's only going to get more influential.

Cindi

Yes, so some of these best practices I see are coming out from different regulating and governing bodies, but they are just guidelines, and the degree to which practitioners are aware of them and follow them varies. So for example, in the US under the Trump administration it was much more hands-off; there was the Federal Data Strategy, which had AI recommendations and guidelines in it. Under the Biden administration, there is more coming out from the OSTP, the Office of Science and Technology Policy. And we once again have a US chief data scientist; we did not have that in the last four years. But all of these are frameworks. They're not required practices.

Adam

They're not binding in any way.

Cindi

Exactly.

Sabina

Yeah, I think in the EU we're seeing regulation come out which is actually legally binding, and which will see the implementation of fines very similar to how the GDPR fines work, but these will basically be fines for abusing, or misusing, artificial intelligence.

Cindi

And this is where I just think we kind of put the cart before the horse, because again, most of where AI goes wrong is unintentional. And if I remember correctly, the fine is 6% of turnover, or 30 million euros, whichever amount is higher. Now, defining the harm here: where did it go wrong? Was it really the model, or is it how somebody used the model? If it's a self-driving car that killed somebody, and it was a decision between hitting a tree or hitting a little old lady crossing the street, and yet the car manufacturer, or the person developing the AI that drives that car, said, you know, it's never recommended that you put it in self-drive mode in a neighbourhood, well, who's really at fault here? Who's really liable?
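
For scale, taking the figures as Cindi recalls them, the penalty follows the same two-pronged formula as GDPR fines:

    # Sketch: the higher of 6% of annual turnover or EUR 30m, per the figures cited above.
    def proposed_eu_fine(annual_turnover_eur):
        return max(0.06 * annual_turnover_eur, 30_000_000)

    print(proposed_eu_fine(1_000_000_000))  # 60,000,000 for a EUR 1bn-turnover firm
    print(proposed_eu_fine(100_000_000))    # the 30,000,000 floor applies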

Sabina

Yeah, well, I think there's supposed to be another change in the law about this, because technically it changes the responsibility of the so-called driver of the car, who is technically not a driver anymore. Due to the car being self-driving, the person in the car will have less authority than what we saw with normal car models. So again, the law has to constantly be updated. How do we stay on top of technology being developed, and how often should we review these laws, really?

Cindi

So the first question, how do we stay on top of this: it has to be education. And education not just for the data professionals and the AI professionals; it is up to every citizen to understand the good of AI and the bad. You know, the same thing is true with GDPR. GDPR is Europe-led here; California came out with CCPA later. GDPR really is a way of protecting the individual's data privacy, but there were some negative consequences, in that it's harder to get personalised recommendations, like, I want that loyalty coupon or what have you. So every citizen has to be educated about this. And if I can make a recommendation: in the analytics industry, a groundbreaking book was Moneyball, later made into a movie. To me, every family should watch the documentary Coded Bias; it's on Netflix.

Sabina

I really liked it.

Cindi

Yeah. And it doesn't get so deep that you'll get lost in the technology. The other one would maybe also be The Social Dilemma. Now, I would like a more balanced view, because I want people to understand the good of AI, whether it's, again, the human trafficking example, or just look at the pictures on your phone and filter by cats or dogs. See what you get. I thought it was hysterical: we recently had Pi Day, 3.14, and I think in the UK it's 22/7 that's your day for this. I was looking for photos of pies, and a picture of a stuffed bear came up, a stuffed animal. So AI was not working there. But I don't think every citizen understands enough of what AI really is. They think it's just some mysterious thing that tech companies have invented. And yet data is part of our everyday lives. AI is part of our everyday lives. It is what allows you to order an Uber; it is what allows you to get through immigration faster when going into different countries.

Adam

On that basic level there's predictive text, right? I mean, that's something that everyone is familiar with. Predictive text, essentially, is a very basic form of AI.
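
As a toy illustration of that point, a few lines of Python can learn next-word suggestions from example text. This is a bare-bones bigram model, far simpler than what phones actually ship:

    # Toy "predictive text": suggest the word most often seen after the current one.
    from collections import Counter, defaultdict

    corpus = "see you soon . see you later . talk to you soon".split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def suggest(word):
        return following[word].most_common(1)[0][0] if word in following else ""

    print(suggest("you"))  # "soon", seen twice after "you" vs once for "later"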

Cindi

Yes, yeah. And think of all the good: AI is being used to expedite cancer detection. This is huge. So I think of AI as the best of times and the worst of times, if we do not get this right. To me, regulation is an afterthought; it's a moral minimum. We have to have this multifaceted approach: education of the professionals, but also the citizens. What do you want AI to do for you, and what do you not want it to do? A lot of the time, we're just not being proactive about the potential harm, in the rush for profit and the race. And let's also keep in mind, this is not just a single-country race; this is a global race. So this is where we want regulation to level the playing field and to protect people who might be harmed, but we do not want it to stifle innovation, because there are countries that are less ethical about this that will develop AI faster, and we do not want that.

Adam

So with that in mind, then: although specific regulation isn't yet imminent, companies can still ensure that they're approaching the topic responsibly and developing this technology in a safe way that's going to provide actual benefit. What would your top tips be for ethical AI development?

Cindi

The first one is, as Sabina mentioned, have a review board; make AI ethics core to your data science and AI programme, and make sure that you have some external stakeholders there: the contrarians, and the people not only who will benefit from it, but who may be harmed by it. The other big thing is education: recognising where bias starts, and just acknowledging that all data is biased. That alone is not something we have enough recognition of.

Adam

Well, I'm afraid that's all we've got time for on this week's show, but thanks once again to ThoughtSpot's Cindi Howson for joining us.

Cindi

Thank you, Adam and Sabina, it's been lovely.

Sabina

You can find links to all of the topics we've spoken about today in the show notes, and even more on our website: itpro.co.uk.

Adam

You can also follow us on social media as well as subscribe to our daily newsletter.

Sabina

We'll be back next week with more analysis from the world of IT, but until then, goodbye.

Adam

Bye.
