BT's Chief Digital and Innovation Officer on AI biases and the next generation of skills
One of the world's leading telecommunications companies, BT, is working in partnership with Google Cloud to carry out a 'company-wide digital transformation', which will see it make use of the vendor's cloud infrastructure, machine learning and AI, data analytics, security and API management tools. The five-year partnership is being spearheaded by BT's Digital Unit and aims to create an 'AI-fabric' across the organization, with data-led decision making and improved customer service as the desired outcomes.
Earlier this year it was reported that BT would become a "leaner business", with up to 10,000 jobs replaced by artificial intelligence. At the time, BT Chief Executive Philip Jansen said that there was a "huge opportunity to use AI to be more efficient".
At a recent Google Cloud event in London, BT's Chief Digital and Innovation Officer, Harmeen Mehta, spoke about the '360 degree partnership' with the cloud vendor and said that almost all of BT's AI is built on Google Cloud's platform.
Mehta also spoke very candidly about her views on AI, as well as the impact the rapidly developing technology could have on BT as an organization - on topics ranging from skills to digital copyright. At the same event, diginomica highlighted how creative marketing giant WPP is preparing for generative AI, where CTO Stephan Pretorius said organizations need to ask themselves: what is your role in the ecosystem and the economy, given generative AI?
BT's Mehta started by discussing how the next generation of talent coming into the organization will likely be freed from the burden of having to carry out mundane tasks - and that, as such, it's difficult to predict what world they will create. She said:
I actually think almost everything we're talking about, we are on the brink of something fundamentally changing in society. Not incrementally changing in society. This is going to lead to a very different society.
BT currently has more than 100,000 employees and Mehta said that when you enter an organization of that size, it's highly intimidating for a new starter. Equally, she added, you're only as good as who you know - and when joining, you know very few people. But the biggest complaint from new starters joining BT is the onboarding process, something that AI could help with significantly. Mehta said:
It not only takes a long time to onboard them for them to be productive, but then we also do a really crappy job at that. Because it's just impossible to bottle up a large part of information and put it in a capsule and give it to that person. But that's what this technology does. And in fact, it changes it.
The focus going forward is reskilling BT. She added:
I don't know what it's going to look like, but that's the fun of it. And I feel very privileged that I'm part of helping shape that society, rather than only knowing this side of the wall, because then you can make out a lot about the contrast.
We've been really working a lot on reskilling within BT and bringing in a lot of different skills, because the skills I need today are not what the company has today. And the company has some great people who were great for the skills that we needed yesterday.
But it's both - it's bringing in new skills, but reskilling these people as well.
Mehta is also taking a glass-half-full approach to the advancements in AI and how they will impact the way people work, arguing that the latest developments have woken people up and made them curious - something that could drive interesting changes in society and the economy. She said:
If nothing else, as human beings we've started becoming a little less curious and a little more lazy and complacent. This has woken us up. At least in me it's invoked a whole lot of curiosity.
It's almost the same thing that da Vinci probably felt when he tried to fly for the first time and crashed and burned very badly. But if he hadn't made that attempt, nobody would have invented the aircraft. We're on the brink of something phenomenal, and I think the new generation will do far fewer mundane tasks.
My aspiration for what they will do next is that they will truly invent a different world. Because they don't want to spend time like us, living one third of our life doing mundane tasks, one third of our life traveling from one place to another. They really will use a lot of their life to really create something beautiful that we don't even know exists today.
AI and bias
The second topic Mehta considered at the Google Cloud event was how we can remove biases from AI models. This is particularly critical if we are heading for a world where more autonomous decision making takes place and we are expected to trust AI models to make those decisions for us. We've already seen numerous examples of AI making biased decisions based on gender or race in the workplace (and even in the courts), because the data feeding it replicated the human biases that came before it.
There's a debate to be had about whether any data feeding a model could ever be unbiased, given that all of us hold both conscious and unconscious biases to some degree - and the data being used to train machines has been created by humans. That being said, Mehta seemed to be considering whether, in order to make AI less biased, we need to think beyond the four walls of an organization. She said:
I want to ask a counter question, because how is that different from humans? If I grew up in a society, which only has people who look and feel like me, that's all I've learned. Until I go somewhere where I meet somebody dramatically different. And you see that with kids all the time. That's when they realize that there is somebody who thinks differently, looks differently, speaks differently out there.
It's the same for AI. It's going to learn only what we teach it. And yes, if you feed it bias, it's going to bias. But that's the beauty and that's where we've got to strike a balance.
We've heard a lot from Google about how it has spent millions, trying to prove to companies like mine that 'we're going to keep your data yours'. And we saw a lot of value in that. And I fought for it. But this morning I was sitting there thinking, am I even fighting the right fight? Because that means everything I'm learning…I'm almost saying I don't want anybody else to learn from it.
In a way when I'm training the model, I'm actually training it to be biased towards my industry, towards my lingo, towards my corporate culture, towards even the colors of my brand. And in society's eyes, that bias is okay. But a different bias is not okay. So who made that rule? I don't know what is right or wrong. I don't know.
I do know that there is some part of your private data that you don't want to share out and we have rules and laws against that. But beyond that, if you really want to embrace this world and we want to embrace AI, you've got to find a way of letting it breathe. Almost letting it learn from different people other than us. And sharing that knowledge. That's not what I thought at nine o'clock this morning, but that's what I think at the moment.
AI and copyright
diginomica has been regularly highlighting the challenges relating to generative AI and the implications for copyright law. With generative AI's training able to distill the plots of stories, the logic of code, and the style of art in ways that just aren't covered by existing copyright regulations - and with artists and authors going to war with AI vendors over the issue - these are very much untested waters.
In addition, generative AI is driving the cost of creating pretty much any type of content down to effectively zero (or close to it), which, at scale, makes the idea of 'ownership' very difficult to grasp. In the past it was easy to own the rights to something when it couldn't easily be replicated by hundreds of models across the internet.
When asked about this topic, Mehta exercised caution, but said that the regulations we have today are likely to be very different to those needed to deal with AI. She said:
I don't think that that question can be answered today. Google has talked about doing digital watermarking - so new digital images will have that, but images that don't have that, does that mean they don't have a copyright?
Actually, no, because that's not what the laws of different countries say.
But what I have not heard anybody talk about - and I've spent a lot of time with the people who are developing models in these companies - is nobody's ever told me how a model can unlearn something that it has already learned.
If they've already learned something you can only create a different bias by making them learn new things a bit more. I actually think this might give way to rethinking what copyright actually means in the new world. And it could have different implications.