This month’s Women in Business episode features an interview with Sue Turner OBE, founder of AI Governance Ltd, and one of the first people to be certified as an Artificial Intelligence auditor by the For Humanity organisation. Sue sits down with Sophie Harris, Head of Marketing and New Business, to discuss the power of AI for businesses and how to utilise Artificial Intelligence ethically.
AI is becoming pervasive across all levels of business activity, from HR to enterprise planning and decision making. And it’s widely accepted that most company boards are ill-equipped to understand the implications it has for their business, let alone create a governance framework for its use. Sue is at the forefront of raising AI ethical awareness and helping boards to navigate the governance minefield.
What role does AI play in marketing?
As marketing professionals, we’re trained to attract and convert customers with the right message, at the right time, in the right place. We analyse, we strategise, we plan, we create and we optimise to achieve commercial growth.
AI does this at scale – analysing, optimising and making autonomous decisions at speed. Autonomy is the key word here. AI learns as it goes, analysing huge datasets, testing multiple creative variants across a vast network of interconnected digital profiles. Its prime objective is acquiring and retaining the right customers as efficiently and cost-effectively as possible – across the entire customer journey. It does whatever is needed to meet the objective it’s programmed to fulfil, and it does so in real time.
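The “test multiple creative variants and learn as it goes” behaviour described above is often implemented as a multi-armed bandit. As a purely illustrative sketch (the variant names and click-through rates below are invented, not from any real campaign), a simple epsilon-greedy bandit might look like this:

```python
import random

class EpsilonGreedyBandit:
    """Serves ad creative variants: explores occasionally, otherwise
    exploits the variant with the best observed click rate so far."""

    def __init__(self, variants, epsilon=0.1, seed=42):
        self.variants = list(variants)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.shows = {v: 0 for v in self.variants}
        self.clicks = {v: 0 for v in self.variants}

    def choose(self):
        # Explore with probability epsilon; otherwise exploit the leader.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.variants)
        return max(self.variants,
                   key=lambda v: self.clicks[v] / self.shows[v] if self.shows[v] else 0.0)

    def record(self, variant, clicked):
        self.shows[variant] += 1
        if clicked:
            self.clicks[variant] += 1

# Simulated campaign with hypothetical true click-through rates per variant.
true_ctr = {"headline_A": 0.02, "headline_B": 0.05, "headline_C": 0.03}
bandit = EpsilonGreedyBandit(true_ctr.keys())
clicker = random.Random(0)
for _ in range(20000):
    v = bandit.choose()
    bandit.record(v, clicker.random() < true_ctr[v])

# Over many impressions the bandit concentrates spend on the strongest variant.
best = max(bandit.shows, key=bandit.shows.get)
print(best)
```

In a real platform the “reward” would be a conversion event rather than a simulated click, and the optimisation would span formats, audiences and channels simultaneously – but the explore-then-exploit loop is the core of the autonomy described above.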
The results can be impressive. Here are just a few examples. But bear in mind that these case studies have been written to promote the use of AI. In reality, we believe there’s still a considerable amount of human intervention and management involved, as brands are understandably cautious and AI is still finding its feet.
Vanguard Institutional, one of the world’s largest investment companies, used the Persado AI language platform across their LinkedIn marketing activity. The platform identified the phrases that resonated with customers and delivered messages with the right formatting, tone and call to action to increase conversion rates by 15%.
The American Marketing Association used the rasa.io natural language AI system to automate the curation, placement and subject lines of articles, to create personalised newsletters for their 100,000 subscribers. This drove a significant increase in traffic to their website, and an engagement rate for their newsletter of 42%.
Dutch online retailer Wehkamp used the natural language AI of Wordsmith to convert the structured datasets of their 400,000 product lines into search-engine-optimised product descriptions in their own brand tone of voice. This enabled the company’s copywriters and editors to shift their focus to creating advertising and content to attract new customers.
So, with the promise of results like these, why isn’t every business jumping on board?
The answer is simply human nature. Business people are cautious. We like certainty. We like data to support our own decision making. With AI, we don’t know for sure what will happen until we try it, and we’re devolving the decision making to the machine.
We may dip our toes into one part of the process. Programmatic display advertising, for example, or predictive big-data analytics. But applying AI across the whole marketing process seems a way off.
It’s the same caution that drives our relationship with autonomous cars (pun intended). We’re relatively comfortable with automated cruise and lane control, yet handing the car complete control still seems scary – even though we know that pilots regularly do this on the planes that carry us through the skies.
The problem of bias
So are we right to be cautious? In short, yes. Hence the need for AI governance to address the concerns around, for instance, bias and privacy.
AI governance audits encourage disclosure of differences between the data a system was trained on and the data it encounters once it’s working. Training on a diverse universe of data is advised to reduce bias across all dimensions: gender, race, politics, location and so on.
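As a loose sketch of one thing such an audit might check (the field names, threshold and records below are invented for illustration), a drift check could compare the demographic make-up of the training data against what the system actually sees in production, and flag any gap worth disclosing:

```python
from collections import Counter

def distribution(records, field):
    """Share of each category value for one demographic field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

def drift_report(training, production, fields, threshold=0.10):
    """Flag fields where a category's share differs between the training
    data and the live data by more than `threshold` (e.g. 10 points)."""
    flags = []
    for field in fields:
        train_dist = distribution(training, field)
        prod_dist = distribution(production, field)
        for value in set(train_dist) | set(prod_dist):
            gap = abs(train_dist.get(value, 0.0) - prod_dist.get(value, 0.0))
            if gap > threshold:
                flags.append((field, value, round(gap, 2)))
    return flags

# Invented example: the training set skews 70/30 where live traffic is 50/50.
training = [{"gender": "F", "region": "EU"}] * 70 + [{"gender": "M", "region": "US"}] * 30
production = [{"gender": "F", "region": "EU"}] * 50 + [{"gender": "M", "region": "US"}] * 50

print(drift_report(training, production, ["gender", "region"]))
```

A real audit framework covers far more than distribution drift – model behaviour, documentation and accountability all come into it – but this is the flavour of the disclosure being encouraged.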
Backed by funders including Elon Musk and Microsoft, OpenAI’s GPT-3 API is already used by companies like IBM, Cisco and Intel, and the engine is embedded in solutions like Pencil, which already has over 100 customers including Unilever.
Backed by tech giants and increasingly used by large global brands, GPT-3 was trained on a vast swathe of the internet. It’s powerful. And yet there is already a growing number of ethical concerns about it.
Despite its power and huge internet-scale training dataset, tests have shown it still fails some simple common-sense reasoning problems. It also has a problem of bias. A recent research paper from Stanford found that in 60% of cases GPT-3 described Muslims as violent, and that it wrote about Black people in a negative way.
These problems don’t seem to be isolated to GPT-3 either. A research paper from the Allen Institute for AI found that they applied to nearly every popular AI language model, including Facebook’s RoBERTa software.
The moral dilemma for brands
Clearly, there are many bigger brains working on these ethical problems than mine. But it has led me to ponder some very real ethical dilemmas for us as marketers.
On the one hand, we’re deploying fast-learning machines with the objective to grow our businesses. They do this by testing and learning from the behavioural data they encounter. They’re pragmatic. They’re not programmed to change the world, but to exploit what they find to be true.
On the other hand, we’re driven by the objective of our western sensibilities to change attitudes and behaviours by portraying our brands as inclusive and accepting of diversity.
The dilemma for our governance of AI machines in the service of global brands is how to direct them when these two objectives come into conflict.
Do we instruct them to show diverse imagery and use inclusive language in regions, cultures or segments of our communities where they don’t resonate with the audience or deliver commercial results? Or do we let them pragmatically deliver an ever-evolving hyper-personalised version of the brand with a core purpose, tailored to the sensibilities of whoever it encounters?
I don’t have the answer to these questions. But it’s important that our industry and the practice of AI governance address them now, in these early stages of evolution. Even now, we can see AI engines being embedded and replicated in other systems, and applications being swiftly adopted.
The danger is that any bias or ethical conflicts programmed into the DNA of these early engines may be hard to stamp out as they replicate and grow into interconnected ecosystems.
If we think things are moving quickly now, it’s nothing compared to where the experts predict it will go next. Imagine an AI system designed by another AI system with the objective of being better than itself. Science fiction? It’s already happening. DeepMind developed AlphaGo, an AI-powered machine, to play the game ‘Go’ – thought to be one of the most challenging board games in the world. Not satisfied with beating the world’s top players, the team then used AI to train a successor that beat the original version.