From OpenAI’s revolutionary language model ChatGPT to the creative tools of Adobe Firefly, developments in AI – especially generative AI – dominate the tech scene, and there is no sign of them abating. In fact, according to the FutureBrand 2023 Index – a major survey of CEOs, Chairs, MDs, Senior Civil Servants and Managers examining the brand perception of some of the world’s most recognisable companies – AI is now considered the most important determinant of business success, today and for the foreseeable future.
Yet with declining public trust in corporations identified as the second biggest threat to business success, after lagging tech adoption, AI’s capabilities remain both revered and feared, maligned and misunderstood. If businesses cannot afford to weather the future without it, how can companies and organisations successfully adopt AI when it has become the very thing that the public at large mistrusts?
AI is a huge governance challenge and an ESG issue. But it also presents the ultimate opportunity for organisations to demonstrate that they can be trusted. From stealing our jobs to manufacturing fake news, AI’s reputation has been so smeared by negativity and Doomsday-esque projections that the good it could do to enhance societies and human life has become harder and harder to see.
AI has become such a catalyst for our most primitive fears that it has practically elicited a freeze response amongst mainstream businesses. “We must get on top of AI” is a common boardroom refrain. In this state of stasis, neither the global business community nor governments have been able to fully frame and adopt the technology, and so bring to it a sense of order, regulation and trust, limiting the scope for meaningful change. We worry instead that AI will debase us rather than optimise us, or worse, destroy us rather than save us. The result is inaction – and inaction halts our ability to innovate our way out of some of the biggest social challenges that we, as a species, face. Still, AI is here to stay, and these polarising tensions must be reconciled if corporates are to prosper and become future-proofed against new challenges and, as yet, unrealised threats.
Sci-fi depicts new technologies like AI as cold and calculating entities, and in one sense that is true: AI does not have feelings; it concerns itself only with data. A more accurate, if crudely reductionist, view is that AI is simply a statistical prediction of a result. Yet the positive implications of its ability to identify and predict patterns in data – concerning, say, road safety at a dangerous junction, or disease management to improve a prognosis – are clearly transformative. In terms of human longevity and quality of life, interconnected data, in the right hands, is an incredibly powerful resource for medical research, service planning and personalised care provision – never mind its potential to ease the pressure on healthcare professionals and systems, where underfunding and excess demand have led to huge labour attrition and premature deaths.
Businesses in all spheres, not just ‘pure’ tech firms, will increasingly become more artificially intelligent organisations: reducing administrative burdens, transforming corporate training, improving productivity, tackling the mundane and banal, and freeing humans to do what machines cannot – lead and make decisions with empathy.
So, what exactly are we afraid of? Perhaps on an existential level, we worry about becoming defunct. AI’s capability to calculate and analyse indefatigably and at speed is something that even the most adept human brains cannot compete with. Nor should we. On an ethical level, we might worry about being deceived or spied on. On a human level, we might fear a loss of meaningful, one-to-one exchanges, particularly in areas where assurances are especially sought – such as consulting with a doctor over a worrying health matter. And, on an idealistic level, perhaps we worry that innovators could get so carried away in the excitement of their own technology that ethics are forgotten entirely. When the creators of something like AI simultaneously note its threat to humanity, it is easy to see why the rest of us might worry.
Anxieties about losing our jobs have existed ever since the advent of machine automation and the Industrial Revolution; the dawn of AI is no exception. We fear what we do not know. Whilst there were indeed disruptions to sectors like farming and manufacturing, which once relied exclusively on physical labour and basic tools, new technology eventually optimised workers rather than disempowering them. The mobile phone industry alone supported 28 million jobs last year; equally, as AI evolves and becomes more embedded in organisations and services, just as many jobs could be created.
We can now start to see why organisations – whether in healthcare or accounting, retail or automotive – need to adopt and adapt to AI if they are to retain their competitive edge and why brands that are pioneering this technology index well against the markers for resilience and indispensability. So how can brands reconcile this sticky issue of trust?
Brand is of the essence. Companies must be extra mindful of how they behave, internally and externally. There is a huge job to be done in positioning AI as an ultimately benign force that can be life-enhancing, not life-destroying. But like any formidable force, it demands both healthy fear and respect – for the good it can do and the destruction it can cause. Transparency is key. Corporations must take a clear, unwavering stand on how consumer data is treated and how privacy is respected, and create new value points that they live and die by.
In the current absence of government legislation on how AI should be used, the onus falls on trusted, iconic companies – like Apple, Samsung and Microsoft, three of the most positively perceived global technology brands – to take the lead in creating a charter that sets the gold standard for governance. As architects of this nascent technology, they must work in concert with governments to establish the framework for working ethically with AI. The threats AI poses will not be solved by regulators or governments alone, but by the very tech companies that conceived it. This is a huge, exciting opportunity.
Our relationship with AI today feels a little like the end of the beginning. The hype was real and we are now split between doom-mongers and evangelists. This is a golden moment for brands and businesses to bring order and vision to AI; for businesses to win trust. How they approach this challenge will determine how successful they will be in a turbulent and unsettled future.