The BBC and Accenture are amongst them, with the latter announcing a collaboration with SAP to help other businesses adopt the tech. PwC has also recently announced the launch of a private generative AI tool for employees, aptly named ChatPwC.
It’s understandable that companies want to be amongst the first to adopt new tech and reap its potential rewards. But the harsh reality is that adopting generative AI without certain guardrails in place leaves users open to inaccurate results, and poses a serious risk to data security.
In order for companies to unlock the potential of generative AI whilst avoiding these risks, business leaders must be careful about where and how models are applied.
Firstly, look for ways to use large language models securely
First and foremost, our advice is to avoid public large language models (LLMs), such as ChatGPT, altogether. Their benefits are currently far outweighed by the risks that come with feeding public LLMs precious company data. This includes potential loss of IP, as well as privacy breaches that break promises to customers, employees and partners.
Instead, look for existing models that can be brought in-house and used privately. This ensures that company data stays in company hands, minimising security risks. Llama 2 and Falcon are both examples of LLMs that can be downloaded and run securely. Alternatively, options like Azure OpenAI offer a halfway house, where data remains within the company’s Microsoft tenancy.
There is no shortage of generative AI applications that can be deployed safely and securely.
Using generative AI for customer service
How many times have you visited a company’s website to suddenly be greeted by a chatbot pop-up? With the rise of generative AI, these chatbots can be made more sophisticated to streamline the customer experience. Rather than have customers trek through your website searching for answers, visitors simply give details of their problem to a chatbot powered by generative AI. The model can then instantly surface the information needed by individuals, present it in natural language, and engage with users in complex dialogue.
This method also helps customers get accurate answers more quickly than they would by phoning a customer support line. According to data from Microsoft, call centre wait times can leave UK customers waiting up to 85 minutes – and that’s not including the time taken to fix their problem. Sophisticated, generative AI-powered chatbots can help slash these wait times.
The key to safely integrating customer support AI is to tightly control the information which models have access to. A set of customer FAQs or how-to articles should be all that a generative AI model needs to safely and accurately answer straightforward customer queries in natural language via a chatbot.
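This “tightly control the context” approach can be sketched in a few lines. The FAQ entries, scoring method and prompt wording below are illustrative assumptions, not a production design – the point is that only pre-approved content ever reaches the model:

```python
# Minimal sketch: restrict a support chatbot to a curated FAQ set,
# so the model can only answer from approved company content.
# FAQ entries and the keyword-overlap scoring are invented examples.

FAQS = {
    "How do I reset my password?":
        "Go to Settings > Security and choose 'Reset password'.",
    "How do I cancel my subscription?":
        "Open Billing and select 'Cancel plan'; access ends at period close.",
}

def retrieve_faq(query: str) -> tuple[str, str]:
    """Return the FAQ (question, answer) pair with the most word overlap."""
    q_words = set(query.lower().split())
    def overlap(item):
        return len(q_words & set(item[0].lower().split()))
    return max(FAQS.items(), key=overlap)

def build_prompt(query: str) -> str:
    """Ground the LLM: only the retrieved FAQ is passed as context."""
    question, answer = retrieve_faq(query)
    return (
        "Answer using ONLY the context below. If the context does not "
        "cover the question, say you don't know.\n"
        f"Context: Q: {question} A: {answer}\n"
        f"Customer: {query}"
    )

print(build_prompt("I need to reset my password"))
```

A real deployment would swap the keyword match for proper semantic retrieval, but the safety property is the same: the prompt is built exclusively from vetted support content.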
Using generative AI for sales support
There’s also a strong use-case for generative AI when it comes to supporting the sales process. This could include drafting bid and framework responses, by combining case studies and successful examples of previous bids to generate quality responses within tight word limits. As long as humans review the text generated by models for accuracy, generative AI can improve outcomes and increase efficiency, by automating what can otherwise be an incredibly lengthy process.
Again, it’s best to use secure, private versions of generative AI to protect your IP, and limit the information which models have access to for the most accurate, relevant responses.
Using generative AI to unlock company data and insights
On average, only 1% of organisations’ data can be seen by employees. And Gartner research shows that almost half (44%) of employees have made a wrong decision because they were unaware of information that could have helped. Generative AI can fix this, by instantly connecting employees with the information and answers they need.

But first, businesses must audit their data to understand what information they hold, where it lives, and who has access to it. This way, when generative AI is introduced, companies can ensure that the right people get the right information – no more and no less than individuals need and are authorised to view. This process also ensures that the answers and content produced by generative AI are transparent and can be explained. When users know exactly what information generative AI has been fed, they can easily reference and cross-check source information for accuracy.
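The “right people, right information” principle comes down to enforcing existing entitlements before any content reaches the model. Here is a hedged sketch of that filtering step – the document catalogue, group names and access model are all invented for illustration:

```python
# Illustrative sketch: filter documents against a user's entitlements
# *before* anything is passed to a generative model as context.
# The catalogue, groups and titles below are invented examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    title: str
    groups: frozenset  # groups permitted to read this document

CATALOGUE = [
    Document("2024 salary bands", frozenset({"hr"})),
    Document("Expenses policy", frozenset({"hr", "all-staff"})),
    Document("Q3 board minutes", frozenset({"exec"})),
]

def visible_documents(user_groups: set) -> list:
    """Only documents the user is already authorised to see are
    eligible to be fed to the model as context."""
    return [d for d in CATALOGUE if d.groups & user_groups]

# A regular member of staff only ever has the expenses policy
# indexed for their AI queries; salary bands never enter the prompt.
print([d.title for d in visible_documents({"all-staff"})])
```

Because filtering happens upstream of the model, the AI can never leak a document the user could not already open – and every answer can be traced back to a source the user is cleared to read.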
For teams who are able to instantly access joined-up information, there are endless business opportunities to spot and efficiencies to unlock – all of which can be supercharged by smart applications of generative AI. At Aiimi, this is how we empower teams to reap the rewards of AI-powered data insights tools safely and securely, to obtain the most accurate results, and de-risk wider operations at the same time.
Don’t have your head turned exclusively by generative AI
Whilst generative AI is taking centre-stage right now, many other forms of AI can be just as useful, and are often safer, cheaper, and more accurate for certain use cases. Take extractive AI, for example. Its main benefit is that it is designed to pull exact information from a specific, limited source. This avoids a key failure mode of generative AI, where company data is lifted out of context and the model risks producing irrelevant answers and hard-to-spot inaccuracies.
Responses generated by extractive AI, on the other hand, can be easily cross-referenced against the source information and reviewed by users for accuracy. Extractive AI also avoids the lengthy natural language responses produced by generative AI, meaning that results are often easier to consume and make for a better user experience.
AI-powered HR assistants can use extractive AI to great effect. For example, HR bots which live on Slack and cite company handbooks can instantly answer employees’ questions about specific company policies. This can save HR teams time, whilst also ensuring confidentiality for staff whose questions are of a sensitive nature.
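The extractive approach can be sketched simply: rather than generating prose, the bot returns a verbatim sentence from the handbook together with its source, so the answer is trivially verifiable. The handbook sections and the keyword scoring below are invented examples, not a real policy set:

```python
# Hypothetical sketch of extractive QA: answer with an exact sentence
# lifted from the handbook, cited, rather than generated text.
# Handbook content and the overlap scoring are invented examples.

HANDBOOK = {
    "Annual leave": "Employees receive 25 days of annual leave plus bank holidays.",
    "Remote work": "Staff may work remotely up to three days per week.",
}

def extract_answer(question: str) -> str:
    """Return the best-matching handbook sentence, verbatim, with its source."""
    q_words = set(question.lower().split())
    section, sentence = max(
        HANDBOOK.items(),
        key=lambda kv: len(q_words & set((kv[0] + " " + kv[1]).lower().split())),
    )
    # The answer IS the source sentence, so checking it is a one-step lookup.
    return f'"{sentence}" (source: {section})'

print(extract_answer("How many days of annual leave do I get?"))
```

Because the response is a quoted span with a citation, there is nothing for the model to hallucinate – anyone can open the named section and confirm the sentence is really there.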
The bottom line?
Businesses thinking about using generative AI should take a measured approach. Consider the safest versions of the technology, as well as best practice when it comes to using models securely and governing the data which feeds AI use cases. Businesses that can do this stand to reap the many rewards which generative AI and extractive AI have to offer.