Since its release on 30 November 2022, ChatGPT has gained over 100 million users, likely making it the fastest-growing consumer application to date. ChatGPT promises many benefits to its users, including increased efficiency and accuracy in carrying out tasks. As Artificial Intelligence (AI) like ChatGPT continues to transform workplaces, Small and Medium Enterprises (SMEs) need to be mindful of the potential risks that its use may bring.
This article explores the risks associated with AI in the workplace and provides practical tips for SMEs to mitigate these risks and navigate implementing AI in the workplace safely.
Legal risks of using AI in the workplace
ChatGPT has enormous potential to assist in rapidly completing tasks that would otherwise be time-consuming and expensive. It can be used to summarise documents, perform basic drafting (e.g. emails, letters, minutes of meetings), and carry out basic research. Soon, ChatGPT may become specialised enough to respond to questions in areas like law, medicine, or finance.
What are the risks of using ChatGPT?
Before an SME allows the use of AI in its business, it is vital to understand the risks of doing so, including breaches of client confidentiality, potential liability, loss of client trust, and reputational damage. Recognising these risks can enable SMEs to develop a proactive approach to managing the use of AI.
Businesses are responding to the rise of AI in different ways. Several large companies have restricted the use of ChatGPT by employees. Other companies have experienced data leaks. For example, Samsung employees recently unwittingly leaked top-secret information including internal meeting notes while using ChatGPT to help them fix issues with source code.
Leaks of sensitive client information can have a significantly detrimental effect on a business. The consequences can include:
- Breaches of client confidentiality, which erode trust between an SME and its clients, damage the reputation of the SME and can potentially lead to legal consequences.
- Legal consequences for negligence, breach of privacy, or breach of contract. Regulatory authorities could investigate a data breach and an SME's compliance with data protection law, which may harm its reputation and have financial consequences.
- A loss of trust from existing and potential clients. Clients may choose to terminate their relationships, seek compensation, or share their negative experiences.
- Significant reputational damage. This can make it challenging to retain existing clients or attract new ones.
Some practical tips for SMEs
By implementing proper safeguards, SMEs can use AI more safely and leverage its benefits while protecting client relationships and complying with legal obligations.
SMEs should have a comprehensive understanding of their legal obligations regarding client confidentiality and data protection. They should seek advice from a reputable law firm and be familiar with applicable laws, regulations, industry standards, and professional codes of conduct. They should also obtain informed consent from clients regarding the use of AI in handling sensitive information. Such transparency allows clients to make informed decisions about their information.
SMEs should also continuously monitor and assess the performance of any AI they use. This will help to maintain the effectiveness and compliance of AI systems. Investing in training programs that educate employees on the responsible and secure use of AI is also effective.
They should also ask employees to report any issues or concerns they encounter while using AI, and should write and enforce clear policies and documentation governing its use. These should be regularly updated and communicated to all employees.
The legal risks associated with using AI to handle sensitive client information are significant, but with careful planning and implementation, SMEs can mitigate them.