05 Apr 2023

ChatGPT: Use and misuse within the workplace


If you had any doubts about the disruptive nature of generative artificial intelligence, the arrival of ChatGPT should have well and truly put them to bed. The language processing tool has been a runaway hit since its launch by OpenAI last November, and has sparked conversations about how businesses and institutions could be impacted by its increased use.

The premise of ChatGPT is as simple as it is extraordinary. Trained on a massive amount of text data, it is capable of understanding and generating human-like prose. It answers questions and can assist with tasks such as composing essays, job applications or letters.

ChatGPT raises numerous issues from a data protection, infringement and copyright, confidentiality, accuracy and bias perspective. In this short alert we highlight some of the key considerations for employers.

Key considerations for employers

  • Are your employees using ChatGPT for their work? If so, have you set out guidelines or a policy about its use?

  • If you have already introduced guidelines or intend to do so, will you also roll out training? There is little point in policies sitting on the shelf gathering dust; employees need to be actively trained on them and understand how they work in practice.

  • Have you considered banning the use of ChatGPT for certain roles or types of work product? Putting aside the legal implications of using the tool to generate, for example, a speech or article that an employee intends to pass off as their own, it is worth bearing in mind that there is a real risk that the same or similar content could be generated for another user. At the very least, that could be embarrassing and cause reputational damage.

  • Will you establish a process for employees to report any concerns or issues related to the use of ChatGPT?

  • Do you need to consider adapting your performance processes or targets for those employees who use ChatGPT in their roles?

  • Do any of your suppliers use ChatGPT or similar technologies? If so, is your data being fed into their systems?

  • Have your recruitment policies or practices – or those of your suppliers – been adjusted to take into account the use of ChatGPT, and the risk of bias, in recruitment or training exercises?

The sooner organisations grapple with these questions, the better placed they will be to take a proactive rather than reactive stance to issues that might arise. As with any new joiner, we would at least expect organisations to implement some form of probationary checks and balances on ChatGPT's performance before embedding it into the business.

If you have any questions on the use of ChatGPT in your workplace or you would like assistance in putting together staff guidelines or policies, please contact Anne Pritam, Leanne Raven or your usual Stephenson Harwood contact.

This alert is based on our more detailed insight which is available to read here.



Leanne Raven
Senior knowledge lawyer

T: +44 20 7809 2560 M: +44 7827 353 108 Office: London