Developing a workplace policy for AI use
The AI landscape continues to evolve rapidly, with organisations embracing this technology and incorporating it into their operations.
While AI's potential to deliver significant benefits, including increased efficiency and cost savings, appeals to businesses, it is crucial that the risks it poses (including intellectual property infringement, privacy and confidentiality breaches, bias and unreliable output) are not ignored. Organisations that overlook these risks face a loss of trust and credibility, and damage to their brand.
Internal policies and procedures
All organisations currently using, or intending to use, public AI tools should implement internal policies and procedures governing their use. Relevant considerations in developing a workplace policy include:
Purpose and scope
Consider what your organisation wants to achieve with the software and where its benefits can be applied (e.g. generating internal messages, summarising lengthy but non-sensitive documents, drafting social media posts).
Identifying risks
Outline the specific and general risks of using AI within your organisation. These will likely include the considerations outlined in this series, but may also extend to reputational risk and other industry-specific concerns.
Risk categorisation and policy
Based on both the intended purpose and the identified risks, your organisation may wish to structure its policy around different types of usage, categorised by risk, for example:
- High risk: Prohibited usage
- Medium risk: Permitted with prior organisation approval
- Low or no risk: Generally permitted without prior approval
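For organisations that want to operationalise such a categorisation (for instance in an internal approval tool), the tiers above could be sketched as follows. This is a minimal, hypothetical illustration: the use cases, tier assignments and function names are assumptions for demonstration, not part of any real policy.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers mirroring the example categorisation above."""
    HIGH = "high"      # prohibited usage
    MEDIUM = "medium"  # permitted with prior organisation approval
    LOW = "low"        # generally permitted without prior approval


# Hypothetical mapping of use cases to tiers; real assignments would
# follow each organisation's own risk assessment.
USE_CASE_TIERS = {
    "drafting advice containing client data": RiskTier.HIGH,
    "summarising lengthy internal documents": RiskTier.MEDIUM,
    "generating social media post ideas": RiskTier.LOW,
}


def usage_decision(use_case: str, has_approval: bool = False) -> str:
    """Return the policy outcome for a proposed AI use case."""
    # Unlisted use cases default to the most restrictive tier.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    if tier is RiskTier.HIGH:
        return "prohibited"
    if tier is RiskTier.MEDIUM:
        return "permitted" if has_approval else "approval required"
    return "permitted"
```

Defaulting unknown use cases to the highest risk tier reflects the risk-averse stance discussed below, where anything not expressly permitted requires sign-off.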
Alternatively, organisations may prefer the risk-averse approach of a blanket prohibition on the use of AI in all contexts unless prior consent is given by a designated senior IT or management individual. This may involve implementing access controls that restrict the software to authorised individuals within the organisation's network.
Consequences for breach
The policy should clearly set out the consequences for employees who breach the organisation's AI policy. Any sanction should be proportionate to the seriousness of the breach and its potential consequences for the organisation.
Before taking any action against an employee, the organisation will need to ensure that the employee has received proper training and is first given a reasonable 'grace period' before any disciplinary action is taken to enforce the policy. Generally, employers should treat breaches as a training issue unless the employee has knowingly, wilfully or repeatedly breached the policy despite reasonable training.
Training
Employees should be trained in the acceptable uses of AI and made aware of associated risks (e.g. disclosure of personal/confidential information, reliance on output without first cross-referencing it) and how to avoid or minimise those risks.
Regular reviews
As the AI landscape is evolving at a rapid pace, organisations should regularly review their approach to AI, and, if necessary, amend and update existing policies and training to reflect new developments in this area.
If you have concerns about the legal risks for you or your organisation when using AI for business, get in touch with one of our experts for advice.