New research suggests that HR executives are increasingly concerned about the use of generative AI tools in the workplace. The Society for Human Resource Management is reportedly receiving ‘between 30 and 50 calls per week’ from HR executives anxious about how they can gain greater control and oversight of how their colleagues are using generative AI chatbots on the job.

Generative AI tools – like ChatGPT and Google’s Bard – are often used by employees whilst their employers are still developing usage policies or – in some cases – regardless of whether the tools have been approved for use. HR executives also cite a number of growing concerns and risks, including the production of inaccurate or poor-quality work, privacy issues, data breaches and potential legal issues.

Technology experts in the field of generative AI say there are guidelines companies need to consider when training and informing staff on any new policy and process rules, in order to protect their organisations from exposure to risk.

Three measures to protect organisations from exposure to risk include:
1. Train your employees to ensure they inspect outputs from chatbots…
One major flaw that has been identified with ChatGPT and similar generative AI tools is that outputs aren’t always accurate and can be fabricated – often referred to as ‘AI hallucinations’. In most instances, this happens when a system has limited training data on a particular subject and generates the statistically closest information it has in order to produce an answer.

The concern cited by experts in this emerging field is that this could lead to the systematic spreading of misinformation, resulting in damage to company reputation. At present, it’s been suggested that – whilst the technology can be useful for smaller tasks – careful consideration should be given before using it in instances where businesses cannot afford to make mistakes.

2. Train employees on privacy, data security and proprietary information usage…
Even Google has warned its employees about the risks of its own chatbot, Bard. Recent reports suggest that Google’s parent company – Alphabet – has told employees to avoid entering confidential information into chatbots and alerted them to the associated risks.

Recently, Samsung went further – banning generative AI usage for the ‘indefinite future’ after reports emerged of a breach in which Samsung engineers unintentionally leaked source code via a ChatGPT upload.

The ban – which is reported to extend across internal networks and company-owned devices, including tablets and phones (but not personal use) – could see staff who are caught violating the terms facing disciplinary action, up to and including termination. This reported ban illustrates the anxiety employers feel about employees using the technology without proper understanding or consideration of the potential security risks.

3. Train employees to ensure your policies and processes are learned and retained…
To combat the risk of employees failing to understand and retain critical policy and process information, HR and L&D leaders should consider adopting an evidence-based approach. With the myriad of significant risks that come with generative AI usage, employees need to know what your policies and processes are, and what to do in the event of a breach.

Not all Artificial Intelligence is the same…
Elephants Don’t Forget uses multi-award-winning Artificial Intelligence (AI) called Clever Nelly to continually assess and improve the knowledge and competence of your people.

Our AI treats every employee as an individual and financially guarantees they will learn and retain their workplace training – helping you to improve customer service levels, customer satisfaction and operational KPIs, reduce employee errors, and increase sales revenue.

If you are looking to improve the performance of your workforce, reduce operational risk, and get quantifiable, independent ROI from your training interventions, get in touch to see how it works!

Apprehensive about Artificial Intelligence? Click here to learn more about the benefits →

Typical L&D use applications include:

Knowledge retention
Employee wellbeing
Training needs analysis
Evidence L&D effectiveness
Policies & processes quantification
QA performance
Flight risk tracking
Regulatory compliance
Sales enablement
Complaint identification

You may be interested in…
How Microsoft use AI to improve employee knowledge, capability and KPIs
This use case examines how Microsoft use Clever Nelly to evidence the cause and effect of Learning & Development (L&D) interventions on the bottom line.

Join the herd

Request a time to discover how Elephants Don't Forget can help transform your business today.