New research suggests that HR executives are becoming increasingly concerned about the use of generative AI tools within the workplace. It’s been reported that the Society for Human Resource Management is currently receiving ‘between 30 and 50 calls per week’ from HR executives who are anxious about how they can gain greater control and oversight of how their colleagues are using generative AI chatbots on the job.
Generative AI tools – like ChatGPT and Google’s Bard – are often being used by employees whilst their employers are still developing policies around usage, or – in some cases – regardless of whether the tools have been approved for use at all. HR executives also cite a number of growing concerns and risks, including the production of inaccurate or poor-quality work, privacy issues, data breaches and potential legal exposure.
Technology experts in the field of generative AI say there are guidelines that companies need to consider when training and informing staff on any new policy and process rules, in order to protect their organisations from exposure to risk.
The concern cited by experts in this emerging field is that misuse could lead to the systemic spreading of misinformation, resulting in damage to company reputation. At present, it’s been suggested that – whilst the technology can be useful for smaller tasks – critical consideration should be given before using it in instances where businesses cannot afford to make mistakes.
Recently, Samsung went further – banning generative AI usage for the ‘indefinite future’ – after reports emerged of a breach in which Samsung engineers unintentionally leaked source code via a ChatGPT upload.
The ban – which is reported to extend across internal networks and company-owned devices, including tablets and phones (but not personal use) – could see staff who are caught violating the terms facing disciplinary action, up to and including termination. This reported ban illustrates the anxiety employers have around employees using the technology without a proper understanding of the potential security risks posed.
Our AI treats every employee as an individual and financially guarantees they will learn and retain their workplace training – helping you to improve customer service levels, customer satisfaction and operational KPIs, reduce employee errors, and increase sales revenue.
If you are looking to improve the performance of your workforce, reduce operational risk, and get quantifiable, independent ROI from your training interventions, get in touch to see how it works!
Typical L&D applications include:
Training needs analysis
Evidencing L&D effectiveness
Quantifying policies and processes
Flight-risk tracking