A Veritas-commissioned study reveals that nearly a quarter of employees think coworkers who use generative AI should have their pay docked, while nearly half think those same coworkers should be required to teach the rest of their team how to use it. One thing is clear: mitigating the risks of generative AI takes organization-wide guidelines and governance.
With AI advancing faster than most organizations can keep pace with, it is essential to partner with a trusted provider to mitigate the risks associated with generative AI and other emerging technologies, said Andy Ng, VP and MD for the Asia South and Pacific region at Veritas Technologies.
He observed that the lack of generative AI guidelines is putting organizations at risk. “In Singapore, more than a third (36%) of office workers have admitted to inputting potentially sensitive information like customer details, employee information and company financials into generative AI tools.”
He attributed this in part to the more than half (53%) of workers who fail to recognize that doing so could leak sensitive information publicly and put their organizations in breach of data privacy regulations.
The new research commissioned by Veritas further reveals that confusion over generative AI in the workplace is creating a divide between employees while also increasing the risk of exposing sensitive information.
Organizational-level guardrails and enforcement
In Singapore, fewer than half of employers currently provide any mandatory direction on generative AI use, even though most employees consider guidelines and policies on its use important. As a result, more than 80% of employees in Singapore say they want guidelines, policies, and training from their employers on using generative AI within their organizations.
While AI presents numerous benefits, from higher productivity to automation, employee concerns about data security and risk remain relevant and critical to daily operations.
As a starting point, a Chief Information Security Officer (CISO) can help the organization understand what data employees are inputting into AI tools, how secure those tools are, and how the company should proceed with their further use. From that analysis, management will have a better understanding of the risks AI tools present and how to mitigate those security concerns. Organizations can also engage external consultants to audit and assess their AI tools for the same purpose.
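To make that review concrete, here is a minimal sketch in Python of the kind of pre-submission check a security team might prototype. It flags a few hypothetical categories of sensitive data (email addresses, card-like numbers, NRIC-style identifiers) before a prompt reaches a generative AI tool; the pattern set and function names are illustrative assumptions, not part of any Veritas product.

```python
import re

# Illustrative sketch only, not a Veritas feature: a pre-submission filter
# that flags potentially sensitive strings before a prompt is sent to a
# generative AI tool. These patterns are hypothetical examples; a real
# deployment would use the organization's own data classification rules.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "sg_nric_like": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC shape
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Customer S1234567A emailed jane.doe@example.com")
    if findings:
        print("Blocked: prompt contains", ", ".join(findings))
    else:
        print("Prompt cleared for submission")
```

Even a crude filter like this gives a CISO a record of which categories of data staff are attempting to paste into AI tools, which is exactly the visibility the risk analysis above depends on.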
Overall, senior management needs to act quickly. AI tools are here to stay, and leaders who fail to plan are planning to fail: without proper guidelines on the use of generative AI, organizations could face regulatory compliance violations.
Andy said, “Generative AI is here to stay. We know that its applications will be highly data-intensive, creating the need for enterprises to manage it efficiently and responsibly. As a solution provider, Veritas understands that data governance in the AI age should be a priority, and not an afterthought.”
Veritas has solutions that can help protect an organization’s critical assets, namely its data and IT infrastructure, and ensure that every part of its IT environment is backed up to immutable storage, according to Andy.
With advanced threat detection capabilities and total visibility across the IT landscape, Veritas helps ensure an organization complies with regulatory requirements and can recover quickly from any disruption.
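For context on what "immutable storage" means in practice, and independent of Veritas' own tooling, here is a minimal sketch using AWS S3 Object Lock with a hypothetical bucket and file name: once written in compliance mode, the backup object cannot be overwritten or deleted until its retention date passes.

```python
import datetime
import boto3

# Generic illustration of immutable backup storage (not a Veritas API):
# write a backup object to an S3 bucket that was created with Object Lock
# enabled. In COMPLIANCE mode the object cannot be overwritten or deleted
# by any user until the retention date, which blunts ransomware that
# targets backup copies.
s3 = boto3.client("s3")

retain_until = (datetime.datetime.now(datetime.timezone.utc)
                + datetime.timedelta(days=30))

with open("nightly-backup.tar.gz", "rb") as backup:  # hypothetical backup file
    s3.put_object(
        Bucket="example-immutable-backups",  # hypothetical bucket, Object Lock enabled
        Key="backups/nightly-backup.tar.gz",
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```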
Employees as the first line of defense
Business leaders play a crucial role in AI governance and should take control of the generative AI agenda before their employees are caught in a Wild West situation. This starts with setting the overall direction for the organization's policies and taking responsibility for enforcing them across teams through relevant training and clear guidance.
Depending on the complexity of the regulations, legal and IT teams with expertise in data privacy laws and best practices play a crucial role in drafting and refining the guidelines, while compliance teams are responsible for implementing and enforcing them. But AI is a team game: different functions such as marketing, HR, and customer service should be on board to ensure there is consensus around the adoption of generative AI, with consistent messages communicated to every employee.
Ultimately, as a first line of defense, all employees have a responsibility to be aware of and comply with the organization’s policies. AI policies are ineffective if employees aren’t aware of them or don’t understand them.
Veritas’ message is clear: thoughtfully developed and well-communicated guidelines on the appropriate use of generative AI, combined with the right data compliance and governance toolset, are essential for businesses looking to stay ahead.