Artificial intelligence platforms like ChatGPT have evolved from the latest gadgets to play with into a vital part of our work lives and, for a lot of people, digital pals.

According to Definition's research, around one in five workers in the UK talk to AI like a friend, looking for guidance on personal and professional problems. Our data shows that engaging with AI in this way can leave us feeling heard and less isolated. But with this newfound connection, many of us share sensitive, sometimes highly confidential information, and over a third of people don't realise that platforms like ChatGPT and Google Gemini may not be very good at keeping our secrets... secret.

For businesses, the implications are worrying. Take the free, well-publicised version of Microsoft Copilot, for example. It grants Microsoft broad rights to the data any user inputs or receives as output: rights to use that data in any way it sees fit, even to share it with third parties. This means that sensitive business information, from HR details to financial documents, could potentially be exposed to the world. Employers are taking note and taking action. According to our research, 25% have decided to either ban AI outright or regulate its use within their organisations. But even with these policies in place, some employees choose to break the rules. They have their reasons: around 63% report that using AI increases their productivity, and some even feel AI offers more help than their human colleagues.

The situation presents a delicate balance between leveraging AI for its productivity gains and risking the exposure of confidential data. Employers need to manage AI tools with the same level of care as any other form of data sharing or storage. There's also a knowledge gap to be addressed: 40% of individuals surveyed are unsure who retains ownership of the content produced by AI. By instilling best practices in AI engagement and creating policies that evolve with the technology, businesses can shape the AI landscape for the better.

With AI's potential to simplify our professional lives, do the admin and enhance the work experience, it's up to us to navigate its use cautiously so it supports us without compromising our privacy. There's no need to back away from progress, as long as we're equipped with the knowledge and tools to make sure AI remains a friend. And this is where the challenge lies. There are so many products, and versions of those products, promoted with huge advertising and marketing budgets that it's easy to fall victim to the hype. But there are safer, better-value ways of equipping a workforce with the latest AI, and companies are increasingly taking advantage of them. By building a culture of digital responsibility within our businesses, we can create a future where AI helps us without spilling the beans.

For further information, visit Definition.