Employees in most industries are using ChatGPT in their day to day work, but they could be putting businesses at risk

generative ai business use
(Image credit: Shutterstock / thanmano)

New research by Indusface shows that ChatGPT is seeing increased usage across industries despite its usage in the workplace being heavily questioned in recent months.

ChatGPT can be a very useful productivity tool, helping to gather, summarize, and simplify information - but there are a number of issues that could land workers in hot water.

Jack of all trades, but master of none

In the rankings, the legal sector came in a close second, with 38% of respondents using ChatGPT in their work. This is followed by the Arts & Media industry at 33%, both the Information & Communication Technology industry and Construction industry ranking at 30%, with Real Estate & Property, Manufacturing and Call Centers & Customer service all seeing around 29% of respondents using ChatGPT in the workplace.

The Healthcare & Medical industry matched the Government & Defence usage at 28%. Among all industries, the most common use of the generative AI was to write up reports (27%), closely followed by using ChatGPT to translate information (25%), with research purposes not fair behind (17%).

Venky Sundar, Founder and President of Indusface points out that there are a number of troubling issues in the usage of ChatGPT within the workplace stating that, “Specific to business documents the risks are: legal clauses have a lot of subjectivity, and it is always better to get these vetted by an expert.

“The second risk is when you share proprietary information into chatGPT and there’s always a risk that this data is available for the general public, and you may lose your IP. So never ask chatGPT for documentation on proprietary documents including product roadmaps, patents and so on.

Sundar also points out that the use of generative AI and large language models (LLM) have shortened development times across industries, allowing an idea to become a product in a very short amount of time.

“The risk though is that proof of concept (POC) should just be used for that purpose. If you go to market with the POC, there could be serious consequences around application security and data privacy. The other risk is with just using LLMs as an input interface for the products and there could be prompt injections and the risk is unknown there.”

Interestingly, over half (55%) of respondents stated that they would not trust working with another business that used ChatGPT or a similar AI in their day to day work.

More from TechRadar Pro

TOPICS
Benedict Collins
Senior Writer, Security

Benedict has been with TechRadar Pro for over two years, and has specialized in writing about cybersecurity, threat intelligence, and B2B security solutions. His coverage explores the critical areas of national security, including state-sponsored threat actors, APT groups, critical infrastructure, and social engineering.

Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the Centre for Security and Intelligence Studies at the University of Buckingham, providing him with a strong academic foundation for his reporting on geopolitics, threat intelligence, and cyber-warfare.

Prior to his postgraduate studies, Benedict earned a BA in Politics with Journalism, providing him with the skills to translate complex political and security issues into comprehensible copy.