ChatGPT Fever Spreads to US Workplaces, Firms Raise Concerns Over Intellectual Property Leaks

Many workers across the U.S. are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll has found, even as employers such as Microsoft and Google limit its use over concerns about data leaks. Companies around the world are weighing how best to use ChatGPT, a chatbot program that uses generative artificial intelligence to hold conversations with users and answer countless prompts. Security firms and corporations, however, worry that such use could lead to leaks of intellectual property and strategic information.

Workers have anecdotally reported using ChatGPT to help with everyday tasks, including drafting emails, summarizing documents, and conducting preliminary research.

Some 28 percent of respondents to an online survey on artificial intelligence (AI), conducted between July 11 and 17, said they regularly use ChatGPT at work, while only 22 percent said their employer explicitly allows such external tools.

The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval (a measure of precision) of about 2 percentage points.

About 10 percent of respondents said their employer explicitly forbids external AI tools, while about 25 percent did not know whether their company allows the technology.

Since its launch in November, ChatGPT has become the fastest-growing app in history. Its rise has generated both excitement and alarm, bringing its developer, OpenAI, into conflict with regulators, particularly in Europe, where the company’s mass data collection has drawn criticism from privacy watchdogs.

Human reviewers at other companies may read any chats that users generate, and researchers have found that similar AI models can reproduce data they absorbed during training, posing a potential risk to proprietary information.

“People are using generative AI services without understanding how the data is being used,” said Ben King, vice president of customer trust at enterprise security firm Okta.

“For businesses this is critical, because users don’t have a contract with many AIs, since they are free services, so businesses won’t have put the risk through their usual assessment process,” King said.

OpenAI declined to comment when asked about the implications of individual employees using ChatGPT, but highlighted a recent company blog post assuring corporate partners that their data would not be used to train the chatbot further unless they explicitly gave permission.

When people use Google’s Bard, it collects data such as text, location and other usage information. The company allows users to delete past activity from their accounts and to request deletion of content entered into the AI. Alphabet’s Google declined to comment when asked for more details.

Microsoft did not immediately respond to a request for comment.

“Harmless Tasks”

A U.S. employee at Tinder said employees of the dating app use ChatGPT to perform “harmless tasks” like writing emails, even though the company doesn’t officially allow it.

“It’s regular emails. Very non-consequential, like making funny calendar invites for team events, sending farewell emails when someone leaves … We also use it for general research,” said the employee, who declined to be named because they were not authorized to speak to reporters.

Tinder has a “no ChatGPT” rule, but employees still use it in a “generic way that doesn’t give away any of our information about Tinder,” the employee said.

Reuters could not independently confirm how Tinder employees used ChatGPT. Tinder said it “provides employees with regular guidance on best security and data practices.”

In May, Samsung Electronics banned ChatGPT and similar artificial intelligence tools for its global workforce after it discovered that an employee had uploaded sensitive code to the platform.

“We are reviewing measures to create a safe environment for the use of generative artificial intelligence to improve employee productivity and efficiency,” Samsung said in a statement on Aug. 3.

“However, until these measures are in place, we will temporarily restrict the use of generative artificial intelligence through company devices.”

Reuters reported in June that Alphabet had warned employees about how to use chatbots, including Google’s Bard, as it rolled out the program globally.

Google said that while Bard can make unwanted code suggestions, it still helps programmers. The company also said it aims to be transparent about the limitations of its technology.

“Blanket Ban”

Several companies told Reuters they are embracing ChatGPT and similar platforms with security in mind.

“We’ve begun testing and learning how artificial intelligence can improve operational efficiency,” said a Coca-Cola spokesperson in Atlanta, Georgia, adding that data is kept inside the company’s firewall.

“Internally, we recently launched an enterprise version of Coca-Cola ChatGPT to improve productivity,” the spokesperson said, adding that the company plans to use AI to make its teams more effective and efficient.

Meanwhile, Tate & Lyle Chief Financial Officer Dawn Allen told Reuters the global ingredients maker is trialling ChatGPT and has “found a way to use it safely”.

“We have different teams go through a series of experiments to decide how to use it. Should we use it in investor relations? Should we use it in knowledge management? How can we use it to execute tasks more effectively?”

Some employees said they could not access the platform on company computers at all.

“It’s completely banned on the office network; it just doesn’t work,” said a P&G employee who wished to remain anonymous because they were not authorized to speak to the media.

Procter & Gamble declined to comment. Reuters could not independently confirm whether P&G employees were unable to use ChatGPT.

Companies are right to be vigilant, said Paul Lewis, chief information security officer at cybersecurity firm Nominet.

“Everyone gets the benefit of that increased capability, but the information is not completely secure and it can be engineered out,” he said, citing “malicious prompts” that can be used to get AI chatbots to disclose information.

“There’s no need for a blanket ban just yet, but we need to proceed with caution,” Lewis said.

© Thomson Reuters 2023

