ChatGPT poses a risk to organizations' data

by Carlos Rodrigues, Vice President Latam at Varonis

ChatGPT, along with other generative artificial intelligence tools, has attracted enormous numbers of users. The platform reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer app in history.

Recently, OpenAI admitted that a bug leaked data belonging to 1.2% of ChatGPT Plus subscribers. Over a nine-hour period, users could see or receive other subscribers' data: first and last name, email address, billing address, and the last four digits of a credit card number. In some cases, the data was visible only if both users were logged in at the same time. The company has not disclosed the total number of accounts whose personal information was exposed.

Beyond that incident, the use of this technology by hackers and other malicious actors, as well as the sharing of confidential information with it, can pose a serious risk to data security.

Employees and executives may be feeding sensitive business data and privacy-protected information into these large language models (LLMs), raising concerns that the systems are incorporating such data into their training sets. If so, that information could eventually resurface in response to other users' searches and requests.

For example, an executive pastes the company’s 2023 strategy document into ChatGPT and asks it to create a PowerPoint presentation. If a third party later asks, “What are [company name]’s strategic priorities for this year?”, the chatbot could answer based on the information the executive provided.
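To make the mechanism concrete, here is a minimal sketch in Python of what that paste amounts to, assuming the official openai package (v1 or later) and a hypothetical strategy.txt file standing in for the strategy document: everything in the prompt is shipped to a third-party service, where its retention and use depend on the provider’s policies, not the sender’s.

```python
# Minimal sketch: pasting a document into ChatGPT is, in effect, an API
# call that ships the full text to a third-party service.
# Assumes the official `openai` Python package (v1+) and a hypothetical
# strategy.txt file standing in for the confidential document.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("strategy.txt") as f:
    strategy = f.read()  # the confidential document, now in memory

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": f"Turn this into a PowerPoint outline:\n\n{strategy}"},
    ],
)
# The entire document left the organization in the request above; whether
# it is retained or used for training depends on the provider's
# data-usage policy, not on the sender.
print(response.choices[0].message.content)
```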

Or a doctor enters a patient’s name and medical condition and asks the tool to write a letter to the patient’s insurance company. A third party asking for a template letter might receive one containing this information, or might even obtain details about the patient through a targeted question.

Some companies have restricted workers’ use of ChatGPT, while others have warned employees to be careful when using generative AI services.

But the risks go beyond data sharing. Cybercriminals can take advantage of the tool to create polished malware and phishing emails. Researchers recently found that ChatGPT can aid in the development of ransomware: a user with only rudimentary knowledge of malicious software can use the technology to write functional malware or ransomware. Research also suggests that malware authors can develop more advanced threats, such as polymorphic viruses that alter their code to avoid detection.

Another security risk associated with ChatGPT is that the technology can be used to generate spam and phishing emails. Spammers can use the GPT-3 model to produce convincing emails that appear to come from legitimate sources, which can then be used to steal information.

ChatGPT security risks can also take the form of impersonation scams. A cybercriminal studies the writing style of an employee, usually a high-ranking executive or the person in charge of a company’s finances. Once that style has been learned, the attacker uses ChatGPT to write messages to other employees exactly as the impersonated person would. For example, a hacker posing as an executive could write to the accountant requesting the transfer of a large sum of money to an outside account.

Having identified these potential security risks, organizations need to establish techniques to mitigate them. Continuous monitoring of the network for malicious behavior or external data sharing is required, and behavior-based models can detect suspicious patterns of data access. Beyond that, solutions already exist that can detect, for example, when an employee pastes sensitive information into a web browser, as in the sketch below.
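As a toy illustration of that kind of pattern-based detection, the Python sketch below scans outbound text for a few sensitive markers before it leaves the organization. The patterns and the blocking policy are illustrative assumptions; commercial monitoring and DLP products use far richer signals.

```python
# Toy illustration of pattern-based detection: scan text an employee is
# about to send to an external service for sensitive markers.
# The patterns and labels here are illustrative assumptions, not a
# product; real DLP/monitoring tools use far richer signals.
import re

SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "classification label": re.compile(
        r"\b(confidential|internal only|restricted)\b", re.I),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of all sensitive patterns found in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

outbound = "Per our CONFIDENTIAL 2023 plan, bill card 4111 1111 1111 1111."
hits = flag_sensitive(outbound)
if hits:
    print(f"Blocked paste: matched {', '.join(hits)}")
```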

Another necessary measure is proper permission management within organizations. This is not new, but neither is it widely practiced: it is very common to find employees with access to large amounts of important company information that has no relevance to the work they perform. In the end, it doesn’t matter what new technology, “trick”, or “gimmick” the criminal uses: the solution is simple, and it starts with the careful use and sharing of information within the company, beginning with reviews like the one sketched below.
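A minimal sketch of such a least-privilege review, assuming hypothetical access records of the kind a real audit would pull from file servers or SaaS admin APIs, compares who can access a resource with who has actually used it and flags the excess:

```python
# Minimal sketch of a least-privilege review: compare who *can* access a
# resource with who has actually used it, and flag the excess.
# The data structures are hypothetical stand-ins for what a real
# permissions audit would pull from file servers or SaaS admin APIs.
granted = {
    "finance-reports": {"ana", "bruno", "carla", "diego"},
    "hr-records": {"ana", "erika"},
}
accessed_last_90_days = {
    "finance-reports": {"ana", "bruno"},
    "hr-records": {"erika"},
}

for resource, users in granted.items():
    unused = users - accessed_last_90_days.get(resource, set())
    if unused:
        print(f"{resource}: consider revoking {sorted(unused)}")
```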
