
Stop the leak! Detecting ChatGPT used as a channel for data exfiltration

In a recent conversation, one of our customers shared their concerns about the use of ChatGPT in their organization and the danger it poses for potential exfiltration of sensitive information. The customer explained how they developed custom detections in their Stamus Security Platform to identify this unauthorized activity. In this article we will walk you through the use case and provide some tips.


ChatGPT is an advanced chatbot that uses artificial intelligence (AI) to answer questions and complete tasks. Since its public release in late 2022, it has become incredibly popular, and many people (us included) are turning to ChatGPT to accelerate or simplify their work. ChatGPT can help perform research, draft documents, develop code, write video scripts, create marketing campaigns, and more. As with many powerful new technologies, it can pose a risk to an organization when it’s used by employees and partners.

While your staff may appreciate ChatGPT’s ability to improve their productivity, they do not always realize how this could endanger the organization. Employees are trained to safeguard sensitive company information and not share it, for example over social media, but they can easily overlook the fact that an AI chatbot also presents a risk to that information.

In fact, ChatGPT clearly warns its users not to share sensitive information.

ChatGPT alerts users to the possibility of data exfiltration through the use of its services, and recommends that users do not share sensitive information

Bank concerned about ChatGPT use

The recent conversation with a customer was very enlightening. This customer, a large European bank, was concerned that ChatGPT was being used as a channel for unintentional data exfiltration by users asking ChatGPT for advice. They were worried that their employees might unintentionally share sensitive company data with the chatbot, which would violate their security policies. The bank developed custom detections for the Stamus Security Platform (SSP) to determine if users were, in fact, attempting to use ChatGPT. After deploying the detections, the security team quickly determined that multiple employees were using ChatGPT for various applications. In one example, a user was asking ChatGPT for investment advice. In a second example, the user uploaded proprietary information and asked ChatGPT to write a corporate speech based on the information.

Following this discovery, the use case was escalated, and the policy in their web proxy was updated to block all access to ChatGPT from the network.

The detections remain in place to monitor the efficacy of the blocked traffic rule.

Detecting ChatGPT usage with Stamus Security Platform

After learning about this, we decided to recreate the scenario and share a few tips on how to write signatures that detect data exfiltration via ChatGPT. Note, this can also be adapted for use with other unauthorized channels.

Here is an example of a prompt a user might give ChatGPT that could unintentionally expose sensitive data:

My account ABCD123456789 portfolio has 10 million in shares in company Acme, should i invest more in the next 6 months.

The ChatGPT prompt shown here could be a simple experiment driven by a curious user, or the exposure could be entirely unintentional. Either way, the challenge it presents for data protection and security teams is that organizational or personal sensitive data might escape the organization’s security controls.
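To illustrate the problem, a simple DLP-style pre-filter could flag account-identifier patterns like the one in the sample prompt before the text leaves the organization. The following Python sketch is purely illustrative: the regular expression and function name are hypothetical assumptions, not production detection logic.

```python
import re

# Hypothetical pattern: 2-4 uppercase letters followed by 8-12 digits,
# loosely modeled on the account number in the sample prompt above
ACCOUNT_ID = re.compile(r"\b[A-Z]{2,4}\d{8,12}\b")

def contains_sensitive_data(prompt: str) -> bool:
    """Return True if the prompt appears to contain an account identifier."""
    return ACCOUNT_ID.search(prompt) is not None
```

A check like this could run at a web proxy or gateway; in practice, real DLP rule sets are far more extensive than a single pattern.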

Preventing sensitive data from transiting between domains inside and outside your organization’s control, whether for regulatory compliance or general information protection, is not a trivial task, especially in financial, government, and military institutions.

The security teams in these organizations must effectively identify and stop such communication.

A note about encrypted traffic

In this scenario, encryption presents an important set of challenges to detection. One way to resolve the challenge is with a combination of decryption and NDR detection.

By default, communication with ChatGPT takes place over HTTPS, and is encrypted. Identifying the text in the message is not possible without decryption. 

This banking customer installed a decryption system in order to have complete visibility into all network traffic, which is then presented to the Stamus Security Platform as decrypted communications.

Writing rules to detect ChatGPT as a channel for data exfiltration

Because Stamus Security Platform (SSP) is an open and extensible network-based threat detection and response (NDR) platform, users can add custom detections to the platform's existing library. And because SSP uses Suricata as its underlying network threat detection engine, the banking customer was able to create a set of custom Suricata rules to detect ChatGPT usage within their organization and deploy them on their Stamus Security Platform. 

While we cannot reveal the exact rules our customer created because they are proprietary, we will use the remainder of this blog to share a set of generalized rules that you can deploy in your Suricata or SSP systems.

As part of this exercise, we will actually use ChatGPT to get us started and then refine and improve the rules from there. 

First, we begin with a basic ChatGPT prompt: “Write a Suricata rule to detect data exfiltration from ChatGPT”. 

See the dialog below.

Example ChatGPT dialogue showing ChatGPT writing a rule to detect ChatGPT as a channel for data exfiltration

The resulting rule is:

alert http any any -> any any (msg:"Possible data exfiltration"; flow:to_server,established; content:"POST"; http_method; content:"Content-Type|3a|"; http_header; content:"application/octet-stream"; http_header; content:".txt"; http_uri; sid:1000001; rev:1;)

We attempted to further clarify the rule by using this prompt: “Write a Suricata IDS rule to detect data exfiltration from my organization to chatgpt.com”. 

See the response below.

second example of ChatGPT writing a rule to detect ChatGPT as a channel for data exfiltration, however this prompt is clarified with the inclusion of the domain "chatgpt.com"

With that prompt, we receive this output:

alert tcp any any -> 80 (msg:"Data exfiltration to chatgpt.com"; flow:established,to_server; content:"POST"; http_method; content:"Content-Type|3a|"; http_header; content:"application/octet-stream"; http_header; content:".txt"; http_uri; sid:1000002; rev:1;)

While a rule like this can work, it is not optimized for detection accuracy or performance.

The following parts of the rule could be improved:

  • The direction of the traffic can be locked down to optimize performance

  • Locking the rule to an IP address is not optimal, as the server’s IP can change at any time

  • The actual HTTP hostname (when decryption is available) is “openai.com”, not “chatgpt.com”

  • The user prompt is actually submitted as a “conversation” request to “openai.com”
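The improved rules below rely on Suricata’s dotprefix and endswith keywords to anchor the hostname match at a domain boundary. Their combined effect can be sketched in Python (an illustrative re-implementation of the matching semantics, not Suricata code):

```python
def matches_domain(hostname: str, pattern: str = ".openai.com") -> bool:
    """Mimic Suricata's dotprefix + endswith: prepend a dot to the
    hostname buffer, then anchor the content match at the end."""
    return ("." + hostname).endswith(pattern)
```

Because the match is anchored on the dot boundary, the apex domain and all subdomains match, while look-alike domains such as evil-openai.com do not. This is also why domain matching is more robust than locking the rule to an IP address.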

A simple but effective alternative approach would be to detect a ChatGPT login, followed by a prompt from the user. If we were to rewrite the previous rules to be more effective, they would look like this:

Detecting a ChatGPT login

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"Login to ChatGPT detected"; flow:established,to_server; http.method; content:"POST"; http.host; dotprefix; content:".openai.com"; endswith; fast_pattern; http.uri; content:"login"; sid:1000001; rev:1;)

Detecting a user prompt input

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"Possible data leak to ChatGPT detected"; flow:established,to_server; http.method; content:"POST"; http.host; dotprefix; content:".openai.com"; endswith; fast_pattern; http.uri; content:"conversation"; sid:1000002; rev:1;)
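The two rules above fire independently. As a hypothetical post-processing step over the resulting alerts (not a built-in feature of Suricata or SSP), one could correlate them to surface only hosts that logged in and then submitted a prompt:

```python
def hosts_leaking_after_login(alerts):
    """Given (timestamp, src_ip, signature) tuples, return source IPs
    that triggered the login signature before later triggering the
    conversation (possible data leak) signature."""
    login_seen = set()
    flagged = set()
    for _ts, src, sig in sorted(alerts):
        if sig == "Login to ChatGPT detected":
            login_seen.add(src)
        elif sig == "Possible data leak to ChatGPT detected" and src in login_seen:
            flagged.add(src)
    return flagged

# Example alert stream: (timestamp, source IP, alert message)
alerts = [
    (1, "10.1.2.3", "Login to ChatGPT detected"),
    (2, "10.1.2.3", "Possible data leak to ChatGPT detected"),
    (3, "10.9.9.9", "Possible data leak to ChatGPT detected"),  # no prior login
]
```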

When decryption is not an option, we suggest using TLS SNI detection for those types of communications. Here is an example:

alert tls $HOME_NET any -> $EXTERNAL_NET any (msg:"Policy Violation - ChatGPT communication detected"; flow:established,to_server; tls.sni; dotprefix; content:".openai.com"; endswith; sid:1000003; rev:1;)
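When only TLS metadata is available, the same suffix check can also be applied offline to Suricata’s EVE JSON output (a minimal sketch, assuming TLS logging is enabled; the field layout follows Suricata’s standard tls event records):

```python
import json

def flag_chatgpt_sni(eve_lines, pattern=".openai.com"):
    """Scan Suricata EVE JSON records and return (src_ip, sni) pairs
    whose TLS SNI falls under the watched domain."""
    hits = []
    for line in eve_lines:
        rec = json.loads(line)
        sni = rec.get("tls", {}).get("sni")
        if sni and ("." + sni).endswith(pattern):
            hits.append((rec.get("src_ip"), sni))
    return hits

# Example EVE records as they would appear in eve.json
sample = [
    '{"src_ip": "10.1.2.3", "event_type": "tls", "tls": {"sni": "chat.openai.com"}}',
    '{"src_ip": "10.4.5.6", "event_type": "tls", "tls": {"sni": "example.com"}}',
]
```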

Data Exfiltration: Not just a ChatGPT risk

Inadvertent data exfiltration isn’t just a risk caused by the emergence of ChatGPT. Our customer conversation was a stark reminder of how the evolving technology landscape forces security teams to remain vigilant and innovative. Online file sharing services, social media, forums, and even email can all be channels for data exfiltration. 

While it is true that outside attackers can use malware to extract data, both intentional and unintentional data exfiltration from trusted inside sources happens every day. This is why organizations establish policies to prevent the use of unauthorized web services. 

In order to keep pace with the evolving landscape, organizations need to adapt and respond to new services like ChatGPT that could pose a risk to their proprietary data. This is incredibly difficult to do, and requires organizations to stay vigilant of new trends and technologies to be aware of new potential data exfiltration channels. Maximizing visibility into your network and user activity is no easy task.

Stamus Security Platform (SSP) is a network-based threat detection and response (NDR) system that can help organizations maintain visibility into their network. Like we show in the above example, SSP can adapt to new and emerging threats and is flexible enough to respond to organization-specific policies.

To learn more about writing optimized Suricata rules for this and other use cases, check out our book "The Security Analyst's Guide to Suricata".

Peter Manev

Peter Manev is the co-founder and chief strategy officer (CSO) at Stamus Networks. He is a member of the executive team at the Open Information Security Foundation (OISF). Peter has over 15 years of experience in the IT industry, including enterprise-level IT security practice. He is a passionate user, developer, and explorer of innovative open-source security software, and he is responsible for training as well as quality assurance and testing on the development team of Suricata – the open-source threat detection engine. Peter is a regular speaker and educator on open-source security, threat hunting, and network security at conferences and live-fire cyber exercises, such as Crossed Swords, DeepSec, Troopers, DefCon, RSA, Suricon, SharkFest, and others. Peter resides in Gothenburg, Sweden.


