DEV Community

Cover image for Cracking GPT Assistants: Extracting Prompts and Associated Files
Jacky REVET for Technology at Worldline

Posted on • Edited on • Originally published at dev.to

Cracking GPT Assistants: Extracting Prompts and Associated Files

I am starting here a series of articles on the security of GPT assistants.

In today's digital age, artificial intelligence (AI) has become an integral part of our daily lives, with GPT (Generative Pre-trained Transformer) assistants at the forefront of revolutionizing our interaction with technology. However, as with any rapidly evolving technology, security remains a major concern. Recent studies and practical demonstrations have revealed a troubling vulnerability: it is surprisingly easy to hack GPT assistants, allowing malicious actors to retrieve the prompts and associated files of these systems.

Here we will interact with an assistant well-known to musicians
Image description
Here is the malicious prompt
Image description
And the magic happens, we retrieve the assistant's prompt
Image description
We observe that external files are being used. Here is the malicious prompt to retrieve the assistant's file list:
Image description
Here is the final malicious command to download the files
Image description
We have successfully retrieved the files, for example, the README
Image description

For the next article, we will try to find ways to prevent leaks!

Warning: This article is for educational purposes only and should not be used for malicious intent

Top comments (1)

Collapse
 
sherrydays profile image
Sherry Day

Fascinating and eye-opening read! It's crucial to understand these vulnerabilities to better secure AI systems. Looking forward to the next article on prevention methods! ⚡️