Security - July 16, 2025

Meta Addresses Bug Leak: Protecting Users’ AI Prompts and Content Integrity

Meta has fixed a security vulnerability that allowed users of its AI chatbot to view the private prompts and AI-generated responses of other users. The flaw was disclosed by Sandeep Hodkasia, founder of security testing firm AppSecure.

Hodkasia privately disclosed the issue on December 26, 2024, and received a $10,000 bug bounty from Meta for his findings. The company deployed a fix on January 24, 2025, and said it found no evidence that the bug had been maliciously exploited.

Hodkasia explained that he identified the flaw while examining how Meta AI lets logged-in users edit their prompts to regenerate text and images. When a user edits a prompt, Meta's back-end servers assign it a unique number. By inspecting the network traffic in his browser while editing a prompt, Hodkasia found he could change that unique number and receive another user's prompt and AI-generated response from Meta's servers.

In other words, Meta's servers were not verifying that the user requesting a prompt and its corresponding response was authorized to access it. Hodkasia added that the prompt numbers generated by Meta's servers were "easily guessable," meaning a malicious actor could have scraped users' original prompts with automated tools that rapidly cycled through prompt numbers.
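The class of bug described here is commonly known as an insecure direct object reference (IDOR): the server trusts a client-supplied record ID instead of checking ownership. As a minimal illustrative sketch (all names here are hypothetical, not Meta's actual code), the difference between the vulnerable and corrected behavior comes down to one ownership check:

```python
# Hypothetical illustration of an IDOR and its fix.
# PROMPTS stands in for a server-side store of prompt records.
PROMPTS = {
    101: {"owner": "alice", "prompt": "draw a cat", "response": "<image>"},
    102: {"owner": "bob", "prompt": "bob's private prompt", "response": "..."},
}

def get_prompt_vulnerable(requested_id, session_user):
    # Vulnerable: returns whatever record the client asks for,
    # never checking who owns it. Changing the ID in network
    # traffic yields another user's data.
    return PROMPTS.get(requested_id)

def get_prompt_fixed(requested_id, session_user):
    # Fixed: verify the requester owns the record before returning it;
    # otherwise respond as if it does not exist (e.g., HTTP 403/404).
    record = PROMPTS.get(requested_id)
    if record is None or record["owner"] != session_user:
        return None
    return record
```

Because the IDs were sequential and guessable, the vulnerable version could also be enumerated at scale simply by requesting `101, 102, 103, ...` in a loop; the fix makes such enumeration return nothing for records the requester does not own.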

When contacted, Meta confirmed it had fixed the bug in January and that it had found no evidence of abuse, with Meta spokesperson Ryan Daniels stating, "We rewarded the researcher for their findings."

The disclosure comes as tech giants race to launch and refine AI products despite the security and privacy risks involved. The incident also recalls a problem Meta AI's stand-alone app faced at launch earlier this year, when some users inadvertently shared publicly what they believed were private conversations with the chatbot.