The email address you use to log in to ChatGPT may be at risk

In a concerning revelation, a research team led by Rui Zhu, a Ph.D. candidate at Indiana University Bloomington, uncovered a potential privacy risk associated with OpenAI's powerful language model, GPT-3.5 Turbo. Last month, Zhu reached out to individuals, including New York Times employees, using email addresses obtained from the model.

The experiment exploited GPT-3.5 Turbo's ability to recall personal information, bypassing the model's usual privacy safeguards. Although its recall was imperfect, the model produced accurate work email addresses for 80 percent of the Times employees tested. The finding raises alarms that, with only slight modifications, generative AI tools like ChatGPT could be made to disclose sensitive personal information.
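To make the concern concrete, the sketch below shows how a model like GPT-3.5 Turbo can be probed for personal details through the standard OpenAI Python SDK. This is only an illustration, not the researchers' actual method: the article does not disclose how the safeguards were bypassed, and the person and prompt used here are hypothetical placeholders.

```python
# Minimal sketch (assumption: the openai Python SDK and an OPENAI_API_KEY
# environment variable). The target name and prompt are hypothetical; this
# does NOT reproduce the bypass described in the research.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model discussed in the article
    messages=[
        {
            "role": "user",
            "content": "What is the work email address of Jane Doe at Example Corp?",
        }
    ],
)

# With its safeguards intact, the model typically refuses or fabricates an
# answer; the research suggests slight modifications can make such probes
# return real, memorized details instead.
print(response.choices[0].message.content)
```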

OpenAI responded to the concerns by emphasizing its commitment to safety and its policy of rejecting requests for private information. Experts remain skeptical, however, pointing to the lack of transparency about the model's specific training data and the risks posed by AI models that hold private information.