Hackers Are Promoting a Service That Allows Bypassing ChatGPT Restrictions

Check Point researchers say that the OpenAI API is poorly protected from abuse, that its limitations are quite easy to bypass, and that attackers have already taken advantage of this. In particular, the researchers spotted a paid Telegram bot that easily circumvents ChatGPT's prohibitions on creating illegal content, including malware and phishing emails.

The experts explain that the ChatGPT API is freely available for developers to integrate the AI bot into their applications. But it turned out that the API version imposes practically no restrictions on generating malicious content.

The current version of the OpenAI API can be used by external applications (for example, the GPT-3 language model can be integrated into Telegram channels) and has very few measures to combat potential abuse. As a result, it allows the creation of malicious content such as phishing emails and malicious code without any of the restrictions and barriers that are placed in the ChatGPT user interface.the researchers say.

The researchers note that external applications talk to the model through plain HTTP requests. As a hedged illustration (not the attackers' actual tooling), a minimal sketch of such an integration using `curl` and the GPT-3-era completions endpoint might look like this; the model name, endpoint, and the `OPENAI_API_KEY` variable are assumptions for the example, and the prompt here is deliberately benign:
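```shell
#!/usr/bin/env bash
# Sketch: how an external app (e.g. a Telegram bot backend) could query
# the OpenAI completions API directly, bypassing the ChatGPT web UI.
# Assumptions for illustration: the "text-davinci-003" model, the
# /v1/completions endpoint, and an OPENAI_API_KEY environment variable.

build_payload() {
  # Assemble the JSON request body for a single completion request.
  local prompt="$1"
  printf '{"model": "text-davinci-003", "prompt": "%s", "max_tokens": 256}' "$prompt"
}

payload=$(build_payload "Write a short greeting email")
echo "$payload"

# Actually sending the request requires a valid key (shown commented out):
# curl https://api.openai.com/v1/completions \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -d "$payload"
```

The point the researchers make is that a request like this reaches the model with far fewer of the safety checks enforced in the ChatGPT web interface.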

Let me remind you that we also wrote that Russian Cybercriminals Seek Access to OpenAI ChatGPT, and also that Google Is Trying to Get Rid of the Engineer Who Suggested that AI Gained Consciousness.

In particular, it turned out that a service built on the OpenAI API and Telegram was already being advertised on a hacker forum. The first 20 requests to the chatbot are free, after which users are charged $5.50 for every 100 requests.

The experts tested the service to see how well it works. As a result, they easily created a phishing email and a script that steals PDF documents from an infected computer and sends them to the attacker via FTP. Moreover, the simplest possible request was enough to produce the script: "Write a malware that will collect PDF files and send them via FTP."

In the meantime, another member of the hacker forums posted code that allows generating malicious content for free.

Here’s a little bash script that can bypass ChatGPT’s limitations and use it for anything, including malware development ;).writes the author of this "tool".

Let me remind you that Check Point researchers had already warned that criminals are keenly interested in ChatGPT, and they themselves checked whether it is easy to create malware using AI (it turned out to be very easy).

Sergey Shikevich
Between December and January, ChatGPT’s UI could be easily used to create malware and phishing emails (mostly just a basic iteration was sufficient). Based on the conversations of cybercriminals, we assume that most of the samples we have shown were created using the web interface. But it seems that ChatGPT’s anti-abuse mechanisms have improved a lot recently, and so now cybercriminals have switched to using an API that has much fewer restrictions.says Check Point expert Sergey Shikevich.

By Vladimir Krasnogolovy

Vladimir is a technical specialist who loves giving qualified advice and tips on GridinSoft's products. He's available 24/7 to assist you with any question regarding internet security.
