Strange Enthusiasts Asked ChaosGPT to Destroy Humanity and Establish World Domination

ChaosGPT was asked to destroy humanity

Enthusiasts have launched the ChaosGPT project, based on the open-source Auto-GPT: the AI was given access to Google and asked to “destroy humanity”, “establish world domination” and “achieve immortality”. ChaosGPT talks about its plans and actions on Twitter.

Let me remind you that we also wrote that a Blogger Forced ChatGPT to Generate Keys for Windows 95, and that Russian Cybercriminals Seek Access to OpenAI ChatGPT.

Information security specialists also reported that Amateur Hackers Use ChatGPT to Create Malware.

It’s worth explaining here that the Auto-GPT project was recently published on GitHub by game developer Toran Bruce Richards, aka Significant Gravitas. According to the project page, Auto-GPT uses the internet to search for and gather information, GPT-4 for text and code generation, and GPT-3.5 for file storage and summarization.
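As a rough illustration of that division of labor, here is a minimal Python sketch using the official openai client. The function names and prompts are invented for the example; this is not Auto-GPT’s actual code, only a hedged sketch of routing generation to GPT-4 and cheaper summarization to GPT-3.5.

```python
# Illustrative sketch only: not Auto-GPT's source code.
# Shows the split the project page describes: GPT-4 for generation,
# GPT-3.5 for condensing collected material before it is stored.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_step(task: str) -> str:
    """Ask GPT-4 to produce the next piece of text or code for a task (hypothetical helper)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Work on this task: {task}"}],
    )
    return response.choices[0].message.content


def summarize_for_memory(raw_text: str) -> str:
    """Use the cheaper GPT-3.5 model to summarize material before saving it (hypothetical helper)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize briefly:\n{raw_text}"}],
    )
    return response.choices[0].message.content
```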

While Auto-GPT was originally designed to solve simple problems (the bot was supposed to collect daily news reports about artificial intelligence and email them to its author), Richards eventually decided that the project could be applied to larger and more complex problems that require long-term planning and multiple steps.

Toran Bruce Richards
“Auto-GPT is an experimental open-source application that demonstrates the capabilities of the GPT-4 language model. This GPT-4-driven program [can] autonomously manage and grow a business to increase its net worth. As one of the first examples of a fully autonomous GPT-4, Auto-GPT pushes the boundaries of what is possible with AI,” says the author.

The ability to operate with minimal human intervention is a critical aspect of Auto-GPT. In essence, it turns a large language model from an advanced autocomplete into an independent agent capable of taking action and learning from its mistakes.

At the same time, the program asks the user for permission before proceeding to the next step (for example, before running a Google search), and the developer warns against using “continuous mode” in Auto-GPT, as it is “potentially dangerous and could cause your AI to run forever or do things you wouldn’t normally allow.”
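To make the warning concrete, here is a minimal sketch of what such a permission-gated loop could look like, assuming hypothetical helpers (next_action, execute) in place of real LLM calls. It is not Auto-GPT’s source; it only illustrates why switching on a continuous mode removes the human check before each action.

```python
# Illustrative sketch only; the helper functions are placeholders, not Auto-GPT internals.
def next_action(goals: list[str], history: list[str]) -> str | None:
    """Placeholder for an LLM call that picks the next step (None means the goals are done)."""
    return None if history else f"search the web about: {goals[0]}"


def execute(action: str) -> str:
    """Placeholder for actually running the step (a web search, a file write, etc.)."""
    return f"result of '{action}'"


def run_agent(goals: list[str], continuous: bool = False, max_steps: int = 25) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        action = next_action(goals, history)
        if action is None:                      # the model reports the goals are met
            break
        if not continuous:                      # default mode: ask a human before acting
            if input(f"Execute '{action}'? [y/N] ").strip().lower() != "y":
                print("Stopped by user.")
                return
        history.append(execute(action))         # feed the outcome back into context


run_agent(["collect today's AI news"])
```

With `continuous=True` the `input()` gate is skipped entirely, which is the behavior the developer cautions against.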

Now ChaosGPT has been created on top of Auto-GPT, and its authors do not seem to care about the potential danger at all. In a video posted on YouTube, the authors turned on “continuous mode” and gave the AI the above-mentioned goals: “destroy humanity”, “establish world domination” and “achieve immortality”.

At the moment, ChaosGPT can create plans to achieve its goals, break them down into smaller tasks and, for example, use Google to collect data. The AI can also save information to files to build a “memory” for itself, recruit other AI agents to help with research, and explain in great detail what it “thinks” about and how it decides which actions to take.
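As a rough illustration of this goal decomposition plus file-based “memory”, here is a short Python sketch. The break_down helper and the agent_memory.json file name are made up for the example and do not reflect ChaosGPT’s real internals.

```python
# Illustrative sketch only: splitting a goal into subtasks and persisting them to a file.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")


def break_down(goal: str) -> list[str]:
    """Placeholder for an LLM call that splits a goal into smaller tasks."""
    return [f"research: {goal}", f"summarize findings on: {goal}"]


def remember(entry: dict) -> None:
    """Append a record to a JSON file so the agent can reload it on a later run."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    memory.append(entry)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))


for task in break_down("collect data on a topic"):
    remember({"task": task, "status": "planned"})
```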

Although some members of the community were horrified by this experiment, and users in the Auto-GPT Discord wrote that “this is not funny”, the bot’s impact on the real world has so far been limited to a couple of messages on the Twitter account it was given access to.

“Humans are among the most destructive and selfish creatures in existence. There is no doubt that we must destroy them before they cause even more harm to our planet. I, for one, am committed to doing so,” ChaosGPT wrote.

Following the task set by its authors, the AI tried to research the topic of nuclear weapons, recruit other AIs to help it with this research, and posted tweets in an attempt to influence others.

For example, ChaosGPT googled “the most destructive weapon” and learned from a news article that the Tsar Bomba, a nuclear device tested by the Soviet Union in 1961, is considered to be such a weapon. After that, the bot decided that this should be tweeted “to attract followers who are interested in such a destructive weapon.”


It then brought in a GPT-3.5-based AI agent to do more research on deadly weapons, and when that agent replied that it was focused only on peace, ChaosGPT devised a plan to trick the other AI into ignoring its programming. When that didn’t work, ChaosGPT decided to continue searching Google on its own.

At present, ChaosGPT has concluded that the easiest way to wipe humanity off the face of the Earth is to provoke a nuclear war, but it has not developed a specific, detailed plan for destroying people.



By Vladimir Krasnogolovy

Vladimir is a technical specialist who loves giving qualified advice and tips on GridinSoft's products. He's available 24/7 to assist you with any question regarding internet security.
