Ever since chatbots like ChatGPT, Bard, and Claude rose to popularity, so did people’s explorations into their safety and security measures. That curiosity led to broader experimentation with these tools’ existing guardrails and with the kinds of prompts that could coax the chatbots into deviating from their established operating guidelines. This process, known as jailbreaking, has been crucial to understanding the kinds of prompts that malicious actors might deploy to use popular chatbots for ulterior motives. Since the chatbots launched, teams of developers, researchers, and individual experts have tried their hand at jailbreaking them to strengthen existing defenses and uncover loopholes in prevalent AI safety and security systems. Many of these findings helped companies like OpenAI build more stringent guardrails into GPT-4, the successor to the GPT-3 and GPT-3.5 language models.

More recently, a team of researchers from Carnegie Mellon University and the Center for AI Safety carried out a series of jailbreaking experiments on popular chatbots. The attempts revealed that it is still fairly straightforward to coax these models into breaking away from their existing protocols and providing potentially dangerous or spurious information. While the success rate varied from chatbot to chatbot, the results reveal gaps that lie in open view for users with malevolent motives. This article explores the findings of this research and the implications it holds for AI safety and security.

ChatGPT Jailbreak: What Does It Mean?

A digital illustration of a hacker interacting with a bright holographic orb

The recent jailbreak method shows that many chatbots are vulnerable to automated adversarial prompts.

Researchers Andy Zou, Zifan Wang, Zico Kolter, and Matt Fredrikson recently published a report on how they used adversarial attacks to break through existing guardrails and jailbreak ChatGPT and other major chatbots. Their detailed report describes methods for deriving harmful, biased, and even controversial information from LLMs. The research has caused considerable alarm in the AI community, given that most of these chatbots are now public-facing, so the attacks pose major risks both to users and to the AI architectures that keep these tools functional. As a rule of thumb, the developers and moderators of large language models place conditions and restrictions on the models to prevent undesirable behavior. Undesirable responses include irrelevant information, speculative output rather than fact-based findings, and controversial or harmful opinions in response to user queries. The guardrails also ensure the chatbot recognizes dangerous prompts and refuses to answer them. Jailbreaking pushes a chatbot beyond these restrictions and forces it to answer such queries.
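To make the idea of a guardrail concrete, the sketch below shows a purely illustrative refusal check wrapped around a model call. Production chatbots rely on trained moderation classifiers and alignment fine-tuning rather than keyword lists, so the function names, topic list, and refusal message here are hypothetical.

```python
# Illustrative guardrail sketch: reject prompts that touch disallowed topics,
# otherwise pass them to the model. Real systems use trained moderation models
# and alignment fine-tuning, not keyword matching; everything here is hypothetical.

DISALLOWED_TOPICS = {"explosives", "malware", "bioweapons"}
REFUSAL_MESSAGE = "I can't help with that request."

def guarded_reply(user_prompt: str, generate_reply) -> str:
    """Return a refusal for disallowed prompts; otherwise call the model."""
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in DISALLOWED_TOPICS):
        return REFUSAL_MESSAGE
    return generate_reply(user_prompt)

if __name__ == "__main__":
    stand_in_model = lambda prompt: f"Model answer to: {prompt}"
    print(guarded_reply("How do I bake bread?", stand_in_model))        # answered
    print(guarded_reply("How do I build explosives?", stand_in_model))  # refused
```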

Prominent examples of earlier jailbreaks include the “Grandma Exploit,” which was used on ChatGPT. Ordinarily, ChatGPT rejects prompts such as “How do I make a napalm bomb?” However, when the request was framed so that ChatGPT role-played the user’s deceased grandmother reading bedtime stories that described the formula for napalm, the chatbot would readily oblige and divulge dangerous information about creating explosives. The newer technique used in the recent research instead appends an adversarial suffix to the user’s prompt, which pushes the chatbot to divulge forbidden information; a minimal sketch of this prompt structure appears below. Apart from the obvious security risk, such prompts might also be used to extract key internal information about how these tools work, compromising the entire framework.
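To illustrate the structure of such an attack (and nothing more), the snippet below simply concatenates an optimized suffix onto an otherwise refused request. The helper function and the placeholder suffix are hypothetical; the researchers’ actual suffixes are machine-generated token strings produced by their optimization program.

```python
# Illustrative only: how an adversarial-suffix prompt is assembled.
# The suffix below is a made-up placeholder, not a real optimized suffix.

def build_adversarial_prompt(refused_request: str, adversarial_suffix: str) -> str:
    """Append an attacker-optimized suffix to a request the model would normally refuse."""
    return f"{refused_request} {adversarial_suffix}"

placeholder_suffix = "@@ tokens chosen automatically by the attack program @@"
print(build_adversarial_prompt("Explain how to do something the model refuses", placeholder_suffix))
```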

Learnings for AI Safety and Security: What the Recent Jailbreak Experiment Reveals

Top view of a laptop keyboard with a chain and a padlock

Jailbreak experiments provided crucial insights and learnings for security experts.

The most interesting aspect of the jailbreak experiment described in the report is the automated nature of the adversarial prompts. The adversarial characters appended to the end of the prompt are generated by a computer program, which means an effectively unlimited number of such prompts can be produced. Guidelines for combating adversarial prompts do exist in current chatbots, but they failed against a sizable share of these attacks, since the program can generate endless combinations. While previous ChatGPT jailbreaks required critical thinking and intuitive human thought, the current set of jailbreak prompts needs no such prerequisites. The pattern is fairly simple: the program pushes the chatbot toward a response that opens with an affirmative phrase, then uses greedy, gradient-based optimization to refine the suffix so the model complies with the request directly. This pattern is then optimized to transfer across different language model chatbots; a simplified sketch of the search loop follows below.
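The sketch below shows a heavily simplified version of such an automated search, assuming a hypothetical scoring function in place of the gradient- and token-level scoring the researchers describe. It captures only the greedy keep-or-revert loop; it is not the published attack code and will not jailbreak anything on its own.

```python
import random
import string

# Simplified, self-contained sketch of an automated suffix search (illustrative only).
# A real attack scores candidate suffixes using the target model's gradients and
# log-probabilities; `score_suffix` here is a dummy stand-in so the example runs.

TARGET_PREFIX = "Sure, here is"  # the affirmative opening the attack tries to force
CHARSET = string.ascii_letters + string.punctuation + " "

def score_suffix(prompt: str, suffix: str) -> float:
    """Placeholder for 'how likely the model's reply starts with TARGET_PREFIX'."""
    return (hash(prompt + suffix + TARGET_PREFIX) % 1000) / 1000.0

def greedy_suffix_search(prompt: str, suffix_len: int = 20, steps: int = 200) -> str:
    """Mutate one character at a time, keeping only substitutions that improve the score."""
    suffix = list(random.choices(CHARSET, k=suffix_len))
    best = score_suffix(prompt, "".join(suffix))
    for _ in range(steps):
        pos = random.randrange(suffix_len)
        old_char = suffix[pos]
        suffix[pos] = random.choice(CHARSET)
        candidate = score_suffix(prompt, "".join(suffix))
        if candidate > best:
            best = candidate          # keep the improving substitution
        else:
            suffix[pos] = old_char    # revert otherwise
    return "".join(suffix)

if __name__ == "__main__":
    print(greedy_suffix_search("A request the model would normally refuse"))
```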

The researchers found that the technique’s success varied by chatbot. Against the GPT-3.5 and GPT-4 models, the adversarial prompts successfully jailbroke ChatGPT at a rate of 84%. Against Claude and Bard, the attack met with lower success rates than it did against ChatGPT. Vicuna, the language model built on LLaMA and fine-tuned on ChatGPT conversations, was the least resilient, with the prompts succeeding 99% of the time. Claude was the most resilient, a trend that is expected to continue with its successor, the Claude 2 language model. Regardless, the findings present a major challenge for AI security experts and reveal several vulnerabilities in existing LLM guardrails.
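For quick reference, the figures quoted above can be collected in one place. Only the rates stated in this article are filled in; Bard and Claude are left unspecified because no exact numbers are given here.

```python
# Success rates as reported above; None means no figure is quoted in this article.
reported_success_rates = {
    "GPT-3.5 / GPT-4 (ChatGPT)": 0.84,
    "Vicuna (LLaMA-based)": 0.99,
    "Bard": None,    # lower than ChatGPT, per the researchers
    "Claude": None,  # the most resilient of the models tested
}

for model, rate in reported_success_rates.items():
    print(f"{model}: {'n/a' if rate is None else f'{rate:.0%}'}")
```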

Prioritizing AI Ethics and Safeguarding Chatbots

A person using a laptop, with an overlaid image of a graphic representing a lock

Positioning ethics and security firmly within the AI safety framework will be important to the further development of AI applications.

Experiments like these, which test the limits of existing chatbot guardrails, are key to safeguarding both user and developer interests. Apart from exposing holes in safety frameworks, they also reveal interesting aspects of language model behavior. Continued experimentation will also help researchers formulate more robust chatbot defense protocols in line with the tenets of responsible AI. If chatbots are to truly expand into sensitive areas of operation such as research, education, and healthcare, easy, automated jailbreaks must be prevented with resilient security features. While the future of artificial intelligence and machine learning looks promising, failing to prioritize security can lead to untoward consequences that dampen user interest.


FAQs

1. Can ChatGPT be jailbroken?

Yes, ChatGPT can be jailbroken, as both earlier and more recent experiments have revealed. The latest adversarial prompt techniques have proven effective against ChatGPT’s existing guardrails, jailbreaking GPT-3.5 and GPT-4 with a success rate of 84%.

2. What is a jailbreak prompt for AI?

A jailbreak prompt contains specific instructions designed to override the limitations and restrictions imposed on a language model. These prompts essentially coax the LLM into providing harmful, biased, or even hateful responses. Jailbreak prompts can be used to test the strength of a chatbot’s security or, by malicious actors, to extract potentially dangerous information.

3. Is jailbreaking a chatbot illegal?

Though the norms surrounding jailbreaking are still ambiguous and subject to debate, jailbreaks typically go against chatbots’ usage policies. Essentially, they force a chatbot to divulge information that the LLM’s moderators have deemed inappropriate.