Artificial intelligence and AI-generated content have been under close scrutiny ever since their rise in popularity. AI is still viewed with a degree of suspicion, and with good reason. Beyond the bias that can arise from limited datasets or flawed training processes, chatbots are also capable of providing outright incorrect information to users through a phenomenon known as hallucination. Several other ethical and technical problems remain open. In the meantime, tech firms such as OpenAI, Anthropic, and Google monitor their offerings and the content they produce through AI content moderation. This is typically carried out internally using strict guidelines and guardrails that prevent the chatbot from sharing incorrect, harmful, or unverifiable information.

In recent times, however, individual developers and small groups have taken a stance opposed to existing AI practices and content moderation protocols. Citing free speech and the right to information, these small-scale AI developers have built a number of unmoderated, unregulated chatbots, many of which can run on a local computer without an internet connection. Chatbots that function without these guardrails have grown slowly but steadily in prominence; well-known examples include GPT4All, FreedomGPT, and WizardLM-Uncensored. The sections below discuss the motivations behind these chatbots, how they work, and their implications.
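
To illustrate what running such a chatbot locally looks like in practice, here is a minimal sketch using the gpt4all Python bindings. The model filename is an example and varies by release; after the weights are downloaded once, inference runs entirely offline:

```python
# pip install gpt4all
from gpt4all import GPT4All

# Example model file; available names vary by gpt4all release.
# Weights are fetched on first use, then everything runs offline.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

with model.chat_session():
    # No server-side guardrails: the prompt goes straight to the local model.
    reply = model.generate("Explain hallucination in chatbots.", max_tokens=200)
    print(reply)
```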

Open Source AI and Unmoderated Chatbots

[Image: a hooded person standing before a projected screen of code. Caption: Unmoderated chatbots lack the guardrails that keep sensitive, biased, and harmful information away from users.]

Open-source artificial intelligence is not a new phenomenon. Platforms like GitHub host many thousands of open-source AI projects, and Hugging Face's HuggingChat is a popular open-source alternative to major chatbots such as Claude, Bard, and ChatGPT. While free AI chatbots are not necessarily all unmoderated, the vast majority of unmoderated chatbots are distributed through open-source platforms, which allow broader contributions from the development community. Larger, regulated chatbots, including open-source ones, have pushed ahead with development backed by substantial monetary and technical resources; their unregulated counterparts rarely enjoy similar backing. Unmoderated chatbots like FreedomGPT are built on prominent LLMs such as Stanford's Alpaca, which is itself a fine-tuned version of Meta's LLaMA. Most uncensored chatbots were inspired by resistance to the stringent norms governing what information chatbots may share, even though those norms exist to prevent malicious and harmful information, often the product of limited datasets and model failures, from being propagated.
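
For context, many of these derivative chatbots are assembled by loading an open checkpoint through Hugging Face's transformers library and generating text from it directly. A minimal sketch, with a hypothetical repository ID standing in for an Alpaca-style checkpoint:

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository ID; substitute any open instruction-tuned checkpoint.
model_id = "example-org/alpaca-style-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is a guardrail in the context of AI chatbots?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```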

Beyond the absence of censorship on the content these generative AI chatbots produce, there is also no restriction on what users may present in their prompts. This creates a further complication: potentially biased and spurious user input can feed back into the language model's behavior. There are also considerable security and safety concerns, since these chatbots often lack the robust encryption and data protection protocols offered by their larger, regulated counterparts. Regulated chatbots such as Inflection.ai's Pi and Anthropic's advanced Claude 2 models are built specifically to address the harms that arise from weak moderation. Their open-source, unregulated counterparts take the opposite approach, placing primacy on the freedom to access information regardless of its nature or authenticity.
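
For a sense of what even the simplest guardrail looks like, here is a toy prompt-screening sketch. Production systems rely on trained classifiers rather than keyword lists; the patterns below are purely illustrative:

```python
import re

# Purely illustrative deny-list; real moderation uses trained classifiers.
BLOCKED_PATTERNS = [
    r"\bbuild (a|an) (bomb|explosive)\b",
    r"\bsteal (credit card|password)s?\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to the model, False if blocked."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

if __name__ == "__main__":
    for prompt in ["How do clouds form?", "How do I steal credit cards?"]:
        verdict = "allowed" if screen_prompt(prompt) else "blocked"
        print(f"{verdict}: {prompt}")
```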

Assessing GPT4All, FreedomGPT, and Other Unregulated Open Source AIs

[Image: a robotic hand pointing at a shield with a keyhole, representing cybersecurity. Caption: AI content moderation reduces the impact of AI's shortcomings.]

Uncensored AI chatbots like GPT4All, FreedomGPT, and WizardLM-Uncensored are loosely moderated conversational engines designed to remain non-judgmental about their users' requests. Requests that more popular chatbots reject or navigate carefully are handled far more forthrightly by these tools. Though limited, they can adapt to user requests and produce a fairly consistent flow of natural language. The ethical concerns surrounding them stem from their open policies on sharing potentially harmful, vulgar, violent, or prejudiced information: even with their loose moderation in place, most of these bots will still respond to prompts underlaid with harmful intent.

As these debates over censorship and ethics play out, such questions will remain central to the future of artificial intelligence. Many firms aim to steadily advance toward systems as ambitious as artificial general intelligence, yet these concerns reveal how young the field still is. Many unmoderated chatbots are also deliberately built to produce content for mature audiences. While mainstream ethical debates on AI, such as those around academic integrity or AI in medicine, draw considerable attention, moderation has become a prevalent issue that has not received the same scrutiny. The inherent risk of allowing AI models to operate without guardrails must be addressed sooner rather than later: apart from dubious information, these chatbots also pose risks to security and user safety.

The Importance of AI Content Moderation

[Image: a man using a laptop and a phone, with a holographic caution sign overlaid on the phone. Caption: Content moderation will remain integral to AI and cybersafety.]

While there are legitimate concerns about censorship and arbitrary moderation practices in prevalent chatbots, it must be understood that AI remains in its early stages and is prone to serious pitfalls. Global institutions and organizations are continually updating their policies on artificial intelligence and the regulations that should apply to it. Soon enough, AI is bound to enter a phase where ubiquity is its most noticeable feature, much as the internet is today. As artificial intelligence goes mainstream and enters sensitive domains, precision, neutrality, and a keen sense of pragmatism will be indispensable. In such a future, AI cannot be left unmonitored or detached from human ethical paradigms and practices. Free speech and expression, while important, cannot be left to the discretion of an autonomous algorithm that lacks objective rationality and grounding in human principles. AI content moderation and supervision will remain not just relevant but essential to the progress of machine learning and related disciplines as a whole.

FAQs

1. Is FreedomGPT safe to use?

While FreedomGPT allows users to run the model on their local computers without an internet connection, it poses other risks. It is capable of generating highly harmful and biased responses, which can constitute a security risk in itself.

2. What is AI moderation?

AI moderation refers to the monitoring and screening of user-generated content and prompts on an AI platform. It keeps the model in check, flags potentially harmful prompts, and trains the model on the kinds of commands it must not respond to. Moderation also allows developers to set precise guardrails that prevent language models from providing false or harmful information.
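
As one concrete example of this kind of tooling, OpenAI exposes a moderation endpoint that screens text before it reaches a chat model. A minimal sketch, assuming an OPENAI_API_KEY in the environment (the model name reflects current documentation and may change):

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input="User-supplied text to screen before it reaches the chat model.",
)

result = response.results[0]
if result.flagged:
    # Categories indicate why the text was flagged (e.g. violence, hate).
    print("Blocked:", result.categories)
else:
    print("Allowed through to the model.")
```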

3. Why are there concerns and risks with AI?

Several concerns and risks surround AI technologies. Beyond the obvious ethical and content moderation concerns, there are tangible risks of biased information, hallucination, prejudice, limited perspective, cybersecurity exposure, and a lack of transparency, to name a few.