The composition of society and its values are constantly in flux. This has become especially evident over the course of the last century: today’s values differ drastically from those of even a couple of decades ago. Among the most obvious changes, several previously marginalized groups have found greater representation than ever before.

On the flip side, this diversity has not settled in seamlessly. Old biases and prejudices persist and are even joined by new ones. These are manifested in a wide variety of ways, insinuating themselves into the fabric of society even as it shifts. The range of opinions and sensitivities grows wider even as the extremes pull further and further apart.

In such a scenario, the logic of technology might be assumed to be free of the illogical fissures evident in human populations. Artificial intelligence (AI), in particular, seems to promise an unbiased decision maker, free of the sentiments that prop up societal biases. In actuality, however, this assumption is proving to be wrong.

What is AI Bias?

Wooden blocks spell out "Implicit Bias".

AI bias is built into the system, often unknowingly, by the humans who created it and thus is implicit.
Image Credit: lexiconimages / Adobe Stock

The excitement around the development of artificial intelligence has always been high. The idea is not new, and the technology behind it has taken decades to mature, but the concept still stimulates the imagination. The possibilities it opens up appear endless. This buzz, however, has masked some of AI’s disadvantages. A primary one is AI bias.

AI does not act in isolation. It is built from algorithms and then trained on data that gives it the information and context needed to make decisions and act on them. Humans formulate those algorithms, and humans again select the training data. Whatever prejudices the people who design these systems hold, conscious or unconscious, are likely to seep into the AI. The result is AI bias.

The causes of AI bias are similar to those of human prejudice: data that is incomplete or inaccurate. Such flawed data leads the system to draw erroneous conclusions, which in turn produce faulty decisions and biased output.
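To make this mechanism concrete, consider a deliberately simplified, hypothetical sketch in Python. The “model” below is nothing more than a per-group majority vote over invented historical hiring records; it is a stand-in for a real classifier, not how any production system works, but it shows how a skew baked into training data resurfaces unchanged in the model’s decisions.

```python
# A minimal, hypothetical sketch: a "model" that simply memorizes the
# majority outcome per group. Real classifiers are far more complex,
# but the failure mode is the same: skewed data in, skewed decisions out.
from collections import Counter

# Hypothetical historical hiring records: (group, hired). The data is
# skewed: candidates from group "B" were rarely hired in the past.
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": count the outcomes observed for each group.
outcomes: dict[str, Counter] = {}
for group, hired in training_data:
    outcomes.setdefault(group, Counter())[hired] += 1

def predict(group: str) -> bool:
    """Predict the majority historical outcome for the group."""
    return outcomes[group].most_common(1)[0][0]

# The model faithfully reproduces the skew in its training data:
print(predict("A"))  # True:  group A candidates get recommended
print(predict("B"))  # False: group B candidates get rejected
```

Nothing in the code is malicious; the bias enters entirely through the data the system was given to learn from.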

Is ChatGPT Biased?

Text saying 'ChatGPT' within an outline of a folder.

ChatGPT, AI’s latest and most advanced offering, is still subject to concerns about AI bias.
Image Credit: reewungjunerr / Adobe Stock

ChatGPT is the latest development in AI, and the closeness of its output to human responses is causing a widespread furore. The realism of those responses owes partly to the vast dataset it was trained on, the largest used to date. It was also trained on an extensive range of real human responses, which allowed it to pick up natural language patterns.

However, a few months into its release, ChatGPT has clearly revealed inherent bias on several fronts. The large dataset does not appear to have prevented that. OpenAI, the company behind ChatGPT, did attempt to build safeguards into the system: it declines to respond to “inappropriate” prompts or requests for potentially dangerous information.

Even so, these guardrails are easily circumvented, and users have reported instances of gender, racial, and political bias.

How Does ChatGPT’s AI Bias Affect Students?

Since AI bias derives from, and closely resembles, human prejudice, one might reasonably expect its effects to be similar. In practice, however, it is likely to have a more severe and far-reaching impact.

This deeper effect is due to the common perception, described earlier, that because AI is artificial, it is also objective and immune to human prejudice. As a result, AI bias goes unrecognized and unaddressed: the discrimination continues, invisible to most, for a very long time. Patterns of inequality are noticed, if they ever are, only after prolonged use of the technology, unless the user is actively looking for them (which they rarely are).

ChatGPT has caused a splash in many fields, education being among the most prominent. Within a short time of its release, students began turning in essays and other written assignments that had, in fact, been generated by the chatbot. Its ability to collate and present information in simpler terms also attracted those struggling to understand concepts.

Cartoon of a robot working on a desktop

Since its release, ChatGPT has been used by students to generate essays and written assignments.
Image Credit: MyBears / Adobe Stock

The scramble to use the AI in these ways has raised several concerns among educators, as it deprives students of the learning that comes through the writing process. At the same time, ChatGPT is also known to present inaccurate information and make up facts.

Adding to these apprehensions is the bias inherent in ChatGPT. So far, this bias has mostly been noticed and discussed by those who fed the chatbot prompts specifically intended to expose it. Others, including students, continue to use it and, more or less, accept its output as is.

Students may be using ChatGPT to cut out one, a few, or even all of the laborious steps in the writing process. One step that is especially time-consuming (and equally educative) is research. Exposure to a wide range of information during research gives students the opportunity to learn as well as to form their own points of view (biased or otherwise).

In fact, among the most frequently assigned writing tasks are argumentative essays and similar opinion-based pieces that invite students to express their own thoughts and speculations on a topic. Using an AI such as ChatGPT to generate these assignments offloads the independent research and critical thinking onto the machine, along with an unwitting acceptance of the AI’s bias.

Typing on a laptop

Putting off independent research and depending solely on AI-generated content is damaging to students in the long run.
Image Credit: Photo by Christina Morillo

Heavy dependence on AI as a primary source of information also creates the potential for an echo chamber. Once the system picks up on the types of responses a user is looking for, it begins to cater to those expectations: it may highlight information that supports and magnifies the user’s own bias while downplaying or omitting contradictory evidence. From the user’s perspective, however, the information they receive seems objective, being the product of a purely logical, unemotional, non-human AI with no stake in any ideological debate.
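The echo-chamber dynamic can be sketched the same way. The toy simulation below is entirely hypothetical, not a description of how ChatGPT actually selects its answers: a system that nudges a source’s weight upward every time the user engages will, over enough interactions, serve almost nothing but the viewpoint the user already favors.

```python
# A toy, hypothetical feedback loop: engagement rewards a viewpoint,
# which makes it more likely to be served, which earns more engagement.
import random

random.seed(0)

# Two competing viewpoints start with equal weight.
sources = {"viewpoint_x": 0.5, "viewpoint_y": 0.5}

def serve() -> str:
    """Pick a source in proportion to its current weight."""
    return random.choices(list(sources), weights=list(sources.values()))[0]

# Simulate a user who always engages with viewpoint X but only
# sometimes with viewpoint Y; each engagement rewards the shown source.
for _ in range(200):
    shown = serve()
    engaged = shown == "viewpoint_x" or random.random() < 0.3
    if engaged:
        sources[shown] += 0.05

total = sum(sources.values())
for name, weight in sources.items():
    print(f"{name}: {weight / total:.0%} of what the user now sees")
```

After a couple of hundred simulated interactions, the feed is dominated by the viewpoint the user started with, even though the system began perfectly balanced.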

Thus, AI is clearly not free from the bias and prejudice rampant in human minds. It is not truly “objective,” despite popularly held beliefs. In fact, this assumption makes AI bias more damaging: prejudice is expected among humans and is therefore recognized and questioned, while the output of an AI rarely faces the same scrutiny.

As with human prejudice, the only way to free oneself from AI bias is to expose oneself to a wider range of information and experience. In education, this primarily comes with research, traditional or otherwise. Depending solely on AI, where the user does not know what data has been used to train it, is a step in the opposite direction. 

ChatGPT and similar AI hold great potential for education, but at least in their current state, students are better off relying on more traditional methods of research and writing.