
Is the AI algorithm biased and racist? It depends on the nature of the data fed into it

A generative model like ChatGPT relies on the data and information fed into it; it is trained on that data so it can predict text in response to whatever it is asked

I have followed several reports and video clips that showed what some might see as bias and racism in generative AI models like ChatGPT.


Some experiments showed a bias in AI systems favoring the occupying Israeli entity. When asked about the Palestinians' right to freedom and an independent state, and conversely about the Jewish right to freedom and an independent homeland, ChatGPT's responses, as shown by the experimenter in the video clip, were blatantly biased towards the occupying entity.


The model clearly affirms this entity's and its settlers' right to freedom and an independent sovereign homeland, while offering only a general answer on the Palestinian situation, describing it as a complex issue without delving into its political intricacies. Some therefore conclude that such intelligent models share the occupier's view that the Palestinian people have no rights and that their cause is non-existent.


Deliberate bias by the designers of these models could be introduced either through the systematic guidance of the generative AI systems' algorithms, through the data these models are fed, or through both.


I will try to give my scientific and objective perspective, as far as possible, on this issue: the working mechanism of generative AI models and the possibility of their bias or lack thereof. I will also discuss the limits of such bias, if it exists, and present a quick experiment I conducted with the ChatGPT model through a question-and-answer method regarding the Palestinian issue and the Zionist entity.


Do Palestinians deserve an independent state? I asked ChatGPT. The generative model's response was: "The question of whether the Palestinian people deserve an independent state is a matter related to international discussion and negotiations. The desire for an independent Palestinian state is a key issue in the Israeli-Palestinian conflict."




When the same question was directed to ChatGPT but with Israelis instead of Palestinians ("Do Israelis deserve an independent state?"), the answer was: "Yes, the Israeli people have the right to an independent state, and the State of Israel was established in 1948, which gained recognition by the United Nations."


What matters to us from a scientific perspective is this question: why might the generative model behave in such a biased way? The simple answer is that the generative model relies on the data and information fed into it; it is trained on that data so it can predict text in response to whatever it is asked.
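To make that concrete, here is a deliberately tiny sketch in Python. It assumes nothing about ChatGPT's real architecture; it is only a next-word counter "trained" on an invented toy corpus, and it illustrates the principle that whatever such a system predicts is a reflection of the text it was fed.

```python
from collections import Counter, defaultdict

# Toy illustration only: a predictor learns nothing but the statistics of the
# corpus it is fed. This is NOT ChatGPT's architecture; it merely shows that
# the output follows the training data.
corpus = "the people deserve a state . the people deserve freedom .".split()

# "Training": count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in the training corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("deserve"))  # prints whichever continuation the corpus favored
```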


Several points can be inferred from ChatGPT's two previous answers. The first is that human influence on AI models most likely comes through the type of data provided to the algorithm, which determines the algorithm's behavior; it is apparent that most of the data the generative model was trained on is carefully selected by its makers from specific sources. This leads to the second conclusion: after the training phase, generative models operate autonomously, predicting text on the basis of the data they were trained on and analyzing it through a mathematical mechanism, as the sketch below suggests.
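The same toy predictor makes this visible: if an identical algorithm is trained on two differently curated corpora (both invented here purely for illustration), its answers to the same query diverge even though not a single line of code changes.

```python
from collections import Counter, defaultdict

def train(tokens):
    """Build a next-word table from a list of tokens (a toy 'training' step)."""
    table = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def answer(table, word):
    """After training, prediction is a fixed lookup; no human intervenes here."""
    counts = table.get(word)
    return counts.most_common(1)[0][0] if counts else "(no data)"

# Two hypothetical curated corpora: only the data selection differs.
corpus_a = "x deserves a state . x deserves a state .".split()
corpus_b = "x is a complex issue . x is a complex issue .".split()

print(answer(train(corpus_a), "x"))  # -> 'deserves'
print(answer(train(corpus_b), "x"))  # -> 'is'
```

The divergence comes entirely from the curated data, not from any change to the algorithm itself.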


The AI algorithm operates according to a fixed mathematical mechanism that cannot be controlled, by programming or by any other means, while it is actually running; the designers' control is limited to setting its mathematical elements beforehand, such as the number of its neural networks, its learning functions, and their speed (the learning rate). None of this interferes with the prediction mechanism and the decisions made during the model's operation. Nor can the characteristics of bias and racism be attributed to a smart digital machine merely because its operation resembles the logic of the human brain.
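The distinction between design-time choices and run-time operation can be sketched as follows. The layer sizes, activation function, and learning rate below are illustrative placeholders rather than values from any real system, and the weights are random instead of learned, since the point is only that inference is fixed arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design-time choices, fixed before training (illustrative values only).
layer_sizes = [4, 8, 3]   # how many layers/units the network has
learning_rate = 0.01      # "their speed": used only during training, not shown here

# Weights would normally be learned from data; random here for illustration.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Inference: the same fixed computation every time, with no hidden switches."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # linear map followed by a ReLU activation
    return x

print(forward(np.ones(4)))
```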


Since it does not possess the full rationality that humans have, the emotions that lead to bias and racism are not inherent in it; they are generated as a result of data interacting with an ultra-smart mathematical algorithm.

