
Every gossip has multiple sides to it.

  • Writer: Maaz
  • Mar 1
  • 7 min read

No matter how many times you repeat something, your words may change each time unless you are reading them from somewhere they are set down. It is human nature to adapt to change, and to change things so that they suit us. We've all heard gossip. We've all spread gossip. Some of us have been the gossip. Yet the same story feels a little different once it has passed through several minds. Sometimes the words are warped between sharp glances. Sometimes subtle gestures do the talking. Every time someone speaks, our ears want to dig into the new information. We process every syllable carefully, and yet some syllables carry more weight than others.

Artificial Intelligence is another form of gossip. It provides us with the data it is itself fed. It learns what we provide, and then we expect it to give us an accurate result. Think about it: however often we hear something from another person, we don't accept it until it feels right to us. So why does AI escape this instinct of ours? AI chatbots were created to hold and manipulate data so that we can access it any time we want. And it is not just data anymore; AI is impersonating a human being. ChatGPT has started giving relationship advice while therapists practise all around the world, and we take its speculation as correct. Interesting, because if the same thing were told to us by a human, there would always be a doubt in our minds.

We have come to rely on AI so much that we forget that, scientific achievement or not, it is still made by humans. Our hands are the ones feeding it data. Our voice gives it a voice. Our eyes are its centre of gravity. We have slowly been handing AI pieces of ourselves. So how can we say it carries no bias? How can we believe it holds no judgement?

In the race to transform technology, and to bring about a future where AI thrives as the right hand of humanity, we forgot that this same humanity is the maker of AI. Our dependence on AI is mutual, for without our knowledge, AI would have no information about the world. All that information has been fed to it over a long period of time, and whatever it is fed most of, it comes to treat as the most honest and correct.


In this fast-paced world, even medicine is being transformed by Artificial Intelligence.

In a world like ours, it’s scary how people have started depending on AI for everything they do, even for the most important thing: their health. ChatGPT has become a popular chatbot among the public for checking symptoms online to work out what health problems they might have, instead of going to a doctor. It is easy nowadays to log our symptoms and get ourselves diagnosed within minutes.

What people fail to realize is that it often misguides them, and this is a major consequence of AI bias. AI is trained to assess their problems based on its analysis of previous medical records. This data, however, is often neither fair nor complete. Moreover, every individual has a different medical history, and a general conclusion may not apply to every individual.

If someone enters symptoms such as a headache or body ache, AI may jump straight to serious illnesses such as a brain tumour or cancer, when the problem might just be lack of proper sleep or stress. Such conclusions can make people worried and anxious about their health, turning a small problem into something terrible in their minds.

AI can never judge a situation like a true doctor. The bias present in its system leads to wrong judgements, which can lead to even bigger health issues. Hence, people must consult a real doctor for an accurate diagnosis. It is tempting to look for a simple answer to some symptoms, but a quick, definite-sounding result is not an adequate solution where our health is concerned.


Feminism has been all the rage recently; however, it isn’t just our species that is partial to one gender.

AI also reinforces a lot of gender bias. The results generated by AI are based on unfair data about men and women. The data fed to AI comes from past experience, and so it becomes biased even when no bias is intended, since past generations favoured the male population for far longer than the new generation that sees everyone as equal.

AI can analyze massive amounts of information and make educated decisions. But at the same time, it can also create bias when interpreting and acting on that information. It is trained on whatever data it is given. It is like a child being taught: what we provide is what it learns. One of the most visible examples of AI gender bias appears in image creation, which picks up existing stereotypes and thereby reinforces the prejudice against women.

As a test of image generation, Bloomberg requested more than 5,000 AI images and found that, “The world according to Stable Diffusion is run by white male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.” A similar study of Midjourney’s art generation requested images of people in specialized professions. The results showed both younger and older people, but the older people were always men, reinforcing gender bias about the role of women in the workplace.

Unfortunately, AI does not understand the concept of fairness. It only copies what it has collected from its training data, and the result is prejudice against people. These prejudices are then fed back to us, because we are the ones who eventually use the information AI generates. Its biases eventually reflect in our decisions and thoughts.


Information is a weapon; however, if incomplete, it can become a complication.

AI analyses a situation using statistics from the internet, a hub of much of the information in the world. Yet it may miss important facts, ignore certain details, or rely on data that is incomplete, outdated, or biased. It states facts straight away, but it does not truly understand the context behind a situation. This leads to a lot of misunderstandings, wrong beliefs, and poor decisions.

AI also does not disclose its sources, so the user never knows whether a result comes from reliable places or not. Readers are thus misled in the wrong direction, which affects the way they analyse a situation. AI may focus on only one side of a story and ignore other important perspectives. A person reading the information may trust it without question, assuming AI has understood everything correctly, and end up thinking the wrong way without ever knowing the full story.

When AI algorithms detect patterns of historical bias or systemic disparity embedded in the data they are trained on, their conclusions can reflect those biases and disparities. And because machine learning tools process data on a massive scale, even small biases in the original training data can lead to widespread discriminatory outcomes. In the end, it is our minds that are influenced in the wrong way. We are the ones led astray by the biases present in AI. This makes us wonder whether AI is really the most reliable source of information, and how much we should limit our use of it.
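As a rough illustration of that amplification (a hypothetical toy example, not any real system), consider a model that simply learns to favour the majority pattern in its training data. A modest 60/40 imbalance in the data becomes an absolute 100/0 skew in the model's output:

```python
from collections import Counter

# Hypothetical toy dataset: 60% of historical outcomes went to group "A",
# 40% to group "B" -- a modest imbalance in the training data.
training_labels = ["A"] * 60 + ["B"] * 40

# A naive model that always predicts the most common label it was trained on.
def majority_predictor(labels):
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return most_common_label

# Ask the "model" for 100 predictions.
model_output = [majority_predictor(training_labels) for _ in range(100)]

print(Counter(training_labels))  # Counter({'A': 60, 'B': 40})
print(Counter(model_output))     # Counter({'A': 100}) -- group B vanishes
```

Real models are far more nuanced than this majority-vote caricature, but the direction of the effect is the same: a skew in the data tends to come out stronger, not weaker, in the predictions.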


Not all bias is unintentional, because it is human nature to favour what we like and to be biased towards our own values and thoughts.

Every industry has faults that stem from intentional actions, and the AI industry is no exception. The people building AI decide what it learns and what it does not. This happens when the creating organization wants to feed a biased or personalized opinion to the consumer, whether to promote its business goals or its social and political outlook. As a result, AI does not provide neutral or balanced responses and can influence people’s thinking unfairly. Such intentional bias can mislead users and reinforce inequality instead of supporting fairness and truth.

AI models function by analyzing large sets of data through machine learning. Therefore, when this data is manipulated, the AI models are manipulated too, and then so are we! Hence, the prejudices and biases of the individuals and teams who create AI are also reflected in the biases that appear in AI.


Disaster mitigation is a well-established practice, and AI bias, too, can be successfully mitigated.

AI biases need to be mitigated to prevent issues that can harm us, and that requires a comprehensive approach. The following are ways in which we can mitigate AI bias:

  1. Using data pre-processing techniques to reduce the influence of discrimination and prejudice before AI models are trained.

  2. Rules and guidelines that specifically prevent bias from being introduced during development, to ensure fair outcomes.

  3. Data post-processing adjusts the outcomes of AI models to help ensure fair treatment. In contrast to pre-processing, this calibration occurs after a decision is made.

  4. AI-made decisions should be audited so that their legitimacy can be checked.

  5. Developers should provide transparency regarding the decisions AI models make.
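A minimal sketch of the first technique, pre-processing by reweighting (the dataset and group names here are hypothetical): samples from under-represented groups are given larger weights, so that every group contributes equally when the model is later trained.

```python
from collections import Counter

# Hypothetical training set where group "A" is heavily over-represented.
samples = ["A"] * 80 + ["B"] * 20

counts = Counter(samples)          # how many samples per group
n_groups = len(counts)             # number of distinct groups
total = len(samples)               # total number of samples

# Reweighting: give each sample a weight inversely proportional to its
# group's frequency, so each group's total influence is equal.
weights = {group: total / (n_groups * count) for group, count in counts.items()}

# Sanity check: each group's weighted contribution now sums to the same value.
weighted_totals = {group: counts[group] * weights[group] for group in counts}

print(weights)          # {'A': 0.625, 'B': 2.5}
print(weighted_totals)  # {'A': 50.0, 'B': 50.0}
```

In practice these per-sample weights would be passed to the training step (most machine-learning libraries accept sample weights), so the minority group is no longer drowned out; libraries such as AI Fairness 360 provide this "reweighing" as a ready-made pre-processing step.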


Conclusion

Artificial Intelligence is indeed a very strong tool. It has been helping us in ways we hadn’t even imagined, and its uses are by now too many to count. We cannot neglect how it has been influencing our lives, rapidly changing the future we imagined. However, we should not imagine artificial intelligence to be all-knowing. It has its fair share of problems, and bias is one of the biggest.

That makes it all the more important for us to find ways to stop these biases from spreading through our thoughts and the working of our minds.

By Tanisha & Yashita

B-tech (CSE A, IIOT 2nd Year)

