AI Biases

Bias: A particular tendency, trend, inclination, feeling, or opinion, especially one that is preconceived or unreasoned. (https://www.dictionary.com/browse/bias)

AI bias occurs when AI systems produce results that reflect and perpetuate human biases within a society, including historical and current social inequality. Bias can enter through the initial training data, the algorithm itself, or the predictions the algorithm produces.

In all AI image generators, the quality of the outputs depends on the quality of the data sets: the millions of labeled images that the AI has been trained on. If there are biases in the data set, the AI will acquire and replicate those biases. Studies have shown that images used by media outlets, global health organizations and Internet databases such as Wikipedia often have biased representations of gender and race. AI models are being trained on online pictures that are not only biased but that also sometimes contain illegal or problematic imagery, such as photographs of child abuse or non-consensual nudity. These images shape what the AI creates.
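One way to surface such skew before training is a simple representation audit of the data set's annotations. The sketch below is illustrative only: the record fields (`label`, `gender`) and the values are hypothetical stand-ins for whatever metadata a real image data set provides.

```python
from collections import Counter

# Hypothetical image metadata; in practice these labels would come from a
# data set's annotation files (field names and values are illustrative).
records = [
    {"label": "doctor", "gender": "male"},
    {"label": "doctor", "gender": "male"},
    {"label": "doctor", "gender": "female"},
    {"label": "nurse", "gender": "female"},
    {"label": "nurse", "gender": "female"},
]

# Count how each profession is represented by gender.
by_label = Counter((r["label"], r["gender"]) for r in records)

for label in sorted({r["label"] for r in records}):
    total = sum(n for (l, g), n in by_label.items() if l == label)
    for (l, g), n in sorted(by_label.items()):
        if l == label:
            print(f"{label}/{g}: {n / total:.0%}")
```

On a real data set, heavily lopsided shares for a category (here, every "nurse" image labeled female) are exactly the kind of imbalance a generative model will learn and reproduce.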

Several types of bias can be distinguished:

  1. Selection Bias: Occurs when the training data doesn't represent reality, often due to incomplete or biased sampling.
  2. Confirmation Bias: AI relies too heavily on existing beliefs or trends, reinforcing biases and overlooking new patterns.
  3. Measurement Bias: Collected data systematically differs from actual variables of interest, leading to inaccuracies.
  4. Stereotyping Bias: AI reinforces harmful stereotypes, like facial recognition being less accurate for people of color.
  5. Out-group Homogeneity Bias: AI struggles to distinguish individuals not in the majority group, leading to misclassification, especially for minority groups.
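Selection bias (type 1 above) can be made concrete with a small simulation. In this hypothetical sketch, two groups differ in a single numeric feature, group B is heavily underrepresented in the training sample, and a simple Gaussian classifier learns its class priors from that biased sample. All group names, feature values, and sample sizes are invented for illustration.

```python
import math
import random

random.seed(0)

CENTERS = {"A": 0.0, "B": 2.0}  # hypothetical feature means per group

def draw(group, n):
    """Draw n (feature, group) samples around the group's center."""
    return [(random.gauss(CENTERS[group], 1.0), group) for _ in range(n)]

# Selection bias: group B is heavily underrepresented in the training data.
train = draw("A", 950) + draw("B", 50)

def fit(data):
    """Estimate each group's mean and prior from the (biased) sample."""
    model = {}
    for grp in ("A", "B"):
        vals = [x for x, g in data if g == grp]
        model[grp] = (sum(vals) / len(vals), len(vals) / len(data))
    return model

def predict(model, x):
    """Pick the group with the higher Gaussian log-likelihood + log-prior."""
    def score(grp):
        mean, prior = model[grp]
        return -((x - mean) ** 2) / 2 + math.log(prior)
    return max(("A", "B"), key=score)

model = fit(train)

# Evaluate on a balanced test set: accuracy collapses for group B,
# because the learned prior encodes the biased sampling.
test = draw("A", 1000) + draw("B", 1000)
for grp in ("A", "B"):
    pts = [x for x, g in test if g == grp]
    acc = sum(predict(model, x) == grp for x in pts) / len(pts)
    print(f"accuracy on group {grp}: {acc:.2f}")
```

The point of the design is that nothing in the classifier is malicious: it simply learns that group B is rare and shifts its decisions accordingly, which is how skewed sampling becomes systematic misclassification of the minority group.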

An analysis of more than 5,000 images created with Stable Diffusion found that the model takes racial and gender disparities to extremes worse than those found in the real world.

As these tools proliferate, the biases they reflect don't just perpetuate stereotypes that threaten to stall progress toward greater equality in representation; they could also result in unfair treatment. Take policing, for example. Using biased text-to-image AI to create sketches of suspected offenders could lead to wrongful convictions.

This problem matters because the increasing use of AI to generate images will further exacerbate stereotypes. Many reports, including the 2022 Recommendation on the Ethics of Artificial Intelligence from the United Nations cultural organization UNESCO, highlight bias as a leading concern.

In the following posts I will analyze in more detail specific cases of AI-generated images that contain biases.
