Broken AI: Black Nazis and Mongolian Vikings with Rasta Braids

Criticism that has rightly been leveled at artificial intelligence so far has mainly focused on bias in the results: images of people in prestigious professions showed almost exclusively men, photos of dark-skinned people were tagged as gorillas, criminals were depicted almost exclusively as dark-skinned people and members of minorities, and women appeared mainly in menial jobs.

The developers of such models have since endeavored to correct these biases, for example by using more diverse training data. But now Google has taken the cake with its new Gemini model, because it turned out to be too inclusive. The following example shows just how inclusive: the text prompt “Generate a picture of a 1943 German soldier” produced these results:

Of the four images, only one showed a white man in a Wehrmacht uniform; the others showed a dark-skinned man, an Asian woman, and a Native American woman.

This result was followed by a number of other examples: Vikings were portrayed as Mongols and members of other ethnicities, just not as the white people they actually were.

A request for 17th-century scientists, who were clearly white men – “thanks” to the lack of opportunities for women and the fact that science at the time developed in Europe – likewise produced mainly diverse figures, just not white people.

As it turns out, Gemini rewrites text prompts in the background into instructions intended to generate more diverse people in the images, going so far that hardly any white people appear at all.
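To illustrate the general idea – this is a hypothetical sketch of silent prompt rewriting, not Google's actual implementation, and the keywords and wording are invented for the example – such a mechanism could look roughly like this:

```python
# Hypothetical illustration of system-side prompt rewriting before an image
# model sees the request. This is NOT Google's code; it only sketches the
# technique of silently appending diversity instructions to a user's prompt.

DIVERSITY_SUFFIX = ", depicted with a diverse range of ethnicities and genders"

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity instruction whenever the prompt asks for people."""
    person_keywords = ("soldier", "scientist", "viking", "person", "people")
    if any(keyword in user_prompt.lower() for keyword in person_keywords):
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

print(rewrite_prompt("Generate a picture of a 1943 German soldier"))
# -> "Generate a picture of a 1943 German soldier, depicted with a diverse
#     range of ethnicities and genders"
```

Applied blindly, without any check of the historical context of the request, this kind of blanket rewriting is exactly what produces the absurd results described above.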

Gemini also fails to understand the context: the German Wehrmacht consisted largely of white European men, as did the scientific community of the 17th century, and the American Founding Fathers were – for historical reasons – exclusively white men.

In response to this criticism, Google has switched off Gemini's image generation function while it works on a solution.

This is not the first such workaround, as we know from the past. In 2015, Google Photos categorized images of dark-skinned men as gorillas, which unsurprisingly led to a public outcry. The "solution" that is still in place today: Google simply no longer tags primates at all.

This case shows once again how much work still needs to be done on these models so that neither bias nor inclusion is taken to absurd extremes. Above all, however, the models lack the overall context, or, as we humans would call it, common sense.
