As AI technology continues to advance and become more integrated into our daily lives, we are seeing the emergence of generative AI tools that can create everything from art and music to stories and poetry. These tools produce impressive results and can save time and resources across many industries. However, as with any new technology, there are concerns and potential issues that need to be addressed.
One of the most significant concerns surrounding generative AI tools is bias. As a white person using tools like Midjourney, DALL-E, or ChatGPT, I never noticed any significant bias in the results they generated.
However, during the development of our startup, Revel, we noticed something wasn't working as it should.
At Revel, we developed ways for people to generate fun animated avatars from prompts, or even from a single photo of themselves.
We observed that sometimes, when we started with an image of a Black person, the end result transformed them into a white person. Similarly, with certain prompts, or when a person wore a particular outfit, the animation tended to turn them into an Asian person.
What was going on there? Can't the AI see the color of a person's skin or detect their race? Are we building AI tools that are a bit racist?
The answer to that question is no. AI tools don't have a sense of race and don't think one race is better than another. The AI's mistakes come from the data it was trained on. When an AI tool is trained on millions or billions of photos, it looks for patterns and then replicates those patterns when it generates new media.
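Revel's actual models are far more complex, of course, but a toy sketch can make the mechanism concrete. The "generator" below simply reproduces the distribution of its training labels, with no notion of fairness; the labels and the 90/10 skew are invented for illustration, though web-scale image datasets show similar imbalances.

```python
import random
from collections import Counter

# Hypothetical toy "training set": demographic labels for images
# scraped from the web. The 90/10 skew is invented for illustration.
training_labels = ["light-skinned"] * 900 + ["dark-skinned"] * 100

def generate(n_samples: int) -> Counter:
    """A stand-in for a generative model: it reproduces whatever
    distribution it saw during training."""
    return Counter(random.choices(training_labels, k=n_samples))

print(generate(1000))
# ~90% "light-skinned" outputs: the model simply mirrors its data.
```

Nothing in the sampling step "prefers" one group; the skew in the output is entirely inherited from the skew in the input.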
For instance, since most photos of mermaids out there have kids in them, when we ran prompts containing the word "mermaid" on photos of adults, the resulting animation transformed them into kids. That's what the AI was trained to do.
The problem, therefore, is not that the AI itself is biased. Unfortunately, our world has a bias problem, and the AI is simply replicating it and presenting it back to us.
And if you are asking yourself whether this is really a big problem, the answer is yes.
This is a significant problem because we humans tend to internalize what we see. If a young girl sees only Barbie dolls, she will internalize that this is what girls should aspire to be. By allowing AI to replicate societal stereotypes, we reinforce them in the minds of children everywhere.
However, finding a solution is not as simple as training the AI on a better dataset. Getting AI to perform as well as it does today requires a vast amount of data, and gathering and labeling such datasets is so expensive that only a few companies can afford it.
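One cheaper direction, and only a partial fix, is rebalancing the data you already have instead of collecting new data. Here is a deliberately naive sketch of that idea, oversampling under-represented groups so each appears equally often before retraining; real pipelines work on images with far subtler attributes, and everything here is hypothetical.

```python
import random
from collections import Counter

def rebalance(dataset: list[str]) -> list[str]:
    """Naive mitigation sketch: oversample under-represented groups
    so every group appears as often as the largest one."""
    counts = Counter(dataset)
    target = max(counts.values())
    balanced = []
    for group in counts:
        members = [x for x in dataset if x == group]
        # Sample with replacement up to the size of the largest group.
        balanced.extend(random.choices(members, k=target))
    return balanced

skewed = ["light-skinned"] * 900 + ["dark-skinned"] * 100
print(Counter(rebalance(skewed)))
# Counter({'light-skinned': 900, 'dark-skinned': 900})
```

Oversampling is no silver bullet (duplicated examples carry no new information), which is exactly why cheaper ways to gather genuinely diverse data are so valuable.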
This presents an opportunity for startups. If you can find ways to reduce the cost of training data, or of "re-teaching" AI with more diverse data, you will no doubt have a winning solution.
What do you think? Have you encountered any bias problems with the AI tools you've used?