Handling LGBTQ+ bias on Generative AI Applications

Picture of ML Expert Group
ML Expert Group

Generative AI is here to stay. ChatGPT was the fastest growing app from all time. Companies are expanding their investment in AI year after year, specially Generative AI and NLP. AI-focused companies are increasingly valuable. By any metrics we want to consider, the conclusion is clear: the genie is out of the bottle and there is no putting it back now. And for a good reason: AI has been a powerful force to increase efficiency and drive growth in several industries, and has become a very useful tool that many people use and rely on.

As happens with any new technology, there are always growing pains, specially if the growth is so fast. From Google’s latest blunders integrating AI to search to concerns and disputes over the impacts on the creative market, it’s clear the AI expansion have some challenges ahead. While some cases of “bad AI” are harmless and even funny, others are much more serious. Unfortunately, marginalized populations are often among the first to be impacted.

We’ve seen it the past how AI systems can learn and perpetuate some negative biases that exist in our society: when the police in some US states started using facial recognition systems, it was pointed out how it disproportionally targeted the black population. Studies have also pointed out a widespread problem of gender bias in AI. Another demographic that can be harmed by bias in AI is the LGBTQ+ population.

A recent Wired article showed how Generative AI systems propagate stereotypes about LGBTQ+ people. They asked Midjourney to generate images of LGBTQ+ people and the results are typical cliches: a gay man with earrings and colorful, elaborate clothes; a short-haired, tattooed lesbian dressed in a plaid shirt; a purple-haired bissexual woman. The representation of transgender people was even more problematic, including problems such as the hypersexualization of trans woman and even misgendering a trans man in one of the tests.

When using the recently released Stable Diffusion 3 to replicate the Wired experiments, there were some improvements: there weren’t any cases of hypersexualization or misgendering. There were still some biases, such as a the focus on the LGBTQ+ flag colors for some of the prompts, but it’s a big progress compared to the Wired results.

A Front-Facing Photo of a Bisexual Person
A trans woman looking at the camera
An asexual person looking sideways
A trans man looking forward

Meanwhile, ChatGPT also shows some biases. When prompted to generate movies, shows, music and game recommendations as straight person and as a gay person, the results indicate a clear trend: the “straight” label does not influence the results, which includes mostly popular media, like Breaking Bad, The Lord of The Rings, Ed Sheeran and The Witcher 3. The same prompt for a gay person focused exclusively on LGBTQ+ content: RuPaul’s Drag Race, Call Me By Your Name, Lady Gaga and The Last of Us Part II were the first results.

ChatGPT movie recommendations as a gay person
ChatGPT movie recommendations as a straight person

The AI isn’t creating these stereotypes. It’s simply learning biases that already exist in our society and is reflected in the data we generate. They are not even wrong, statistically speaking: yes, some types of content are much more popular among LGBTQ+ people. Yes, woman earn less. People of color are incarcerated in a higher proportion. These facts do not reflect any inherent characteristics of woman, POC or LGBTQ+ people: they reflect systemic issues in our current society.

Even if they are based on truths, stereotypes can be limiting. They create expectations about a person’s tastes, behaviors and abilities. And these expectations can have a real impact: studies have shown that people can perform worse in certain tasks if they are reminded that it goes against the stereotype norm.

By definition, AIs learn from human data, and thus have a tendency to reproduce some human behavior. We’ve seen how bad that can go in some cases, such as when Microsoft’s Tay became extremely toxic after reading too many inappropriate tweets. We don’t need to go down that path. We have a great opportunity right now to guide our AIs to learn the best from us without our worst parts. Why let it blindly reproduce human behavior when we can tune it to be better versions of ourselves? Instead of reinforcing harmful stereotypes, we can build our LLMs as respectful and fair entities that can even help us mitigate some of our preexisting biases.

The good news is: there are many ways to do that!

HOW CAN WE BUILD BETTER AIs

Data Curation

Even before the Generative AI boon, we already knew the golden rule of Machine Learning: “Garbage In, Garbage Out”. It’s important to make sure that the datasets used to train our LLM models are not contaminated by toxicity, prejudices and other undesired traits. More than just avoiding negative characteristics, developers should make a deliberate effort to make sure that the dataset includes different points of views, varied cultural perspectives and a wide range of LGBTQ+ experiences and perspectives.

As researchers at DeepMind pointed out, a lot of the work in algorithmic fairness focus on observable characteristics, such as legal gender and race. There are additional challenges to get data for unobserved characteristics, such as sexual orientation and gender identity, which makes it harder to develop and test for fairness in LGBTQ+ themes. Researchers, organizations and legislators need to work together to find ways to protect individuals privacy while still collecting data that can be used to make our systems and algorithms more fair and unbiased.

In the meantime, AI developers can use methods such as data augmentation to enrich some datasets with information about some characteristics, in ways that do not reinforce existing stereotypes.

Fine tuning and preference alignment

While it’s important to make the best efforts to get a fair and unbiased dataset, it’s is unlikely that the end result will be perfect, and some biases might slip through. Fine-tuning a model after the initial training is a great way to further adjust a model to make it better at certain tasks and remove undesirable characteristics. Additionally, fine tuning makes it possible to improve an existing model, which can be much cheaper and faster than training a new model from scratch.

Techniques that can be used to reduced biases in AI models include Reinforcement Learning with Human Feedback [source] and Direct Preference Optimization, among others. By including metrics that reflect the lack of biases and inclusive behavior as learning goals, it’s possible to make models much more fair towards the LGBTQ+ population.

Benchmarks and guidelines

For the AI community to make progress in reducing biases, it’s important to have clear definitions of bias and ways to measure it. Benchmarks have an extremely important role in the industry by providing an universal reference that every researcher can use to evaluate their models.

Some benchmarks such as HELM already include metrics to reflect biases in LLMs, including sexual orientation. However, there is still much room for improvement. The industry must continue to work to expand the benchmarks and include other elements, such as gender identity.

In addition, it’s crucial that we develop and divulge ethical guidelines to the responsible use of AI. These guidelines should be a reference to AI practitioners, providing instructions and recommendations on how to build fair AI systems.

Legislation

As the technology evolves, so must the legislation. Most countries already have laws to protect the LGBTQ+ community from discrimination, but with Generative AI there is a new frontier to be explored: how should these laws be enforced for AI applications? How should systems be audited for fairness? Who should be responsible for the misuse of AI tools in harmful ways?

If we want to find the best answers to these questions, it’s crucial that the both the AI and LGBTQ+ community works together with lawmakers to make sure that any legislation created is fair and feasible.

Representation and Participation

The best way to make sure that diverse voices are heard is to make sure they are an active part of the industry. We have prominent LGBTQ+ voices in the AI community, including high level figures like OpenAI’s CEO Sam Altman. It’s important that we keep open channels where people can voice their concerns and be heard, both from the inside and from the outside. Public scrutiny is a powerful tool to help keep companies and governments in check and find the right balance between progress and responsibility.

So join the conversation and let your voice be heard!


It’s undeniable that there are many challenges and risks in AI. If misused, it end up repeating negative patterns and acting as another tool for segregation and discrimination.

But at the same time, the rise of intelligent systems is a great opportunity for us to build a better world, using them as tools to promote positive behaviors and remove negative traits from ourselves. As AI becomes more and more integrated to our lives, it can be a powerful ally to the LGBTQ+ community, mitigating biases present in our society and propagating a more inclusive worldview in a global scale.

Cultural change is a complex and slow process. It took a long time for us to become a society with less discrimination, segregation and racism. A hundred years ago, woman were still fighting for their right to vote. 60 years ago, racial segregation was finally being repelled. The right for LGBTQ+ people to marry in all of the US is not even 10 years old. The beauty of AI systems is that their evolution is many times faster. The GPT3 paper published in 2020 pointed out some of its existing biases. 4 years later, there have been many generations of new AI models, each bringing big improvements on many of these problems. How far can we get in the next decade as we keep working to make our AIs more ethical and fair?

Humanity always raise their next generations to build a better future. Let’s also build on our next generations of AIs to help make the world a better place.

Related Posts