Probably the best thing about AI and the tools powered by it is that they’re largely available to everyone with an internet connection. But this doesn’t mean they were made for everyone. When talking about AI, “we forget that we’re talking about datasets or language models,” says Jamie Cohen, PhD, an assistant professor at Queens College of the City University of New York with a specialty in digital culture and internet literacy. “The major models we’re using today are extremely exclusive, and that means they’re very problematic.”
TL;DR: The “machine learning” behind AI has to learn from something. And the majority of models are created and trained by cis white men. The datasets used to train bots are often open-source, meaning they’re publicly available, and many are scraped indiscriminately from the internet (the place where 4chan lives, in case you need reminding).
“As a matchmaker, where I give pause is on how AI might socially engineer which profiles people view,” says Maria Avgitidis, CEO of Agape Match and host of the Ask a Matchmaker podcast. “We’ve already seen that AI is discriminating against people based on gender or race when reviewing job applications at certain companies. Why wouldn’t it also discriminate against people in the online dating experience?”
For now, as more and more people turn to AI-assisted profile curation, swiping, and chatting, it’s on all of us as users to be aware of this potential bias, to raise red flags about it, and to demand better from the companies behind the tools we engage with.