Beauty in the AI of the Beholder: The Shocking Problems with Dating Apps

Lauren Toulson · Published in CARRE4 · Jun 3, 2021


Photo by Mika Baumeister on Unsplash

In recent years, using dating apps has become as common as meeting someone in a bar, and since the pandemic the need to meet people online has sky-rocketed, with Tinder reporting a record 3 billion swipes on 29th March 2020.

Dating apps use various matching algorithms to boost our chances of liking and matching with the people we see on screen, but do you realise that you may not be getting the same chance at finding the “one” as everyone else?

Dating apps need to be more transparent with their users about where the algorithm ranks them, especially if the AI is racially marginalising them.

Photo by Alexander Sinn on Unsplash

How the algorithm works

Although dating apps are very secretive about how their algorithms actually work, there’s convincing evidence that Tinder uses an “Elo scoring” system while Hinge uses a “stable-matching” algorithm.

Tinder users are ranked not only on how many right-swipes they get: the points each swipe confers also depend on how many points the swiper has. Matching with someone who has a higher score than you will increase your score more, while someone with a low score will lower it.
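
Tinder has never published its formula, but a standard Elo update captures the behaviour described above. Here is a minimal Python sketch under that assumption; the k-factor and the ratings are invented purely for illustration.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A 'wins' the swipe under a standard Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating: float, swiper: float, got_right_swipe: bool, k: float = 32) -> float:
    """Update a profile's score after being swiped on.

    A right-swipe from a highly rated swiper moves the score up a lot;
    the same swipe from a low-rated swiper barely moves it.
    """
    actual = 1.0 if got_right_swipe else 0.0
    return rating + k * (actual - expected_score(rating, swiper))

# A right-swipe from a 1600-rated user lifts a 1200-rated profile by ~29 points...
print(elo_update(1200, 1600, got_right_swipe=True))
# ...while a left-swipe from a 900-rated user costs it ~27 points.
print(elo_update(1200, 900, got_right_swipe=False))
```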

The app therefore queues your card stack with other users who have similar scores. Using the app more also increases how frequently you’re shown to others, since someone active is more likely to make a match.
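
How the card stack is then ordered is, again, not public. The sketch below is a toy illustration of score-proximity queuing with an activity boost; the weighting and field names are assumptions, not Tinder’s real logic.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    score: float            # Elo-style rating from the sketch above
    days_since_active: int  # lower means more active

def card_stack(viewer: Profile, candidates: list[Profile]) -> list[Profile]:
    """Queue candidates with similar scores first, nudging active users up."""
    def sort_key(c: Profile) -> float:
        # Invented weighting: score proximity dominates, inactivity penalises.
        return abs(c.score - viewer.score) + 25 * c.days_since_active
    return sorted(candidates, key=sort_key)
```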

The aim of the app is to create meaningful matches, so the algorithm recognises over-swiping and responds by reducing how often that profile is shown to other users, as well as limiting swipes to 100 a day so that users properly look at profiles first and make more meaningful connections.
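
The 100-swipes-a-day cap is easy to picture as a simple counter. A toy sketch follows; the class and its fields are made up for illustration, not Tinder’s actual API.

```python
from datetime import date

DAILY_SWIPE_CAP = 100  # the limit described above

class SwipeLimiter:
    """Toy daily swipe cap: refuses swipes once today's budget is spent."""
    def __init__(self) -> None:
        self.day = date.today()
        self.count = 0

    def allow_swipe(self) -> bool:
        today = date.today()
        if today != self.day:          # new day: reset the counter
            self.day, self.count = today, 0
        if self.count >= DAILY_SWIPE_CAP:
            return False               # out of swipes until tomorrow
        self.count += 1
        return True
```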

The only way to break through this algorithmic stacking is to “super-like”, a paid feature: super-liking a profile adds your card to that person’s stack regardless of scoring compatibility.

Not so different in principle from Tinder’s approach, Hinge uses the “Gale-Shapley” algorithm, drawing on patterns in who its users like and reject to create the most compatible matches, and gives each user a daily “most compatible” suggestion: two users are presented with each other’s profiles as the algorithm’s choice of the person it deems they are most likely to prefer. The app creates a pool of preferences, which is nicely demonstrated by the parody app Monster Match: if you normally swipe on Vampires, you’ll be shown more Vampires, and if someone else who likes Vampires also likes Werewolves, you’ll be shown a few of those too, until you’re satisfied.
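
Hinge hasn’t published its implementation, but the textbook Gale-Shapley procedure it reportedly builds on is short. A minimal Python sketch with invented users and preferences; every run ends with a stable matching, meaning no two people would both prefer each other over the partners they were given.

```python
def gale_shapley(proposer_prefs: dict[str, list[str]],
                 reviewer_prefs: dict[str, list[str]]) -> dict[str, str]:
    """Classic Gale-Shapley stable matching: proposers propose in preference order."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)               # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged: dict[str, str] = {}              # reviewer -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best reviewer not yet tried
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])           # r trades up; old partner is free again
            engaged[r] = p
        else:
            free.append(p)                    # r rejects p; p will try the next choice
    return {p: r for r, p in engaged.items()}

prefs_a = {"ana": ["bo", "cy"], "ada": ["bo", "cy"]}
prefs_b = {"bo": ["ada", "ana"], "cy": ["ana", "ada"]}
print(gale_shapley(prefs_a, prefs_b))  # {'ada': 'bo', 'ana': 'cy'}
```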

Photo by Sean Stratton on Unsplash

Why it’s morally wrong

In an exposé about Tinder’s algorithm, Carr revealed from his discussion with Tinder CEO Rad that the company holds a lot of data on its users, including their ranking and popularity scores as well as the usual personal data like contact details. This data can now be downloaded, which I tried myself. If requested, Tinder sends you a fairly incomprehensible list of how many right and left swipes you made on which dates, but nothing meaningful, and certainly nothing like the detail you would expect the company to access internally.

Each user has a ranking, and users, especially paying ones, deserve to know whether they are getting their money’s worth or have been cast to the bottom of the pile. The OECD promotes artificial intelligence that respects human rights, recommending transparency alongside responsible disclosure. This points to a possible need for a mediator, such as a psychological professional, to judge the ethics of disclosing potentially ego-harming information.

And let’s talk about the racial bias…

The biggest issue is the responsibility to report the racial bias embedded in the AI. Dating website OKCupid found that, of all demographics, black women and Asian men received the fewest connections, and that they interacted with white users more than white users interacted with them. In systems like Tinder’s and Hinge’s, this would mean black women are less visible and less likely to be recommended to other users, because with fewer swipes they have a lower score. The AI thus further entrenches the bias. Users have a right to know if they are being racially marginalised, and should have the chance to opt out of the algorithm and gain better visibility.
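
To see how a swipe gap compounds, consider a toy simulation (every number in it is invented): a group that starts out receiving fewer right-swipes gets a lower score, is shown less often, and so receives even fewer swipes.

```python
import random

random.seed(0)

def simulate(swipe_rate: float, rounds: int = 20) -> float:
    """Toy feedback loop: visibility tracks score, score tracks swipes received."""
    score = 1000.0
    for _ in range(rounds):
        visibility = max(score / 1000.0, 0.1)        # lower score -> shown less
        impressions = int(100 * visibility)
        right_swipes = sum(random.random() < swipe_rate
                           for _ in range(impressions))
        score += 2 * (right_swipes - 0.5 * impressions)  # below-average rate drains score
    return score

# A modest gap in how often two groups are right-swiped compounds over time.
print(round(simulate(0.50)))  # hovers around 1000
print(round(simulate(0.40)))  # drifts well below 1000
```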

As McMullen asks:

“Where should the line be drawn between preference and prejudice?”

While Tinder say they don’t collect data on ethnicity, it is clear that the AI nonetheless reinforces society-specific ideals of beauty, and does nothing to help users escape them.

The apps reduce our ability to self-filter and instead recommend matches based on the tastes of the majority (which means more visibility for white users). Users may never get to see the person who is perfect for them, because the algorithm, judging from their previous swipe history and the tastes of others, doesn’t think they match.
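
Here is a sketch of the kind of majority-taste recommendation this paragraph describes (the names and swipe histories are invented): a profile surfaces because people with overlapping swipe histories liked it, so the majority’s taste decides what you see next.

```python
# Hypothetical swipe histories: user -> set of profiles they right-swiped.
swipes = {
    "you":   {"vlad", "mira"},
    "user2": {"vlad", "mira", "wolf"},
    "user3": {"vlad", "wolf"},
}

def recommend(user: str, history: dict[str, set[str]]) -> list[str]:
    """Score unseen profiles by how many similar users liked them."""
    seen = history[user]
    scores: dict[str, float] = {}
    for other, liked in history.items():
        if other == user:
            continue
        overlap = len(seen & liked)                # taste similarity
        for profile in liked - seen:
            scores[profile] = scores.get(profile, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", swipes))  # ['wolf'] -- the majority's taste, not necessarily yours
```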

The designer of the Monster Match app suggests users should be able to opt out of the algorithm to gain more autonomy, or to erase their swipe history and start from a clean slate. More transparency and more choice have been shown to create a sense of a more successful match. A design like this is therefore not only morally better; its users may also have a greater perceived sense of romantic success.

Responding to Regulation

Photo by Guillaume Périgois on Unsplash

The first ever AI-specific regulation was released in April this year by the European Commission and sets out the requirement for “IA for AI”: Impact Assessment for Artificial Intelligence. This specifies the need to document all risks, and specifically how those risks are being addressed, with regular audits.

Dating apps responded to the introduction of GDPR by allowing users to download their data (albeit in a relatively meaningless form), so how will they address racial bias? Since dating apps effectively discriminate by marginalising certain people, you might expect them to be treated as high-risk AI, yet under the new regulation they do not fall into the ‘high-risk’ category. The Commission has drafted future law for the coming years, which includes banning AI that manipulates behaviour, such as advertising and dating apps.

There hasn’t been any recent evidence from Tinder or other dating apps that they are attempting to tackle bias and discrimination, or to give users more autonomy by letting them opt out of the algorithm, aside from Tinder’s 2019 statement to Wired denying that race plays any role in its algorithm.

Photo by Kelly Sikkema on Unsplash


Studying Digital Culture, Lauren is an MSc student at LSE and writes about Big Data and AI for Digital Bucket Company. Tweet her @itslaurensdata