
Bumble dating app may block you if you wrongly report someone


Dating app Bumble has announced that it will now take action against users who knowingly file false reports targeting someone because of their identity, including removing repeat offenders from its platform.

“As a platform rooted in kindness and respect, we want our members to connect safely and free from hate that targets them just for who they are,” said Azmina Dhrodia, Bumble Safety Policy Lead. “We want this policy to set the gold standard for how dating apps think about and enforce rules about hateful content and behavior. We have been very intent on addressing this complex societal problem with principles that celebrate diversity and understand how those with overlapping marginalized identities are disproportionately targeted with hate.”


Dhrodia, an expert on gender, technology and human rights, joined Bumble in 2021. She previously worked on online violence and abuse against women at the World Wide Web Foundation and Amnesty International, and has partnered with several technology companies to create safer online experiences for women and marginalized communities.

“Our moderation team will review each report and take appropriate action. Part of rolling out this policy included implicit bias training and discussion sessions with all safety moderators to examine how bias can surface when moderating content,” Dhrodia said. “We always want to lead with education and give our community a chance to learn and improve. However, we will not hesitate to permanently remove anyone who goes against our policies or guidelines.”

A recent internal analysis by Bumble found that up to 90% of user reports it received about gender non-conforming people were ultimately rejected by moderators for not violating Bumble’s rules. Those reports often contained language about the reported user’s gender and speculation that the profile might be fake. Under the new rules, Bumble may take action against those who knowingly file false or baseless reports solely because of someone’s identity.

The app uses automated safeguards to detect comments and images that conflict with its guidelines and terms; flagged content can then be escalated to a human moderator for review.
