Advocates of nudity point to benefits such as vitamin D synthesis, temperature regulation, and healthy blood flow, but most online platforms still need a way to keep explicit imagery out of spaces where it is unwelcome.
NSFW image detection APIs let developers create and maintain NSFW-free spaces online at a scale that would be inefficient or impossible for human moderators alone. However, these tools aren’t without their flaws.
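As a rough illustration of how a developer might wire such an API into an upload flow, the sketch below posts an image to a hypothetical detection endpoint and rejects it when the returned score crosses a threshold. The URL, authentication header, and nsfw_score response field are placeholders, not any particular vendor’s API.

```python
# Minimal sketch of calling a hypothetical NSFW-detection HTTP API.
# The endpoint, auth header, and response field below are illustrative
# placeholders, not a specific vendor's actual interface.
import requests

API_URL = "https://api.example-moderation.com/v1/nsfw"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                 # placeholder credential

def is_image_nsfw(image_path: str, threshold: float = 0.8) -> bool:
    """Upload an image and flag it when the returned NSFW score exceeds the threshold."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=10,
        )
    response.raise_for_status()
    # Assumed response shape: {"nsfw_score": 0.93}
    return response.json().get("nsfw_score", 0.0) >= threshold

if __name__ == "__main__":
    verdict = "Blocked" if is_image_nsfw("upload.jpg") else "Allowed"
    print(f"{verdict}: upload.jpg")
```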
Proactive Content Moderation Features
A content moderation service scans user-generated content (UGC) to identify inappropriate text, video, and images on forums, dating sites, online chat rooms, streaming platforms, etc. Based on community guidelines, it detects gore, drugs, violence, explicit nudity, and suggestive nudity to protect users’ privacy and ensure a safe environment for all.
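To make that category list concrete, here is a small sketch of how a platform might translate the scores a moderation service returns into actions under its own guidelines. The category names echo the ones above, while the thresholds and actions are assumptions invented for illustration rather than any real service’s defaults.

```python
# Illustrative mapping from moderation-category scores to policy actions.
# Thresholds and actions are hypothetical and would be set by each
# platform's own community guidelines.
from typing import Dict

POLICY = {
    "gore":              {"threshold": 0.70, "action": "remove"},
    "drugs":             {"threshold": 0.80, "action": "remove"},
    "violence":          {"threshold": 0.75, "action": "human_review"},
    "explicit_nudity":   {"threshold": 0.85, "action": "remove"},
    "suggestive_nudity": {"threshold": 0.85, "action": "age_gate"},
}

# Actions ordered from most to least severe; the harshest triggered action wins.
SEVERITY = ["remove", "human_review", "age_gate", "allow"]

def decide_action(scores: Dict[str, float]) -> str:
    """Return the strictest action triggered by the per-category scores."""
    triggered = ["allow"]
    for category, score in scores.items():
        rule = POLICY.get(category)
        if rule and score >= rule["threshold"]:
            triggered.append(rule["action"])
    return min(triggered, key=SEVERITY.index)

# Example: scores as they might come back from a moderation service.
print(decide_action({"suggestive_nudity": 0.91, "violence": 0.40}))  # -> age_gate
```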
Beyond NSFW detection APIs, image moderation services are also used to identify child sexual abuse material, hate speech, and scammers, and to block spam. The key is to be consistent and transparent in enforcing rules, so that users understand how the platform’s policies and principles are applied.
It is also essential to consider how different cultures and people perceive a specific rule violation when developing an image moderation strategy. For instance, an image of a plus-size Black body squeezed into tight clothing reads very differently from an image of a man whose body has been emaciated by anorexia nervosa or another eating disorder. Different interpretive systems lead to hermeneutic injustice, which can be mitigated by incorporating contextual awareness into the moderation process.
Koo’s In-House Moderation Features
Koo, the Indian microblogging platform that came to prominence as a homegrown Twitter alternative in 2020, has been rolling out proactive content moderation features. Its in-house systems can identify and block nudity, child sexual abuse material, toxic remarks, hate speech, and misinformation on the platform.
Koo’s in-house “No Nudity Algorithm” can reportedly detect and block any attempt to upload images or videos containing nudity, child sexual abuse material, or other offensive content in under five seconds. Users who upload such material are blocked from posting content, being discovered by other users, appearing in trending posts, and interacting with other users on the platform.
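Koo has not published how this pipeline is built, but a minimal sketch of the behaviour described above might look like the following: the scan runs under a short time budget, and an account that trips it loses the posting, discovery, trending, and interaction privileges just mentioned. The function names, the fail-closed timeout handling, and the restriction flags are assumptions for illustration, not Koo’s actual implementation.

```python
# Sketch of an upload gate with a time budget and account-level penalties.
# contains_prohibited_content() stands in for a real nudity/CSAM classifier.
import concurrent.futures
from dataclasses import dataclass

@dataclass
class AccountRestrictions:
    can_post: bool = True
    discoverable: bool = True
    eligible_for_trending: bool = True
    can_interact: bool = True

def contains_prohibited_content(image_bytes: bytes) -> bool:
    """Placeholder for a classifier call; a real system would run a trained model."""
    return b"prohibited" in image_bytes  # stand-in logic for the sketch

def handle_upload(image_bytes: bytes, account: AccountRestrictions,
                  budget_seconds: float = 5.0) -> bool:
    """Scan within the time budget; reject the upload and restrict the account on a hit."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(contains_prohibited_content, image_bytes)
    try:
        flagged = future.result(timeout=budget_seconds)
    except concurrent.futures.TimeoutError:
        flagged = True  # fail closed if the scan overruns the budget
    finally:
        pool.shutdown(wait=False)
    if flagged:
        account.can_post = False
        account.discoverable = False
        account.eligible_for_trending = False
        account.can_interact = False
        return False  # upload rejected
    return True       # upload accepted
```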
Koo’s cofounder Mayank Bidawatka emphasized the firm’s dedication to providing its users with a safe social media experience and said the company will continue to focus on building new tools and processes that can find and remove harmful content from its network. The company also plans to collaborate with third-party fact-checking agencies.
Facebook’s Inconsistent Nudity Policy
Facebook has gotten into hot water over its nudity moderation policy before, including for banning photos of people breastfeeding and removing body-positivity art projects. The problem stems from how vague and inconsistent Facebook’s guidelines are.
For example, one post removed from Instagram was part of a campaign to raise awareness of breast cancer symptoms. It featured several photographs of female breasts with the nipples either covered or out of frame, yet it was deemed to violate the platform’s community standards on “adult nudity” and was automatically removed.
This kind of inconsistent enforcement is problematic for the company because it could be violating users’ rights. The Oversight Board has called on Facebook to revise its guidelines on this subject and has already made suggestions to that effect, focused on clarity and specificity in areas such as hate speech, self-harm, and dangerous organizations.
Facebook’s Human Moderators
When a piece of content violates Facebook’s nudity policy or one of its many other community standards, it gets sent to a human content moderator. These employees use a set of intricate rules to decide whether to leave the content online, delete it, or escalate it up the chain for further review by another moderator with more expertise.
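As a simplified sketch of that keep-delete-escalate triage (not Facebook’s actual review tooling), the decision could be modelled roughly as follows, with the report fields being illustrative assumptions:

```python
# Rough model of a first-line moderator decision: act on clear cases,
# escalate anything ambiguous to a more experienced reviewer.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    KEEP = "keep"
    DELETE = "delete"
    ESCALATE = "escalate"

@dataclass
class Report:
    policy_area: str         # e.g. "adult_nudity", "hate_speech"
    clearly_violating: bool  # moderator judges an unambiguous violation
    confident: bool          # moderator is sure enough to decide alone

def review(report: Report) -> Decision:
    """Leave up, take down, or send up the chain for expert review."""
    if not report.confident:
        return Decision.ESCALATE
    return Decision.DELETE if report.clearly_violating else Decision.KEEP

# Example: a borderline nudity report gets escalated rather than decided here.
print(review(Report("adult_nudity", clearly_violating=False, confident=False)))
```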
These workers are not paid much, and they often suffer traumatic effects from their work. A number of them, like the woman who was fired over removing photos of Reynolds’ pubes, have spoken to the press about their experiences. Many have reportedly been forced to sign non-disclosure agreements that prevent them from discussing their jobs.
Some, like the woman who was forced to leave, have become so traumatized by the job that they have begun smoking weed or espousing conspiracy theories. Yet most of the moderators Motherboard has interviewed have largely internalized the fact that perfect moderation is impossible, and they try to do the best they can.