Facebook updates Community Standards, lists new bans

Facebook Log-in page

Facebook, the undisputed king of social media, has updated its Community Standards, listing new guidelines to help users understand what they can and can't share on the site. Facebook hopes the revamped guidelines will give users clarity on what content is acceptable on the social network.

Among the changes Facebook implemented are rules concerning visuals. The overhauled guidelines on graphic violence and nudity remain strict but have been made more interactive.

The site explains: "People sometimes share content containing nudity for reasons like awareness campaigns or artistic projects. We restrict the display of nudity because some audiences within our global community may be sensitive to this type of content - particularly because of their cultural background or age. In order to treat people fairly and respond to reports quickly, it is essential that we have policies in place that our global teams can apply uniformly and easily when reviewing content." 

The new Community Standards also include a section on "Dangerous Organizations," stating that the social networking site will not allow terrorist or organized crime activity on the network. Facebook also reserves the right to remove content that shows support for these types of organizations. Furthermore, the new standards underscore the need for sensitivity toward victims of violence and discrimination.

Apart from the above, the new guidelines list banned content relating to bullying and harassment, criminal activity, direct threats, attacks on public figures, regulated goods, sexual violence and exploitation, and even posts about self-injury.

The Community Standards will also help people understand what content they can ask to have taken down. Monika Bickert, global head of content policy for the social networking site, told the BBC that the updated guidelines should also help address confusion over why Facebook still needs to investigate before removing content at a user's request.

"We [would] send them a message saying we're not removing it because it doesn't violate our standards, and they would write in and say I'm confused about this, so we would certainly hear that kind of feedback," she explained.