Facebook Inc. will no longer allow graphic images of self-harm on its platform as it tightens its policies on suicide content amid growing criticism of how social media companies moderate violent and potentially dangerous content.
The social network also said on Tuesday that self-injury-related content will become harder to find on Instagram and will no longer appear as recommended content in the photo-sharing app's Explore section. (bit.ly/2k9HRE1)
Facebook's statement comes on World Suicide Prevention Day and follows Twitter Inc.'s remarks that content related to self-harm will no longer be reported as abusive in an effort to reduce the stigma around suicide. (bit.ly/2k9Q8bb)
About 800,000 people die by suicide every year, or one person every 40 seconds, according to a report by the World Health Organization.
Facebook has a team of moderators who watch for content such as live broadcasting of violent acts as well as suicides. The company works with at least five outsourcing vendors in at least eight countries on content review, a Reuters tally showed in February.
Governments globally are wrestling with how to better control content on social media platforms, which are often blamed for encouraging abuse, spreading online pornography and influencing or manipulating voters.
Last month Amazon.com Inc. told Reuters that it plans to show helpline phone numbers to customers who query its site about suicide, after its search suggestions pointed users toward nooses and other potentially harmful products.
Alphabet Inc.'s Google, Facebook and Twitter have already been issuing helpline numbers in response to user queries involving the term "suicide."