Warden threw a party in the Facebook Jail
By the time you read this, I should be out of “Facebook Jail.” I couldn’t post or comment on Facebook for 24 hours last week. For those who don’t use Facebook or haven’t had its virtual cell door slammed in your face, here’s a description of how you might serve time.
In the wake of harsh criticism for failing to block or otherwise limit hate speech or misinformation about political or health issues – particularly during the 2016 U.S. presidential election – Facebook in 2019 attempted to develop and enforce stricter “Community Standards.” The list of these standards takes quite a while to read and includes, but is not limited to, topics such as “violence and incitement,” “dangerous individuals and organizations,” “coordinating harm and promoting crime,” “hate speech” and “violent and graphic content.” Any violation of these standards can provoke a warning or suspension of posting and commenting privileges for anywhere from 24 hours to 30 days.
I suppose that Facebook’s attempt at what many call censorship is well intentioned, if long overdue. None of us wants to see “offensive” content on social media, or even on newsgroups and blogs. But the problem in policing such matter is twofold: Who decides what counts as “offensive”? And how do you find and flag or remove every violation? With more than 2.9 billion users, can we reasonably expect Facebook moderators to catch every post that someone somewhere might find offensive? No, we can’t.
So Facebook relies on artificial intelligence (AI) to scan posts and flag those that violate its standards. Guy Rosen, a Facebook vice president, has admitted the obvious by saying in interviews that context plays a prime role in determining whether something is actually “hate speech” or, perhaps, a pun, a play on words or sarcasm. Facebook’s current system uses both automatic AI flagging and manual review to determine the intention behind the words. But, in the absence of tone of voice or facial expression, how can anyone determine intention in the virtual world? We can’t.
After this became apparent, users of email, blogs and social media developed a kind of preemptive strike to keep people from being offended. Thus the use of a “sarcasm” tag before and after a statement, or the inclusion of “JK” – for Just Kidding – after one. Yet, just as spell-checking software often fails miserably at determining context, so do Facebook’s bots often miss the poster’s intention.
That’s why, for example, a photographer friend was thrown in the Facebook cyberslammer for posting that when a friend’s daughter returned home, he’d like to “shoot” her. Another was blocked for threatening to give someone “50 lashes with a wet noodle.” I have been jailed for 24 hours on two occasions. The first involved a comment about “hanging chads,” which Facebook’s bot apparently interpreted as a threat against poor ole Chad. The second involved a more nuanced interpretation. I had posted a picture of a friend wearing a backstage pass lanyard backwards – with the blank, white side of the pass showing. I said he apparently was endorsing “White Power.” Slam!
OK, so maybe I should’ve thought twice before saying that, knowing that we now are offended by even the most benign comments. Maybe I should’ve ended the post with “JK.” So I’ll take partial blame. If being overzealous to the point of micromanagement saves us from even one more post containing misinformation or incitement to violence, I’ll do my stretch.
Or maybe I’ll use my “Get out of Facebook Jail free” card.