Twitter is giving you a chance to think about what you just typed before hitting that reply button. Yup, users can stand down; the website will do the dragging for them.
The platform recently rolled out a test feature that prompts users with the option to revise replies containing “harmful language” before sending them. For now, the feature is available only to some iOS users and is limited to replies.
When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
It isn’t clear what Twitter considers harmful language, though. Some users report that Twitter is telling them to revise tweets that contain swear words.
the fuck is this?? pic.twitter.com/BFiOyY3zEo
— Ruth Bader Ginsburg Updates (@goodatsexguy) May 5, 2020
Late last year, Instagram tested a similar feature to curb bullying on the platform, nudging users to reconsider captions that IG’s artificial intelligence flagged as similar to previously reported ones.
Does that mean I can’t say “fuck” now? Twitter does have this policy on hateful conduct, which says anyone can post content as long as it: 1) doesn’t promote violence, 2) doesn’t wish serious harm on another person or group, 3) doesn’t incite fear and 4) doesn’t degrade others with harmful stereotypes.
It doesn’t say that cussing is prohibited or even frowned upon on the site. It seems more like Twitter is gently reminding us to mind what we say before we get suspended.
While the initiative to promote safer online spaces is welcome, curbing harmful content takes more than revising tweets for bad words mom would have soaped my mouth for.
Header art by Rogin Losa