Struggles of AI

TikTok Sees Wave of Takedown Disputes After Automation Shift

TikTok is fielding a wave of user complaints about content takedowns and account suspensions after the platform’s shift to more automated moderation. TikTok is removing content and accounts based on false violations, one user told us. Others expressed frustration over seemingly frivolous takedowns. Content moderation experts told us automation is necessary at TikTok’s scale, but said the platform can take steps to minimize mistakes.

TikTok removed dozens of videos from the account @bexonic over the past six months. Account owner Rebecca, 42 and a mother of four, has been diagnosed with multiple connective tissue disorders, as well as stage 3 chronic kidney disease. Her account is nearing 100,000 followers and 1 million likes.

She described a recent example of TikTok’s moderation. The platform removed a comment in which she called another user’s content “disgusting” for mocking disabled individuals. That user’s content violated community guidelines, but it remained on the platform, Rebecca said. She shared examples of her own videos being removed; the platform often cites nudity or minor-safety issues in these takedowns, she said. She described two incidents in which she “stitched” another user’s content onto her own videos.

In both instances, the other user was a scantily clad woman, and Rebecca was fully clothed. The first stitch was removed for nudity, but the original video was never taken down. The second was removed because the other user was a minor, but the minor’s content remained untouched. There have been dozens of other examples in recent months, she said: “You look at your videos thinking I’m not quite sure what I did to violate. There’s no nudity. There’s nothing.” Many users have experienced the same issues since the shift to automation, she said: “You can’t figure out the pattern.” Most of her videos were restored on appeal, she added.

TikTok Head of U.S. Safety Eric Han announced plans July 9 to automate content moderation for “minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods.” TikTok’s terms of service state, “We have the right to remove, disallow, block or delete any posting you make on our Platform if, in our opinion, your post does not comply with the content standards set out” in the terms of service. The company didn’t respond to multiple requests for comment.

Another user approaching 1 million followers described two recent bans he considered frivolous. James Butler, @im_not_him_now, owns a septic tank service. He discussed a six-day suspension and a three-day ban in two separate videos this month, saying he was blocked from posting, commenting and messaging over a video “about fire” and one of him working with a septic tank. The platform didn’t demonetize him, he noted: “God, this is a stupid company, and I love it.” Butler didn’t respond to a request for comment.

It’s not surprising a platform of TikTok’s size is moving toward automation, said Jeffrey Westling, R Street Institute resident fellow-technology and innovation. “And this is generally a good thing: we want them to use automation because the alternatives” like no moderation or shuttering the company are “much worse,” he said. Perfect moderation is impossible, but automation is a useful tool for helping human reviewers weed out straightforward violations, he said. But automation has its pitfalls, he added: “For example, AI struggles with things like context, and a statement said in sarcasm or a post about difficult topics can often get caught up accidentally in the algorithms.”

William Paladin, who has 185,000 followers for @nobleknightadventures, discussed in a recent video the platform’s decision to remove a clip of him dressed as a king. The platform cited community guideline violations on adult nudity and sexual activity. He noted the video has colorful language and suggestive gestures but nothing that should trigger removal. The video remained on Instagram and YouTube, he said. Paladin didn’t respond to a request for comment.

Another user, Javier Garay, @javiertheauthor, who has nearly 2,600 followers, described in a recent video TikTok’s decision to interrupt his weekly livestream over community guideline violations. He said that during the live session, he was discussing thumbnail sketches for his children’s book and the complications of working from home. “I have absolutely no idea what this message means, why I got it, so if you have any suggestions about what I could do, let me know. Thanks,” he said. Garay didn’t respond to a request for comment. A “community guidelines” search on TikTok turns up a multitude of examples from other users expressing similar confusion.

TikTok created a new notification system when it shifted to more automation. That “seems to be an effort to have more transparency on their decisions,” emailed Public Knowledge Senior Policy Fellow Lisa Macpherson. “Users can use the notification and appeals systems to help TikTok train its AI.” Automation can help platforms deal with the scale of content and “shield human content moderators from exposure to all of the worst content,” she said: “But TikTok users reported what they described as arbitrary or discriminatory content moderation decisions from human moderators, too.” The goal is to ensure platforms treat users “fairly and consistently” based on stated policies, she said.