Content moderation became a huge topic in the early days of 2021, especially around social platforms (e.g., Twitter) and their banning of certain public figures (e.g., then-U.S. President Trump). Entire podcast episodes were dedicated to content moderation, executives from some platform companies were called out, and a few even claimed that “content moderation does not matter.”
As a company that works with many global, hyper-scale brands on content moderation, we can unequivocally say that content moderation does exist, it does matter, and brands take it very seriously. It’s a complicated issue of speed, scale, and security — you want your users to feel like their posts appear immediately (speed), but you also need them to feel safe in the community (security), and you need the entire ecosystem to operate at scale.
It’s not easy, and brands periodically pivot and adjust their strategies. We’ve worked with brands to co-create new best practices, new coverage hours, new languages to moderate, and more. It’s a consistently moving target. Then there’s the issue of tech vs. human. When the social platform Parler was effectively taken offline because its host (Amazon Web Services) declined to keep hosting it, Parler’s CEO vowed the platform would return, with content moderation done by an algorithm.
This is a common impulse in today’s regulatory environment: to maintain scale and cost-effectiveness, brands try to turn more of their content moderation over to algorithms, RPA, and machines. Our view has always been that the best moderation work is a mix of tech and human. Tech helps with cost and scale, and it reduces the tedium on humans, which can have mental health consequences. But humans need to be part of the equation, because they bring the linguistic and societal nuance that AI often lacks. And when humans know what the core moderation issues are, they can tell marketing, sales, operations, and product back at brand HQ what’s going on. That insight can be valuable for future brand decision-making.
Finally, the other big issue hanging over content moderation is training. Moderators need to be rigorously trained and re-trained. They need to understand what’s harmful and what’s not, where the brand is going, the guidelines, the best practices, the business model of the moment, and more.
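To make the tech-plus-human mix concrete, here is a minimal Python sketch of how a hybrid pipeline can route posts: the model acts alone only when it is confident, and anything in the gray zone goes to a trained human moderator. The scoring function, thresholds, and names below are illustrative assumptions, not any platform’s actual system.

```python
# A minimal sketch of the "tech plus human" routing idea described above.
# The scoring function and thresholds are placeholders, not a real moderation model.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    AUTO_REMOVE = "auto_remove"      # high-confidence harmful: machine acts alone
    AUTO_PUBLISH = "auto_publish"    # high-confidence safe: post appears immediately (speed)
    HUMAN_REVIEW = "human_review"    # uncertain or nuanced: routed to a trained moderator


@dataclass
class Post:
    post_id: str
    text: str


def harm_score(post: Post) -> float:
    """Placeholder for an ML classifier that returns an estimated P(harmful) in [0, 1]."""
    flagged_terms = {"scam", "hate"}  # stand-in vocabulary for illustration only
    words = post.text.lower().split()
    hits = sum(1 for word in words if word in flagged_terms)
    return min(1.0, hits / 3)


def route(post: Post, remove_above: float = 0.9, publish_below: float = 0.2) -> Decision:
    """Route a post based on model confidence; the gray zone goes to human review."""
    score = harm_score(post)
    if score >= remove_above:
        return Decision.AUTO_REMOVE
    if score <= publish_below:
        return Decision.AUTO_PUBLISH
    return Decision.HUMAN_REVIEW


if __name__ == "__main__":
    examples = [
        Post("1", "Great meetup photos from the weekend"),
        Post("2", "This scam account keeps posting scam links"),
    ]
    for post in examples:
        print(post.post_id, route(post).value)
```

The design point is simply that the thresholds decide how much work is automated (cost and scale) versus escalated to people (nuance and safety); tightening or loosening them is exactly the kind of strategy decision brands revisit over time.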
Those, then, are the big content moderation trends of 2021:
- A mix of tech and human in moderation
- Speed, security, and scale
- An eye towards the regulatory environment
- Training, re-training, and more re-training
- A focus on moderator mental health
Beyond those content moderation trends, some others you may see include:
- It’s a high-growth space: The content moderation services market is projected to grow at a CAGR of 10.3% from 2019 to 2026.
- Spam-free platforms: Spam has been growing in recent years, and while the idea of a “spam-free platform” falls under security (a safe community), you will increasingly see content moderation partners offer spam reduction for the brands they serve.
- Region-specific moderation: This will increase as brands pursue new markets post-COVID.
- Acknowledgment of “hidden digital labor”: This falls in the mental health bucket. “Hidden digital labor” refers to the tedium and emotional labor of multi-hour moderation, and content moderation partners (like us) and brands are increasingly trying to protect brand agents against it.

As you keep thinking about content moderation trends and what your brand needs to do about them, check out some of our moderation services.