Decentralized content moderation on Fantastic.app

Kc
Jan 15, 2021 · 3 min read

Fantastic is a social recommendations site that allows anyone to share recommendations for things they love and quickly find new things to try by exploring trending recommendations.

One of the challenges we’ve faced with our product is policing the quality of submitted recommendations, given the open nature of sharing on our platform.

To properly curate submissions, we needed to design a moderation system that would be robust enough to filter out problematic trending content but would also allow for subjective and dynamic interpretation of quality.

As we investigated potential solutions, what became increasingly clear is that content moderation is much more of a social problem than a technical challenge. The tools needed to promote well-performing content and stifle low-performing content exist. What doesn’t exist is the ability to determine whether something that fares well for a group of users is universal in its appeal.

Quality is inherently subjective, and it seems that the primary issue plaguing current social platforms is not that poorly rated content is being promoted, but rather that some content deemed “popular” or “trending” in local groups/communities is actually problematic when promoted to a wider set of users. Our approach at Fantastic seeks to address that specific observation.

On our platform, when a piece of content is shared, a randomized selection of people who share interests with that content can cast judgement on it by indicating they are a fan or by skipping it (approve or ignore).

Using the dynamic percentage of people approving a piece of content, we can determine how often that content should be made visible to others. Content that is approved at a high rate by a randomized selection of people has a higher probability of being seen. Content that doesn’t perform well when exposed to a randomized selection of people is stifled.
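To make the mechanics concrete, here is a minimal Python sketch of the idea (not our production code): the user/tag structures, the panel size, and the smoothing prior are all hypothetical, but they illustrate how a randomized panel’s approval rate can drive visibility.

```python
import random

def select_judges(content_tags, users, k=50):
    # Hypothetical model: each user has a set of interests; judges are a
    # random sample of users who share at least one interest with the content.
    eligible = [u for u in users if content_tags & u["interests"]]
    return random.sample(eligible, min(k, len(eligible)))

def visibility_probability(approvals, judgements, prior=0.5, prior_weight=10):
    # Smoothed approval rate used as the probability of surfacing the content
    # more widely; the prior keeps a handful of early votes from dominating.
    return (approvals + prior * prior_weight) / (judgements + prior_weight)

# Example: 42 of 50 randomized judges approved -> shown roughly 78% of the time.
print(round(visibility_probability(42, 50), 2))  # 0.78
```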

Using personalization, we can skew content judgement towards users based on their interests, so that users are not inundated with casting judgment on content not aligned with their tastes.
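As a rough sketch of that skew, again with made-up data structures, judge selection could be weighted by the overlap between a user’s interests and the content’s tags rather than sampled uniformly:

```python
import random

def interest_weight(user_interests, content_tags):
    # Jaccard overlap between a user's interests and the content's tags.
    if not user_interests or not content_tags:
        return 0.0
    return len(user_interests & content_tags) / len(user_interests | content_tags)

def weighted_judges(content_tags, users, k=50):
    # Sample judges with probability proportional to interest overlap, so
    # people are rarely asked to judge content far outside their tastes.
    # (random.choices samples with replacement; a real system would dedupe.)
    weights = [interest_weight(u["interests"], content_tags) for u in users]
    if not any(weights):
        return []
    return random.choices(users, weights=weights, k=k)
```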

Users are self-motivated to approve quality content because the system surfaces content for them similar to what they have approved in the past. Conversely, the system hides content from users that is similar to content they tend to ignore. As long as the majority of users are sufficiently motivated to find quality content for themselves, their combined incentives will be aligned with authentic moderation.

As for malicious behavior, the randomized nature of crowd-sourced moderation makes coordinated inauthenticity difficult: users cannot determine who will eventually pass judgement on their content.

Creating fake accounts also has limited benefit for a malicious user, because the probability that a newly created account will be selected to judge a specific piece of content decreases dramatically with the amount of content shared. In addition, a per-user reputation score can be derived from the success of prior submissions and approvals and used to limit a user’s future influence. As more users participate in a system like this, “universality” can be better approximated and the opportunity for manipulation decreases.
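One way such a reputation score could work, purely as an illustration with hypothetical data shapes: score each account by how often its past judgements agreed with the panel’s eventual verdict, and weight votes by that score.

```python
def reputation(history, prior=0.5, prior_weight=20):
    # History is a list of past judgements with an "agreed" flag indicating
    # whether the vote matched the panel's eventual verdict; new accounts
    # start near the neutral prior and earn influence over time.
    agreements = sum(1 for judgement in history if judgement["agreed"])
    return (agreements + prior * prior_weight) / (len(history) + prior_weight)

def weighted_approval_rate(votes):
    # Each vote counts in proportion to the voter's reputation, so a swarm
    # of fresh fake accounts moves the outcome far less than trusted users.
    total = sum(v["reputation"] for v in votes)
    if total == 0:
        return 0.0
    return sum(v["reputation"] for v in votes if v["approved"]) / total
```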

We believe this is a simple way to coordinate the incentives of users with the incentives of the platform. Allowing a randomized selection of users to curate content allows universal interests to determine promoted content rather than the performance of content in communities that may not represent a larger population.

Although the architecture needed to implement this form of decentralized moderation is specific to certain content types, simpler approaches that approximate universality from a randomized selection of moderators can be employed to similar effect.

To see this in practice and to explore trending recommendations, check out Fantastic at https://fantastic.app and let us know what you think!
