On the whole, we've found that inappropriate content is very rarely posted on our participation platforms. However, should any appear, we offer three complementary mechanisms to detect, report, review, and delete such content.

  1. Community reporting

  2. Profanity filter

  3. Automatic detection using Natural Language Processing (Premium)

  4. How can I review and moderate inappropriate content?

1. Community reporting

Any platform participant can report a post, proposal or comment as spam by clicking the three dots in the top right corner:

Once a participant flags a contribution as inappropriate, platform admins and project managers will receive an email notification and will also see the report in their notifications and in the "Activity" tab on the platform.
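The reporting flow can be sketched as follows. This is a minimal illustration, not CitizenLab's actual implementation; the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    id: int
    reported: bool = False

def report_contribution(contribution, moderators, notify):
    """Flag a contribution and notify every admin and project manager.

    `notify` is a pluggable callback (e.g. an email sender); here it is
    an assumption standing in for the platform's notification system.
    """
    contribution.reported = True
    for moderator in moderators:
        notify(moderator, f"Contribution {contribution.id} was reported as inappropriate.")
```

The key point is the fan-out: a single report reaches every moderator at once, so no flagged contribution depends on one person checking the platform.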

2. Profanity filter

Each platform also comes with a built-in profanity blocker containing a list of common offensive words for each language, a general list that applies to all our clients' platforms. Should a platform participant attempt to submit a post or comment containing a word on the list, they will receive an error message asking them to review and edit their post before submitting it.

Here's an example in which our profanity filter blocks the word "shitty":

You can always decide to enable or disable this profanity blocker in your platform settings.
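The filter's behavior amounts to a word-list check at submission time. Here is a minimal sketch under that assumption; the word list, function names, and error message are illustrative, not the platform's actual code:

```python
import re

# Hypothetical, abbreviated list; the real platform ships a
# per-language list maintained by CitizenLab.
BLOCKED_WORDS = {"shitty", "damn"}

def contains_profanity(text: str) -> bool:
    """Return True if any whole word in the text is on the blocked list."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLOCKED_WORDS for word in words)

def validate_post(text: str) -> str:
    # Mirrors the described behavior: block submission and ask the
    # author to edit their post before resubmitting.
    if contains_profanity(text):
        return "Please review and edit your post before submitting it."
    return "Post accepted."
```

Matching on whole words rather than substrings avoids false positives on harmless words that happen to contain a blocked word.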

3. Automatic detection

Our platforms can also automatically detect if inappropriate content has been posted to your platform. This detection is done via Natural Language Processing, which reviews posts, proposals and comments for language that may be abusive, toxic or otherwise inappropriate.

If a post triggers a content warning via the auto-detect function, all platform admins will receive an email, and the warning will also be visible in the "Content Warning" section of the "Activity" tab in your admin panel.

Similar to the profanity filter, this functionality can be enabled and disabled in your platform settings.

⚠️ This feature is only available for Premium subscribers and detects inappropriate content on platforms available in the following languages: English, French, German, Spanish and Portuguese.
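Conceptually, the auto-detect step scores each post with an NLP classifier and raises a content warning above some threshold. The sketch below illustrates that flow only; the threshold, language codes, and the `classify` callback are assumptions, not CitizenLab's actual model or API:

```python
from dataclasses import dataclass

# Languages supported by the Premium auto-detection feature (per the article).
SUPPORTED_LANGUAGES = {"en", "fr", "de", "es", "pt"}
WARNING_THRESHOLD = 0.8  # hypothetical cut-off

@dataclass
class ContentWarning:
    post_id: int
    score: float

def check_post(post_id, text, language, classify):
    """Run a pluggable NLP classifier on a post and return a
    ContentWarning if its toxicity score exceeds the threshold.

    `classify` is assumed to return a toxicity score in [0, 1].
    """
    if language not in SUPPORTED_LANGUAGES:
        return None  # auto-detection only covers supported languages
    score = classify(text)
    if score >= WARNING_THRESHOLD:
        return ContentWarning(post_id, score)
    return None
```

A returned `ContentWarning` would then drive the email to admins and the entry in the "Content Warning" section described above.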

4. How can I review and moderate inappropriate content?

All posts that have triggered a content warning via the three methods above can be found and moderated in the "Activity" tab in your admin panel. In addition, all admins and project managers will receive an email and a notification on the platform.

As an admin or project manager, you may then review the content and moderate it according to your own guidelines. Below you can read more about the options to moderate different types of posts shared on your platform:

  • Input (ideas, contributions,...) and proposals can be both edited and deleted by admins and project managers on the platform. In addition, you can choose to post an official update or a reaction in the comment section.

    If you delete input or a proposal, the author will not receive an email. If you wish to inform them, you should notify them separately via a personal email.

    If the input is posted in a timeline project, you can also hide it instead of deleting it by deselecting all the timeline phases it is associated with in the project's input manager.

  • Comments on the platform cannot be edited by admins; they can only be deleted.

    If you delete a comment, the author will receive an email that their comment has been deleted because it does not meet community guidelines, as outlined in the platform FAQ and Terms & Conditions.


Do you have further questions, or need help? Don't hesitate to get in touch with our support team via support@citizenlab.co
