Every CitizenLab platform has a built-in throttling (rate-limiting) system. That means that our software (i.e. your platform) will stop malicious behaviour automatically: adding a large number of ideas in too short a time, or endlessly replying to an idea or comment with the same or an empty message, is simply not possible.
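The throttling described above can be sketched roughly as a sliding-window rate limiter. This is a minimal illustration, not CitizenLab's actual implementation; the limit and window size are made-up values:

```python
import time
from collections import deque

class RateLimiter:
    """Reject actions once a user exceeds `limit` actions per `window` seconds."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.events = {}  # user_id -> deque of recent action timestamps

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.events.setdefault(user_id, deque())
        # Drop timestamps that have fallen outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # throttled: too many actions in the window
        q.append(now)
        return True
```

A burst of posts from one user is then cut off after the first few, while normal usage is unaffected.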
We can link this improper use, with some manual research, to an IP address, and can therefore detect and stop it. However, an IP address is never tied to a single computer; it can also identify a whole (Wi-Fi) network. Accounts sharing one IP address are therefore not immediately flagged as suspicious. We can hardly prevent one person on a single IP address or network from creating 30 accounts, but if this happens in quick succession, we can of course see that it originates from one IP address, and it gets flagged. The Terms & Conditions, which have to be accepted when logging in, also state that this is not allowed.
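The detection described above, many accounts created from one IP address in quick succession, can be sketched like this. This is a hypothetical illustration with made-up thresholds, not CitizenLab's actual detection logic:

```python
from collections import defaultdict

def flag_suspicious_ips(signups, max_accounts=5, window=3600):
    """Flag IPs from which more than `max_accounts` accounts were created
    within any `window`-second span.

    `signups` is a list of (ip, timestamp) pairs; thresholds are illustrative.
    """
    by_ip = defaultdict(list)
    for ip, ts in signups:
        by_ip[ip].append(ts)

    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        # Slide a time window over the sorted signup timestamps.
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 > max_accounts:
                flagged.add(ip)
                break
    return flagged
```

Note that a shared office or school network producing signups spread over days would not trip this check, which matches the point above that one IP address is not automatically suspicious.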
The administrator of the platform can thus legitimately intervene at any time and delete accounts, ideas or comments when improper use is noticed or when messages contain profanity.
What we've seen multiple times in practice across our 80+ platforms is that the platform input is largely self-moderated: downvotes sink ideas down the default ordering. In practice, we've had no problems on our platforms in that respect.
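The self-moderation effect above comes down to the default ordering taking votes into account. A minimal sketch, assuming a simple net-score ordering (the actual ranking formula may differ):

```python
def default_order(ideas):
    """Sort ideas by net score (upvotes minus downvotes), highest first.

    Illustrative only: heavily downvoted ideas sink to the bottom of the
    list, so most visitors never see them.
    """
    return sorted(ideas, key=lambda i: i["upvotes"] - i["downvotes"], reverse=True)
```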
Another thing: administrators of the platforms receive admin digests of the submitted content in their mailbox. These digests also serve to notify admins of new content on a weekly basis.
Furthermore, it is of course also possible to flag any idea as spam (both as a registered user and as an administrator), and you can indicate the reason why you want to mark an idea as spam. The Terms & Conditions are therefore an important filter today to prevent improper use.
E-mail verification (e-mail confirmation after signing up) and citizen verification (e.g. via ItsMe or DigiD) are in the pipeline as we further develop our software.