I rescind this request. As I think about it and respond to questions, it is becoming clear that it would be nearly impossible to implement this in a way that is neither ineffective nor overbroad, and I would prefer to stick to user flagging, or a system that only notifies curators of potentially offensive content without taking action against it. I am also concerned that an automated system flagging some words and not others (since it is impossible to list every slur in existence) could be misperceived as treating discrimination against some groups more seriously than others.
Platform(s), such as mobile, website, API, other: All
URLs (aka web addresses) of any pages, if relevant:
Description of need: Some sockpuppets will target specific users they have a grudge against by mass-posting offensive comments on the target user's observations. When suspended, the harassers just make new accounts using VPNs. I know of ongoing harassment cases with dozens of sequentially created socks: when one is banned, another pops up, posts as much offensive material as possible (perhaps 100 comments on one observation) before being suspended, and then the cycle repeats. Some words that are used by these socks have no place on iNat in any context.
Feature request details: Create a filter that, for content from new users, automatically flags and hides comments or IDs containing certain words or phrases that have no inoffensive use.
It is important that this only flags content from new users, and does not flag users who have been marked as non-spammers. This is to avoid automatically hiding moderation discussions on flags, in which curators may have reason to quote offensive comments when discussing moderation decisions with each other, and to avoid hiding complaints from victims of harassment along the lines of "they called me [offensive thing]".
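To make the intended gating concrete, here is a minimal sketch in Python. Everything in it is hypothetical, not anything from the actual iNat codebase: the seven-day cutoff, the field names, and the placeholder term list are all invented for illustration.

```python
from dataclasses import dataclass

# Placeholder only; a real list would be maintained privately by staff.
BLOCKED_TERMS = {"exampleslur"}

@dataclass
class User:
    account_age_days: int
    marked_non_spammer: bool  # set by curators via the existing spam tools

def should_auto_flag(user: User, text: str) -> bool:
    # Only brand-new, unvetted accounts are eligible, so curators quoting
    # offensive content in flag discussions, and harassment victims
    # reporting what was said to them, are never auto-hidden.
    if user.account_age_days > 7 or user.marked_non_spammer:
        return False
    return any(term in text.lower().split() for term in BLOCKED_TERMS)

# A fresh sock trips the filter; an established curator does not.
sock = User(account_age_days=0, marked_non_spammer=False)
curator = User(account_age_days=900, marked_non_spammer=True)
print(should_auto_flag(sock, "exampleslur and more"))     # True
print(should_auto_flag(curator, "exampleslur and more"))  # False
```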
Some slurs are so offensive and widely known that they are always referred to by an abbreviation and not typed out even in a moderation discussion, but this is not true for all offensive words.
It is also important to be aware of words with multiple meanings. For example, there is an ableist slur when used as a noun that is also a verb meaning "to slow down", and part of the name of some fire suppression agents, and I'm not sure we want to auto-flag those uses.
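Whole-word matching (rather than raw substring matching) would at least avoid the compound-word problem. A hedged sketch, again with a made-up placeholder term rather than an actual slur:

```python
import re

# The \b word boundaries stop the filter from firing on longer words that
# merely contain a blocked string (such as the fire-suppression-agent names
# mentioned above). Plain string matching still cannot tell the noun slur
# apart from the identically spelled verb, so some human review of
# automated flags would remain necessary.
def contains_blocked_word(text: str, blocked: set) -> bool:
    return any(
        re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)
        for term in blocked
    )

print(contains_blocked_word("containsbadword inside", {"badword"}))  # False
print(contains_blocked_word("badword standing alone", {"badword"}))  # True
```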
I am against the use of AI moderation and am not calling for that; what I am proposing is a filter that removes certain pre-programmed strings that have only a bigoted, insulting, or sexually offensive meaning.
I also do not want this to flag the comments as spam; it should flag them as inappropriate so that they show up in the regular flag log that curators monitor, reducing the risk that erroneous flags go uncorrected.
Finally, procedures for this should recommend that an automated flag on offensive content be resolved by curators and the content manually hidden, so that a large number of unresolved flags needing no action does not make it harder to find the flags that do.
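For illustration, the routing for those last two points might look something like this; the reason strings, field names, and the bot account name are all invented:

```python
def file_auto_flag(content_id: int) -> dict:
    # Route automated hits to the "inappropriate" queue, never "spam",
    # so a mistaken flag lands in the log curators already monitor.
    return {
        "content_id": content_id,
        "reason": "inappropriate",     # not "spam"
        "flagger": "word-filter-bot",  # invented name for the automated account
        "resolved": False,             # left open; a curator resolves the flag
                                       # and manually hides the content
    }
```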
As I type this, it sounds like it could be difficult to implement, and I am open to being convinced that this is not a good idea, but @wildskyflower and I have been wondering why there is no faster way to stop some of the recent mass posting of the n-word by sequential socks.