Human obs used for cyberbullying

seems like bullying or adjacent behaviors could be accomplished in several ways in iNaturalist:

  • via image (ex. image of a person in a compromising situation)
  • via text (ex. “[name] is a [pick your word of choice]”)
  • via audio recording (ex. spoken version of “[name] is a [pick your word of choice]”)
  • via juxtaposition of observation elements (ex. human in photo but identified as an animal)

if you’re planning to fully address bullying, it seems like you would need workflows to handle all of the above, not just the hiding of images.

it seems to me that even beyond these, it would help to have dedicated functionality in the existing flags workflow / UI, perhaps an “escalate” category of its own, that sends the observation straight to help@inat. it might add a little to iNat staff’s workload, but it would let truly offensive / inappropriate items be removed more quickly. it also wouldn’t have to be a situation where only staff sees the flag: moderators could see these too and handle minor resolutions if they get there first, without having to separately e-mail iNat staff or go through whatever the current process is.
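
to make the idea concrete, here’s a rough sketch of how an “escalate” flag category could fan out to both a staff queue and a moderator queue, so whoever gets there first can act. all the names here (FlagCategory, route_flag, the queue names) are made up for illustration and have nothing to do with iNat’s actual code:

```python
# hypothetical sketch of an "escalate" flag category; names and routing
# are illustrative only and do not reflect iNaturalist's actual codebase
from dataclasses import dataclass
from enum import Enum, auto


class FlagCategory(Enum):
    SPAM = auto()
    COPYRIGHT_INFRINGEMENT = auto()
    OTHER = auto()
    ESCALATE = auto()  # proposed: goes straight to staff *and* moderators


@dataclass
class Flag:
    observation_id: int
    category: FlagCategory
    reason: str


def route_flag(flag: Flag) -> list[str]:
    """Return the queues a newly created flag should be pushed to."""
    if flag.category is FlagCategory.ESCALATE:
        # staff always see it, but moderators can resolve minor cases first
        # without having to separately e-mail staff
        return ["staff_queue", "moderator_queue"]
    return ["moderator_queue"]


if __name__ == "__main__":
    f = Flag(observation_id=12345, category=FlagCategory.ESCALATE,
             reason="targeted harassment of a named person")
    print(route_flag(f))  # ['staff_queue', 'moderator_queue']
```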

it seems like at least some curators are already effectively hiding pictures by hacking the copyright infringement flow.

although this helps address inappropriate images, the underlying image record is still there, and as long as it is, it’s not difficult to access the underlying image file. i think staff are currently the only ones who can remove the image record.

but even when staff remove the underlying image record, the image files themselves remain on the media servers, accessible to anyone. so if a bad actor has linked directly to the image files, removing the image records still wouldn’t fully resolve the problem, and any complete solution would need to address this as well.
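
for what it’s worth, that gap is easy to demonstrate: if someone saved the direct file URL before the record was removed, a plain HTTP request may still return the file afterward. a rough sketch below; the URL is a made-up placeholder, not a real iNat media path:

```python
# rough sketch: checking whether a directly linked media file is still
# served after its database record is gone. the URL below is a placeholder,
# not an actual iNaturalist media path.
import urllib.error
import urllib.request


def file_still_accessible(direct_url: str) -> bool:
    """Return True if the media server still serves the file at this URL."""
    req = urllib.request.Request(direct_url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        # covers 404s (file actually gone) as well as unreachable hosts
        return False


if __name__ == "__main__":
    # placeholder URL for illustration only
    url = "https://media.example.org/photos/00000/original.jpg"
    print(file_still_accessible(url))
```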
