I have seen a substantial number of human obs that are clearly posted by one schoolchild of another. Sometimes the subject of the photo is looking up at the photographer, confused about why their picture is being taken, and many of these are posted with derogatory comments like “dork” or claims that the person photographed is on drugs.
When people object to these obs, they are typically told to mark them as human and move on unless there is flagrant abuse in the caption. But I think it’s problematic when people are photographed at school without their consent and then put on the internet forever with negative comments attached. It’s hard to tell from where we stand what is harmless goofing off, what is innocent but could still harm someone’s reputation, and what is part of a serious cyberbullying attempt. It’s entirely possible that the existence of a public image with a derogatory comment could be used to bully someone without the bullying activity ever being visible on iNat. For example, the target could be reminded of the image so as to cause them distress over the fear that many people are seeing it, even when it is really a casual obs in some obscure corner of iNat. Or the link to the obs could be shared in person, or online outside iNat, for many to see, without any evidence of this activity ever appearing on iNat.
Even when an obs is flagged for inappropriate language, the image usually stays up. There needs to be some way to immediately remove harassing or privacy-violating images, similar to the copyright flag, along with a policy for when to flag this type of content.
The idea came from this flag https://www.inaturalist.org/flags/621044, where a patient in a hospital posted a sequence of images showing another patient struggling to eat due to a neurological disorder. In that case I just used the copyright flag to get rid of the images, but I think there should be a formal policy calling for the flagging and removal of certain human obs, along with a privacy/harassment flag that immediately removes the image, similar to the copyright flag.
If creating another kind of flag is too technically difficult, a good alternative would be a policy that explicitly condones using the copyright flag to hide images that are potentially harmful for reasons other than copyright, and that describes what kinds of images this encompasses.
To clarify, I am not arguing against human obs generally, only that there needs to be a way to flag and remove certain potentially malicious ones.
I’m not sure exactly what the best policy on this would be, but I would like to start a discussion about the issue. I think photos with certain of the characteristics listed at the bottom of this post, or certain combinations of them, should be covered by the policy I am suggesting. I am not prescribing exactly what the policy should be, though; my goal with this post is to create a discussion of how this should be approached, rather than a feature request for an exact policy.
The questionable photo characteristics:
Main subject of the photo is a human who does not intend to be photographed (appears unaware, angle suggests the person is not trying to be in frame, etc.)
Photo includes a human, with negative comments about that person in the observation notes or comments
Photo is of a patient inside a medical facility
Photo is of a student at school (not counting photos targeting non-human organisms where a student is holding, posing with, or is incidentally visible next to the organism)
Photo includes a human, with observation notes or comments containing personal or gossipy claims about the person (“this person is dating so-and-so”, etc.). I thought of this one based on a real-world incident I know of where rumors about who was dating whom were fabricated and spread widely with malicious intent
Observation of a human that the original observer intentionally misidentifies as something else, especially something with offensive connotations
EDIT: I would also like to propose that all human obs have their locations automatically obscured, like it is done for endangered species. Actually, that is already an open feature request: https://forum.inaturalist.org/t/automatically-obscure-observations-marked-as-human/501
EDIT 2: This thread is 131 posts in now, and I have been looking at flags to get a better sense of how they are currently handled. In the process I have seen that flags of human observations are resolved by instructing the flagger to ID as human instead of flagging. I even saw at least one where there was a derogatory comment on a picture of a teen (I think at school), and it was resolved by a staff member (as opposed to a curator) just by IDing it as human. I think that is an indication that the current policy is inadequate. I am not attacking the staff; my intent is only to point out that the current policy is inadequate.