Human obs used for cyberbullying

I have seen a substantial number of human obs that are clearly posted by one schoolchild of another. Sometimes the subject of the photo is looking up at the photographer, confused about why their picture is being taken. Many of these are posted with derogatory comments like “dork” or claims that the person photographed is on drugs.

When people object to these obs, they are typically told to mark them as human and move on unless there is flagrant abuse in the caption. But I think it’s problematic when people are photographed at school without their consent and then put on the internet forever with negative comments attached. It’s hard to tell from where we stand what is harmless goofing off, what is innocent but could still harm someone’s reputation, and what is part of a serious cyberbullying attempt. It’s entirely possible that the existence of a public image with a derogatory comment could be used to bully someone without the bullying activity ever being visible on iNat. For example, the target could be reminded of the image to cause them distress over the fear that many people are seeing it, even when it is really a casual obs in some obscure corner of iNat; or the link to the obs could be shared in person or online outside iNat for many to see, without any evidence of this activity ever appearing on iNat.

Even when an obs is flagged for inappropriate language, the image usually stays up. There needs to be some way to immediately hide harassing or privacy-violating images, similar to the copyright flag, along with a policy on when to flag this type of content.

The idea came from this flag: https://www.inaturalist.org/flags/621044. A patient in a hospital posted a sequence of images there showing another patient struggling to eat due to a neurological disorder. In that case I just used the copyright flag to get rid of the images, but I think there should be a formal policy calling for the flagging and removal of certain human obs, plus a privacy/harassment flag that immediately removes the image, similar to the copyright flag.

If creating another kind of flag is too technically difficult, a policy explicitly condoning the use of the copyright flag to hide images that are potentially harmful for reasons other than copyright, and describing what kind of images this encompasses, would be a good idea

To clarify, I am not arguing against human obs generally, only that there needs to be a way to flag and remove certain potentially malicious ones.

I’m not sure exactly what the best policy would be, but I would like to start a discussion about the issue. I think photos with certain of the characteristics listed at the bottom of this post, or certain combinations of them, should be covered by the policy I am suggesting. I am not saying exactly what the policy should be, as my goal with this post is to open a discussion of how this should be approached rather than to make a feature request for a specific policy.

The questionable photo characteristics:

Main subject of the photo is a human who is not intending to be photographed (appears unaware, the angle suggests the person is not trying to be in frame, etc.)

Photo includes a human, with negative comments about that person in the observation notes or comments

Photo is of a patient inside a medical facility

Photo is of a student at school (not counting photos targeting non-human organisms where a student is holding, posing with, or is incidentally visible next to the organism)

Photo includes a human, with observation notes or comments containing personal or gossipy claims about the person (“this person is dating so-and-so”, etc.). I thought of this one based on a real-world incident I know of, where rumors about who was dating whom were fabricated and spread widely with malicious intent.

Observation of a human that the original observer intentionally misidentifies as something else, especially something with offensive connotations

EDIT: I would also like to propose that all human obs have auto-obscured locations, as is done for endangered species. That is actually an open feature request: https://forum.inaturalist.org/t/automatically-obscure-observations-marked-as-human/501

EDIT 2: This thread is 131 posts in now, and I have been looking at flags to get a better sense of how they are currently handled. In the process I have seen that flags on human observations are typically resolved by instructing the flagger to ID the observation as human instead of flagging it. I even saw at least one case where there was a derogatory comment on a picture of a teen (I think at school) and it was resolved by a staff member (as opposed to a curator) by just IDing it as human. I am not attacking the staff; my intent is only to point out that the current policy is inadequate.

62 Likes

I would include in your list derogatory IDs. I have seen school students IDed by the submitter as gorillas, pigs, and the like. Maybe it is a joke between friends, but it could also be bullying. I feel uncomfortable for the person photographed.

35 Likes

False IDs on human observations can be similar to derogatory comments and could be treated the same way, although a lot of people put in false IDs just because they find humor in the absurdity of an obviously wrong ID.

I do think, though, that human obs that the original observer IDs as an animal with offensive connotations should be treated the same as derogatory comments, and I have edited my post accordingly.

12 Likes

Honestly, I would not be against removal of all observations of humans as a general policy.

84 Likes

This has been discussed here: https://forum.inaturalist.org/t/why-do-we-have-observations-of-human/13807/56 and it seems many are against removing humans, so that seems unlikely to be implemented. My post here is really about flagging and immediately hiding certain problematic human obs; I’m not arguing for removing all human obs.

15 Likes

In Mexico it is illegal under the Constitution to upload the image of another identifiable individual without express consent: https://www.worldtrademarkreview.com/article/mexico-constitution-provides-solid-basis-image-rights-regime

There are additional protections for minors and statutes for cyberbullying as well, though many of those are enforced at state level.

25 Likes

Yes, a thousand times yes, this should be reportable. After wrangling my way into the human observations, it’s rather clear that the category is being used for bullying, invasion of privacy, or even the dehumanization of homeless or Indigenous people. Bullying aside, the question of legality is hazy due to the global nature of iNat… and regardless, even the pictures of feet under those observations, while humorous, could end up in the wrong hands! iNat should probably moderate this much more closely than it currently does. Scrolling through these observations isn’t funny; it’s sort of horrifying.

19 Likes

People are primates and all that, but from a legal and ethical perspective, the use of images of identifiable people without consent is a complicated business. I’m doubtful that iNat’s approach is up to the standard laid out by Wikimedia Commons, for example.

12 Likes

If that happened, we would lose worthwhile observations like these:

I’d be in favor, however, of something like gating the ability to post Homo sapiens for new users, whether time-based or observation-count-based, to discourage the voluminous number of selfies from new users who don’t post other observations.

13 Likes

Observations flagged as inappropriate should have their images hidden immediately. While going through human observations I’ve seen some pretty gnarly stuff (for example: someone’s rear, a child flipping off the camera, and, most horrifyingly, two balloon animals having intercourse). I don’t see any reason why hiding images immediately would be bad, especially because flags don’t always get reviewed and acted on. This is especially a problem with bullying, since hiding the image would also hide the identity of the person being harassed/bullied.

9 Likes

Much as I like the sound of this idea, I think it would make the problem worse. People would still upload these pictures, but they would have to do so with a false ‘Chimpanzee’ or similar ID.

7 Likes

I think you have hit every nail squarely on the head. If this additional flag were made a feature request I would vote for it. I think that Homo sapiens is a very useful taxon to have, but you are very right to highlight this issue. The tough thing might be to find the most appropriate wording for it, but perhaps ‘Privacy/harassment’ is adequate: then ‘This image has been flagged for privacy/harassment concerns’.

Edit: that was meant to be a reply to the original post by @insectobserver123.

11 Likes

I didn’t say you were. I said that I would not be averse to the removal of observations of humans from iNat in general.

Personally I don’t feel that observations of H. sapiens add anything to the site, given the ubiquity of our species. At this point it’s a bit like making observations of sand or carbon dioxide.

16 Likes

I’ve been bothered by pictures of school children, even the seemingly innocuous ones, ever since I first saw them on iNat. I’m not even sure how this can be legal in light of current US privacy protection laws, especially since iNat observations also have locations in addition to the pictures.

I lead group hikes and will occasionally take pictures of the group and ask the participants if it is OK to post those on social media. Twice now the response has been to please not post any pictures with certain children in them, because they were fosters and there were safety concerns with having their pictures online and associated with a location. When I took pictures at a school where we installed a native plant garden with the help of the kids, we needed written consent from the parents of all the children in the pictures before we could post them on social media. Again, the privacy concern was not just having the children’s pictures online but also having them associated with a certain school or place.

Since this sort of thing gets posted on iNat all the time (usually kids taking pictures of other kids), I’m guessing there’s a loophole that skirts the legal issues (e.g. the observer is made responsible for verifying consent). I wonder, though, how many of these are posted without consent, or even without the observed person or their legal guardian knowing the picture is online. In our recent university BioBlitz, we had one joke observation of a teacher, too. I informed the teacher, and he was unaware that one of his students had posted an observation of him. In that case, the observed person took it with humor, and at least everyone involved was an adult. I can see how this sort of thing can easily slide into cyberbullying or cause privacy concerns, though.

18 Likes

Auto-obscure photos of humans, until their written consent has been lodged with iNat?

Most especially since under-thirteens are not allowed on iNat.

Whatever the age, a photo with the time and the location is setting vulnerable people up for stalking.

Years ago a cousin was happy with a new toy for her camera. She could be chatting to someone… while her camera took a sneaky picture, off to the side, of someone blissfully unaware. That is nasty!

A distant picture of ‘a person’ used for scale - I can see some value. But most human pictures on iNat are frass.

10 Likes

This should definitely be a concern for us as a community going forward. Things on iNat aren’t really going to go anywhere unless removed via flags or a copyright strike, so pictures taken without consent, even if the harm is not immediately explicit, can still be harmful to the person in the image and their family.

5 Likes

If a policy is created, we want it to cover audio observations, too.

7 Likes

Yikes. That is… just… not what iNat is for. None of that stuff is.

Sheesh… I want kids to use this app with adult supervision, and people posting things like that really is something. Sorry you had to see those, yikes.

5 Likes

Don’t you just flag the image and choose bullying as the reason? That’s what I’ve always done and it has worked promptly.

2 Likes

I wouldn’t want to lose all the photos labeled Homo sapiens. (Those “fungus” photos are informative, and some of the others are funny.) However, getting rid of many of the photos would be good, especially because we can’t distinguish between bullying and friendly banter.

I didn’t know we could flag an image as bullying. That sounds like a good feature!

4 Likes