Opting Out of Community ID

I’m sure many people have seen this happen: an incorrect ID is made, but it cannot be corrected because the user has opted out of community ID. As a result, incorrect records get added to the database, and they can confuse other observers by making it look like a species occurs in their area when it really doesn’t.

I was curious about what purpose this feature serves. I would imagine it is used to update life lists when the photo is unidentifiable, or when an alternate ID is made a while after the most recent activity (I have done this a couple of times). That is, I think, a responsible use of it. The issue arises when it is used irresponsibly.

I encourage other people’s thoughts and suggestions. I’m interested to see what you think :)

There are two different ways to do this: at the individual observation level, or globally across all of your observations. Can you clarify which you are asking about?

Occasionally, when users enter wrong IDs that mess with the community ID and are unresponsive about withdrawing them in the face of contrary evidence, I’ll opt out of the community ID for that observation.

I do feel, though, that if a user chooses to opt out of community ID for all of their observations, it becomes their responsibility to be responsive in the case of ID disputes.


Sorry about that, I wasn’t aware you could do this for all observations. I was just thinking about the individual observation; opting out on every observation does seem quite problematic, though.

I just started doing this recently for some of mine, mainly in cases where I think that the new Community ID makes it into an ignorable observation compared to the Community ID before. But when it gets straightened out I want to opt back in. I asked on another thread if there was a way to search for the ones I have opted out of to check on them, but ended up making myself a traditional project to stick these in temporarily. I have also been wondering whether there’s a way to give iNat permission to make my opt-outs into opt-ins if I become inactive for some length of time.


A lot of new users opt out for no apparent reason. I think it would be a good idea to make the option a little harder to find (for example, only on the help page and not in a prominent place in the app).


Or, as has been proposed elsewhere on this forum, simply don’t have this option available to new users until they’ve added some minimum number of observations.

Those opted-out IDs can’t reach Research Grade unless they meet the community thresholds, so they are excluded from that subset. I don’t know that they are any more problematic than the hundreds of ‘casual’ IDs with no photo, which can’t be verified and carry some unknown degree of error. For those reasons, it seems like one should treat non-Research Grade IDs with a very high degree of caution when examining the data.

The opt-outs don’t reach Research Grade, but do they still appear on the image browser page for the taxon, as other “Needs ID” observations do? If so, that could be problematic.

But they do reach Research Grade if everybody agrees. It’s just that the reviewers’ IDs don’t count if they disagree; in that case the observation stays ‘Needs ID’ in perpetuity.

If you eliminate the ‘mistakes’ from people who are still learning the app, I’d be interested to know whether the people who actually take the time to enter casual records are more or less likely to be experienced ‘power user’ types. My no-data gut assessment is that the more likely profile of someone who takes the time to do this is a fairly dedicated observer, so the error rate may not be that bad.


I have seen many occasions where opting out is legitimate and practical. ID as you see it; if others differ, start a dialogue; if there’s no response, move on :) There are too many cool obs out there to get bogged down with not-so-cool ones!

Sorry, I didn’t mean to imply the error rate was bad for those records, just that it isn’t explicitly known. One of the strengths of iNat over many other community science initiatives is that you can actually measure that error rate (either exactly, by vetting all records in a data set, or approximately, by vetting a subset and extrapolating).

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.