Hello! This is a great platform for identifications and for submitting data and observations. However, some people end up treating it like a competition.
There are observations out there where people will confirm the uploader’s suggested ID without looking. (I remember seeing a post of a Steller’s Jay where the original account mistakenly listed it as a Blue Jay. Somebody then confirmed it was a Blue Jay without thinking.)
Is there a way to stop this? I don’t want to submit any wrong IDs, nor be responsible for creating a research-grade ID because somebody agreed with my suggestion without thinking.
We can’t prevent it entirely, because iNat is an open platform that everyone can use and contribute to. I believe that scientists using iNat data in their research are aware of this (or at least they should be) and don’t treat RG observations as 100% confirmed. There can be wrong IDs as well as fake observations (pictures downloaded from the internet, intentionally wrong locations, etc.). Still, I think that letting people share and try to ID their observations freely has more advantages than disadvantages for science.
You can leave comments for users, or disagree with their IDs. It’s best to keep notifications on, at least for disagreeing IDs (or in this case, agreeing IDs too), if you want to check what people ID after you. If you notice a pattern of a user making serious mistakes and think it’s not what iNat wants, you could also email iNat to look into it. Other than that, people can only disagree via an ID or a comment, though technically no one is compelled to change their ID or reply. A related issue is that some users don’t log in or recheck for a long time (or ever) after making a misidentification that others point out.
I think the general consensus is that it’s the least bad system: the current one leans slightly more toward simplicity and inclusion than toward every research-grade observation being accurate, although top identifiers sometimes end up having to do quality control in areas they know.
To add to what others have said: if you see an observation that is wrong, add the right ID and feel free to tag the identifiers who put the wrong one.
A couple of mistakes I have made before:
I went to click agree on a bee and accidentally clicked agree on a plant (I know nothing about plants). Thankfully I caught it and withdrew, but I would have greatly appreciated a tag if that had gone through.
If I’m IDing a couple of hundred observations of a particular species, it is quite likely that I will agree on one that I shouldn’t have. If someone comes along later and notices that I got it wrong, again, I’d appreciate a tag.
IMO, crediting people for agreeing with an existing ID while giving no identification credit to the person who posted it (even though the poster may be an expert who identified it using characters that aren’t visible in the observation) is a big driver of this behavior.
As someone who knows how difficult it is to distinguish the species of various taxa that I post (I’m not even 100% certain of my IDs!), I guarantee that most of the people who are listed as “top identifiers” for them have no idea what they’re looking at.
It would be better if top-identifier stats gave more weight to first-identifier IDs. Also, users can reach the top of the stats just by making many genus-rank IDs (through regular IDs, not by trying to game the system), but people often mistake that for meaning they know all the species. Some kind of “user’s ID breakdown” might be a useful addition to the stats. I read that people strongly disagreed with the notion of a “user score” or something similar (I forget exactly how they defined it), but I don’t see such ID calculations as problematic, since they only summarize what people have already IDed. Somewhere on the forum someone also posted a pie graph showing leading vs. confirming IDs, although it’s not integrated into the regular stats.
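The “ID breakdown” idea above can be sketched in a few lines. This is a minimal illustration, not iNat’s actual data model: here each ID is a simplified tuple recording whether the user was the first to propose that taxon on the observation (leading) or was matching an earlier ID (confirming).

```python
from collections import Counter

def id_breakdown(ids):
    """Tally a user's IDs as 'leading' (first to propose that taxon on
    the observation) vs. 'confirming' (matching an earlier ID).
    `ids` is a list of (observation_id, taxon, was_first_for_taxon)
    tuples -- a hypothetical stand-in for the real records."""
    counts = Counter()
    for _obs, _taxon, was_first in ids:
        counts["leading" if was_first else "confirming"] += 1
    return dict(counts)

# Illustrative sample data (not real observations).
sample = [
    ("obs1", "Phorcus lineatus", True),
    ("obs2", "Phorcus lineatus", False),
    ("obs3", "Cyanocitta stelleri", False),
]
print(id_breakdown(sample))  # {'leading': 1, 'confirming': 2}
```

A breakdown like this would make it obvious at a glance whether a “top identifier” mostly leads IDs or mostly agrees with them.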
Since the word “research” has a defined meaning for many people, using the phrase “Research Grade” for a single agreement is a presumptuous exaggeration, imo. I’d like it to be staggered: 1 agreement = “agreed”, 2 agreements = “confirmed”, 3 agreements = “research grade” … something like that. This would reduce the misleading power of the parrots and give “research grade” a little more weight.
(It still gives a wrong meaning to the word “research”, imo.)
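The staggered scheme proposed above is just a mapping from agreement count to label. A minimal sketch, with label names taken from the suggestion (not from iNat itself):

```python
def community_label(agreement_count):
    """Map the number of independent agreements on an observation's ID
    to a staggered label, per the suggestion above. The thresholds and
    names are illustrative, not iNat's actual quality grades."""
    if agreement_count >= 3:
        return "research grade"
    if agreement_count == 2:
        return "confirmed"
    if agreement_count == 1:
        return "agreed"
    return "needs ID"

print([community_label(n) for n in range(4)])
# ['needs ID', 'agreed', 'confirmed', 'research grade']
```

Under this scheme a single parroted agreement would only yield “agreed”, so reaching “research grade” would require more independent eyes.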
Example: I found some websites that claim Phorcus lineatus is a Mediterranean species. It turns out they were citing iNat! There were 6 “research grade” IDs for this species in the Mediterranean, and the sites simply copied the info without looking into it. When we looked through those observations (and Extraneus even rechecked his old finds), all those IDs were revoked. There are no P. lineatus in the Mediterranean (as far as we know, with the exception of the Iberian coast), but those websites still claim there are. “The damage” has already been done; the fake news is spreading. ;)
We should not underestimate the power of iNat. We are adding to worldwide knowledge … but to the confusion as well. So: while we cannot stop parrots from parroting IDs, we could at least give more weight to observations that are labelled “research grade”.
Thanks for that reference. It’s not clear whether those suggestions are intended for observers as well as identifiers. I think not.
I’m referring to observer behavior. The initial identification made by the observer is critical to the success of the observation. There should be a completely separate set of best practices for observers. “Be bold” isn’t one of them. Too many observers suggest species-level IDs without having the faintest idea what the organism actually is.
The ID etiquette guidelines are useful. I try to get IDs correct, but I make errors. When I make an ID, I know I will likely keep learning in the future. I do believe I have improved over time; some of my old errors make me smile, but I also know they were generally my best effort at the time.
One common error I have made is not looking at all the photos in an observation (#7). In my own observations, I have also learned to include multiple photos when I have them.
I think a problem with a guideline like this is that people are enormously different. Some feel bold when they ID an observation as species X because, while they are sure species X looks exactly like this, they are not 100% sure there isn’t a lookalike. Others don’t feel bold at all when IDing a random blurry spot as a species they have never heard of, because, after all, the computer vision suggested it.
I (only half jokingly) suggest these rules instead:
if you have doubts about your ID skills, don’t worry too much and proceed with caution, but DO proceed
if you don’t have doubts about your ID skills, restrict yourself to the kingdom level
The problems of misidentification and careless agreement are well known here. They are an inevitable part of citizen science. We work on correcting and preventing the errors, but they happen. iNaturalist provides lots of great information, but researchers using the data should (and too often don’t) check it themselves, too. If they don’t, it will bite them. That’s just reality.
I think this is the best reply there is because it essentially covers the issue and all related issues.
We can’t control others and what they choose to do, though we can try working with them and helping them out when it’s appropriate. They can choose to listen and accept help, or they can reject it. Either way, that’s all you can really do.
Observations used as data should be inspected by those using them; that is the responsibility of the data user, not the observer or identifier. In a perfect world, all three would share responsibility equally, but that’s not how it will always work.