Agreeing with experts and "research grade"

I’m a little puzzled by the statement “blindly agreeing with identifiers on observations outside their area of expertise”. How do you know it is outside their area of expertise? And how exactly are you determining what is someone’s area of expertise?

I think the premise should be: you should not “agree” with an identification unless you are able to independently verify that identification with some outside investigation. You can use an “expert’s” identification as a starting point for your investigation, but iNat is for engaging people in nature and helping them learn, not just tallying up “research grade” observations for yourself or anyone else (though there does seem to be a very competitive component to that). I don’t think you should worry about how others are IDing; just hold a standard of your own that you don’t “agree” until you have enough information to do so.

4 Likes

By bringing up the subject. Something to the effect of:

“I noticed that you have X number of identifications for this taxon. If you have the time, would you be willing to help me with tips/IDs?” and their response is “I don’t know anything about that taxon; I’m following what others say. My area of expertise is Y.”

1 Like

Well, if this is happening a lot, I would find it troubling. And you’ve had this happen in many instances?

Unfortunately, these types of “power identifiers” do exist. They add “agrees” to huge numbers of observations very quickly, with a relatively high error rate. Most of the ones I have interacted with have been younger users, who likely are very enthusiastic and encouraged by the “top 10” lists to up their totals. They can usually be at least partially reined in with some encouraging feedback (the last thing we want to do is scare off the next generation of naturalists!), and I’ve not seen any persist as a problem after a while.

The only reason I added on here is to stress that being in the top 10 for identifying an organism does not necessarily equate to expertise in that taxon. It just means you have clicked “agree” a lot.

10 Likes

I wonder, then, if it would be more helpful if those top 10 lists only showed the top 10 leading identifiers. In other words, supporting/agreeing IDs wouldn’t count, which might encourage those that want to be in the top 10 for a taxon to actually learn their stuff.

9 Likes

I would question whether incentivising people to add the first ID on a particular observation might be even worse. Instead of blindly clicking agree, those same people would be blindly selecting something that’s “close enough” (followed by its own flurry of blind agrees).

6 Likes

It does not even mean you clicked agree often. Apparently I show up as a top identifier of plants in a few places, driven entirely by doing coarse identifications, moving observations from unknown to family, etc., in the hope that someone better placed than I am can find them more quickly and finish the job.

4 Likes

How are you defining “many instances”? Have I noticed it in a large number of countries by a large number of users? No, I don’t have that kind of experience because I tend to concentrate on observations from one country. Have I noticed it in that one country? Sure, from a couple of power users, which might make the situation seem more prevalent than is actually the case. I’ll probably just try to ignore it from now on. As your questions have suggested - based on my interpretation of them, anyway - it’s hard to know what work went into a particular identification/agreement to say that care wasn’t taken without actually being there myself.

I think that might just shift the incentive structure to something worse - incentivizing users to erroneously add finer-level IDs to up their totals. At the end of the day, there will always be an issue with equating volume of identifications with quality of identifications.

EDIT: I’m not saying that iNat is explicitly labeling these “power identifiers” as experts or anything like that, just that the existence of the Top 10 lists automatically makes that status a goal for some users.

2 Likes

Only agree if you now understand why it is a particular species, because you took the time to learn the distinguishing features.

The entire system does incentivize reaching the two-thirds agreement behind “research grade” (although I disagree with this term) over accuracy. This is a problem with the psychology of the method. Take everything here with scepticism.
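As a rough illustration of that threshold (a minimal sketch in Python; iNaturalist’s actual community-ID algorithm walks the taxonomic tree and weighs explicit disagreements, which this ignores), an observation only settles on a species once more than two thirds of the identifications back the same one:

```python
from collections import Counter

def community_species(ids):
    """Simplified two-thirds rule: return the species if more than
    two thirds of the identifications name the same one, else None.
    (The real iNaturalist algorithm is more involved: it scores
    ancestor taxa and accounts for explicit disagreements.)"""
    counts = Counter(ids)
    species, votes = counts.most_common(1)[0]
    return species if votes / len(ids) > 2 / 3 else None

# A couple of quick agrees on a shaky first ID clear the bar:
print(community_species(["Lupinus nanus"] * 3))                 # Lupinus nanus
print(community_species(["Lupinus nanus", "Lupinus bicolor"]))  # None
```

Note that nothing in this calculation knows whether anyone actually examined the plant; two casual agrees count exactly as much as two careful ones.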

https://www.inaturalist.org/journal/leafybye/22229-the-misidentification-of-lupinus-nanus-or-lupinus-bicolor

Take the case of Lupinus nanus vs. Lupinus bicolor. These two often look very similar, both in person and in photos. People seem to have jumped on each other’s bandwagon when making identifications. Or their eyes aren’t trained to see the 3D in the 2D, or there is no good scale, so it is difficult to tell the actual size of the plants.

If we feel like we trust someone on this site, we are more likely to agree with their identification. We must refrain from doing this, but the site also needs a grant to hire experts to go correct everything, or some other way to bring a level of expertise to the platform. Some photos can deceive, even if the plant is obviously a particular species when seen in person.

I saw one Lupinus identified incorrectly, then others agreed with the incorrect identification. Then I came along and made the case for a different species, even while my identification was flagged “Maverick.” My arguments won those people over, and now Loarie is the “Maverick.” Psychology needs to be removed from the process somehow. Or we need more guidance on how to take good photos of certain species.

See here:
https://www.inaturalist.org/observations/621076

2 Likes

I think this is a wonderful example of the community ID system working - someone who knew the species involved better made a case that convinced other identifiers to agree, and now the observation has the correct ID.

I think there is a desire for observations to be not only correctly identified but quickly correctly identified, and quickly corrected when incorrect. Science takes time, as does the community ID process. Over time, the process works!

6 Likes

:-) Well, we ALL need to remember that identifying photos of tremendously differing quality, from folks with wildly divergent plant knowledge and experience, is a tricky undertaking indeed. When I’m not certain, I almost always say so in my comments. When I’m certain, I also say that. And I rarely venture outside my area of expertise.

My biggest comment to iNat is that if it is to be taken seriously as a scientific resource, there HAS to be some way to better monitor the identifications. If I were not spending significant time most days reviewing the Castilleja and related genera submitted to iNat, there would be extensive misinformation on the site. At the risk of sounding like a pompous toad: without authoritative monitoring, we would have data on iNat identifying a Geranium leaf as a specific Castilleja species (3 cases and counting) mixed in amongst the data posted by folks who actually have some clue about plant identification.

I love iNat, and I think it’s a great and innovative way of involving so many folks in the essential job of documenting biodiversity, and I’ve learned much from people’s postings. But I also think there should be some way to better deal with the problems that flow from the interaction between experts in particular organismal groups and people who, quite understandably, are just starting to be interested - a way that does not discourage them but also prevents the data from being muddied and perverted by misidentifications and silly disputes.

5 Likes

Look, I think what’s missing here is an independent quantitative assessment of identification accuracy across groups and regions. Ideally this should be compared to identification accuracy in other biodiversity collections data, such as herbaria, insect collections, etc.

Anyone who has taken a serious look at herbaria and other collections has found some errors: clerical, documentary, and just straight-up identification errors. The question is how many errors there are. As scientists we need more than anecdotal evidence about error rates to get a full picture of what’s going on.
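Such an assessment is not hard to run once you have a blind re-identification sample. As a minimal sketch (the numbers below are hypothetical, purely for illustration), the misidentification rate and a rough 95% confidence interval fall out of a simple binomial calculation:

```python
import math

def error_rate_ci(errors, sample_size, z=1.96):
    """Point estimate and approximate 95% (Wald) confidence interval
    for the misidentification rate in a blind re-ID sample."""
    p = errors / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical: 12 of 400 blindly re-identified records were wrong.
rate, low, high = error_rate_ci(errors=12, sample_size=400)
print(f"error rate {rate:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
# -> error rate 3.0%, 95% CI [1.3%, 4.7%]
```

Running the same protocol over herbarium specimens and iNat observations from the same region would give the comparison I’m asking for.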

3 Likes

Reiterating the same points, but a) iNat is not a scientific site per se, and b) don’t underestimate the ability of statistical analysis on big data to factor in error rates. You don’t worry about a few stray cents when looking at GDP, for example!

5 Likes

There’s a start to this in the blind ID study Scott worked on. I am not sure how much detail he went into. It was confounded, though (at least when I participated), by the issue of not knowing which organism was the subject of the study. I got several wrong by identifying the wrong plant because the comments and notes were hidden. So it’s a hard thing to assess!

I will say, in a totally qualitative, spit-balling way, that in my area (Vermont) the quality of plant data is no worse than what you get out of most other methods of sampling - photoless observations by summer field techs, etc. With caveats, I don’t consider it a big issue. Within metropolitan LA, however, it’s a whole different story, with duress users, bulk CNC observations that haven’t ever been reviewed, etc. - the data quality is pretty poor. But once you get outside the urban area, the rest of CA is mostly in good shape.

That’s totally anecdotal though, of course :)

This sounds like it works well, and it would be true if things always played out as you describe.

But in practice, most IDs just sit there. I see precious little discussion about IDs. People do seem to appreciate it when I try to spark ID discussion, but it doesn’t seem to be the dominant paradigm here on iNaturalist.

When I browse photos of plants, they’re nearly all littered with mis-IDs. And nearly all of these mis-IDs have no discussion of how the plants were “identified”. And these are just the plants that I feel certain I know how to ID - and I’m no expert on these things.

From my perspective, the photos are nearly useless for learning how to ID plants, because I can’t trust that they’re accurately IDed. I know, from the few taxa I can ID with confidence, that the data cannot be trusted, so when I’m looking through photos of some taxon that I’m trying to learn, I don’t know whether I’m really looking at photos of that taxon or of stuff mis-IDed as it.

I also would strongly dislike this. One, I frequently make data-entry errors. Two, I routinely go back and update my identifications once I’ve done further research, and this can involve both adding more specificity (e.g., family → genus or genus → species) and correcting errors in my own ID.

I can see this being one of the more intensely aggravating features - one that might make me want to stop using the platform out of frustration.

3 Likes

What area are you looking at where nearly all the plant IDs are wrong? Here in Vermont the error rate appears to be well under 1%, and even in areas with more errors it’s nowhere near 50% from what I’ve seen.

1 Like

Hi all, I’ve been a member of iNaturalist for a while, but only recently started contributing seriously. I too have wondered at times when someone disagrees with an identification I’ve made. If it’s a taxon I’m not that familiar with, I check the profile of the person disagreeing to try to establish their credentials so that I can learn, but profiles are sometimes not very helpful. I’ve found it helpful when the person disagreeing gives reasons. I wish more people did that.

Martyn

4 Likes

I agree it would be nice if people always gave reasons for their IDs. However, if they don’t, you always have the option of tagging them on your observation and asking. There are a number of reasons why they might not have explained at first but would be very happy to explain if asked.

8 Likes