iNaturalist Enhancement Suite Chrome extension v0.7.0: identifier stats

I am also not having success with the mouseover.

I would also definitely be interested to have the stats for myself :)

Hi! Is there perhaps an equivalent tool for Firefox, or no…
No worries if not, just wondering if there is something I missed :)

From 257 to 291 Chrome users!

@kevinfaccenda @dianastuder @mydadguyfieri this change is now live :beers:


I did once upon a time say I was going to port it to Firefox, but I haven’t done it yet. Thanks for the reminder!

I think I was online … as … it went live :grin:

That being the case, what is the use case for this extension?

You can use it or not.
I do. If 3 people have IDed something I don’t know.
New name, few IDs - but their profile tells me ‘working on the taxonomy of …’ and they are new to iNat. It is a tool which we have to evaluate. I collect new taxon specialists.

I find it a bit awkward in a specific situation: let’s say I made an initial ID of family. I get +1 in your count for the family ID.

Then somebody else refines it to species. I look at it carefully and agree to their species ID. As a result I lose the count for the family ID, but I don’t get a count for the species ID (because it’s not leading or improving). So the net result of this is that I lose a count.

It kind of disincentivizes me from following up on observations I’ve previously identified. Ideally I’d like to keep the count for the higher-level ID, but I don’t know how that would be technically feasible. Or maybe I shouldn’t care about these numbers.

Inasmuch as there is a flaw, IMO it lies in the stateless classification of identifications. In your scenario you are disincentivized regardless of my extension logic; Supporting IDs don’t even get a flair. It would perhaps be more “fair” to preserve your Improving classification so that your own best interests (such as they are, intentionally gamified by the site) are not at odds with the best interests of the observation/the community. I’m sure there are multiple forum posts about this low-level tension.

Here’s a related scenario which is familiar to me: You come upon an observation of a beetle with these identifications:

  1. spider
  2. beetle
  3. beetle

You know it’s an insect, but you aren’t sure it’s a beetle. You identify it as an insect, which is inarguably helpful, as it allows the observation taxon to move to Insecta and for insect identifiers to more easily find it, and you are temporarily “rewarded” with an Improving ID. However, when someone else comes along and adds another beetle ID, your ID gets “downgraded” to Supporting. Does that actually matter, in any meaningful way? Not really. However, did your ID (and for that matter, ID #3, which is also Supporting) improve the observation, regardless of what happened after it was made? Yes.
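A minimal sketch of that stateless recomputation, under loudly stated assumptions: this is not iNaturalist’s actual algorithm, the taxa `SPIDER`, `INSECT`, `BEETLE` are toy ancestry paths, and the simple “more than 2/3 of non-neutral IDs” rule is my own stand-in for the real community-taxon logic. It nonetheless reproduces the beetle scenario above:

```python
# Toy taxa as ancestry paths (tuples); names are illustrative only.
SPIDER = ("Animalia", "Arthropoda", "Arachnida", "Araneae")
INSECT = ("Animalia", "Arthropoda", "Insecta")
BEETLE = ("Animalia", "Arthropoda", "Insecta", "Coleoptera")

def is_ancestor(a, b):
    """True if path a is an ancestor of, or equal to, path b."""
    return b[:len(a)] == a

def community_taxon(ids):
    """Deepest candidate taxon supported by more than 2/3 of the
    non-neutral IDs (ancestor IDs count as neutral). A simplified
    stand-in for iNat's real community-taxon algorithm."""
    candidates = {t[:i] for t in ids for i in range(1, len(t) + 1)}
    best = None
    for cand in sorted(candidates, key=len):
        agree = sum(1 for t in ids if is_ancestor(cand, t))
        disagree = sum(1 for t in ids
                       if not is_ancestor(cand, t) and not is_ancestor(t, cand))
        if agree / (agree + disagree) > 2 / 3 and (best is None or len(cand) > len(best)):
            best = cand
    return best

def categories(ids):
    """Recompute every ID's category from the current ID set alone --
    nothing about an ID's earlier category is kept (hence 'stateless')."""
    community = community_taxon(ids)
    out, seen = [], set()
    for t in ids:
        if community is None or is_ancestor(community, t) and t != community:
            out.append("leading")      # finer than the community taxon
        elif t == community and t not in seen:
            out.append("improving")    # first ID of the community taxon
        elif is_ancestor(t, community):
            out.append("supporting")   # the community taxon or an ancestor
        else:
            out.append("maverick")     # conflicts with the community taxon
        seen.add(t)
    return out
```

Running `categories` on `[SPIDER, BEETLE, BEETLE, INSECT]` labels the insect ID improving; append one more `BEETLE` and the same ID is recomputed as supporting — exactly the downgrade described above.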


After using it for a month now, I really like this extension and have recommended it quite a few times. The color coding for the suggestions is really handy in certain situations. Also, knowing who has at least SEEN a number of these things is certainly helpful, including being able to see one’s own numbers. Thanks for coding this.


All of this is technically right and can’t really be argued. All the same, iNaturalist doesn’t tally my IDs so neatly; I have to actively go looking for it. So up until now I had no reason to care what fraction of my IDs are merely “supporting.” With this extension the “quality” of my IDs is in my face constantly. It doesn’t matter how well I understand why the numbers behave the way they do: a smaller number suggests less expertise, so no, I don’t want that number to go down. All of this probably means I should (1) turn off displaying these numbers and (2) ID less.

The alternative is to just align myself with the AI and always pick the top suggestion. That way my ID may get corrected by others, but my count will never decrease (i.e. I won’t look dumber) just because I got better at recognizing things. But if I just blindly follow the AI, then why am I doing this at all… Sure, most people do it that way, but.

I think I stopped paying attention to “supporting” “leading” “improving” etc after my first couple of weeks on iNat, and it definitely improved my experience.

I make the IDs that meet my own standard of “confident” (which seems to be higher than many users) and don’t really look back.


I was also thinking about how to measure the reliability of an identifier. The metric that came to mind was the number of IDs that agree with the community ID on Research Grade observations. If RG is iNat’s definition of the “correct” ID, then wouldn’t it make sense to measure how often an IDer ended up in agreement with an RG observation?

This obviously sets aside Casual observations, but it does solve a potential problem with counting leading or supporting IDs - you don’t know if they were “correct” until the observation meets the threshold of the critical 2/3 community consensus. Before that, it’s just an opinion.

I guess identified RG observations could also be wrong, but that’s another matter. The metrics can’t solve for everything.
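A sketch of that agreement metric, assuming each identification record carries the observation’s quality grade and community taxon. The dict keys below are my own guess at a convenient shape, not a documented schema, and the sample records are made up; a live version would have to fetch the user’s identifications from iNaturalist first.

```python
def rg_agreement(idents):
    """Fraction of a user's IDs on Research Grade observations whose
    taxon matches the observation's community taxon. Returns None when
    the user has no IDs on RG observations."""
    rg = [i for i in idents if i["quality_grade"] == "research"]
    if not rg:
        return None
    hits = sum(1 for i in rg if i["taxon_id"] == i["community_taxon_id"])
    return hits / len(rg)

# Hypothetical records: three RG observations (two agreeing) and one
# Casual observation, which the metric sets aside as described above.
sample = [
    {"taxon_id": 101, "quality_grade": "research", "community_taxon_id": 101},
    {"taxon_id": 102, "quality_grade": "research", "community_taxon_id": 103},
    {"taxon_id": 104, "quality_grade": "research", "community_taxon_id": 104},
    {"taxon_id": 105, "quality_grade": "casual", "community_taxon_id": None},
]
```

On the sample records this comes out to 2/3, with the Casual record excluded.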

I really try not to care, as it would massively change what I am doing here. However, I have to force myself not to care, as my brain is wired in a way that these kinds of gamification usually work very well on me.

If I allowed it to matter to me, I would probably not do what I am doing at the moment. The taxa I am trying to push out of Needs ID have 134 pages of mostly agreeing IDs, I suspect… maybe 20% will be corrections. The oldest observation has been hanging there for 13 years (ok, that even surprised me)… the oldest one with no ID other than the observer’s initial one is only 3 pages later and also 5 years old… because IDers often refrain from going through these kinds of taxa. Now they might have one more reason to do so.

If I allowed those kinds of numbers to matter to me, I would just leave those observations hanging there in Needs ID and search for easier targets. That is why I still do not really like that extension: I still do not see how it would help evaluate an IDer’s work, but it has the potential to influence it anyway, if you are also motivated by numbers like me.

I use it as a blunt tool
For example
TonyR has 551 because he set up the spiders of Cape Peninsula project, then tidied up the data.
I have 395 because I went through methodically and pushed our most observed spider to subspecies. That number rewards dogged persistence - but doesn’t translate to spider sense beyond the blindingly obvious ones.

If you go to the taxon leaderboard
I would pick out wynand_uys (= IDs live spiders from pictures), hrodulf, razorspider (= spider taxonomy), djringer and spidermandan.
That is not ‘by the numbers’ but lived experience on iNat.

You could also use this →

But change the user ID and the taxon ID. Or just delete ‘&taxon_id’ to see all the mavericks.

This isn’t much work, and gives you what you’re looking for.
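If you find yourself editing that query string often, a tiny stdlib helper can strip a parameter programmatically. The URL in the usage note below is a placeholder for illustration, not the link above:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def drop_param(url, name):
    """Return `url` with every occurrence of query parameter `name` removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k != name]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

For example, `drop_param("https://example.org/identifications?user_id=me&taxon_id=47118", "taxon_id")` returns `https://example.org/identifications?user_id=me` — the same effect as deleting ‘&taxon_id’ by hand.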

What about certain obscure obs? There are so many mushrooms which get ID’d by experts, but nobody agrees with them, since nobody knows anything about them. There are a lot of fungi with only 1 ID. “First for iNat” happens so often in that scene. And in many others as well, I assume.

The numbers wouldn’t show that expertise though.

Yeah, the more you think about these types of metrics, the more you see the limitations. The fact that RG is not always correct is a big issue, and not just with my proposal but also with the current metric. If we’re actively disincentivizing people from putting in Maverick IDs when everyone else is wrong and they’re right, then that is broken, too.

I also thought about just the total number of IDs (by taxon or as a whole) as a metric. That might tell you how much time an identifier has spent on that taxon or group, or on the site as a whole. I admit to being impressed when someone’s profile indicates they have done tens of thousands of identifications, let alone hundreds of thousands.