@kildor this should be fixed now; let me know if you notice any other issues.
I am also not having success with the mouseover.
I would also definitely be interested to have the stats for myself :)
Hi! Is there perhaps an equivalent tool for Firefox, or no…
No worries if not, just wondering if there is something I missed :)
From 257 to 291 Chrome users!
I did once upon a time say I was going to port it to Firefox, but I haven't done it yet. Thanks for the reminder!
I think I was online … as … it went live
That being the case, what is the use case for this extension?
You can use it or not.
I do. If 3 people have IDed something I don't know.
New name, few IDs - but their profile tells me "working on the taxonomy of …" and they are new to iNat. It is a tool which we have to evaluate. I collect new taxon specialists.
I find it a bit awkward in a specific situation: let's say I made an initial ID of family. I get +1 in your count for the family ID.
Then somebody else refines it to species. I look at it carefully and agree with their species ID. As a result I lose the count for the family ID, but I don't get a count for the species ID (because it's not leading or improving). So the net result is that I lose a count.
It kind of disincentivizes me from following up on observations I've previously identified. Ideally I'd like to keep the count for the higher-order ID, but I don't know how that would be feasible technically. Or maybe I shouldn't care about these numbers.
Inasmuch as there is a flaw, IMO it lies in the stateless classification of identifications. In your scenario you are disincentivized regardless of my extension's logic; Supporting IDs don't even get a flair. It would perhaps be more "fair" to preserve your Improving classification so that your own best interests (such as they are, intentionally gamified by the site) are not at odds with the best interests of the observation and the community. I'm sure there are multiple forum posts about this low-level tension.
Hereâs a related scenario which is familiar to me: You come upon an observation of a beetle with these identifications:
- spider
- beetle
- beetle
You know it's an insect, but you aren't sure it's a beetle. You identify it as an insect, which is inarguably helpful, as it allows the observation taxon to move to Insecta and for insect identifiers to more easily find it, and you are temporarily "rewarded" with an Improving ID. However, when someone else comes along and adds another beetle ID, your ID gets "downgraded" to Supporting. Does that actually matter, in any meaningful way? Not really. However, did your ID (and for that matter, ID #3, which is also Supporting) improve the observation, regardless of what happened after it was made? Yes.
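The "downgrade" in this scenario can be sketched as a toy model. This is my rough paraphrase of the stateless behavior described above, not iNaturalist's actual implementation, and the taxon names are just illustrative:

```python
# Toy model of stateless ID categories -- a sketch of the behavior
# described in this scenario, NOT iNaturalist's real code.
# Taxa are modelled as ancestor paths from the root.

def is_ancestor_or_equal(a, b):
    """True if taxon path a equals b or is an ancestor of b."""
    return b[:len(a)] == a

def categorize(id_taxa, community):
    """Assign a category to each ID in chronological order, given the
    current community taxon. Simplified rules: the first ID of the
    community taxon itself is 'improving'; later or broader matches are
    'supporting'; finer IDs are 'leading'; conflicts are 'maverick'."""
    cats, seen = [], set()
    for t in id_taxa:
        key = tuple(t)
        if is_ancestor_or_equal(t, community):
            cats.append("improving" if key == tuple(community) and key not in seen
                        else "supporting")
        elif is_ancestor_or_equal(community, t):
            cats.append("leading")
        else:
            cats.append("maverick")
        seen.add(key)
    return cats

spider = ["Animalia", "Arachnida", "Araneae"]
insect = ["Animalia", "Insecta"]
beetle = ["Animalia", "Insecta", "Coleoptera"]

# While the community taxon sits at Insecta, the insect ID is improving:
print(categorize([spider, beetle, beetle, insect], insect))
# ['maverick', 'leading', 'leading', 'improving']

# Another beetle ID moves the community taxon to Coleoptera and the
# categories are recomputed; the same insect ID is now merely supporting:
print(categorize([spider, beetle, beetle, insect, beetle], beetle))
# ['maverick', 'improving', 'supporting', 'supporting', 'supporting']
```

The point the toy model makes concrete: categories are a pure function of the current state, so a perfectly helpful ID can change category retroactively through no action of its author.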
After using it for a month now, I really like this extension and have recommended it quite a few times. The color coding for the suggestions is really handy in certain situations. Also, knowing who has at least SEEN a number of this thing is certainly helpful, including being able to see one's own numbers. Thanks for coding this.
All of this is technically right and can't really be argued. All the same, iNaturalist doesn't tally my IDs so neatly; I have to actively go looking for them. So up until now I had no reason to care what fraction of my IDs are merely "supporting." With this extension the "quality" of my IDs is in my face constantly. It doesn't matter how well I understand why the numbers behave the way they do; a smaller number suggests less expertise, so no, I don't want that number to go down. All of this probably means I should (1) turn off displaying these numbers and (2) ID less.
The alternative is to just align myself with the AI and always pick the top suggestion. That way my ID may get corrected by others, but my count will never decrease (i.e. I won't look dumber) just because I got better at recognizing things. But if I just blindly follow the AI, then why am I doing this at all… Sure, most people do it that way, but.
I think I stopped paying attention to "supporting," "leading," "improving," etc. after my first couple of weeks on iNat, and it definitely improved my experience.
I make the IDs that meet my own standard of "confident" (which seems to be higher than many users') and don't really look back.
I was also thinking about how to measure the reliability of an identifier. The metric that came to mind was the number of IDs that agree with the community ID determined for Research Grade observations. If RG is iNat's definition of the "correct ID," then wouldn't it make sense to measure how often an ID'er ended up in agreement with an RG observation?
This obviously sets aside Casual observations, but it does solve a potential problem with the number of leading or supporting IDs: you don't know if they were "correct" until the observation meets the threshold of the critical 2/3 community consensus. Before that, it's just an opinion.
I guess it is also the case that RG observations could be identified wrongly, but that's another matter. The metrics can't solve for everything.
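The proposed metric is simple to state in code. A minimal sketch, assuming you have already fetched a user's identifications somehow; the dict keys `taxon`, `community_taxon` and `quality_grade` are hypothetical names for this example, not the iNat API's field names:

```python
def rg_agreement_rate(idents):
    """Fraction of a user's IDs on Research Grade observations whose
    taxon matches the observation's community taxon. `idents` is a list
    of dicts with the hypothetical keys 'taxon', 'community_taxon' and
    'quality_grade'. Returns None when there is nothing to judge."""
    rg = [i for i in idents if i["quality_grade"] == "research"]
    if not rg:
        return None  # no RG observations, so no verdict
    hits = sum(1 for i in rg if i["taxon"] == i["community_taxon"])
    return hits / len(rg)

sample = [
    {"taxon": "Coleoptera", "community_taxon": "Coleoptera", "quality_grade": "research"},
    {"taxon": "Araneae",    "community_taxon": "Coleoptera", "quality_grade": "research"},
    {"taxon": "Insecta",    "community_taxon": None,         "quality_grade": "needs_id"},
]
print(rg_agreement_rate(sample))  # 0.5 -- Needs ID rows are excluded
```

Note how the Needs ID row is simply dropped, which is exactly the "just an opinion until consensus" caveat above.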
I really try not to care, as it would massively change what I am doing here. However, I have to force myself not to care, as my brain is wired in a way that these kinds of gamification usually work very well on me.
If I allowed it to matter to me, I would probably not do what I am doing at the moment. The taxa I am trying to push out of Needs ID have 134 pages of mostly agreeing IDs, I suspect… maybe 20% will be corrections. The oldest observation has been hanging there for 13 years (OK, that even surprised me)… the oldest one with no ID other than the observer's initial one is only 3 pages later and also 5 years old… because IDers often refrain from going through these kinds of taxa. Now they might have one more reason to do so.
If I allowed those kinds of numbers to matter to me, I would just leave those observations hanging there in Needs ID and search for easier targets. That is why I still do not really like that extension: I still do not see how it would help evaluate IDers' work, but it has the potential to influence it anyway, if, like me, you are motivated by numbers.
I use it as a blunt tool
For example
https://www.inaturalist.org/observations/106856930
TonyR has 551 because he set up the spiders of Cape Peninsula project, then tidied up the data.
I have 395 because I went through methodically and pushed our most observed spider to subspecies. That number rewards dogged persistence, but doesn't translate to spider sense beyond the blindingly obvious ones.
If you go to the taxon leaderboard
https://www.inaturalist.org/observations?verifiable=true&taxon_id=904618&preferred_place_id=113055&locale=en&view=identifiers
I would pick out wynand_uys (= IDs live spiders from pictures), hrodulf, razorspider (= spider taxonomy), djringer and spidermandan.
That is not âby the numbersâ but lived experience on iNat.
You could also use this: https://www.inaturalist.org/identifications?user_id=5346162&category=maverick&taxon_id=52672
But change the user ID and the taxon ID. Or just delete "&taxon_id" to see all the mavericks.
This isnât much work, and gives you what youâre looking for.
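If you want to check this for several users, a tiny helper can build those URLs. This just reassembles the same link structure shown above with `urlencode` from Python's standard library; nothing about the site itself is assumed beyond that URL:

```python
from urllib.parse import urlencode

def maverick_url(user_id, taxon_id=None):
    """Build the identifications URL from the post above, filtered to
    the 'maverick' category; omit taxon_id to see all mavericks."""
    params = {"user_id": user_id, "category": "maverick"}
    if taxon_id is not None:
        params["taxon_id"] = taxon_id
    return "https://www.inaturalist.org/identifications?" + urlencode(params)

print(maverick_url(5346162, 52672))
# https://www.inaturalist.org/identifications?user_id=5346162&category=maverick&taxon_id=52672
```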
What about certain obscure obs? There are so many mushrooms that get ID'd by experts, but nobody agrees with them, since nobody knows anything about the group. There are a lot of fungi with only 1 ID. "First for iNat" happens so often in that scene. And in many other groups as well, I assume.
The numbers wouldn't show that expertise, though.