A great frustration many of us have with iNat is the ease with which inexperienced users can “agree” an observation to “research grade”. Many users seem to use this button as a sort of thank-you. A possible solution that I’ve not seen suggested is to implement a pop-up form when the agree button is clicked (and, better yet, for all identifications). This could be used to nudge the identifier (particularly users new to the site) into making more informed decisions.
For example, this could require a user to rate the confidence of their ID on a scale (1-10). This form would also be a great place to provide tips for inexperienced naturalists on HOW to accurately identify.
“Don’t guess at species-level IDs”
“Don’t agree with an ID unless you recognize diagnostic traits”
“Be sure the ID is geographically appropriate”
“Be sure the species can be identified from morphology alone”
Another useful addition to this form would be an expertise index: a second required field asking users to rate their expertise in the taxon they are identifying (1-10).
And how about additional (optional) boxes that ask specific questions related to the ID… “What morphological traits support your ID?” “What diagnostic traits are not visible in this observation?” “Are there other taxa that this might potentially be?” I suspect that by asking specific questions (versus the single empty box we currently have), users will feel more of an impulse to add detail and open a dialogue.
The idea is to make the process of agreement and identification more thoughtful and communicative. In its current state, the vast majority of IDs are offered with no comments, which makes it difficult to know what weight they should be given. Of course, this identification form shouldn’t be made so onerous as to discourage participation, which is why I would suggest the only mandatory data to be the confidence and expertise indices.
Housekeeping note: even though there is another similar feature request open, I approved this one because it included more specific and detailed suggestions, and a wider scope of application. I recommend everyone read through the other discussion before chiming in here.
Just a suggestion to avoid this becoming exceptionally onerous: if there were a way to cache or store your scores and pre-populate them on subsequent ID entries, that would be much cleaner.
I can’t emphasize enough how little I would enjoy filling in that I am a 10 on confidence and experience every time I wanted to ID a Blue Jay.
I’d also encourage you to think how this would impact people who try to do the coarse ID process to get stuff out of unknown etc.
I don’t think there is a need for users to rate their own expertise when iNat could do it automatically based on the stats of how often a user’s ID of a taxon ‘sticks’, i.e. is the leading/dissenting ID that gains agreements from other users. So basically, as tony said, a reputation system. At the minimum, would it not be better if a user had to make a certain number of ‘correct’ IDs before their IDs are able to turn an obs to RG? I think there is already a threshold number of obs needed before a user can create projects & places, so there is a precedent.
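A minimal sketch of what that automatic scoring could look like. This is purely illustrative: the data shape, the `min_ids` cutoff, and the notion of an ID “sticking” (matching the eventual community ID) are my assumptions, not anything iNat actually exposes.

```python
from collections import defaultdict

def expertise_scores(id_history, min_ids=10):
    """Per-taxon fraction of a user's IDs that 'stuck'.

    id_history is a list of (taxon, stuck) pairs, where stuck=True
    means the ID agreed with the eventual community ID. Taxa with
    fewer than min_ids identifications return None, since a rate
    based on a handful of IDs is not meaningful.
    """
    counts = defaultdict(lambda: [0, 0])  # taxon -> [stuck, total]
    for taxon, stuck in id_history:
        counts[taxon][1] += 1
        if stuck:
            counts[taxon][0] += 1
    return {
        taxon: (stuck / total if total >= min_ids else None)
        for taxon, (stuck, total) in counts.items()
    }

# 12 correct and 3 incorrect Blue Jay IDs -> a 0.8 stick rate
history = ([("Cyanocitta cristata", True)] * 12
           + [("Cyanocitta cristata", False)] * 3)
scores = expertise_scores(history)
print(scores["Cyanocitta cristata"])  # 0.8
```

A threshold system like the one suggested above would then just be a check that the stick rate exists (enough IDs) and clears some bar before an ID can push an observation to RG.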
From personal experience, I think Dunning–Kruger renders self-assessment useless.
I do think some optional form fields would be great in a lot of situations, but that doesn’t really resolve the issue at hand, because our more prolific experts will not go to the trouble of justifying their IDs in many cases. I know I sometimes add comments on tricky taxa, sometimes I don’t, depending on whether I’m on mobile, how lazy I’m feeling, etc.
Only downside of this is that there are very specialised experts who make infrequent IDs only for a very niche group. They would struggle to reach this threshold.
But this is already a problem with the current system. How do we tell the difference between a specialised expert making their 1st ID and a new user making their 1st ID? A specialised expert has no value unless their status is known to the community, either via name recognition or because they declare their expertise when their IDs are questioned. Either way, the value of their IDs soon becomes apparent and their IDs quickly gain agreements. So under both the current system and a threshold system, the problem is (or should be) quickly solved by the community.
Yeah that was what I was considering. As an example in the Australasian Fishes project/community, there are quite a few experts who only use iNat when prompted externally by the project admin (fish curator at Australian Museum), so their ID counts are very low.
Could we just take the agree button off identifying?
What @joe_fish is suggesting would make me want to leave iNaturalist. I have to say I am totally allergic to 1-10 scale analyses (I don’t think they work on knowledge scales, or pain scales in hospitals, or any other “survey” I’ve taken part in). As for having to go through the suggested form, it would put off an enthusiastic beginner, or a not-so-beginner like me.
The suggestion to cache the self-assessed “expertise index” would be a great quality-of-life improvement, as would automating it with some algorithm based on our correct:incorrect IDs. I think there is great value in adding a confidence score to every ID. For example, I’ve identified ~300 Tridacna maxima… some of those are based on clear evidence of the requisite diagnostic traits, while other observations lack some of the necessary detail and are more akin to educated guesses. For the latter, I try to leave a note of some kind, but that’s more than most users bother to do, as far as I can tell.
There’s been lots of discussion about reputation systems on this forum in the past, and iSpot, which is now more or less defunct I think, used one. It’s a… controversial topic, to say the least. It’s also very hard to do well.
It would remove it or effectively redo it. The community ID would become whatever the ‘experts’ say. If no expert chimes in, it does not get a community ID (or at least not one that is considered validated).
I agree that blind agreeing is a problem on iNat. However, having to fill out a form like this when iding would drastically cut down on the speed and number of identifications I would do as an experienced user, and I think it would also turn off a lot of newcomers. I think the costs would outweigh the benefits.
There are some obvious issues with other reputation systems that have been proposed (which I don’t want to drag up). But I think this system would effectively be a self-scored, rather than a more objective, reputation system, which would actually be worse than some of the other proposals I’ve seen. Bias would be a huge issue, as other commenters have suggested.
I’ll be honest, something like this would probably make me stop doing IDs entirely. There was a bug a while back where I couldn’t hit the enter key and had to mouse-click the button instead, and just that was frustrating enough to make me stop doing IDs for a week. Workflow is important.
I think the dramatic reduction in number of IDs would cause more problems than it fixes. We already have backlogs in the hundreds of thousands, and it’s only getting worse.
And I will always oppose any hint of a reputation system, because to me that conflicts with what iNaturalist is all about. I love the egalitarian nature of the site, where it doesn’t matter what degree you have. I see professional botanists make mistakes that get corrected by 14-year-old hobbyists, and that’s awesome.
There are several issues here:
Votes: on iSpot, experts got 1000 votes per ID in their group; novices started with 1 vote. And yet it was possible for novices (with experience) to overturn an expert’s ID. The ratio of votes between novice and expert can be adjusted if need be.
Experience: your statement assumes that novices are always novices. But on iSpot the reputation system allowed learning and improvement: by posting IDs that got agreements, novices could improve their votes (to a maximum of 500).
Research Grade: on iSpot, research grade required 1000 votes - i.e. one expert, or two very experienced novices (each with over 100 correct IDs agreed to by users with higher reputations than theirs). This translated into an Expert Equivalence, so IDs could be ranked (e.g. 3.5 EE, 2.0 EE, 1.1 EE, 0.4 EE, or a single ID by a total novice: 0.001 EE). This was based on an outside standard: 1 EE is the ID you would get if you went to your local herbarium or museum and asked the expert working there what the specimen was. Furthermore, in practice RG could be ranked based on the number of experts in the group. For some groups (fish in southern Africa), RG was downlisted to 0.5 EE because of the lack of experts (although several very experienced amateurs existed, who got 500 votes if nominated by peers).
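To show how trivially computable such a scheme is, here is a sketch of the iSpot-style weighting described above. The specific numbers (1000 votes per expert, a 1000-vote RG threshold, 500-vote cap for experienced novices) come from this post; the surrounding structure is my assumption.

```python
RG_THRESHOLD = 1000  # total votes needed for "research grade"

def expert_equivalence(votes_per_identifier):
    """Sum the identifiers' vote weights and express the result in
    Expert Equivalents, where 1.0 EE = one expert's 1000 votes."""
    total = sum(votes_per_identifier)
    return total / 1000

# Two very experienced novices (500 votes each) plus one improving
# novice (100 votes) agreeing on the same taxon:
ids = [500, 500, 100]
ee = expert_equivalence(ids)
print(f"{ee:.1f} EE, research grade: {sum(ids) >= RG_THRESHOLD}")
# 1.1 EE, research grade: True
```

Downlisting RG for expert-poor groups, as described for southern African fish, would just mean comparing `ee` against 0.5 instead of 1.0 for those taxa.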
Reputation systems can be quite sophisticated, and can be trivial to compute, and can be earned.
Reputation systems are highly controversial and antagonistically viewed on iNat, but that does not mean they don’t work or are difficult to implement.
After all, iNat does have a reputation system: everybody gets exactly 1 vote, you cannot improve your votes, two votes = Research Grade, and there is no way to evaluate this against an outside standard. That is a very elementary reputation system, one that ignores the fact that some people are experts and some are novices, and that experts are more likely to be correct than novices. It also ignores the fact that some novices can learn and become the equivalent of experts (or even outperform them).
I agree. Whether I make IDs depends on how difficult it is: the more difficult, the less likely I am to make them, and if it becomes too onerous, I will give up.
The biggest complaint from taxonomists asked to make IDs in southern Africa is that “it is too difficult”. Most would refuse were it not for the Identify tool - an amazing curation tool that allows me to tailor the workflow for specific experts and make it more acceptable to them.
For the record, I would 100% be in favor of a reputation system. My suggestion in this post was intended as more of a compromise solution, as it seems there is a vocal opposition to introducing any non-egalitarian elements into the identifying process. I think this is a shame and something that fundamentally hamstrings the usefulness and potential of this site. Surely we can encourage participation without compromising the scientific integrity of the observations.
We should probably try to keep discussion of the reputation system in a thread associated with it. Otherwise that has the potential to hijack any other thread it is mentioned in.
Also, I agree with others - if it gets significantly slower to go through the identification page because of pop-ups or nag screens, I am likely to do a lot less ID help.