“Pretty sure” implies far more than 50% confidence, but with gastropods the accuracy is 10% at best, so the error is clearly on iNat’s side.
This is quite the assertion. On one hand you have the team of professionals working on iNaturalist; on the other, thousands of eighth graders offered an app that will identify bugs for them. Who has the real agency here? How is the novice user supposed to know better?
What do you mean, how? Can’t they read in 8th grade? I remember being fairly adult-like in my abilities when I was in school. When I found iNat, I never assumed the AI must be believed 100%, and one wrong AI suggestion is probably enough to teach you that. I don’t see a big problem in people choosing AI suggestions a couple of times without much thought; after all, the suggestions are visually similar and can easily confuse a human. But that doesn’t mean the error isn’t on the human side. It is.
Yes, I agree with that, but it’s better to focus on the user, not the AI. If observers are leading with preposterous IDs (which they are), then those IDs should be discounted by the system. A leading ID from an inexperienced observer should not count towards the community ID.
The hard part (for the system) is knowing when a user is an inexperienced observer. We should work towards that. If the system knew an inexperienced observer when it saw one, it could protect itself from user behavior that tends to lead to incorrect IDs.
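One way to picture this is a simple heuristic flag. Everything here is hypothetical: the field names (`ids_for_others`, `account_age_days`) and the thresholds are assumptions for illustration, not anything iNaturalist actually implements.

```python
# Hypothetical heuristic for flagging an "inexperienced observer".
# Field names and thresholds are illustrative assumptions only.

def is_inexperienced(observer: dict,
                     min_ids_for_others: int = 50,
                     min_account_age_days: int = 90) -> bool:
    """Treat an observer as inexperienced if they have made few IDs
    for other people or their account is very new."""
    return (observer.get("ids_for_others", 0) < min_ids_for_others
            or observer.get("account_age_days", 0) < min_account_age_days)

newcomer = {"ids_for_others": 3, "account_age_days": 10}
veteran = {"ids_for_others": 1200, "account_age_days": 800}
print(is_inexperienced(newcomer))  # True
print(is_inexperienced(veteran))   # False
```

A real signal would have to be much subtler than this (as the reputation-system debate below shows), but even a crude flag like this is enough to reason about the downstream rules.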
If a phrase is frequently misinterpreted, then it doesn’t really matter “whose error” it is. “We’re pretty sure” sounds to me like more than one human being has considered the question and is considerably more than 50% confident in the answer. I know this isn’t actually true, because no human beings were directly involved, but that’s what it sounds like. The AI is doing the best it can, and as @melodi_96 points out, it is remarkably good at some things and continues to get better at everything. But that wording is misleading people into thinking the AI is more accurate than it really is.
That strikes me as a harder problem where a solution would have to involve something like the oft-discussed reputation system.
The easier problem in my mind would be for the AI to serve more higher-taxon identifications in order to
- primarily generate observations that identifiers can add their expertise to rather than observations that identifiers have to correct
- instill a more conservative mindset in the user base
But maybe suggesting family-level or order-level IDs is actually harder than it looks.
Thanks for saying it better than I could.
In addition, there is nothing a user with unrealistic expectations is able to do to help the next user who approaches the auto ID with unrealistic expectations. There is no way for them to know that their expectations are unrealistic. The folks who actually can do something are the ones developing and administering the AI.
Well said, but eventually someone has to suggest a species (or lower), and if that suggestion is incorrect, we’re back to square one.
It’s not enough for the AI to make suggestions—the AI must make appropriate suggestions to a particular user in a specific context. More importantly, every ID submitted to the system must be evaluated, and the system must know when to discount a suspect ID. Some IDs should not be counted toward the community ID.
And then what do you do when the ‘suspect ID’ is right? Have a voting system to overturn the overturning? Allow only specific users to enter ‘suspect IDs’ (i.e. an expert-based system)?
I mean, judging by the way iNaturalist likes to present itself, it is geared towards person-to-person interactions. That’s totally fine. Correcting a person’s judgement is fine. Being corrected is great. Hey, I like to interact with people, and I like to be correct. Go ahead, judge me.
Anyways, when the observer is really just an automaton that pushes a button because the AI strongly suggested pushing the button, then correcting that observer, and perhaps engaging with them, is just not interesting to me. You know? I’d much prefer if the AI did a little less of the suggesting and instead actively encouraged the users to form their own judgement. Higher-level taxon suggestions would do this, in my dreams at least.
Start with the simplest use case with the largest payback. If the system can address this use case, many incorrect IDs will be avoided.
Suppose the observer leads with a species-level ID. If the observer is an inexperienced observer, ignore the ID, that is, do not incorporate the ID into the community taxon. Consequently, at least two additional IDs are required.
If the ignored ID turns out to be the correct ID, that’s great. No correction or back-pedaling required.
The site has repeatedly said they will not implement a reputation/expertise/experience-level system (or whatever you want to call it) where one user has different ‘power’ on the site regarding identifications. Saying we need to simply ignore input from a set of users is just another variant of that.
The world’s leading expert on family X, with 400 scientific publications, is ‘inexperienced’ from an iNat perspective when they create their account. Tell them too bad, your input will be ignored?
Require every user to submit their academic and professional qualifications, as well as how long they have been interested in nature?
I’m not trying to be argumentative, but simply saying inexperienced users should be ignored needs to be fleshed out with regard to how you do that.
But if you want to start that debate yet again and call for it, I guess that is fine.
I think the only thing staff have said is that it wouldn’t be based on outside metrics.
If you do start with this, then don’t forget to be upfront about how you define the payback. If the goal was, say, to minimize the error rate, then one could think about how devaluing the observer’s initial ID would achieve this goal.
On the other hand, if the goal is to engage with inexperienced users and teach them how to think critically about what they see in the natural world, then devaluing their opinion does seem counterproductive. The problem, as I see it, is that the initial ID goaded on by the auto-ID is not actually the opinion of most inexperienced users I’ve seen; it’s more of a reflexive response. Changing that is the kind of ‘payback’ I’d be much more interested in investing in.
That is precisely the goal. In my experience (as an identifier), the initial ID by the observer is a critical determiner of the overall identification process. If the observer offers a species-level ID, and that ID is wrong, the stage is set for error.
If I could search for all Research Grade observations with less than two IDs (not counting the observer’s initial ID), I would expect to find numerous errors there.
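That search could be sketched over an exported dataset. The record layout here is an assumption (something like a CSV export flattened into dicts), not the actual iNaturalist API.

```python
# Sketch of the search described above: Research Grade observations with
# fewer than two IDs besides the observer's own initial ID.
# The record layout is an illustrative assumption.

def suspect_research_grade(observations):
    """Return IDs of Research Grade observations supported by fewer than
    two identifications from people other than the observer."""
    suspects = []
    for obs in observations:
        outside_ids = [i for i in obs["ids"] if i["user"] != obs["observer"]]
        if obs["quality_grade"] == "research" and len(outside_ids) < 2:
            suspects.append(obs["id"])
    return suspects

observations = [
    {"id": 1, "observer": "a", "quality_grade": "research",
     "ids": [{"user": "a"}, {"user": "b"}]},                 # one outside ID
    {"id": 2, "observer": "a", "quality_grade": "research",
     "ids": [{"user": "a"}, {"user": "b"}, {"user": "c"}]},  # two outside IDs
]
print(suspect_research_grade(observations))  # [1]
```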
The system already discounts the observer’s initial ID to some extent. That ID is not counted in the observer’s overall identification tally. Also, the user interface consistently ignores the observer’s initial ID when displaying the number of IDs attained by an observation.
That may be your goal, but I think you’re at odds with iNaturalist’s mission statement:
“[…] at its core, iNaturalist is an online social network of people sharing biodiversity information to help each other learn about nature”
The possibility of being wrong, and in a safe environment, is absolutely crucial for the learning to happen.
My beef is with who gets to learn: the users or the AI.
Suppose the observer’s initial ID is incorrect. The only way learning can take place is if an identifier disagrees with the ID. OTOH, if an identifier agrees with the ID and the observation becomes Research Grade prematurely, the opportunity to learn the correct ID is diminished (if not lost altogether).
A (true) story illustrates the situation. In the southern Appalachian Mountains, there are two species of Clintonia: C. umbellulata and C. borealis. The AI too often suggests the latter even when the plant is the former. When that happens, observers tend to submit incorrect IDs, and numerous observations become Research Grade in error.
You know the rest of the story: incorrect IDs lead to a poorly trained AI, which leads to more incorrect IDs. To make matters worse, finding these errors is difficult. If we can prevent them in the first place, we should.