They may not know about/understand the policy, but that is not the same as an unclear policy. Again, I agree it could be made even more explicit, but the policy itself is already clear.
You can think you’re contributing in good faith while also not knowing what the guidelines are. “understanding how to identify the taxon and rule out similar taxa” is something that blindly using CV doesn’t give you – you can split hairs and say “I know how to ID this, I ID it by using this tool!”, but “and rule out similar taxa” is pretty unambiguously not possible by blindly using CV IMO. But regardless, a line about not blindly using CV couldn’t hurt.
No, they completely understand and know about the policy. They think their actions are in accordance with the policy because they think that their use of AI IDs constitutes knowledge of identification.
Merlin aside, I’ve wondered for some time whether a significant fraction of research-grade observations have exactly two IDs: one by the observer who uncritically accepted CV’s first suggestion, and the other by another user who did exactly the same thing. The observation is research grade because CV guessed it twice. This is encouraged by the fact that, as far as I know, neither the observer nor the identifier gets to enter a suggested ID without CV popping up with its suggestions.
Ok, but that’s not a problem with the policy. That person has an unreasonable definition of “knowledge of identification”.
You and I agree that this is an unreasonable definition. But I wouldn’t be so sure that our opinion is universal!
From my interactions with this particular AI ID’er, he struck me as someone who was genuinely trying to follow the guidelines.
There are lots of people out there who are very excited about AI. This definition of “knowledge” that is so reasonable to us is not necessarily shared by them!
And seeing how people are using AI, it will only go downhill from here. (Note that I am not opposed to AI; I even work in this area. But people use it without thinking, or instead of thinking, for themselves.) So I agree this urgently needs to be made explicit.
It’s not clear to me how this situation differs from two people identifying a record using the same field guide that provides wrong information, e.g. it says species X differs from Y in having a white instead of yellow stripe. Both will incorrectly identify it, and non-independently, because they used the same source. Unless you require people to provide the basis for how they identified something, this problem is impossible to eliminate. Also, many initial identifications in iNat can be made by the CV tool; then someone checks other records in iNat, finds they look similar, and confirms the ID. Should people therefore be required to use a source other than iNat to help identify records? Overall, I’d agree with @raymie.
I’m someone who occasionally IDs things when I feel reasonably confident about the ID.
An IT person might reason that these modules were created by humans and so aren’t totally different from human ID. If it was humans who built all the AI modules and decided what traits those modules should focus on, then a CV suggestion could be thought of as a human ID (once removed?). I’m just throwing this out there because it might help conversations with people who aren’t following the policy guidelines.
This might be the way an IT person would look at it. If you are the one writing the software that constitutes an AI module, I could understand how you might come to think of it as a sum of human knowledge, similar to any other human ID. I don’t agree with thinking of it this way, but I could understand how someone might. It might help to start off the conversation with “We can see how these AI modules might seem like a distillation of human thought, but we still expect individual human identifiers to be at the center of our process”.
There may be more differences, but one is that it scales differently. Using an (incorrect) key takes time, and an individual (incorrect) key might not be available to many people. Using a publicly available AI tool does not take much time, and basically everyone has access.
I’m a software developer. Maybe I find an API to such a tool. I could automate using it and ID everything posted on iNat (within the covered taxa for that tool). Another person might have the same idea. Suddenly everything will go to research grade within seconds, without any person ever having looked at it. I am just playing devil’s advocate, of course. But I ask: where is the difference between me using a tool manually but blindly following it and automating that process?
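To make the devil’s-advocate point concrete: a blind-agreement bot is only a few lines of code. This is a minimal sketch with entirely invented function and field names (it does not use the real iNaturalist API, and `cv_top_suggestion` is a stand-in for whatever vision endpoint such a tool might expose); the point is just how little effort "automated blind following" takes compared with manual blind following.

```python
# Hypothetical sketch: a bot that blindly agrees with a CV tool's first
# suggestion for every observation it sees. All names here are invented
# for illustration; this is NOT the real iNaturalist API.

def cv_top_suggestion(observation):
    """Stand-in for a computer-vision endpoint: returns its top guess."""
    return observation["cv_guess"]

def blind_agree_bot(observations):
    """Adds the CV's first suggestion to every observation, with no
    human review step anywhere in the loop."""
    ids_added = []
    for obs in observations:
        ids_added.append((obs["id"], cv_top_suggestion(obs)))
    return ids_added

# Two fake observations; a real bot could iterate over thousands.
observations = [
    {"id": 1, "cv_guess": "Toxostoma crissale"},
    {"id": 2, "cv_guess": "Turdus migratorius"},
]
print(blind_agree_bot(observations))
# [(1, 'Toxostoma crissale'), (2, 'Turdus migratorius')]
```

The only difference between this and a person clicking the first suggestion every time is the speed, which is exactly the scalability concern raised above.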
When there is no moderation/oversight (i.e. blindly following with no consideration of the results) of computer-generated suggestions, that’s potentially a violation of the community guidelines and potentially a suspendable offense.
Send the links to help @ inaturalist to follow up.
If the functionality is even available via the APIs, I’m sure it’s carefully authenticated and only available to trusted entities. The thread was probably directed at manual ID while blindly following the results of outside AI modules. You make a good point about scalability, though. Even with manual entry of the ID, the external AI modules are still more scalable than a key.
Yes, the original poster blindly duplicating the first identifier to make it RG is a major problem I find with the system. There needs to be a meaningful barrier or hurdle to that, even if it’s just a well-worded popup offering the option of withdrawing their original ID rather than duplicating the new one if the OP’er doesn’t know the species well.
I’m certainly not in favour of AI being a confirming vote; it would be better just to write the AI’s ID as an ID-less comment…
I think it’s fine to add IDs based on a Merlin suggestion, as long as you’re not following it blindly, same as with the CV suggestions. If someone says “oh, that’s a cool bird sound”, checks it with Merlin, gets an ID suggestion, checks it out themselves and agrees with what Merlin said, and then adds the ID formally on iNat, I don’t see a problem. On the other hand, if someone just uses Merlin to get a name and then robotically adds that ID despite having no personal insight into whether that ID makes sense or not, that seems irresponsible.
AI doesn’t think the way a human thinks, and there are times when an AI or the CV suggests an ID that is so blatantly wrong to any human observer that it’s almost laughable. That doesn’t mean the CV/Merlin AI isn’t helpful, but it does mean that expecting humans to think a little bit before they click can really cut down on the misidentifications out there.
Ditto. Use the tool, but put some thought into it. Don’t let the tool use you.
Why would users not be allowed to use an ID from Merlin when the site is already encouraging them to just pick the top CV suggestion? That isn’t the user’s own expertise, either. What’s the difference?
There are many posts above in the thread that draw the distinction between using the iNat CV or Merlin as a tool and blindly/automatically choosing the suggestion. I would check those out as they provide some good answers to your question.
Sometimes it seems like people really think so.
I was trying to ID a thrasher song this morning while on a hike. Merlin suggested Crissal Thrasher but seemed a little hesitant about proposing that species, even while the bird was clearly singing. Usually if it hears a song clearly, the ID pops up fast and is highlighted. Eventually I was able to get a visual on the bird along with some recordings and photos; it was indeed a Crissal. I was also able to independently confirm all the other birds it IDed by song and call, so it was working pretty well this morning.
It’s a great tool but I do not rely on its IDs exclusively. If I can’t confirm the species myself I typically don’t accept it.