Signifying User IDs versus Computer IDs

(I searched for a similar topic but didn’t find anything-- if this is discussed elsewhere, please direct me that way)

I’ve often found myself starting (and ending) IDs/identification summaries assuming that the observer identified the organism themselves and didn’t simply agree with the “Suggested Identifications” (I guess this is my “pre-robot” bias). I’m perhaps explaining differences between things that the user didn’t know existed to begin with, and this might not be taken well by a beginning user who’s promptly being told “you’re wrong”.

A quick hypothetical example assuming a beginning user with zero knowledge of a group, say gulls:

User photographs a California Gull, computer suggests “Western Gull, California Gull, etc.”, user selects top suggestion of Western Gull (which happens to be wrong), user then receives several “That’s a California Gull because of x y z” comments from gull identifiers (human).

I haven’t gotten explicit negative feedback but I suspect that some new users who followed the suggested ID and then got a comment from someone basically telling them why that’s wrong will not be pleased-- they were just following what the app said.

Has anyone run into problems with this? Would it perhaps help to have an indicator showing “where” the ID came from? (Which is not 100% foolproof, as I often use the suggested IDs for ‘easy’ stuff that I know, and that I know it knows, since it saves typing time.)


This little magic tab icon indicates someone selected a computer vision suggestion.

(As you say, it’s not possible to tell whether they selected it because they’re just following the suggestion, or if they’re using it as an autofill to save typing time.)


Gahh! I never even noticed that thing!

I clearly have no idea what I’m doing.


I never understood why the computer vision icon didn’t get a red colour instead of a grey one. I use it as an autofill to save time, as the iPhone app doesn’t keep the original value and always wipes the existing text away… which causes a lot of delays and mistakes and keeps me typing the same thing over and over again. I must say the computer vision is not as fast as it used to be; it seems to slow down, although many factors influence the speed. And in the end, over time it will give better responses.

I do this all the time as well, so I would not assume that just because of the “computer vision” icon, someone is just guessing…


As someone who uses the app suggestions a lot for things I know nothing about, I often put a question mark, or two if I’m really doubtful, in the description box just to indicate I really don’t know. I don’t know if this actually helps anyone though.


I think that’s a good idea.


I make a lot of corrections (because mollusk IDs from the AI are very often wrong), and I for one have never had anyone comment negatively when I correct an ID. Usually they seem to be grateful, whether they got the ID from the AI, or from a book, or from some other kind of guesswork.

I do sometimes, but by no means always, point out what visible features mean that my ID is correct and the original ID was wrong.


I approach this with the attitude that most people are on iNat to learn more about organisms, not prove how much they already know. So, as long as we are friendly and respectful in sharing our knowledge, they almost always seem happy to get the help.


Absolutely! If there’s a problem here, I think the solution is to encourage people to give and receive corrections graciously, not to refrain from offering corrections. We’re all here to teach and to learn.

Whether or not the computer vision feature was used is not really relevant, as it’s often used as a way to reduce typing as much as for the actual ID. I use it mostly for the little moment of astonishment as I see that once again, it correctly identified something from my mediocre cellphone photo :)


i was thinking about this, along with a related request, and i think the thing that might address both items is to just have a separate button to trigger computer vision suggestions, maybe placed just to the right of the taxon input box. so if you want to input the taxon yourself, just type in the box. but if you need a computer vision suggestion, click the button. if you don’t like the suggestions, just start typing again to pick your own taxon. with separate workflows, you could also better separate and record whether the identification was vision-aided or not.


I don’t think that would address the people who use computer vision to save on typing, even though they know what the taxon is already.


true, but i don’t think that in itself makes the idea of having a button activate computer vision a bad idea.


Me neither!


For beginning users it might not be as nice - it depends on how it was designed and onboarded, probably. I remember when I first saw suggestions pop up, I wasn’t expecting it and I was rather delighted.


I only just recently noticed it as well. But I agree that it should be more visible; I still find myself explaining differences that the uploader probably didn’t know existed.

