(I searched for a similar topic but didn't find anything; if this is discussed elsewhere, please point me that way.)
When writing IDs and explanatory comments, I've often assumed that the observer identified the organism themselves rather than simply agreeing with the "Suggested Identifications" (I guess this is my "pre-robot" bias). So I may be explaining differences between things the user didn't even know existed, and that might not go over well with a beginning user who is promptly being told "you're wrong".
A quick hypothetical example, assuming a beginning user with zero knowledge of a group, say gulls:
A user photographs a California Gull; the computer suggests "Western Gull, California Gull, etc."; the user selects the top suggestion of Western Gull (which happens to be wrong); the user then receives several "That's a California Gull because of x, y, z" comments from (human) gull identifiers.
I haven't gotten explicit negative feedback, but I suspect that some new users who followed the suggested ID and then got a comment explaining why it's wrong will not be pleased; they were just following what the app said.
Has anyone run into problems with this? Would it perhaps help to have an indicator showing where the ID came from? (That wouldn't be 100% foolproof: I often use the suggested IDs for "easy" species that I know, and that I know the system knows, since it saves typing time.)