Make An Optional "Hide Current Identifications" Setting

I don’t think we fully realize the suggestion bias we have when we open an observation that already has an identification. The mental path of least resistance is to say, “Oh, I see how that thing could be that species,” rather than look closely at the picture and go through the rigors of identifying from scratch.

I wouldn’t want to remove the ability to see comments and identifications on an observation, but for those who want the challenge, I propose:

Create an optional setting that hides existing comments and identifications in search views and on observation pages, with a button in each place to show them: for search results, a button in the corner of the thumbnail; for observation pages, a button at the head of the comments/identifications feed.

My hypothesis is that using this setting would cut down on erroneous Research Grade identifications by removing suggestion bias.

(As an aside, since iNat happens in a database, it has the capability to track the results, which would be a great research/thesis project for any sociology researchers out there.)

This is a pretty sweet idea for a research project. Not sure that makes it worth it to iNat staff to implement, but it would be cool to see if there is an effect, and, if so, how big it is.

One issue is that some sort of ID process has to happen to get an observation close enough to an ID to make it relevant to the IDer’s expertise. For instance, when I’m IDing, I search, at the broadest, at the Order level.


Those who weren’t around for the Google Group will probably not have seen this, but @loarie did a study of “blind” IDs a while ago - it’s reported in some blog posts by him here. I don’t know if that experiment is still active - if so perhaps Scott can let people know how they can participate.


Yes, we are still measuring identifications that are made ‘blind’ - i.e. IDs made through the blind-ID interface - and they have been somewhat useful in our explorations into reputation modeling.

You can use it (with appropriate place and taxon params) without registering for the experiment, but registering tells us where your expertise lies.

You’re correct that traditional reputation/crowdsourcing models assume that labels are made blind. But it’s hard to imagine a social network where participants can’t see each other’s IDs. The models we’ve been experimenting with accommodate this by having both ‘worker skill’ and ‘worker trust’ parameters, where the trust parameter is how much you’re influenced by existing IDs. You can read a paper about some of the reputation explorations we did last year here:
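The ‘worker skill’ / ‘worker trust’ idea can be illustrated with a toy simulation. This is only a sketch under my own assumptions, not the model from the paper; `simulate_id` and its parameter names are hypothetical:

```python
import random

def simulate_id(true_label, existing_ids, skill, trust, labels, rng=random):
    """Simulate one worker's ID under a toy skill/trust model.

    skill: probability the worker's independent ID is correct (0..1).
    trust: probability the worker defers to the most recent existing ID
           instead of judging independently (0..1).
    """
    if existing_ids and rng.random() < trust:
        return existing_ids[-1]  # influenced by prior IDs
    if rng.random() < skill:
        return true_label        # independent and correct
    # independent but wrong: pick some other label
    return rng.choice([l for l in labels if l != true_label])
```

In this sketch a fully trusting worker (trust=1.0) simply echoes the last ID, reproducing the "everyone shifts over to the new ID" cascade described elsewhere in this thread, while a fully blind worker (trust=0.0) is right exactly as often as their skill allows, which is what a blind-ID experiment tries to measure.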

Certainly this data from ‘blind’ IDs could help us better understand the ‘trust’ parameter. This reputation modeling arc is quite long-term, though, if we implement any of it at all; so if there are any urgent or short-term issues with ‘research grade’ or the ‘community ID’, reputation modeling is probably too long-term a solution.


After an ID is made, the comments should be reinstated. Comments include information such as which taxon is to be identified, so this would help identify any mistakes after the fact.

I just tried out the blind ID and had the same question.

It turns out you can sort of game this by looking at your identification list: it’s always sorted in reverse chronological order, so with the blind-ID modal in one tab and the ID list in another, refreshing the latter lets you see the observation you just ID’d and follow up on your ID if you want to.


My ID error rate seemed a lot higher with blind ID. I often didn’t know which organism was being IDed and chose the wrong one. Maybe there could be an option to hide who made the observation, to avoid factoring in their skill level, but it seems more important just to remind people not to agree unless they are confident.

Wow, it’s super cool that you guys are already working on this and thanks for the links. I’ll read them the next chance I get.

Since there are so many ways of using iNat, I should probably clarify that 99% of the time I use the Explore tab to search my state in ‘grid’ view to see the most recent posts. Sometimes when things are slow I filter out RG and go back into the archive to work on older posts in the geographic boundary.

I’ve often had to intentionally keep myself from pressing ‘agree’ on an observation that has one ID. For better and for worse, I am motivated to see the Needs ID/Research Grade ratio move. For better, because this keeps me active and gives me some numbers to gratify my efforts. For worse, because it creates bias conditioning. It takes effort to do a “from scratch” identification without taking into account another user’s contribution.

I think we’ve all seen situations where an observation ID is confirmed by four people, and then someone comes along and argues for a different ID, and suddenly everyone shifts over to the new ID. There is something fishy about that kind of thing! Just yesterday, someone ID’d a white-tailed deer; I suspected it might be a muley, but I wasn’t sure, so I said so in a comment. Another user made the ID of a muley, so I agreed, because “it definitely must be a muley if he thinks so.” Wait. I didn’t agree because I had examined the photo more carefully or done more diagnostic research. Who is making the ID here? If I had been sure, why didn’t I just make the ID in the first place? Confirmation bias is why.


I don’t know. I think that’s just part of the community ID process. Yesterday I was doing some field work with some colleagues and we found some ‘goutweed’. I said ‘oh look, goutweed’ and we discussed the presence of goutweed at the site, etc. Then another group of co-workers found more ‘goutweed’ in a different part of the wetland, but there it had a more developed flower because it was in the sun. It turned out it was all actually Zizia aurea, and I realized the stuff we saw was too. So we all went back and mentally reprocessed it as Zizia aurea.

My point is, individuals make mistakes, and small groups make mistakes, that get caught with more people. I don’t think it’s always bias; I think it’s just the strength of having photo vouchers. And this stuff happens in ‘professional’ ecology too. The natural world is messy and people change their minds sometimes. There are lots more reasons than ‘confirmation bias’, and anyhow confirmation bias isn’t even always a bad thing. We live in a society, and if we constantly tried to process and figure out literally everything on our own without community involvement, our error rates would be worse, not better. That was the big take-home of the blind ID test for me: for every ID I got wrong with the community because of confirmation bias, I would get two wrong without the community’s discussion, commentary, and focus direction. In my example above, if we had all been in the field alone, some of us would have recorded goutweed and some Zizia. With us all together, I made the goutweed error, but we all eventually came to Zizia. Lower error rate.

So in short? I don’t think the main problem is confirmation bias, i think the main problem is people thinking of agreeing as a ‘like’ button or not critically evaluating the ID.


The power of social conformity is considerable, even with simple “objective facts” like the length of lines on a card, as in Solomon Asch’s classic experiments on social conformity in the 1950s. Asch’s original experiment involved small face-to-face groups, where the pressure to conform was understandably high. A later variant that used only written responses (no face-to-face contact) showed less conformity.

The question is what kind of pressure to conform exists within virtual groups like the iNat community. Unlike the Asch experiments, where the group members were strangers with no particular ‘expertise’ in judging the length of lines, iNat has users with considerable experience and expertise, which arguably amplifies the temptation to “conform” by agreeing with them, beyond the ordinary pressure to “fit in” with a social group.

Basic human social psychology reveals again and again that for many (most?) people, the power to conform, “fit in,” jump on the bandwagon, etc. is considerable. I think it would be unrealistic to assume that wasn’t the case here, where IDing organisms from photos is considerably more complicated/nuanced than judging the length of lines on Asch’s cards. I’m glad some of you folks are thinking about/studying this issue and the impact it might have on the accuracy/quality of iNat data :)


There is also an element of reward for being the (first) one to spot the mistake. I feel a sense of satisfaction in being the first to put in an ID, especially if it is followed by confirmations from identifiers that I respect. I started on iNat not knowing many taxa, and I have worked hard to learn as many as I can. Every time I am “first”, it feels like I am becoming more useful to the community.


I couldn’t have said it better, @kiwifergus ! I love getting the chance to take a first crack at observations and see if others agree. Having a “blind” mode would provide a little more of that feeling, I think, by letting you make an ID and then check it against any existing IDs.

Tangentially, I think the iNat database is set up for an incredible ID training tool if used in a slightly different way, but that’s a separate thread. ;D


I love the idea of having the optional blind button for my own personal ID’ing tool, as stated above by @taitsougstad , to hone my skills and increase my confidence within a taxon, location or project.

And furthering the sentiment of @charlie, I think asking prior identifiers to explain their choice of taxa (tactfully and respectfully, of course) is the best way to substantiate the ID, foster conversation in the group, and help iNat be a space of teaching and learning. But mostly, if identifiers know they will be expected to explain their choice, they will probably tend toward rigorously reviewing their future IDs and/or being more conservative with IDs when they aren’t 100% certain.

A good feature would be one that lets people practice on RG observations without affecting others, in a special function in the app. Then the answers could be supplied and influencing comments could be viewed.

Edited for clarity

@tchakamaura can you explain how to use that special function on the app. I’m not seeing it on first glance. Thanks.

I think it would be a good implementation to solve problems from the thread without creating more. It is not currently available; I edited the post to reflect that better.
