An Annual iNat Naturalist Survey?

I work in the software development / project management field as a business analyst, and I always enjoy checking out Stack Overflow’s annual developer survey.

Lately I’ve been thinking how cool it would be if there were a similar survey for the iNaturalist userbase. As a general user, and I’m sure even more so for staff, sponsors, and others who think about where the platform is and where it’s going, it would be very interesting to understand (to the extent it can be understood with a survey like this) the types of people engaging with the platform: demographic info, education level, profession, tools they use (smartphone, microscope, stand-alone camera, etc.), kingdoms of focus for prior observations/IDs, kingdoms of interest for future observations/IDs, etc.

Curious to know what everyone else thinks and if this has ever been entertained by staff in the past.


I like the idea! Though I think it would be a lot of work to make this happen. One area of need would be making sure that any survey is available to a variety of people across different languages and platforms (app users, web). With iNat, I feel like there’s such a bias in terms of where most users are from (wealthier countries, better internet) that it would be really important to make sure, to the extent possible, that a survey didn’t perpetuate that bias (and of course that the bias was taken into account when analyzing outcomes).

I do think that if done well this could yield some valuable insights. I think some info is potentially available via mining what is already on the platform (metadata in photos for devices, etc.).


I want to know how many people can use iNat in their first language.
Or is the language they use here their second? Or third?


I think it’s a great idea to consider such an annual survey!


Is that a bias in the negative connotation or just the nature of the tool? Rather than try to correct a perceived bias a better approach might be to find out what changes to iNat might better serve people in developing nations. I know from first-hand experience that many young people in South Asia have access to a smartphone or are working on getting one, so the lack of an imaging device probably isn’t a lasting problem.


I’m not sure exactly what you’re referring to here as a “negative connotation”, but the locational bias in iNaturalist users and observations is well known and backed by data - it is not just “perceived”. Users/observations from countries like the United States vastly outnumber those from other countries, and far exceed what we would expect based on those countries’ populations. For instance, just under half of all verifiable observations on iNat (47%) are from the US (and likely made by US-based observers), while the US contains only about 4% of the world’s population.
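As a quick back-of-the-envelope check of the figures quoted above (47% of verifiable observations, 4% of world population - an illustrative calculation, not an official iNat statistic):

```python
# Overrepresentation implied by the figures above: how many times
# larger the US share of observations is than its population share.
us_share_of_observations = 0.47  # ~47% of verifiable observations
us_share_of_population = 0.04    # ~4% of the world's population

overrepresentation = us_share_of_observations / us_share_of_population
print(f"US observations are ~{overrepresentation:.1f}x its population share")
# → US observations are ~11.8x its population share
```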

Users in wealthier countries also tend to have better internet access (faster speeds, more reliable). Many users have expressed issues on this forum with using iNat without a higher-quality internet connection, and there’s good empirical data showing the inequality of internet access/speed across the world as well. This pattern influences how often users can access iNat, their amount of activity, and potentially their ability/inclination to respond to a survey.

My post doesn’t talk about correcting this bias (there’s nothing iNat can do about internet speeds in the world, for instance), but just making sure that it isn’t perpetuated in a survey. How to do this is a question of survey design and implementation (making sure it’s available across platforms, doesn’t use a lot of bandwidth so it’s accessible to users with lower quality connections, making sure it’s publicized in multiple languages, featuring it on iNat portals and doing outreach to get a representative sample, etc.). The risk is, without those considerations, the survey wouldn’t hear from a representative sample of users and might overweight the responses of users from countries in which iNat usage is common and where access is easy.

I think a great goal/outcome of such a survey could be to

but to do that, one would need a survey that minimized bias by design.

RE: smartphones specifically, I agree that smartphones are pretty accessible in many countries. In fact, in many developing countries, smartphones are the primary way many people access the internet, and smartphone usage is much higher than computer use.

This is actually a great example of the type of potential issue I’m thinking of. If survey respondents are disproportionately from the US and Europe, we might think that web access to iNat is more preferred than it actually is worldwide. Similarly, a lot of power iNat users who are on the forum or are very active (and would probably be likely to respond to such a survey) use the website. Without good survey design and implementation, one could conclude that web access is more important than app/smartphone-based access to some degree (not saying this would happen, just an example). I also think this type of data is already accessible to iNat (iNat can determine which method is used to make an observation, break that down by country, etc.).

So in summary, I’m all for gathering data to understand how iNat is used and using that data to target improvements and growth - I just think implementing such a survey is a complex undertaking, especially when it comes to ensuring that we get a clear picture of how iNat is being used by a diverse set of people in a variety of locations.


Perhaps we just have different definitions of bias. A negative connotation of bias would be along the lines of a preference or an inclination, especially one that inhibits impartial judgment or an unfair policy stemming from prejudice. A more positive use of the term would be along the lines of having a bias for a healthy lifestyle. Either way, it is not the tool (iNat) that is biased.

I think these two quotes contradict one another.

There’s no contradiction: we can’t magically remove that bias from iNat, but a survey’s authors can design the survey to avoid that bias within the survey itself.


How can you not perpetuate a bias without correcting the bias?

Bias has multiple meanings. I think you’re using the first of the definitions below while cthawley is using the second.


He’s not, I don’t think, insinuating that iNaturalist necessarily has a problem with prejudice regarding race, country, or whatever. He’s just saying that the data it collects is skewed toward the United States, because that’s who’s using it the most. A good survey would remember that iNaturalist is, and aims to be, a global platform, and that the person answering the survey might not be an English-speaking, college-degreed, white Anglo-Saxon Protestant who lives in the United States and works in the biological sciences.


To perpetuate the bias here means to ignore that not everyone lives in the US and has the same circumstances, both when designing the survey and when interpreting its results. If you perpetuate the bias with the survey, then you’re likely to not correct it (that is, you’re likely to fail to attract more non-US users). If you contemplate the bias effectively in the survey design, dissemination, and interpretation, you can gain insights into those non-US users you want to attract more of, and in turn make changes to the platform to actually attract them.


That is a good point. I would argue however that iNat data is not “a systematic distortion of a statistical result due to a factor not allowed for in its derivation.” It is very obvious that an internet-based tool requires an internet connection. We all know that. The folks who run iNat know that. The iNat data is not biased. It is just limited by what it is.


I think that is what I just said. Not perpetuating requires correcting. Or censoring I suppose.

i sort of suspect that this would only happen any time soon if a third party – a research company, a university department, etc. – approached the iNat staff and pitched a study that they would do for free.

although the information might be interesting, i sort of think it would be a waste of time if there would not be any potential findings that the staff could realistically do anything with.

i’m not sure that providing the community information about the community would be that useful either… but maybe i’m just not seeing the big picture.


I think we’re all ultimately agreeing about the same things except for maybe the usage and nuance of the word “bias”.

In my experience, a person who thinks about datasets might call a dataset “biased” if it does not present a holistic view of the thing being studied. If I want to understand people’s food preferences, and I ask 50 men and 50 women to identify their favorite food, and 40 men and 5 women respond, the dataset could be said to be “biased” because it represents information that is more heavily weighted toward the perspective of men, or even, one might say, toward the perspective of those holding a binary view of gender. While there might not be bias in the design or dissemination of the survey, the resulting data could be called biased, and in turn actions I take based off of that data could be called biased too.
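One standard way analysts compensate for exactly this kind of uneven response rate after the fact is post-stratification weighting: each respondent is weighted by their group’s share of the target population divided by that group’s share of the respondents. A minimal sketch using the hypothetical 50/50-invited, 40/5-responded numbers from the example above (purely illustrative; not something iNat does):

```python
# Hypothetical numbers from the example: 50 men and 50 women invited,
# but 40 men and 5 women respond.
invited = {"men": 50, "women": 50}
responded = {"men": 40, "women": 5}

total_invited = sum(invited.values())      # 100
total_responded = sum(responded.values())  # 45

# Weight = (group share of target population) / (group share of respondents),
# so the underrepresented group counts more toward any aggregate estimate.
weights = {
    g: (invited[g] / total_invited) / (responded[g] / total_responded)
    for g in invited
}

for g, w in sorted(weights.items()):
    print(f"{g}: weight per respondent = {w:.2f}")
# → men: weight per respondent = 0.56
# → women: weight per respondent = 4.50
```

Note that the weighted respondent counts (40 × 0.5625 and 5 × 4.5) each come to 22.5, restoring the 50/50 balance of the invited population. Of course, weighting can only stretch the voices you actually heard; it can’t recover perspectives that never responded at all, which is why the survey design itself still matters.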

In this sense, and in that iNat presents itself as a global “organism occurrence recording tool”, its data might be said to be biased, since it’s more heavily weighted toward the United States. Again, that’s not by design or intent, and it’s still wonderful data at the end of the day; it’s just the way things currently are.

Whether we call it bias or something else, it should definitely be contemplated in the design, execution, and interpretation of any survey attempt.


Agreed. You’d want it to be designed to primarily explore such insights as might help further the vision, mission, and adoption of iNat itself.

For folks like myself who just kind of keep iNat to ourselves, yeah, for sure. It might arguably help me to understand/engage with the rest of the community better, but that’s not of enough value to justify the cost. It’d really only satisfy curiosity. But maybe for iNat evangelists it might help them understand who they’re reaching effectively and who they’re not? It might also help the iNat team understand where to focus some attention? :man_shrugging:

Yes. I think words and definitions matter. I have yet to find a definition of bias under which a data set that is incomplete, despite best efforts to make it complete, counts as biased. As best as I can tell, iNat has taken great efforts to include as many people and perspectives as possible. I see no evidence of iNat excluding data from any place in the world, by design or by oversight.

Having said that, what I worry about is that if an iNat naturalist survey starts from the assumption that iNat is biased, it will become a self-fulfilling prophecy. The starting point of any survey needs to be as neutral as possible if it is going to provide ‘unbiased’ results.


It seems to me that the available staff does not have the bandwidth to fulfill a long list of already-approved objectives. There are many needs that have been waiting for years to be addressed by only a small staff. I don’t see a lot to be gained from a user survey under these circumstances.


But we start from - anyone on social media, has (some) leisure and disposable income.
Not holding down two jobs.
Not - I wish I could sit around taking pictures of plants or bugs.

However broadly the survey is designed, iNatters are a TINY group among the world’s population. But someone could run two polls: one here in the forum for the ‘power users and addicts’, another on Facebook for the ‘got a life’ users. Two sets of very different data to play with.

iNat will have useful data already. And they choose what they share with us, either in blog posts or in answers to comments. Anyone remember where we had a percentage of iNatters who identify? I have in mind maybe 5%?


I mostly agree with you. I certainly wouldn’t want such an endeavor to come at the expense (or delay) of things like the upcoming cross-platform mobile app, a better separation of the different scenarios that all get lumped into casual, and whatever else is out there.

On the other hand, one might argue that the insights obtained through such a survey, if well-designed, might help staff to more effectively prioritize those very initiatives. As has been pointed out though, a well-designed survey would require a lot of work. However, it also has the potential to provide long-term value, and would, I imagine, be comparatively less costly to execute in subsequent years.

I agree with pisum’s comment that realistically this would probably only happen if a third party approached iNaturalist with the idea of doing something like it for free.