Engaging with nature and the slippery slope of quality

Compared to many of you with a longer historical perspective, I’m a relative newcomer to iNaturalist, so correct me if I’m wrong, but I get the feeling that recently there has been an upsurge in people worried about the number of “poor quality” observations posted. The answer always seems to be that, as the primary mission of iNaturalist is to get people to “engage with nature”, no action can be taken as it might discourage people who might otherwise become valuable iNat users in the future, and/or that using iNaturalist involves a steep learning curve and we must be patient, and/or that everyone is free to use iNaturalist in whatever way they find most satisfying. These are all aims I am more than happy to share, but I find myself ever more wondering whether the “anything goes” approach is really the best or only way to achieve them. Might it not have the opposite effect, actually discouraging new truly interested observers who invest time and effort, but then get no feedback as their carefully crafted observations quickly get hidden in a rising tide of observations at the limit of unverifiability?
Snapping and posting a photo of anything that seems remotely “natural” without giving any consideration as to the ability of the photo(s) to illustrate the detail necessary to identify the subject might I suppose be a very, very first step towards “engaging with nature”, but unless there’s someone on the receiving end (in other words, an IDer, and there’s WAY too few of them) willing and able to explain what is actually needed to improve the quality of that observation in every sense (images, location, cultivated/wild, etc. etc.), then how is the observer going to climb up that learning curve? Or even understand that there’s a learning curve to be climbed? Or even realise they’re part of a community with any number of “real” people all over the world who will be investing time and effort in looking at their observation and trying to figure out what on earth it might be? Or even understand that if they don’t get the response they’re hoping for, it’s not that the “iNaturalist app doesn’t work", but that, for lack of detail, their observation simply isn’t verifiable by either “man or machine”?
One solution could be to make it more difficult to sign up for iNaturalist, making it clear that it is NOT just an app, but a community of people, thus hopefully incentivising those REALLY wanting to engage with nature, learn and participate, while at the same time encouraging those who just want an ID with no strings attached (perhaps the majority of new occasional users?) to use Seek, maybe better suited to their needs without overloading the super-stretched iNaturalist ID process.
This could be followed by a period of “mentoring” for new users, where their first observations (say 50 or so) would have to be approved by special mentors willing and able to actively intervene if necessary to explain/guide the new observers to get and give the most possible from/to the platform.
I did try this for a while, deliberately seeking out new users and giving them special attention, with positive comments and/or encouragement. But I gave it up after only about 0.01% ever responded. Perhaps because, as I understand it, comments are not notified or easily accessible on the app (I only use the website, so I can’t confirm)?
Am I too old-school, thinking that in a community, all participants have duties and responsibilities, not just rights and benefits? And without this, the downward slope risks becoming ever steeper and slipperier?


A better onboarding process could be discussed - actually I think it may already be under discussion elsewhere. But a manual mentoring process would definitely not reduce the burden on the active iNatters.

Also one question: you mention a downward slope. To me that sounds like you are claiming things are getting worse over time. Do you have any evidence for that?


While I am very much an appreciator of quality over quantity and tend to focus more on the photography and clarity side, I’m not terribly concerned about the range of quality on iNat.

Like you, I take the long-term perspective, and doing so opens your eyes to the long history of what we would now call citizen science and our increasing ability to extract useful information not only from past written records kept by individuals and institutions going back centuries, but also from things like old oil paintings being used to check climate data and Paleolithic cave art confirming genetic assessments of the coloration and patterns of certain species.

We can decry ‘poor quality’ images all we want, but pretty much all of them have useful data in them that can be mined later, and it may not even be for the species the picture is ostensibly of.

A while back I was contracted to develop a rare plant monitoring protocol for Shenandoah NP as part of my grad work, with the intention that this would be used to assist in monitoring and forecasting the effects of climate change on the species in question. I spent a lot of time thinking about this and developed a simple, cheap, easy to work with protocol targeted at the specific species in question, but I also intentionally built the protocol so that there was a lot more data captured which could be later revisited, mined, and correlated with the initial project purpose. I specifically mentioned this in a portion of my instruction manual for the protocol because I know how much extra data is floating around that can be very useful down the line, even if we don’t know exactly what that use will be later on.

My advice would be not to worry overmuch about the quality of the observations; there’s still useful data in there, even if it’s not a good observation right now for X species or genus.


Of course we should be patient and understanding but, as so often, do-goodism does not lead to anything really good.

I have developed this point of view. Regarding users who participate under duress, I think we should discourage teachers/educators from compelling other people to use iNat just once if they are not expert in its use and do not provide a necessary preparatory course. This is particularly true in the case of professionals who charge money to have people use iNat.
It is nothing new that huge quantities of poor-quality/wrongly identified/unmarked observations come from events or from “educational activities”. This is understandable, since iNat can be difficult to use well for young or very young people who, after all, often just end up learning how to create an observation. But… did they really understand what they were doing?

Regarding “expert users” who are interested only or mainly in non-wild organisms and apparently do not care about/do not understand how to mark an observation, I can say that I have found two kinds. One is collaborative and seems just to need a reminder of the importance of marking observations/improving the precision of the position/adding more photos, etc. The other is totally unresponsive to any recommendations.
I do not take things for granted and I know that there can be users who have genuine difficulties. Yet I think we should consider the possibility that repeated uploads of unmarked observations of non-wild organisms, or observations with the wrong position, especially when the user has already been reminded many times to comply with iNat rules, should be treated as a wrong DQA, with all that follows.

It has been a long time since I last looked at the iNat page on Google Play. I remember that some years ago I thought it was not made clear enough that iNat is not just an alternative to PlantNet. Apart from this, I think that more exhaustive information for new potential users is never too much.


The upsurge you’re noting is likely one of two things:
1, school projects. Regrettably, this is a bit of a scourge on the site: during the school season, teachers frequently assign students to make iNat observations, and then we are left with the annoying task of quality control. It happens every year, but we are given the tools to deal with it.

2, there has been a surge in observations since covid that hasn’t really slowed. For the stuff I work with, there were 66K Bombus observations uploaded in 2019, for 2020 there were 123K uploaded, and last year there were 210K. This surge is largely newcomers who don’t know what they need to be photographing on the organism, or don’t fully appreciate the scientific contributions of the site. But, these things improve, just go look at my early stuff compared to the later stuff. New people anywhere are going to be less skilled than others, but it’s really up to them to find help to improve. I did that for birds and bees, find people who know more than you do and hang around them. Also many of these lower quality photos are still identifiable, just look what we did with bees.

The thing is, thanks to the Data Quality checklist on each observation, the fact that we are all able to place IDs, and the ability to communicate with other users, this site is much better equipped to absorb low-quality data than a site like eBird is. To compare, since 2020 eBird has also had a spike in usage, and a problem has been a whole bunch of people uploading whatever MerlinID heard. We are now inundated with trash data that we can do nothing about; the eBird reviewers are overwhelmed just trying to weed out the surge of MerlinID rare birds being reported and aren’t able to deal with the fact that Merlin is a little too eager to call Northern Mockingbird, Carolina Wren, and Bobolink. Basically, that database is trashed and there’s not a lot that can be done about it.

Here we do have the tools, my suggestion is to use them.


I’ve been using iNaturalist for about five and a half years now. In each of those years, there has been lots of discussion about poor-quality observations in the month after the City Nature Challenge. Is what you’re seeing fitting that pattern or do you think something else is going on?

I haven’t particularly noticed any greater problems than usual myself. If anything, I’m seeing such a volume of good, identifiable observations in the region where I do most of my IDs (New England in the northeastern US), that I feel rather overwhelmed trying to keep up. But then I remind myself I feel that way every year in spring!


Honestly, I’m not exactly sure how the Computer Vision works, but suppose it calculates a percentage of similarity to the organisms it has been trained on. In the case of really poor observations, where the percentages are minimal for everything, couldn’t iNaturalist automatically warn the poster during the upload process about the quality of the photograph and how they could improve it and try again (or, if they want, just post it anyway)?
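For what it’s worth, the check being suggested could be sketched as a simple threshold on the CV’s scores. This is a minimal sketch only: the function, the score format, and the cutoff are all assumptions made up for illustration, and the real model’s outputs may be on a completely different scale.

```python
# Illustrative sketch only: cv_scores and the cutoff are assumptions,
# not the actual iNaturalist Computer Vision output format.

LOW_CONFIDENCE = 0.10  # assumed cutoff; the real model's scale may differ


def quality_warning(cv_scores):
    """Return a warning message if no taxon scores above the cutoff,
    otherwise None (scores look fine; no warning needed)."""
    if not cv_scores or max(cv_scores.values()) < LOW_CONFIDENCE:
        return ("We can't find a likely match for this photo. "
                "Try a closer, sharper shot, or post it anyway.")
    return None


# Example: a blurry photo where every score is minimal triggers the warning
print(quality_warning({"Plantae": 0.04, "Insecta": 0.02}))
```

The key design point is that the warning is advisory, not blocking: the observer is still free to post the photo anyway, as suggested above.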


How do you engage with a teacher behind a wave of blurry green stuff planted around a school?
I can push them to Cultivated, but … discouraging - to me too!
My last straw was litter under a deck - no, wait - there is a white rabbit back there.

May simply be - more obs, more observers = more visible problems. I can no longer keep up with my chosen ID corner.


The iNat Teachers Guide has clear guidance with good example resources of how to use iNat with students. (Full disclosure, some resources I created are on that list).

I think Tony and team are doing a good job reaching educators when there isn’t (I infer) much funding or staff bandwidth to address this question. I note that the Educator forum isn’t very lively, and I appreciate that it is kept open despite lack of traffic.

There are steps that can be taken to continue to develop educators to use iNaturalist well to benefit education outcomes and the platform, but it’s a question of cost/benefit analysis. It would be, I think, time consuming and expensive to reach educators with the objective of improving data quality coming from the education sector, requiring at least one staff person.

Do you mean an official trainer certification so that those who train others on how to use iNat have demonstrated core competencies? That could be part of a more robust onboarding process alluded to above.


I’m glad this is being talked about - obviously not the first time, but I think there’s a bit of hesitancy to address this elephant in the room.

I’ve been an active identifier since 2016, and I’ve seen data quality have its ups and downs in that time. Some important changes have definitely resulted in improvement: anyone who has been here for a few years remembers the days before the CV incorporated geography into its suggestions.

The biggest difference now, though, is the sheer volume of users and observations, which is completely unmanageable for the small number of identifiers. Every year, CNC floods the site with massive volumes of poor quality, misidentified observations, and they’ve been persisting for longer and longer. We absolutely do need a better onboarding process to help new users, who are often novice naturalists, understand not just how to use the site but also how to observe nature.


I have also had trouble keeping up with the region I have been trying to ID and clean up unknowns in starting this spring. Most of the observations have actually been of pretty decent quality (other than a couple of school projects), but numerous people have been uploading hundreds of plant observations with no IDs. These users clearly seem to have some knowledge of plants, so it would be very helpful if they added initial IDs to their own observations instead of leaving them all blank. I’ve also gotten a few nasty responses when I ID them as “plant” (not having much knowledge to identify them myself). Some of those also appear to be part of non-school projects, so some instruction from the instigator of the project would go a long way!


Yes, that was not a criticism of anyone but… I am pretty sure that some (or many) of these teachers/educators/etc. are unaware of what iNat is, or are not able to look after a class of students.
Do you have in mind the “average” teacher who, maybe, joined iNat only a few days ago or is not even a user? Do you think we can expect anything good from these people beyond the students adding observations of something?
Apart from this, I think that it would be better for children to observe nature without an interface. For example, I would suggest telling students to take a picture of a plant or an animal that is in front of them, since this would require them to develop observation skills.
In the end, I wonder whether the radiation produced by a smartphone is good for children’s health.


One solution could be to make it more difficult to sign up for iNaturalist, making it clear that it is NOT just an app, but a community of people, thus hopefully incentivising those REALLY wanting to engage with nature, learn and participate

I don’t think iNat should be made less accessible, “more difficult to sign up for”, but that is not what you meant, right? You meant that it should be more accurately advertised as a citizen science platform where identifiers from around the world can interact with one’s observations of living organisms to arrive at an identification consensus. Am I correct? If so, I totally agree. In my opinion, iNat is too often advertised as “just another identification app” that magically and instantaneously assigns a species-level ID to any organism, regardless of whether essential ID clues are visible.

I agree that redirecting less serious users to Seek, in a friendly manner, could help improve iNat data quality. What bugs me is that Seek’s “Computer Vision” (basically one of iNat’s old CV models) is far inferior to iNat’s, often assigning incorrect IDs even to relatively common organisms, or no ID at all (which is probably for the best in the case of potentially toxic plants and fungi). I wonder how tedious it would be to “port” iNat’s current CV to Seek?

When faced with “low quality” (multiple species, extremely blurry photos, abiotic subjects) or unidentified observations (Unknowns), most of which (but definitely not all :face_with_diagonal_mouth:) are uploaded by new users, the most efficient method is to copy-paste prewritten comments. I use these.

Comments are easily accessible in the Android app under “Activity” → “My content”, and a green label is displayed to the right of the “Activity” icon whenever a conflicting/improving ID or comment is added to one’s observations. However, the app cannot send push notifications, so casual users (again, mostly new users) often miss updates to their observations.


As someone who doesn’t post all that many records to iNat in a given month and spends several minutes reviewing and editing one of my photos before uploading it, I find the fast and sloppy cellphone submissions with no ID to be disheartening. It’s almost like littering. I won’t waste time on any of these when I see a bunch dumped onto the site from my region. Others may feel an obligation to review these and that’s their choice but for me they reduce the enjoyment of using iNat. I’ll stick to helping those iNatters who actually seem to care about posting a decent pic and are genuinely curious about getting an ID.


If there is anything I hate, it is gatekeeping.
iNaturalist is great, and a large part of what makes it so great is the relative ease with which anybody can make an observation and, with that observation, be part of and actively support a community and maybe even science itself.
I think no feature should ever be implemented that detracts from that. No increased difficulty that will just lead to this platform becoming (even) more niche.

I see three main difficulties of making high quality, identifiable observations:

  1. You need quite a bit of knowledge of a taxon to know what it takes for photos to be identifiable. If you have that knowledge, you can probably just ID the organism yourself.

  2. What makes a good photo for identification differs from taxon to taxon. You cannot, IMO, hold observers to different standards just because of what they observe.

  3. The price of gear (cameras, microscopes, etc.). Many things turn out meh at best with most phone cameras, but that is the thing people have. It would be unreasonable to demand that users spend a few hundred minimum.

The fact that identifiers can ID to Magnoliopsida, Angiospermae, or even just Plantae if an image isn’t great is the best solution we will find without making iNaturalist unusable for anyone but taxon-experts, IMO.

I disagree (both with that quote and with the idea that it is the reply given to people worried about seemingly worse and worse quality; at least I have never really heard it).
iNaturalist doesn’t have a steep learning curve, taxonomy has a steep learning curve. But iNat is great at teaching taxonomy (which is another reason why I don’t want it to be any more difficult to access).

I have identified quite a bit over the last two months, a very large portion of that through “unknown/life”, which is exactly where most bad-quality observations would fall. There are of course bad observations, but I have never had the feeling that there were that many. On each page (of 30 observations) I used to skip 5 at most, usually just one or two, due to bad quality. I think there is not actually a problem, and I’d even doubt that there is an increase in the relative amount of “bad” observations.
What may have made the amount of complaints rise recently is the City Nature Challenge, though.

I think the best way to address the issue is a change in “marketing” (both official, and by users). iNaturalist most often is introduced as an identification app. Obviously people want to identify anything and everything they have seen, therefore. I think however, the community and scientific relevance should be the main focus points. If someone asked me “What is iNaturalist?” and I said “it’s an app that lets you identify plants and animals and every other living thing”, that person has a totally different expectation for this platform than if I had said “it’s a community of people who like to observe nature and are interested in contributing to science”. That way, no gatekeeping is necessary.


The quality and content of images needed to ID an organism varies wildly. A lot of birds can be identified positively from the crunchiest, blurriest photo ever, whereas other organisms might need eight photos from different angles and a microscope, and there’s no way to predict which one you’re dealing with.


Have a copypasta ready - a politer version of: if you had used Plant, or a better ID, then I wouldn’t have to waste my time on you. Grumbles off.

I beg to differ. Look at the Forum posts - which are from the small, dedicated group of enthusiasts. And the same questions cycle up again as new people join us.
How do I …
Why doesn’t …
Observers versus identifiers versus observers


Which suggests that the purpose and usefulness of the City Nature Challenge should be revisited from time to time.

This is one of those terms whose definition depends on one’s point of view.
“Vetting” is a term used by the person doing the gatekeeping.
“Gatekeeping” is a term used by the person being vetted.
With that said, though, I understand what you are saying. I disagree that there has been an “upsurge” in the kinds of complaints that the OP described. I’ve been on the Forums for a few years now, and from the very beginning, there has always been an undercurrent of those who would prefer that casual observers be their unpaid field assistants, gathering research-quality data for free.

“You get what you pay for.”

I’ve said this before, but if a researcher wishes to hire me to gather data for them, then they get to specify what data I gather and at what quality; if I’m observing things of my own accord, they don’t.


For kids over 13 it seems like an interested teacher or administrator could look for kids already using iNat in the schoolyard. And reach out to those specific kids about starting a “project” at the school. In this way, the teacher would be “pulling” kids into a project that are already active on iNat. Instead of pushing iNat onto kids who haven’t already demonstrated an interest by using the tool.


I agree with the need for better onboarding… does iNat have a dialogue mode that can be turned on and off (or required based on the number of observations a user has made)…

if observations == 0…
A pop-up would appear when the observation is submitted.

The observation you are submitting is marked as wild.

What are wild observations for?
a) plants that weren’t planted
b) animals that aren’t pets
c) an unintended weed growing in a pot alongside a cultivated plant
d) all of the above
e) none of the above

If the user answers correctly, the observation is submitted. Otherwise, they get to try again.
After the first 100 observations, the user could choose to turn dialogue mode off. It could also be turned back on for increasingly sophisticated tips. It could be based on a role chosen by the user… such as the role of “educator”… that provides tips on serving as a surrogate between a classroom of students and iNat.
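The gating logic described above could be sketched like this. All of the names and numbers here are illustrative (taken from the proposal itself, not from any actual iNat feature or API):

```python
# Sketch of the proposed "dialogue mode": a one-question quiz that gates
# submission for new users. Purely hypothetical; not an existing iNat API.

QUIZ = {
    "question": "What are wild observations for?",
    "options": {
        "a": "plants that weren't planted",
        "b": "animals that aren't pets",
        "c": "an unintended weed growing in a pot alongside a cultivated plant",
        "d": "all of the above",
        "e": "none of the above",
    },
    "answer": "d",
}


def dialogue_mode_active(observation_count, opted_out):
    """Per the proposal: dialogue mode runs for a user's first 100
    observations, unless they have chosen to turn it off."""
    return observation_count < 100 and not opted_out


def submit_allowed(user_answer):
    """The observation is submitted only on a correct answer;
    otherwise the user gets to try again."""
    return user_answer.strip().lower() == QUIZ["answer"]
```

A nice property of this design is that the quiz content could be swapped per role (the “educator” role mentioned above would simply load a different set of questions) without changing the gating logic.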

Maybe some of this already exists or it’s been ruled out. If so, sorry. I would be willing to bet that iNat skews toward experiential learners who are less likely to read manuals.
