Why not empower recognised experts?

To be clear, my thoughts around “disempowering” newcomers were really about those who literally just signed up. They have fewer than 100 obs in a taxon… and no ID experience… and start adding tens of species-level IDs in taxa where it’s not possible, without realising the time it can take others to correct, or the impact this might have on the AI / dataset.

So when I said disempowering… I meant more limiting their powers until they understand the power they are being given.

The forum has basic levels of trust… why wouldn’t we have that with identification?


That way some real experts will never be at the same “level” as other users; many of them don’t upload any observations at all.

I’m not saying they need to upload observations per se…

I’m literally just on about an implementation akin to the forum intro.
Even just the smallest intervention possible to explain the basics to users on arrival.
A tutorial, a note, a popup, an email. How does it work with new users at present?

As it notes on the Discourse link mentioned by @bouteloua in the other thread, this is about:
“Sandboxing new users in your community so that they cannot accidentally hurt themselves, or other users while they are learning what to do.”
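A Discourse-style gate like the one quoted could be sketched roughly as follows. To be clear, this is a hypothetical sketch, not anything iNaturalist actually does; every threshold, field name, and function here is invented for illustration:

```python
# Hypothetical sketch of a trust gate for species-level IDs.
# All thresholds and field names are invented; iNaturalist has no such system.

from dataclasses import dataclass

@dataclass
class User:
    account_age_days: int
    improving_ids: int   # IDs of theirs later confirmed by the community
    observations: int

def can_add_species_level_id(user: User) -> bool:
    """New accounts start sandboxed: they can still add coarser IDs,
    but species-level IDs unlock after some demonstrated experience."""
    if user.account_age_days < 7:
        return False
    return user.improving_ids >= 10 or user.observations >= 100

newcomer = User(account_age_days=1, improving_ids=0, observations=3)
regular = User(account_age_days=120, improving_ids=40, observations=250)
print(can_add_species_level_id(newcomer))  # False
print(can_add_species_level_id(regular))   # True
```

The point of the sketch is just that the gate can be cheap to compute from activity the site already records, in the same spirit as the forum’s trust levels.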


I agree, more popups and tutorials are probably needed, not to separate new users in any way, but to make clear to them what iNat is about. Perhaps each new member should read through a tutorial, more detailed than the one we have today for the app, with links to the forum and popular questions.


The great thing about iNaturalist data isn’t that it’s always correctly identified (it may not be) but that (1) there’s lots of it and (2) it’s verifiable. It may or may not be right, but if you need to know, you can find out. That’s really valuable.


You can see what new app users view here https://forum.inaturalist.org/t/guiding-new-users-without-scaring-them-off/2242/5?u=bouteloua


Ok, good to see, thanks!
It’s a bit difficult for me to imagine, as I don’t use the app. Will have to download it when I get my phone fixed :) I know from trying to help my mum use it remotely this last month that she has struggled a bit - especially when I tried to explain about things like withdrawing an ID in order to let a new ID take precedence.

I think withdrawing/agreeing is one of the crucial aspects to help explain, as people either leave their original ID incorrect without realising the impact, or they blindly agree without knowledge of the ID or of the identifier ( new users might even expect identifiers to be experts… without realising how iNaturalist works ).

A simple solution to this though could also just be having a withdraw button visible on the ID itself, in the same way the agree button is. Even then though, my mum struggles to understand the bigger picture.

I’m really enjoying seeing the pointers in the forum these last days - popups telling me not to limit conversation to only one person… not to post too many times in succession, etc…
I think this is the kind of thing that could really help outside the forums with guiding new users.

I can’t imagine many website users clicking through the links on the email.


This is what new web users view when they first log in. I’ve updated the link I sent above with this screenshot:

I agree onboarding could be a bit more hand-holdy. Feel free to submit some ideas as feature requests.


As has been stated before, this issue’s been discussed quite a few times throughout the course of iNat’s existence, and as bouteloua quoted me earlier, any possible “expert” rating would be based on iNat activity, not external factors.

I think better onboarding (we’re just starting to draw up some ideas now, I know it’s been a long time coming), disincentivizing unwanted behavior (eg blind agreeing), allowing filtering by identifier (as @nathantaylor suggested, I know it’s been a long-time request), and other fixes can solve or mitigate a lot of the issues raised here.

I can’t speak for everyone, of course, but here’s what I’ve heard from two top identifiers on iNat whom I’ve met, each of whom focuses on one difficult taxonomic group:

  • one expert has told me one motivation is that it’s an incredible way for them to practice and learn because they’re seeing photos of varying quality from all over the world of their taxon of interest.

  • another told me they really just like helping people and if they can give their time and expertise in a way that helps people learn more about what they see, it makes them happy and they believe it’s just a good thing to do.

Some others who I’ve talked to are motivated by generating the data they want and they understand it often takes outreach, humility, and patience to teach and empower people to get that data, make the right observations, and identify taxa. I understand not everyone has the skills or resources for that, but it’s possible, and benefits many members of the community.

I’m not an expert by any means, but I’m pretty good with some bits of California flora and fauna, and I just want to help people who are curious about what they saw. Maybe they won’t misID a spider or a snake and kill it the next time they see it, or maybe they’ll just be able to point out a flower to a friend the next time they’re on a hike. Whether that observation ever gets to research grade is beyond my power, and it’s not something I care about. And if it sounds like I have no ego involved here, that’s not the case because I still feel quite a sting if an ID of mine is corrected. :rage: But that fades quickly, and it’s a chance to learn both about the taxon in question and how I can improve myself.

I can’t find the exact words above, but I feel like there might also be a misunderstanding about the computer vision training set. We now train on ranks higher than species, so please don’t feel obligated to ID to species for the model. From the blog post about our last model:

For the first three models, we only trained them to recognize species. For the last two models, we’ve been able to train with coarser taxonomic ranks. For example, if each species in a genus has 10 photos, that might not be enough data to justify training the model to recognize any of those species, but if there are 10 species in the genus, that’s 100 photos, so we can now train the model to recognize the genus, even if it can’t recognize individual species in that genus. This approach allows the model to make more accurate suggestions for photos of organisms that are difficult (or impossible) to identify to species but are easy to identify to a higher rank.
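The rollup in that quote amounts to pooling photo counts up to genus when no species clears a per-taxon minimum. A minimal sketch of the idea, assuming an illustrative threshold and invented inputs (this is not the actual training pipeline):

```python
from collections import Counter

MIN_PHOTOS = 50  # illustrative threshold, not iNat's actual cutoff

def trainable_taxa(photos_per_species, genus_of):
    """Return the set of taxa (species or genus) with enough photos to train on.

    photos_per_species: {species: photo_count}
    genus_of:           {species: genus}
    """
    taxa = set()
    genus_totals = Counter()
    for sp, n in photos_per_species.items():
        if n >= MIN_PHOTOS:
            taxa.add(sp)                      # enough data for the species itself
        else:
            genus_totals[genus_of[sp]] += n   # pool scarce species into their genus
    for genus, total in genus_totals.items():
        if total >= MIN_PHOTOS:
            taxa.add(genus)                   # genus trainable even if its species aren't
    return taxa
```

So, matching the quote’s example: ten species with 10 photos each in one genus yields no trainable species, but the genus itself (100 pooled photos) becomes trainable.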


i agree with @tiwane about the identifier motivations - i don’t think you can flatten those into one idea of expertise. (i am not a trained expert in the contexts listed here but do rank pretty high up the leaderboard.) i respect both rationales he provided for myself. i’ll add two things in addition:

  • trained experts are in limited supply and i don’t think it’s sustainable to expect either long-term interaction or broad interaction here. and that’s considering other more specific projects, like bugguide, where those experts might participate. (as an aside, the species (any level) pages and curation are no different from field guides, etc, as references to expertise and more accessible to non-experts so it’s wild to me that those aren’t used more often to clarify an id with an in situ image more like what an observer has shared.) it highlights the need for that intermediate level of identifier that can get you to beardtongues which a) gives the observer a name to research if they’re inclined and b) better filtering for an expert/researcher to add more. it’s a complicated rubric where (again as a not-expert) i’d like to improve my own skills for when i’m out in the world but nudge the other observer to take that next step for their own id and knowledge while also being conservative knowing that either some things aren’t really identifiable from photos alone or have small differences i am not comfortable putting at a species level because folks are quick to accept and especially quick to accept if you’re on the little leaderboard.

  • sticking with the idea that one of the main reasons for inaturalist is connecting people to nature as an educational tool, i feel like the identification side is overlooked in that. it’s come up for me with the bioblitzes where, at least in my experiences, the feel i got was identification needed to be from a credentialed expert and i found it disempowering as someone interested in learning more deeply about my local area. like i could be told what it was but i couldn’t truly know myself. i also think that hurts when recruiting identifiers and there’s so. much. stuff. and not enough identifiers. ymmv but i think there’s a pretty good case to be made for identifying as a gateway into observing more kinds of things and that seems good for inaturalist.

anyway, i think the “follow->this observation” is not used enough and could benefit from putting those updates under “following” or flagging them in the notification stream as a starter step to new identifiers. have a hunch that some of the likes and some of the identifications are more about wanting to keep track of an observation when maybe you’d rather not have a public opinion. i also don’t mind being wrong in part because there is that “misidentifications” section so i figure something is learning from that mistake :joy:. (do people use that?)

i’m also not sure expertise really hedges against some of the less-than-good-faith issues on ids. there’s an uncommon but persistent pattern where someone will start their observation off with an impossible id and, when that’s contested, the poster will switch to a different rare possibility. like id’ing a butterfly in kansas as some british species and switching to something limited to the sierra nevadas after a more likely id is added. that’s something about the original poster that i doubt will respond to an expert. so if there’s some technical change to the system, i’d hope it addresses the underlying issue if possible. as an example.



Thanks for this @tiwane.
I’ll think about possible feature requests leading from all this, as @bouteloua suggested …

I’ve really appreciated hearing everyone’s thoughts…and could probably continue debating aspects of this for a long time yet :)

I’m really just trying to reflect back the stuff I hear from the UK community mainly… it frustrates me when I hear iNaturalist being denigrated or ignored, when for me it seems to be a far superior platform to the other ones the recording schemes currently use. I just wish there was more integration of the UK expertise into the community here. But… perhaps I just need to be patient, also.

While I note it… the point you were referring to was perhaps the response by @upupa-epops

This comment from Kueda on the blog post helps clarify this, as follows:

Training data gets divided into three sets:

Training: these are the labeled (i.e. identified) photos the model trains on, and include photos from observations that:

  • have an observation taxon or a community taxon
  • are not flagged
  • pass all quality metrics except wild / naturalized (i.e. we include photos from captive obs; note that “quality metrics” are the things you can vote on in the DQA, not aspects of the quality grade like whether or not there’s a date or whether the obs is of a human)

Validation: these photos are used to evaluate the model while it is being trained. These have the same requirements as the Training set, except they represent only about 5% of the total.

Test: these photos are used to evaluate the model after training, and only include observations with a Community Taxon, i.e. observations that have a higher chance of being accurate b/c more than one person has added an identification based on media evidence.

You’ll note that we’re potentially training on dubiously-identified stuff, but we are testing the results against less-dubious stuff (you can see what these results look like in the “Model Accuracy” section of https://forum.inaturalist.org/t/identification-quality-on-inaturalist/7507). The results are, strangely, not so bad. Ways we might train on less-dubious stuff (say, CID’d obs only, ignore all vision-based IDs, ignore IDs by new users, ignore IDs by users with X maverick IDs) all come with tradeoffs and all, ultimately, limit the amount of training data, which I’m guessing would be a bad thing at this point for the bulk of taxa for which we have limited photos.
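Kueda’s breakdown reads as one shared eligibility filter plus a split. A rough sketch of that logic, where the field names and the holdout fractions are invented for illustration (this is not iNaturalist’s actual pipeline):

```python
import random

def split_photos(observations, seed=0):
    """Rough sketch of the train/validation/test split described above.
    Field names and the 5% holdout fractions are invented for illustration."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for obs in observations:
        # shared eligibility rules for all three sets
        if obs["flagged"] or not obs["passes_quality_metrics_except_wild"]:
            continue
        if not (obs["observation_taxon"] or obs["community_taxon"]):
            continue  # needs some label to train or evaluate on
        if obs["community_taxon"] and rng.random() < 0.05:
            test.append(obs)   # test set draws only from community-ID'd obs
        elif rng.random() < 0.05:
            val.append(obs)    # ~5% held out for validation
        else:
            train.append(obs)
    return train, val, test
```

The notable property, matching the comment, is the asymmetry: training tolerates single-person IDs, while the test set is restricted to observations where more than one person agreed.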

I’m not sure if I’ve read this detail before… I certainly seem to constantly forget aspects of it at least! Some more things to stick in an FAQ somewhere perhaps?


Sounds good! We are lucky enough to have Chris Raper for UK Tachinidae… I think he covers European IDs too, but maybe not beyond that…


Not replying to anyone in particular, I just want to stand up for amateurs. Several people have used “amateur” as the opposite of “expert”. Amateur is the opposite of professional. Neither term tells you anything about their level of expertise.

Many experts are amateurs. One benefit of being an amateur is you don’t have targets and deadlines so can put as much time as you want in to nibbling away at an area of study.


Yes, we are. Chris Raper @chrisrap identifies in more places than just Europe. Also I am inspired by the dedication of Arturo Santos @aispinsects here in the US. There are others who I’m sure deserve a mention as well.


I may have been one of the ones guilty of this, offhandedly.
But very much not my intention in the initiating of this thread.

I’ve actually spent a good deal of my adult life fighting for the acknowledgement of “amateur” work in my field ( not natural history ). So, bit strange to feel like I’m arguing the case for the other side here.

I think what I want to see personally is more about recognition of expertise or experience…and this is certainly regardless of conventional notions of what that entails.


It was a dragonfly observation linked on the original Gerald page, I think; people kept alternately misidentifying it as a dragonfly and a damselfly until someone managed to identify it as a Julia Skimmer dragonfly. I’m not even sure if people have successfully outbalanced the misidentifications for that one.

I’m not aware of any biodiversity recording platforms that allow users to ‘pick their own taxonomy’ to document their observations. This is not meant as a challenge, but a legitimate question, are there any?

I mean I can’t go onto eBird and say I disagree with you and think the European Herring Gull is a full species, so I’m recording mine that way and expect to see all records that way. I can’t say to eButterfly or BugGuide I disagree with the Celastrina taxonomy here, so I’m using my own preferred one.


“I’m not aware of any biodiversity recording platforms that allow users to ‘pick their own taxonomy’ to document their observations. This is not meant as a challenge, but a legitimate question, are there any?”

That’s an odd question to ask, since this has always been the default for specimen-based biodiversity information. iNaturalist data even gets ingested into a biodiversity information system with this feature–GBIF. In the broader world of biodiversity information, “pick your own taxonomy” is the established norm, and “use our taxonomy” the newcomer.

For what it’s worth, among the online database interfaces, I think the Symbiota system does the best job at handling this. The instantiation of Symbiota I use most is SEINet: swbiodiversity.org

A few of the relevant features of this system are: data can be uploaded using whichever names the source herbarium prefers; when doing a search, you can check the “include synonyms” box, or not; when viewing the list of taxa returned by a search, you can have the names run through one of several synonymy databases, or view the raw names. There are a couple things it might be nice to add, like being able to choose the synonymy database when running a search and being able to run names through one of these databases in all views of the search results rather than only the taxon list view.


GBIF keeps synonyms, as iNat does, and shows which one was used, but there is one “main” name anyway. So it’s not the same as choosing the whole taxonomy.


Why would experts identify things on iNaturalist? (I respond as a person who is an expert in some things but doesn’t restrict her identifying to those things.)

  1. It’s fun.
  2. It’s potentially useful education for me and for others.
  3. It helps build a verifiable database of what’s where when.
  4. It’s a virtuous-feeling way to avoid doing other things I should do, like label plant specimens, finish the half-written field guide we’re mostly not working on, or clean the garage.

A friend who’s a professor records iNaturalist identifications on his yearly report, under “outreach.”