Don't use computer vision

I do tag people on iNat. But maybe once or twice a week (and then it would be two different people), and only for something unusual or curious that I feel is worth asking someone, “Do you have a moment? Thank you.”


I know. That’s why I said “I feel weird using @mentions”:
Because I personally don’t know what is going on with that person, and because it is solely a volunteer activity. I try to use them very sparingly to ask for assistance.

I use them more often in response to observation comments, though, just so they know which person in an observation’s comment thread I am replying to.


I didn’t mean using mentions. I was talking about the mycologists who don’t like iNat. I meant emailing the one I know pretty well.

I sometimes use mentions but try not to make a habit of it. If I do, I usually tag someone I know IRL or someone who has IDed something similar for me in the past.

I wasn’t suggesting a course of action for you.
You have connections that work for you. :slightly_smiling_face:

I was contrasting the methods available to me, as I don’t know any mycologists (or other specialists) outside of iNat or tangentially via my employer (and even there we work in very different departments so it would feel strange to me personally to look up a biologist who I have never spoken to before and ask for identification assistance).


This topic is one that I would also like some clarification on. I have enough knowledge to make very coarse IDs for most things I observe: for instance, kingdom-level IDs (very easy) for observations I don’t know much about, and these are certainly better than nothing. But in pretty much every case, I don’t have enough knowledge to get down to the family or genus level. So, in these cases, do I just take whatever the AI throws at me (in terms of family or genus) in order to garner more traffic around my observation? It is very unclear to me why the AI suggestions would exist if they only cause problems. And it seems to me that the only people who would really ever rely on them are people who don’t have any IDing experience, like myself. But the catch is that someone with little ID experience will not be able to verify whether the AI suggestion is decent. So, what to do?


Stay with an ID at a level you feel confident with.
AI is happy to suggest things that are flat out wrong.

The AI suggestions make a good shortcut to typing out a long name.
A few suggested IDs can definitely be useful for deciding between A and B (if you have that knowledge), or for reminding you of a name that has slipped your mind.


My way of using the AI: I know lichens, but the rest is so-so or nil. I have a biological background, so in many cases (not always!) I have an idea of which class or even order the organism belongs to. I look at the first two AI suggestions and check them with a Google search: description, distribution, ecology, seasonality, etc. If one of the suggestions fits very well with everything I have, I use it. In most cases it is correct, but not always. If the AI offers a medley of distantly related species, I disregard it and use the lowest taxonomic level I feel comfortable at. Often it is order, sometimes class, especially with sea invertebrates and such. But @dianastuder gave perfect advice: use the lowest taxonomic level at which you are comfortable, even if it is as high as phylum.


I’m the furthest thing from an expert on iNaturalist tools, so take this with as much salt as you like. I’m new here and still trying to figure out the best way to deal with this, but I’ve found these sorts of conversations helpful so here’s my 2 cents’ worth.

If I don’t start typing immediately, the system usually kicks out some ID in short order, but I’ve had a few obvious errors (taxa that don’t exist in the location where the photo was taken being the most common issue) that quickly taught me never to take it without some other supporting evidence. The AI-suggested ID is a starting point, and the rest of the process is a learning experience. For taxa I’m experienced with, I count myself as a valid expert and go straight to species. For taxa I’m comfortable figuring out, I’ll usually go to species after digging through some guides, keys, or web pages; I’ve started adding notes if I’m not confident (and should probably go back to a couple of older posts).

For example, I have a passing understanding of butterfly ID that I picked up learning to identify the butterflies I encountered near a previous home. I don’t know a lot about moths, but the structures are more or less the same, so I can work through a key. I have a submitted moth photo that the AI identified to a group (litter moths); after digging around, figuring out what’s found where and some other details, and comparing photos from others on iNaturalist, I submitted it as a species in a different genus than the AI suggested. I am expecting one of three responses: agreement; agreement with genus but not species; or agreement at some higher taxonomic level. That assumes I get any response at all, of course, and I get that litter moths are probably an acquired taste with a limited following. On the other hand, I may be way off base, or have a useless image that lacks some key feature, I suppose.

I have a bunch of photos (and recorded bird songs) about which I’m less certain, and I haven’t landed on how best to post them in a manner that’s most likely to get some sort of instructive response. Some of these I will inevitably end up posting to genus, family, or order. My experience with bumble bees (the taxon that got me to join iNaturalist as a learning experience) has persuaded me that if you want responses, the way you post matters, at least for some taxa. I still haven’t figured out how, exactly (obviously, since I have not yet had a response to any bumble bee submission).

Some of this stuff relates to a very helpful response I received from @janetwright in another thread.


I’m a new user, having heard of iNat from a friend. I learned plants in a nearby state, so I was hoping the app would help when I’m out and about and something looks almost, but not quite, like something I know. I expected that the suggestions would offer identification advice, like you get in Newcomb’s or other good guides, but was disappointed that the blurb is just a geographic summary that is redundant with the map of observations below it. Some of the suggestions are astonishingly wrong (e.g., I added a picture of a captive alpaca yesterday, and, when I went in to add its name, both giraffe and domestic dog were in the suggestions! I’ve also seen some puzzling plant identifications, but less amusing than that). But the lack of ID advice (e.g., “Look for green bark with vertical stripes” or “Look for sharp points on the lobes”) means that, well, I guess I do have to carry the heavy books if I want to learn.

I’m curious about the decision to show just a geographic description rather than more specific ID advice in an app that seems to be all about IDing stuff.


Hi @acertsuga, welcome to the iNat Forum! Are you referring to the little blurbs like this?

iNaturalist just pulls the About blurbs from the first paragraph on the associated Wikipedia article, which is freely editable by anyone. Sometimes I expand these articles with useful tips about distinguishing the species from similar ones - there’s more discussion about this here:

It’s definitely commonly requested that iNat have more identification tips built in / compiled from comments somehow. For now, those tips are pretty much restricted to people providing them in comments, writing about them in journal posts or guides, or linking to content elsewhere (like Wikipedia). In the 2019 team retreat, the staff discussed this, noting in their summary:

See some previous discussions for more:


There are guides of various types in the Guides link (found under More on the header). They are not comprehensive but they are helpful.

The response to posts on iNat can be variable. I actually joined iNaturalist because I decided to make myself a Covid project to learn about bumblebees. I posted a few species and figured at least one of them would get a response. In the meantime I started posting other photographs (old and new) and rediscovered interests that had been dormant for years. No regrets on the bees, which alas remain unidentified. I’m sure somebody will weigh in eventually.

This forum is also a nice bonus. Some wonderful folks and some interesting stuff.



Thanks! Yup, that’s exactly it. There are a zillion wiki implementations for data repositories, and I would love it if iNat implemented its own from which to pull useful taxonomic information in place of the Wikipedia blurb. Thanks for finding those discussion pages for me! I tried a couple of different searches and wasn’t finding the right things.


When the CV is “pretty sure,” the odds of being wrong are lower than when it is not sure, and in the next training iteration it will probably learn the rest. I think that is a good balance; otherwise it becomes the problem the OP refers to. In my experience across almost 50 observations where I tried the CV prompt and there was no “we’re pretty sure” suggestion, the other suggestions were simply wrong.

Excellent, thank you. Hopefully curators and identifiers are aware of this.

With the published books, there’s some incentive (fame and fortune!) for a person, or persons, to spend years of time putting together those descriptions. And the descriptions in those books are usually only useful regionally. We can’t copy copyrighted content from books and put it on iNat. And the content that you do find on iNat comes from the users of iNat–like yourself–and others that contribute to Wikipedia articles. There’s no staff for adding content to iNat itself. Hope that helps! It would be nice to have a giant Swiss army knife, but the more tools added to the knife, the more unwieldy it becomes–even if it’s possible to add the tools and financially feasible to build and maintain it.


For training the model, if you have more than 1000 photos for a species/taxon, are only photos from RG observations used?

And a different question, is there any way for an iNat user to know which species are on the bubble, e.g. close to having 100 photos and thus able to be used to train the model?
If there was a way for me to know which species were close and occur in my area, I could try to take more photos of those species so they could be included for the new model.


They’re chosen pretty randomly; RG/not RG is not a factor.

Maybe using the API? Either way, I personally don’t think training the computer vision model should be a priority for users. Have fun exploring and observing what you want to observe. But to each their own. :-)
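To sketch what that API idea might look like: the public `GET /v1/observations/species_counts` endpoint returns per-taxon observation counts for a place. Observation counts are only a rough proxy for photo counts (one observation can carry several photos), and the 100-photo figure is just the threshold discussed in this thread, not a criterion I can confirm, so treat this as a sketch rather than the model’s real rule:

```python
# Sketch: find taxa "on the bubble" (near the ~100-photo training threshold
# discussed in this thread) using the public iNaturalist API. Assumes
# observation counts roughly track photo counts; the real criterion may differ.
import json
from urllib.request import urlopen

API = "https://api.inaturalist.org/v1/observations/species_counts"

def near_threshold(records, target=100, window=30):
    """Return (name, count) pairs just below target, highest count first."""
    hits = [
        (r["taxon"]["name"], r["count"])
        for r in records
        if target - window <= r["count"] < target
    ]
    return sorted(hits, key=lambda pair: -pair[1])

def fetch_species_counts(place_id):
    # Live call (not run in this example): verifiable observations in a place.
    url = f"{API}?place_id={place_id}&verifiable=true&per_page=200"
    with urlopen(url) as resp:
        return json.load(resp)["results"]

# Canned data standing in for a real API response:
sample = [
    {"count": 95, "taxon": {"name": "Bombus vosnesenskii"}},
    {"count": 12, "taxon": {"name": "Bombus fervidus"}},
    {"count": 88, "taxon": {"name": "Zanclognatha laevigata"}},
]
print(near_threshold(sample))
# → [('Bombus vosnesenskii', 95), ('Zanclognatha laevigata', 88)]
```

To try it for real you’d call `fetch_species_counts` with your local `place_id` and feed the results into `near_threshold`; the species names in the canned data are just placeholders.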


I’m curious, why not?

For me, this is one of the primary incentives to use iNaturalist:

    Because it’s like a kind of puzzle with missing pieces… it can see some genera but has no data about others… so the blanks need filling in. If I find a local species it doesn’t recognise to genus or even family, I have been actively aiming to accumulate 50 observations of it to try to get it recognised.

    It feels like a long-term goal. Unlike helping to identify organisms on other sites, where IDs might languish or never be entered into a dataset, here it feels like we’re all chipping away at something bigger which (if it ever became accurate enough) could have serious impact down the line… in opening up and supporting society’s awareness of and ability to perceive the natural world… and, in turn, the larger ecological implications.

    Similarly to number 1, incorrect CV suggestions that propagate feedback loops of misidentification feel like holes that need fixing, and something we can actively contribute to as users through correct identification.


This response might be slightly misleading, unless I’m misunderstanding the other posts around this. RG is not a factor in training, but it is in testing. (Many might not know how ML works, so for some readers the word “training” might encompass testing anyway.) That might sound pedantic! :smile: But for me, as mentioned, it’s a core incentive, so I was happy to learn more about how it works this week (and I will be happy if someone corrects this with further info).

My current understanding :

HELPING FIX MISSING SPECIES with fewer than 100 photos
If there are fewer than 100 photos of a species, then, like @matthias55, we can try to help train it.
We do not need to reach RG, just accumulate the 100. This should be roughly visible by exploring observations, though, with no need to use the API(?)…
e.g. for a blank I think I’ve nearly filled:
30 obs with 1-6 photos each should be approaching the necessary 100 total.
Currently, the CV suggestion for this species is some sort of ant, so the wrong taxonomic order entirely… and a nice sense of achievement to fix, I think!

If, however, over 1000 photos already exist and a user wishes to help fix a recurrent CV error, adding more won’t necessarily help… this is more about ensuring the existing dataset is clean. In that case, helping with quality control as an identifier might resolve the issue more directly and prevent more incorrect observations being placed in the dataset.
A core problem at present, as visible in the computer vision clean-up wiki, seems to be the errors created by this feedback loop of misidentification >> wrong auto-suggest >> further misidentification.

If there are between 100 and 1000 photos for a species, helping with identification quality control and increasing the amount of training data both seem like valid ways to help. Both should contribute to overall accuracy, if I understand correctly.
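If it helps, my rules of thumb above can be boiled down to a tiny decision function (the 100 and 1000 photo thresholds are just the figures discussed in this thread, not official iNaturalist criteria):

```python
# Rough decision rule for how a user can best help the CV model for a taxon,
# based on the 100/1000 photo thresholds discussed in this thread
# (assumptions, not official iNaturalist criteria).
def cv_help_strategy(photo_count: int) -> str:
    if photo_count < 100:
        # Below the training threshold: more photos are what is missing.
        return "add more photos so the taxon can enter training"
    if photo_count <= 1000:
        # In between: both more data and cleaner IDs should help.
        return "add photos and help with ID quality control"
    # Above ~1000 photos, extra photos add little; clean up wrong IDs instead.
    return "focus on identification quality control"

print(cv_help_strategy(30))    # → add more photos so the taxon can enter training
print(cv_help_strategy(500))   # → add photos and help with ID quality control
print(cv_help_strategy(5000))  # → focus on identification quality control
```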


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.