Don't use computer vision

Please do not encourage use of AI suggestions without at least some knowledge of the organism. Cleaning up the resulting mess is one of the things that discourages qualified IDers, because it is very tedious to get into lengthy discussions about why an ID is not correct. The argument being: but it was suggested, and the picture looks like that!


This seems to go against the point of the AI suggestions, no? They are there exactly to help people who have little or no knowledge about the “objects”, providing a starting point for IDing them.


The suggestions are useful depending on where the observation is made. They are based on research-grade observations, but I don’t think the model takes location into account. This means that for places such as the USA, the AI suggestions are more reliable, since there are so many observations and research-grade observations, compared with places that have fewer observations. As an example, I recall many observations from Hong Kong being given IDs of species from the USA. I have had to spend a lot of time explaining the limitations of relying on the AI suggestions feature.

That is to say, I am not bashing the system at all. In fact, I think it’s pretty amazing that a feature like this is possible. It’s just that, for now, we need to use it with a little more caution instead of blindly trusting it.


Double-check the computer suggestions at least a bit; I’ve had it suggest very odd things. It’s a bigger issue if part of the subject is obscured, but I’ve had green anoles suggested as everything from green anoles to ameivas to day geckos (the latter two don’t occur anywhere near me). Still, I do think at least checking the autosuggest is generally a good idea for vertebrates.


Doesn’t iNaturalist note on observations whether or not the AI was used? So if someone immediately knows what something is and types in “common goldenrod”, or what have you, it’s going to look slightly different than if someone just hit “common goldenrod” because it was at the top of the list.

I’m definitely guilty of using the system myself. Sometimes I’m in a rush and hitting the top suggestion is easier than typing something in; sometimes I genuinely don’t know whether it’s a fungus or a protozoan.


Yes, it does note that, but we have so many new members agreeing without a second thought, and almost all of us are guilty of doing that at least once. So it’s better not to add a species ID based on the AI alone, and the guideline is to not add any ID without certainty.


Not at all. Yes, AI is of much help when you know (at least approximately) what you are dealing with. It works almost unerringly for birds but is practically hopeless for lichens, non-lichenized fungi and, in most cases, insects. In fact, every month I go through Europe checking for Niebla, Physcia millegrana and Cladonia cristatella, all exclusively American species and very popular AI suggestions for coastal Ramalinas, all Physcias and most red-fruited Cladonias, respectively. My colleague, a myxomycete specialist, stopped identifying because she has less patience than I do and found it too tedious to explain that not every white blob is Brefeldia maxima, especially after some quite aggressive defence of their decisions by the AI-based identifiers. AI is a great tool, but like every tool, it is great for someone who knows how to handle it and dangerous for someone who does not.


I find it easier to draw a specialist’s attention if I accept one of the AI suggestions, even when I think it is probably wrong. But I always try to filter the AI results as best I can, and I don’t aggressively defend IDs that I’m not sure of.

I really think that we shouldn’t discourage people from using the AI suggestions; instead we should try to help people understand how the AI works and how to interact with specialists. Also, if you are a scientist or a specialist in some group, please make that clear in your profile… that is SO useful.


That’s you. But do you realize how many other users there are, especially kids, who think that AI cannot be wrong, who do not read IDers’ profiles (do not read anything, actually), who do not know how and where to search for additional information, and for whom it is impossible to understand why it is Laetiporus sulphureus and not Laetiporus gilbertsonii when they look identical? Just yesterday I had a very lengthy and time-consuming discussion with a kid, trying to explain that it is not possible (and why it is not possible) to ID Diptera to species from “pictures” unless you are a specialist. And even the specialists mostly ID them to family level, genus at best. That said, I appreciate it very much when people add IDs at higher taxonomic levels, just to be on the safe side.


Yes, we should teach people how to use it. I don’t want to lose the experts who help me and others ID things because there are thousands of people who were taught to do things the wrong way, and staying silent about mistakes is a form of teaching in itself. We have tons of identifiers for birds and almost none for anything else, and it’s so hard to keep them here because of this kind of AI usage. Experts are usually busy in real life; why would we want to tire them out with things that are easily avoided? Judging from the latest graphs, the number of experts has stayed flat even though we definitely gain new ones; that’s because they stop identifying on iNat much faster than observers leave.


The problem is that some people just blindly accept IDs. A while back there was a college assignment project, and along with it came an influx of new users who just blindly accepted the ID suggestions; their friends then agreed with the IDs the observers chose, making observations research grade even though the IDs were completely wrong. Mind you, this was in Indonesia, while most of the IDs suggested by the AI were American or European species.


Maybe Computer Vision IDs should not count toward the Community Taxon.


I have often thought that the AI should be labelled “Beta Version” or something simple like that, to let people know that the AI’s abilities are imperfect.

And… I think it might be great for new users if the first 10 times the AI was used, there could be a short pop-up window that pointed out the limitations of the AI.

And I still don’t like it that the AI says “Here are our top species suggestions…” because “our” and “we” make it sound to newcomers as if the entire iNat staff and observers combined believe in these IDs.


I find the Computer Vision to be very instructive and helpful. Perhaps I live in an area with more data, but it usually does quite well. I use Suggest Species even when I am sure I know the organism as a double check.

Other iNat users have taught me, through feedback, to be a bit more discriminating about its use, so I appreciate that.

I also find it to be both interesting and a learning experience to read about the various alternatives offered by the AI.

Perhaps, similar to the Agree button, preferred usage for this feature could be better explained in the UI. Pop-up tutorials, someday?

I’m sure this is too ambitious for the Guides and the AI, but I think it would be so cool if there were a kind of ‘difference between’ clause that describes the difference between similar organisms. E.g., the difference between a Fox Squirrel and an Eastern Gray Squirrel is…

Anyhoo, I really admire the Computer Vision function.


The AI does use location as a criterion for its suggestions. You just have to provide the location data before you check the AI suggestions.

So, imagine you’re uploading an observation on the desktop version:

  1. You upload the photo
  2. The “date” field is usually filled in automatically (if the photo has a date associated with it).
  3. The “location” field is not filled in automatically (unless your camera has GPS and attaches a GPS location to every photo).
  4. If you check the AI suggestions before the location data is provided, the AI will suggest a species only based on the photo itself.
  5. You fill in the location data
  6. Now, if you check the AI suggestions, you will notice there will be suggestions that are “visually similar” AND “seen at the same location”.
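The difference between steps 4 and 6 can be sketched as a toy ranking function. This is only an illustration of the behaviour described above, with hypothetical data and field names; the real iNaturalist model is a trained classifier, not this logic:

```python
# Toy sketch: how providing a location changes the suggestion list.
# Each suggestion carries a visual-similarity score; once coordinates
# are known, suggestions with nearby records can be flagged
# "seen nearby" and ranked first. All names and scores are made up.

def rank_suggestions(suggestions, location=None):
    if location is None:
        # Step 4: photo only, rank purely by visual similarity.
        return sorted(suggestions, key=lambda s: -s["visual_score"])
    # Step 6: with a location, "seen nearby" suggestions float to the top.
    return sorted(
        suggestions,
        key=lambda s: (not s["seen_nearby"], -s["visual_score"]),
    )

suggestions = [
    {"taxon": "Physcia millegrana", "visual_score": 0.9, "seen_nearby": False},
    {"taxon": "Physcia adscendens", "visual_score": 0.7, "seen_nearby": True},
]

photo_only = rank_suggestions(suggestions)
with_location = rank_suggestions(suggestions, location=(50.1, 14.4))
```

With no location, the visually strongest (but out-of-range) lichen leads the list; with coordinates provided, the locally recorded species moves to the top.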

We have had two city bioblitzes for Cape Town. The first was last year, and the AI struggled with something as blindingly obvious as King Protea (Protea cynaroides). This year the AI is noticeably better.

For all of us in the Rest of the World, the emphasis on Californian or USA species means newbies need to be prompted to check whether what the AI suggests is reasonable.

@jurga_li, perhaps the heated arguments about ‘you can’t ID that from this photo’ need to be passed on to help @ iNat, rather than wasting the goodwill of people (like you, whose IDs I appreciate and rely on). Or downvote the quality of the observation: can’t be IDed from this?

Or iNat needs tweaking: if the first ID is computer vision, should it need two more to reach Research Grade? I had a father and son playing tandem, confirming each other’s IDs. That was less about malice than about not understanding that iNat isn’t just another online game to compete in.


It’s not just the Rest of the World. I’m constantly seeing computer vision IDs here in the Southeastern United States of species that don’t occur in the USA at all. Just the other day, I found computer vision identifying a beetle grub as a species endemic to New Zealand (an ID the user accepted without question).

Perhaps this would be worth the developers’ consideration: excluding computer vision suggestions that are more than x kilometers out of range. iNat has range information for all taxa, so if the computer vision’s suggestion would be far out of range, it could be dropped from the list of suggestions or bumped up a taxonomic level before being suggested.
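A minimal sketch of that proposal, assuming hypothetical range data (a list of known occurrence points per suggestion) and an arbitrary 500 km threshold; iNat’s actual range representation and any real cutoff would differ:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def filter_by_range(suggestions, obs_location, max_km=500):
    """For each suggestion, keep the species if any known occurrence
    point is within max_km of the observation; otherwise bump the
    suggestion up to genus level, as proposed above."""
    out = []
    for s in suggestions:
        nearest = min(haversine_km(obs_location, p) for p in s["known_points"])
        out.append(s["species"] if nearest <= max_km else s["genus"])
    return out
```

For a Hong Kong observation, a species whose only known records are in the southeastern USA would be offered at genus level instead, while a locally recorded species would still be suggested by name.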

I’ve seen the tag-teamers too. But I’m more concerned about people who blindly agree with expert IDs on their observations, enabling the observation to go to Research Grade on the strength of one person’s say-so. That seems to be a pervasive problem.


Oh yes, that is indeed correct. There are many observations, though, that have location data but are given ID suggestions of species not from that area. My guess is that there were not enough research-grade observations for that particular location, so the model fell back on research-grade photos from elsewhere.


Even if you do that, though (I geotag all my photos before upload), iNat still provides out-of-range suggestions. It just flags the “in-range” suggestions as “seen nearby.” I don’t think that’s a strong enough way of noting which suggestions are range-appropriate and which are not.


I think we are overlooking the consequences of not having the AI help suggest IDs: thousands of identifications that just say “Life”. The AI generally does narrow things down to the right order or family, and that attracts the attention of reviewers.

Maybe we could change the app so that it won’t make a suggestion until after a location is entered?