iNat AI for dummies

I'd love to understand how AI is positively helping iNatters, and wish to read about it in an over-the-fence
conversation… I recently saw some graphs and they did not help…
I believe AI can be so helpful to us all. If we understand it, we can embrace it.

The simplest statements could be something like: "Currently iNat2024 completed X observations an hour with X percent accuracy, which is X percent of our hourly IDs. Currently our X iNat IDers complete X per hour."
Currently, can iNat2024 ID a Rosa nutkana if it can see, say, 2 prickles or the petals?
New user / senior / creek restoration volunteer / thrilled by iNatters' support! / on my way to 2K IDs / morning coffee ID instead of the news!

1 Like

AI vision doesn’t explicitly identify “2 prickles” (or any other particular feature) in images per se. Rather, it uses a neural network approach to distilling an image to its essence in a much much more abstract sense. Its approach is less logical or analytical, and more akin to a human’s gut instinct. We often can’t say why a particular person looks kind, for example; we just take in the generalised essence of all the features of a group of people and get an immediate instinctive sense, based consciously on nothing in particular, of which of those people is likely to be kind (or rude, or intelligent, or witty, or selfish, or whatever) and to an impressive degree that rapid gut instinct is effective and reliable.

AI is doing something broadly along these lines. (It’s why AI chatbots are hopeless at very logical tasks like counting how many letters are in a particular word or listing words that start with a particular letter.) So, it’s not really possible to know – even for the programmers responsible for a vision AI – exactly what particular features it is picking out when making an ID.

This is both its strength and its weakness. On the one hand, it can leverage many subtle cues that would never even occur to a human identifier. On the other hand, it is sometimes susceptible to errors that seem ridiculous to humans (e.g., if a particular species of frog is often seen sitting on lilypads, the AI can start identifying even empty lilypads as frogs).
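To make the "gut instinct" idea concrete, here is a toy sketch in plain Python — not iNat's actual code; the taxa and score numbers are invented. A vision network reduces an image to a vector of raw scores, one per taxon, produced by many opaque layers (not by counting prickles), and a softmax turns those scores into the ranked list of probabilities you see as suggestions:

```python
import math

def softmax(scores):
    """Convert raw network scores (logits) into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def rank_suggestions(labels, scores):
    """Pair each label with its softmax probability, best suggestion first."""
    probs = softmax(scores)
    return sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)

# Invented example: the network's final layer emits one score per taxon.
labels = ["Rosa nutkana", "Rosa gymnocarpa", "Rubus parviflorus"]
scores = [3.1, 1.2, 0.4]

for name, p in rank_suggestions(labels, scores):
    print(f"{name}: {p:.0%}")
```

The point of the sketch: no individual score maps back to any one visible feature, which is why even the developers can't say exactly what the model "looked at".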

24 Likes

Wow Daniel! … thanks for this thoughtful, understandable, helpful post… (even learned how to bookmark a post)… now to figure out where the bookmark is. I will read it a few times to have the essence sink in. You are a huge asset to the iNat community… Yes, I recognise that gut feeling or familiar comfort when I see an easily recognised observation. I also enjoy the detective work when I don't, and use other new-to-me resources.

3 Likes

iNat’s AI (“computer vision”) does not automatically provide IDs for observations.

It provides a list of ID suggestions which humans may choose to use on an observation or not.

12 Likes

Hi Spiphany, thanks for this. I thought the AI was a solo IDer. This clarifies a lot. I also see with one click that you're a super helper bee on this site. So fun!

This is why it is important for the average or new identifier to know this. At present they may assume the suggested ID is correct and authoritative; it is not. It may be good to relook at the wording that comes with the CV.

2 Likes

My experience over the years with AI identification of moths (Europe) has been a very positive one, so far. Starting in 2018, I needed my own ID knowledge in more than 50% of the cases. I can honestly say that by 2024 the system outperforms me in most cases! Almost all of the larger moths are readily suggested by AI (and can be identified, perhaps combined with the “compare” feature). That is gorgeous! Congrats!

A few hints for newbies:

  1. always put the best picture as the 1st one; that means for moths: the wing pattern of the upper side of the forewing; lack of focus isn’t necessarily a problem; this is important as the AI seems to pick up only the 1st picture; it’s also important to attract the attention of another iNatter who hopefully confirms the ID
  2. cropping the 1st picture to make the moth about half the width of the picture is well worth it; rotation doesn’t matter
  3. make sure the picture has a decent brightness + contrast; underexposed is not a problem for AI, but overexposed is hopeless; besides, it is a “service” to the potential iNatter who is trying to look at your observation
  4. if you add more than one picture to the observation, make sure they show different angles of the moth; often, the features important for ID are best seen from a different angle; if using flash, the reflections may differ which is important to see the “real” wing pattern
  5. if your moth is in a horrible state, like soaking wet, half-eaten, very worn (loss of scales on the wing, “bald” spot on thorax), or currently developing its wings, do not trust the AI identification; it will always come up with a list of suggestions
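The crop in tip 2 is just arithmetic, and can be worked out before you reach for an editor. A hedged sketch in plain Python — the function name and box format are my own, not an iNat tool: given the moth's bounding box in the photo, it computes a centred crop in which the moth spans roughly half the crop's width, clamped so the crop stays inside the photo:

```python
def crop_box(img_w, img_h, subj_left, subj_top, subj_w, subj_h):
    """Return (left, top, right, bottom) of a crop in which the subject
    spans about half the crop width, centred on the subject."""
    crop_w = subj_w * 2                   # subject = ~50% of crop width (tip 2)
    crop_h = crop_w * img_h // img_w      # keep the photo's aspect ratio
    cx = subj_left + subj_w // 2          # subject centre
    cy = subj_top + subj_h // 2
    # Centre the crop on the subject, but clamp it to the photo's edges.
    left = max(0, min(cx - crop_w // 2, img_w - crop_w))
    top = max(0, min(cy - crop_h // 2, img_h - crop_h))
    return (left, top, left + crop_w, top + crop_h)

# Invented numbers: a 4000x3000 photo with a moth 500 px wide near the centre.
print(crop_box(4000, 3000, 1700, 1400, 500, 300))  # (1450, 1175, 2450, 1925)
```

The resulting box can be fed to any editor's crop tool; rotation, as noted in tip 2, doesn't matter to the CV.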

p.s. I found out about the above during my ID work; very often I copy an observer’s photo onto my PC, crop it and resubmit it to iNat with an approximate geographical location. The AI results are often much improved. After that I will delete it again, of course.

10 Likes

I do the same thing especially with photos that are not first. iNat only uses the first photo to generate suggestions, but sometimes second or third photos are better for identification purposes. So, feeding those photos (cropped or otherwise) back into the CV as you described is a useful tool to generate suggestions.

3 Likes

Very useful hints!

When I am posting my own observations with multiple photos, I often change the order of photos, making different ones the default (first) photo, and check the CV suggestions each time. Often you get wildly different suggestions depending on which photo is default.

Sometimes the photo that you think is best can actually lead to broader (or incorrect) suggestions, but a different photo can help the AI more - at least in the case of plants.
For example, with some taxa such as Asters, where many of the flower heads look remarkably similar, the type of photos that seem like they would be easiest to identify (like a top view of the flowers), are sometimes less likely to be IDed to species. But a photo showing the side or underside of the flowers, including closeup of sepals, stem and leaves, can get you closer to a species ID.

3 Likes

Appreciated, petezani!

Appreciated, danly!

On the app I get updated suggestions with each photo in an observation though? Often different suggestions. Does it only use the first photo for PC uploads?

1 Like

https://www.inaturalist.org/computer_vision_demo You do not necessarily have to submit an observation to get the suggestions.

4 Likes

To all of you taking screen-shot copies of observation photos and creating temporary observations to get CV suggestions - please vote for this feature request: https://forum.inaturalist.org/t/use-computer-vision-on-each-photo-in-an-observation/4210.

2 Likes

Yeah, be wary of seeing AI as an “automatic ID service”. It’s a brilliant time saver to get you closer than you would from scratch, especially with limited literature at your disposal, and iNat’s is probably the best out there. But there are some groups of life where its top suggestions are really overoptimistic from even great photos, and sometimes it is just plain wrong – always follow it up with a bit of research, and sometimes you may disagree or realise it’s suggesting a species that can’t be IDed conclusively from photos. Use it as a learning tool, not a definitive answer.

It wasn’t iNat’s, but I used another nature recording app’s AI once and it told me a picture of some Lesser Pocket Moss was a Ptarmigan with 80% accuracy. These things are weird sometimes.

A tip as well: never use it before you’ve put the location in, otherwise you’ll find yourself with some rare species from halfway around the world.

4 Likes

In the app, the CV picks up whichever picture I have open. So, if I have three pictures, I can toggle between them and see three sets of suggestions. This is helpful for seeing if some taxa occur in all three sets.
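The "occurs in all three sets" check above is just a set intersection. A minimal sketch — the suggestion lists are invented stand-ins for whatever the CV returns per photo:

```python
# Hypothetical CV suggestions for each of three photos of the same moth.
suggestions_per_photo = [
    {"Noctua pronuba", "Noctua comes", "Noctua fimbriata"},   # photo 1
    {"Noctua pronuba", "Noctua comes", "Xestia c-nigrum"},    # photo 2
    {"Noctua pronuba", "Agrotis exclamationis"},              # photo 3
]

# Taxa the CV proposed for every photo are the strongest candidates.
common = set.intersection(*suggestions_per_photo)
print(common)  # {'Noctua pronuba'}
```

A taxon that survives every angle, exposure, and crop is much more trustworthy than a top suggestion from a single photo.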

4 Likes

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.