Real Time Computer Vision in iNaturalist

Lots of organisms uploaded to iNaturalist are shunted into broad categories such as Unknown, Plants, or Insects. By implementing real-time computer vision, like the system used in Seek, observations would be more organized from the moment they are posted.

For those unaware of how Seek works: you open the app’s camera and point it at an organism. At the top of the screen, even before you snap a photo, an identification is given. Usually this identification starts out broad, but changing angles can allow the app to narrow down specifics. Then, when a picture is taken, Seek’s ID is paired with the organism.

I am not recommending that this be used to identify individual species, but the software’s suggestion of a family or genus would help declutter the broader categories.

Example of how this would work with an Eastern Hemlock:

  1. Open iNaturalist camera and find target Eastern Hemlock.
  2. New software would identify genus as Tsuga in camera view.
  3. Picture is captured. Tsuga appears where the normal ID recommendations are shown.
  4. Observation would then be treated as a normal submission.
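The flow above could be sketched roughly like this. This is only an illustration of the idea, not iNaturalist’s actual model or API; the function, confidence values, and threshold are all made up:

```python
# Hypothetical sketch of the real-time ID flow described above.
# Confidence scores and the threshold are illustrative, not taken
# from iNaturalist's actual computer-vision model.

CONFIDENCE_THRESHOLD = 0.8  # only surface a rank the model is fairly sure of

def best_supported_id(predictions):
    """Return the most specific (rank, name) prediction above the threshold.

    `predictions` is a list of (rank, name, confidence) tuples,
    ordered from most specific to most general.
    """
    for rank, name, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            return rank, name
    return "kingdom", "Plantae"  # fall back to a broad category

# Simulated predictions for one camera frame of an Eastern Hemlock.
frame_predictions = [
    ("species", "Tsuga canadensis", 0.35),  # too uncertain to show
    ("genus", "Tsuga", 0.90),               # confident at genus level
    ("family", "Pinaceae", 0.98),
]

rank, name = best_supported_id(frame_predictions)
print(f"Recommended ID: {name} ({rank})")
```

With these made-up numbers the species-level guess is too weak to surface, so the camera view would recommend the genus Tsuga instead, which is exactly the behavior described in steps 2 and 3.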

Would observations that are submitted in this fashion be verifiable? If so, what type of evidence would be generated by this functionality that other folks then could review? A series of images taken at regular intervals? A video? A single image?

Yes, the identification would still go through the same process as any normal submission. Only one image would be taken during the real-time ID, but more could be added, just as with any current observation.

The main purpose of the real-time identification is to help narrow identifications right from the start. For example, instead of someone uploading a picture of a flowering dogwood and just calling it a plant, the real-time computer vision would be able to identify that the tree belongs to the dogwood family.


It’s helpful if you are primarily using the algorithm: instead of taking pictures and then removing them when the algorithm can’t figure them out, you can use the ‘live’ mode to pan around an organism until you find the right angle to get an algorithm ID. So I am guessing it would primarily be useful for plants and fungi rather than animals, which are less tolerant of that sort of behavior.


What’s the point of having two separate apps then?

IMHO, I always felt that the iNat app is for more experienced users, who’d rather not use CV in their identifications, whereas Seek is a “play-around” type of app. Sorry for the “bad” wording; I don’t mean to make anyone unhappy.

I’d rather not have real-time CV in iNat: it would slow the app down (given the amount of additional code required), and think about the places with really slow internet, where people won’t be able to use this at all.

Besides, how are you gonna learn something if you rely on CV and constantly look things up? That’s not how memory works.


I’m not sure it would slow things down too much, seeing as the Seek app runs pretty quickly, and the CV would only run when the camera was open.

And this function could be turned on or off, so if there is a plant or fungus that someone has no idea about, it won’t just end up in a broad category.

I’m not saying it should be relied upon, rather it’s to help new observers learn. I understand that people feel iNaturalist is for more experienced identifiers, but how is someone going to learn what their target is if they keep posting it under tree?

Anyway, now that you can post from Seek to iNaturalist (through a long process that isn’t streamlined), there is no reason not to just add CV to the iNaturalist camera.

The main point of Seek is to let those who don’t want to (or aren’t old enough to) participate in citizen science still be able to identify plants and animals.


Seek doesn’t require an internet connection. That is one thing that would be nice with this: the ability to use the algorithm without cell service. I’m definitely an experienced user, and I do use the algorithm from time to time.

I learned most of my plant species from others showing them to me. If you personally don’t learn well from the algorithm, that’s a good reason not to use it. But it’s certainly not true that it will make it harder for everyone to learn species. I have a friend who really got into the Seek app, and now she knows a bunch more plants and points them out. For many people it may be a better way to learn than what you’re suggesting: presumably a field guide, which is also constant ‘looking things up’, or having someone tell you what the plants are, which I find useful, but which is basically what the app does too.


Yep, I misunderstood that :upside_down_face:
I didn’t really get into Seek, since it’s still useless around “quiet” (in terms of observations) places. Attenborough got me to install it, but I deleted it soon afterwards.


The live CV suggestions in Seek are meant to a) teach people about the higher levels of taxonomy and b) help them take identifiable photos of organisms. If, for example, you start scanning a plant, you might see something like “Dicots” if you are far away or only have leaves in view, but if you get closer and get a flower in view, the suggestions might get down to genus or species, and you’ll take a photo then. The hope is that people unfamiliar with taxonomy, or with what an organism’s important diagnostic features are, will learn from this process. As to whether that is actually happening, we don’t know yet; it’s a bit too soon, but I think it has a lot of potential.
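That “start broad, refine as you get closer” behavior can be sketched as walking down a taxonomic lineage and reporting the deepest rank whose score clears a confidence threshold. The lineage here uses flowering dogwood as an example, and all scores are invented for illustration; this is not Seek’s actual algorithm:

```python
# Sketch of the "broad when far away, specific when close" behavior:
# scan a lineage from broadest to most specific and keep refining
# only while the model's confidence stays above a threshold.
# The lineage and all scores below are made-up illustrative values.

LINEAGE = ["Dicots", "Cornales", "Cornaceae", "Cornus", "Cornus florida"]

def deepest_confident(scores, threshold=0.8):
    """scores[i] is the model's confidence for LINEAGE[i]."""
    best = None
    for name, score in zip(LINEAGE, scores):
        if score < threshold:
            break  # stop refining once confidence drops off
        best = name
    return best

far_away = [0.95, 0.70, 0.50, 0.30, 0.10]  # only leaves in view
close_up = [0.99, 0.97, 0.95, 0.90, 0.60]  # flower now in view

print(deepest_confident(far_away))  # broad suggestion
print(deepest_confident(close_up))  # narrows to genus
```

With the far-away scores only “Dicots” clears the bar, while the close-up scores support a genus-level suggestion, mirroring the scanning example in the paragraph above.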

I personally would like to see this in use on iNat. It would be great to see people add, say, beetle observations at the family level rather than have CV suggest likely-incorrect species that they will then choose. It would also (hopefully) encourage people to photograph, say, a tree’s leaves rather than post a faraway photo of the whole tree, which isn’t helpful for identification.