Good points! I also spend a lot of time comparing and using my field guides and internet searches to try to be sure about IDs. As a relatively new iNat user I still have a lot to learn!
Is only the first photo of an observation used in the training? I thought I remembered reading that somewhere.
@bbinsecte, see https://forum.inaturalist.org/t/computer-vision-clean-up-wiki/7281. This is not an automatic clean-up, but it helps to direct attention to identifications that are systematically incorrect.
I think tiwane said they use all the photos
I think it only does a compare on the first photo in a new observation.
e.g. a new “birds” observation with multiple photos is uploaded. The CV only looks at the first photo of this observation to make its suggestions.
We only send the first photo of an observation to the computer vision model.
I think the question is if all photos in an observation of a qualifying species are sent into the big mysterious box that does the training.
Not which photos in an observation are used if you ask to run the CV on your record.
That makes it vital for us to choose the first photo to provide the best information.
Not, “you can see the … on the third photo.”
Thanks @cmcheatle. I asked our developers for some confirmation, and here's what they said in regard to training the model:
- If a taxon, e.g. Anas platyrhynchos, has more than 1,000 images, we randomly choose 1,000 images of it for training, and those could be drawn from any photo in an observation.
- If a taxon has 999 photos or fewer, we train on basically all of them, with a few left out for testing and validation.
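For the curious, the per-taxon sampling policy the developers describe can be sketched in a few lines of Python. This is an illustrative sketch only, not iNat's actual code: the function name, the holdout sizing, and the seed are my assumptions; only the 1,000-image cap comes from the quote above.

```python
import random

TRAIN_CAP = 1000  # per-taxon training cap stated by the developers

def sample_photos_for_taxon(photos, max_holdout=50, seed=0):
    """Return (train, heldout) photo lists for one taxon.

    photos: photo IDs drawn from any position in an observation,
    not just the first photo.
    """
    rng = random.Random(seed)
    photos = list(photos)
    rng.shuffle(photos)
    if len(photos) > TRAIN_CAP:
        # More than 1,000 photos: randomly choose 1,000 for training.
        return photos[:TRAIN_CAP], photos[TRAIN_CAP:]
    # 999 or fewer: train on basically all of them, with a few
    # left out for testing and validation (sizing is a guess here).
    n_holdout = min(max_holdout, max(1, len(photos) // 10))
    return photos[n_holdout:], photos[:n_holdout]
```

So a taxon with 2,000 photos contributes exactly 1,000 training images, while a taxon with 500 photos contributes most of them, minus a small held-out set.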
Well, sure, but new users may not know that.
I know I used to just add multiple photos by
(1) batch selecting them from my phone gallery without regard to order,
(2) putting them in the order I took them,
(3) taking pics directly from the app, without considering that I needed to put the “best” pic first (I especially used to put wide shots of plants before the close-ups).
Apart from “training iNat”, people are time-pressed, so I try to crop and sort my own photos.
It can sometimes be a mission to work out what we are looking at.
Yesterday I said moss, and a bryologist said no, it's a tiny flowering plant. Zoomed in, I could see Crassula leaves.
Yep, I've learned (and am still learning) tricks of iNat and now I try to put the pic I think is most diagnostic first.
Not for “training iNat” - as cmcheatle & tiwane clarified, the first pic is not what feeds the CV, but what it looks at on new records before offering suggestions - but for making it easier on Identifiers, so they don't have to go through every pic I attached.
I've talked to a few mycologists who hate iNaturalist. They won't use it, neither submitting nor IDing.
I think there are a lot of plant peeps on iNat. Don't know why no one has been checking out your plant obs. I'll take a look at them for you.
As an amateur mycologist I'm curious why mycologists won't use it. I know two who do, one of whom practically preaches iNat.
I will say that I've seen an inordinate number of dead-wrong fungal CV IDs lately. Seems to be mostly new observers.
Oh, you don't need to feel obligated to do that.
Plants of Texas is a collection project that picks up a bunch of my stuff. Between that and a few other projects I manually add stuff to, if I don't get an ID it's usually because I didn't include anything diagnostic in my pics.
That's a shame. I can think of at least 2 on iNat who seem to do a lot of IDing.
I tried to ask, but they didn't want to talk about it.
It is winter here and mushrooms are everywhere. I tip Cape Town's recent Unknowns into Fungi - the limit of my knowledge. And they seem to get IDed soon.
You could add them to the World Fungal Diversity project, as well as fungal projects local to the observation's area.
I often join projects just to be able to add other users' observations and get eyes on them, and sometimes that helps.
The Seen Nearby feature is one reason for trying to clean up observations from school projects. These are often full of misidentifications, whether by kids just enjoying the strange names they see, or from AI misidentifications. If the misidentifications from the first-semester class are left as is, well-meaning students assigned an iNat project in the second semester will be more likely to click on the “Seen Nearby” misidentification that looks plausible, thinking this means it is actually occurring on their campus.
Ideally, the person who set up the project would review the observations at the end of the assignment and help reduce the bad data produced. For many reasons, discussed in many other threads, this does not often happen.
I wonder if having a field for projects which could be used to indicate that this is a school project aimed at young children, high schoolers, or college-age students would help identifiers understand what is going on and tailor their correction comments appropriately.