"Helping" the computer vision - is this wrong?

So let’s say I discover that the CV is offering a misleading suggestion for a common plant: in this case, it’s suggesting Triantha glutinosa (Sticky False Asphodel), a species of the US Great Lakes region, for plants on the Gulf of Mexico coast that are actually T. racemosa (Coastal False Asphodel). I just put out the word to several iNat users in my coastal area to take more observations of our species so that they can help train the computer in the next AI cycle. However, I ALSO suggested that they particularly make observations that emphasize the long inflorescence, which is one feature that distinguishes the two. Do you think this is cheating? (I suspect this kind of selective observation is common in herbarium specimens too.)

4 Likes

Two comments:

  • I don’t think it is wrong to do, but note that once there are more than 1,000 photos of a taxon (no idea how many there are here), the photos used for the training model are chosen randomly (see the sketch below for roughly what that cap means)
  • The one risk is that the model gets a high percentage of photos showing something the average photo won’t show. As an example (I know them better), high-magnification photos of reproductive organs will conclusively ID many dragonflies, and it’s great to submit them, but if they become too high a percentage of the photos, will the CV ‘think’ that close-up is what the species looks like?
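For anyone curious, here is a rough sketch of what that capped random selection might look like. The 1,000-photo cap and the per-taxon grouping are assumptions based on the comment above, not iNat’s actual export code:

```python
import random
from collections import defaultdict

TRAINING_CAP = 1000  # assumed per-taxon cap, per the comment above


def sample_training_photos(photos):
    """Randomly keep at most TRAINING_CAP photos per taxon.

    `photos` is an iterable of (taxon_id, photo_url) pairs; this is a
    hypothetical stand-in for iNat's real training-data export.
    """
    by_taxon = defaultdict(list)
    for taxon_id, url in photos:
        by_taxon[taxon_id].append(url)

    return {
        taxon_id: (random.sample(urls, TRAINING_CAP)
                   if len(urls) > TRAINING_CAP else urls)
        for taxon_id, urls in by_taxon.items()
    }
```

Note that random sampling preserves proportions: if 30% of a taxon’s photo pool is extreme close-ups, roughly 30% of its training photos will be close-ups too, which is exactly the oversampling risk described in the second comment.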
8 Likes

Has any work been done to show the variation in error rate based on which 1000 photos are chosen to train the model?
I guess this would involve picking a species (or group of species) that has had >1000 photos in the training model for many cycles and comparing the error rate for different iterations of the model.

Questions I would have:

  1. How much variation in error rate is there normally? (A toy version of this experiment is sketched after this list.)
  2. Are there cases where the CV gets confused based on oversampling of certain kinds of photos?
  3. Is the CV always getting better, or is there a point of diminishing returns?
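For question 1, a minimal sketch of the experiment might look like the following. The `train_and_evaluate` function is purely hypothetical here (it just simulates an error rate); a real version would train and test an actual model:

```python
import random
import statistics


def train_and_evaluate(training_photos, test_photos):
    """Hypothetical stand-in: a real version would train a CV model on
    `training_photos` and return its error rate on `test_photos`.
    Here we just return a simulated error rate tied to the subset."""
    rng = random.Random(hash(tuple(sorted(training_photos))))
    return rng.gauss(0.10, 0.02)  # pretend ~10% mean error, some spread


def subset_error_variation(photo_pool, test_photos, cap=1000, trials=20):
    """Train on `trials` different random subsets of `cap` photos and
    report how much the error rate varies with the choice of subset."""
    errors = [
        train_and_evaluate(random.sample(photo_pool, cap), test_photos)
        for _ in range(trials)
    ]
    return statistics.mean(errors), statistics.stdev(errors)


# Example: a pool of 5,000 photo IDs, 500 held-out test photos
mean_err, sd_err = subset_error_variation(
    [f"photo_{i}" for i in range(5000)], [f"test_{i}" for i in range(500)]
)
print(f"error rate: {mean_err:.3f} +/- {sd_err:.3f} across subsets")
```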
8 Likes

The Computer Vision/AI seems to deal very well with one species being represented by two or three extremely different-looking kinds of photos: for example, photos of different life stages, or extreme close-ups and micro images mixed in with regular images. So I don’t think that is a problem.

11 Likes

I don’t think that this is wrong, but since the AI is a bit of a black box, I do think it would be a lot of work for a payoff that wouldn’t necessarily be large (or is at least highly uncertain). I would guess that a more time-efficient strategy would just be to make sure all the IDs on iNat for those species are correct.

One exception/difference might be a species that hasn’t yet reached the threshold number of observations to be included in the AI training. Targeted observations to get it over that threshold and included in the next training run could have a big impact, I would think.

5 Likes

You might be interested in this post: https://forum.inaturalist.org/t/identification-quality-on-inaturalist/7507 which briefly touches on accuracy based on number of training images.

8 Likes

Yes, it is important for us all to bear in mind that the AI does not “learn” in the same ways that humans learn. And therefore taking photos that would help humans to understand the difference between two species does not necessarily help the AI discriminate them.

And of course it is important to remember that the AI is not “learning” to discriminate species, as we humans would, but to discriminate photos. I notice, for example, that it is heavily influenced in its matching by backgrounds such as stone walls, water surfaces, and sand.

12 Likes

Interesting about the AI being influenced by background. I contribute a rather high proportion of the observations of beaksedges. These are spindly plants and hard to focus on, so I tend to hold the plant up against the sky as a background. It will be interesting to see if that has an impact (even in silhouette it’s hard to tell one species from another).

5 Likes

With sedges I usually pick the flowering/fruiting head and lay it flat on the palm of my hand to photograph it close up.

With some plants I have held them up to the sky, but sometimes the lighting and contrast are too harsh that way.

I also find that posing mollusk shells on my hand often works best for getting the angle and the lighting just right, as well as my hand serving as a built-in approximate scale object.

And sometimes I will put my hand behind something like a spider on a web, or an extremely “ferny” plant, to help the autofocus.

I have never seen any evidence that using my hand in photographs is a problem for the AI.

5 Likes

I think what’s important is to take photos that show the features by which the species can be identified. Don’t let it influence you whether that identification is later done by other humans or by the AI (which is in any case based on the work of human identifiers).

6 Likes

I find that insects and spiders photographed against skin are often suggested to be bloodsuckers like black flies and mosquitoes, regardless of what the animal itself looks like. Cropping the photo closer to the organism seems to help with this.
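If you want to do that cropping before uploading, here is a minimal sketch with Pillow. The file names and box coordinates are placeholders; you’d pick a box around the subject for each photo:

```python
from PIL import Image  # pip install Pillow

# Crop a photo to a box around the organism so the background (skin)
# occupies less of the frame. Coordinates here are placeholders.
img = Image.open("fly_on_hand.jpg")
left, top, right, bottom = 800, 600, 1600, 1400  # pixel box around subject
img.crop((left, top, right, bottom)).save("fly_cropped.jpg")
```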

8 Likes

Can you exclude photos from Computer Vision training? E.g. the plant is in the photo, but only as a small part and not the dominant subject.

No, this isn’t currently possible. And you may be training it on typical habitat / surroundings of that plant. :)

2 Likes

thanks for sending that link. Lots of interesting stuff to read there.

1 Like

I have a related question. I photograph a lot of moths, and I’d like some guidance on whether it’s helpful, once an ID is pretty certain, to add examples as I find them, possibly each day, or to just add species that are new for me.
For example, every day I’ll have rosy maple moth. The ID is unambiguous. Does it help the model to add observations whenever I photograph a known moth, or does it just create work for others who are checking/adding IDs to get it to “Research Grade”?

1 Like

So long as they meet the criteria for a valid observation, you are free to add as many individuals on as many days, etc., as you wish / have patience for.

4 Likes

I have a related question. I’ve noticed that the AI ALWAYS suggests Clogmia albipunctata, or at least Clogmia, no matter which genus in Psychodidae is actually in the photo. This has a way of feeding on itself if people accept the suggestion, giving positive feedback to the wrong suggestion. Is there any way to remedy this? For example, can the whole family be zeroed out so it starts over? How does the refresh work? Maybe there are simply insufficient numbers of observations of other species?

In the case of this and many other creatures I can think of, it is possible to be certain that something is NOT a particular species without knowing what the correct species is. Does this sort of feedback go into the AI training? I note that if I change something from species to genus, I get asked whether I know it’s not that species or am just not sure. In the scenario described, I would select that I know it’s not right. Does that help the AI?

3 Likes

I can help shed some light on this case, Victor: of ~8,000 observations of Psychodidae, the only species with more than 50 photos in September of 2019 was Clogmia albipunctata, with almost 4,000. The model cannot learn any of the other species or genera without samples, but it has enough data to learn C. albipunctata very well.

You can see the number of observations for each child of a taxon by looking at the Taxonomy tab of the taxon page:
https://www.inaturalist.org/taxa/326684-Psychodinae
In this example you can see that the iNat dataset for this group is hugely imbalanced.
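If you’d rather pull those counts programmatically, the public API exposes the same numbers. A minimal sketch using the v1 `observations/species_counts` endpoint with the taxon ID from the URL above (check the API docs before relying on this):

```python
import requests

# Per-species research-grade observation counts for Psychodinae
# (taxon 326684), via iNaturalist's public v1 API.
resp = requests.get(
    "https://api.inaturalist.org/v1/observations/species_counts",
    params={"taxon_id": 326684, "quality_grade": "research"},
)
resp.raise_for_status()

# Print the ten most-observed species and their counts.
for result in resp.json()["results"][:10]:
    print(f"{result['taxon']['name']}: {result['count']} observations")
```

Output like this should make the imbalance described above obvious at a glance: one species dwarfs everything else.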

I peeked back into the archives, and I saw that the very first computer vision model iNat ever trained back in 2017 had several hundred examples of C. albipunctata in the training set but nothing else from that family. So this imbalance predates computer vision. Whether it’s a result of true relative abundance, human preference, detectability, or human misidentification, I couldn’t say.

Until there is a dramatic change in the ability of computer vision systems to train on imbalanced or very small datasets, only more correctly labelled data will improve things.

8 Likes

I think this is one of those tricky situations where one member of the family is identifiable to species (my impression is that the shape + white spots on the wing = C. albipunctata, unless that’s changed) but all the other members are unidentifiable. I don’t know that the CV has any way of distinguishing observations that just aren’t identified yet vs observations that are definitely not that species and are unidentifiable as a different species.

Ideally a CV issue like that can be addressed by either moving all obs back (i.e. no species-level obs left) or re-identifying obs to a variety of species/genera, either of which means the CV will choose a higher-than-species suggestion. If my understanding of the situation is accurate, though, I don’t think either is possible here.

2 Likes

Hi Matthias,

These are super good questions. We haven’t done this kind of analysis, but it’s an excellent idea and I’ll add it to my list. We have been asking ourselves “what is the best way to produce a computer vision dataset from the iNaturalist community-created dataset?” and this is exactly the kind of question that will point us to better answers. So thanks!

Unfortunately we only train new models twice a year, and we regularly change how we export the data to our training system in order to either improve accuracy, add more taxa, or decrease training time, so not all our historical models are comparable in this way. We also don’t do very much additional computer vision experimentation because it’s simply too expensive for us.

However, I believe I have enough data to compare our current production model with the previous production model, since they were both trained with the same database export rules. And for sure we’ll add it to the analysis we do of new models after they’re trained, when we vet them to make sure they’re ready to be released to the community.

I can’t promise when I’ll be able to report results, but when I do I’ll share them here in the forum and @mention you, @matthias55!

Thanks,
alex

8 Likes